Understanding Key Differences Between Two Voice Models for Fine-Tuning
Hello!
I'm encountering an issue while attempting to fine-tune a voice model and would appreciate some insight into the differences between two models.
I have two checkpoints that I'm accessing from here: https://huggingface.co/datasets/rhasspy/piper-checkpoints/tree/main
Model A:
- Quality: medium
- Language: en_US
- Trained from scratch
- Dataset: Blizzard 2013 (Lessac)
- Sample rate: 22,050 Hz
- https://huggingface.co/datasets/rhasspy/piper-checkpoints/tree/main/en/en_US/lessac/medium

Model B:
- Quality: medium
- Language: en_US
- Fine-tuned from English lessac medium on train-clean-360
- Dataset: LibriTTS
- Sample rate: 22,050 Hz
- https://huggingface.co/datasets/rhasspy/piper-checkpoints/tree/main/en/en_US/libritts_r/medium
I'm trying to fine-tune my model using Model B as a starting point. However, when I attempt to load the checkpoint from Model B into my fine-tuning process, I encounter a key mismatch error. The error suggests that there are unexpected keys in the state dictionary, such as "model_g.emb_g.weight", "model_g.dec.cond.weight", and others.
Fine-tuning from Model A works perfectly for me. Why might there be differences between these models that cause a key mismatch error during fine-tuning? Are there specific architectural variances or additional components in Model B that don't exist in Model A?
Apologies as this is all quite new for me so I might not be using the right words for certain things.
Any insights or guidance on how to approach fine-tuning from Model B would be greatly appreciated.
Thank you for your help!
To add some more context: I get the following error when trying to fine-tune using the libritts_r medium ckpt:
```
RuntimeError: Error(s) in loading state_dict for VitsModel:
Unexpected key(s) in state_dict: "model_g.emb_g.weight", "model_g.dec.cond.weight", "model_g.dec.cond.bias", "model_g.enc_q.enc.cond_layer.bias", "model_g.enc_q.enc.cond_layer.weight_g", "model_g.enc_q.enc.cond_layer.weight_v", "model_g.flow.flows.0.enc.cond_layer.bias", "model_g.flow.flows.0.enc.cond_layer.weight_g", "model_g.flow.flows.0.enc.cond_layer.weight_v", "model_g.flow.flows.2.enc.cond_layer.bias", "model_g.flow.flows.2.enc.cond_layer.weight_g", "model_g.flow.flows.2.enc.cond_layer.weight_v", "model_g.flow.flows.4.enc.cond_layer.bias", "model_g.flow.flows.4.enc.cond_layer.weight_g", "model_g.flow.flows.4.enc.cond_layer.weight_v", "model_g.flow.flows.6.enc.cond_layer.bias", "model_g.flow.flows.6.enc.cond_layer.weight_g", "model_g.flow.flows.6.enc.cond_layer.weight_v", "model_g.dp.cond.weight", "model_g.dp.cond.bias".
```
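For reference, here's a rough way to see which keys a checkpoint actually contains. This is just a sketch, assuming the Piper checkpoints are ordinary PyTorch Lightning .ckpt files with the weights stored under "state_dict"; the file name below is a placeholder.

```python
import torch

# Placeholder: path to the downloaded Piper checkpoint (.ckpt)
CKPT_PATH = "libritts_r-medium.ckpt"

# PyTorch Lightning checkpoints keep the model weights under "state_dict"
state_dict = torch.load(CKPT_PATH, map_location="cpu")["state_dict"]

# The keys from the error above (emb_g and the various ".cond" layers) are the
# speaker-conditioning parts of a multi-speaker VITS model, so listing them
# shows whether a checkpoint is single- or multi-speaker.
speaker_keys = [k for k in state_dict if "emb_g" in k or ".cond" in k]

if speaker_keys:
    print("Speaker-conditioning keys found (multi-speaker checkpoint):")
    for key in speaker_keys:
        print(f"  {key} {tuple(state_dict[key].shape)}")
else:
    print("No speaker-conditioning keys found (single-speaker checkpoint).")
```

Running this against the lessac checkpoint versus the libritts_r one should show the extra keys only in the latter, and emb_g.weight's first dimension should match the num_speakers value in the accompanying config.json.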
Using lessac to fine-tune works fine.
Apologies if I'm missing something super obvious, but as stated, this is all very new to me.
It's the Speaker count.
From the config.json of the model: "num_speakers": 904,
I went through my training data. Duplicated it until there were 904 instances. Then I set up my metadata.csv as though there were 904 individual speakers for each line. And now I can finetune using the libritts_r model checkpoint. Epochs are incredibly slow compared to finetuning on other models. But at least I've confirmed how/why.
You don't need to duplicate your data. Just adjust the "num_speakers" in your config.json to match the model you want to fine-tune from after the preprocessing stage.
This should allow you to fine-tune speaker #0 in the model.
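If it helps, something along these lines after preprocessing should do it; the paths are placeholders, and num_speakers is the field in your dataset's config.json.

```python
import json

# Placeholder paths: your dataset's config.json (written by preprocessing) and
# the config.json that sits next to the checkpoint you're fine-tuning from.
MY_CONFIG = "my_dataset/config.json"
BASE_CONFIG = "libritts_r-medium/config.json"

with open(BASE_CONFIG, encoding="utf-8") as f:
    base_cfg = json.load(f)

with open(MY_CONFIG, encoding="utf-8") as f:
    my_cfg = json.load(f)

# Match the base model's speaker count (904 for the libritts_r medium checkpoint)
my_cfg["num_speakers"] = base_cfg["num_speakers"]

with open(MY_CONFIG, "w", encoding="utf-8") as f:
    json.dump(my_cfg, f, ensure_ascii=False, indent=2)
```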
Hi, thanks for the response. I tested this just now and I still receive:
```
assert speaker_id is not None, "Missing speaker id"
AssertionError: Missing speaker id
```
I tried specifying the speaker id as 0 as well, just in case that was what you meant at the end. But it returns the same error.
Ah, right. You also need to add a column to your metadata.csv for the speaker name. Check the base model's config.json and find the speaker name for speaker #0 in speaker_id_map; for the en_US-libritts-high model, this is "p3922". So your dataset would look like:
```
path/to/1.wav|p3922|text that is spoken for 1
path/to/2.wav|p3922|text that is spoken for 2
...
```
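If your current metadata.csv only has two columns (path|text), a quick sketch of adding the speaker column could look like the following; it assumes pipe-delimited rows with no header, and the speaker name and file names are just the examples from above.

```python
import csv

SPEAKER = "p3922"  # speaker #0's name taken from the base model's speaker_id_map

# Rewrite path|text rows as path|speaker|text (pipe-delimited, no header),
# assuming that's the layout of the existing metadata.csv.
with open("metadata.csv", newline="", encoding="utf-8") as src, \
        open("metadata_speaker.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src, delimiter="|")
    writer = csv.writer(dst, delimiter="|")
    for path, text in reader:
        writer.writerow([path, SPEAKER, text])
```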
Appreciate the responses! I have run preprocessing again with "p3922", with the number of speakers set to 904, and I've also tried specifying the speaker ID in the JSON as p3922.
However, I'm still getting:
```
assert speaker_id is not None, "Missing speaker id"
AssertionError: Missing speaker id
```
https://huggingface.co/datasets/rhasspy/piper-checkpoints/tree/main/en/en_US/libritts_r/medium
is the model I'm using, so I double-checked the config.json there, which marks speaker #0 as "3922" rather than "p3922". I've repeated the above with the metadata.csv adjusted accordingly, but I'm still getting the same error. The only way I've got it to work so far is the duplication mentioned earlier.
```
  },
  "num_symbols": 256,
  "num_speakers": 904,
  "speaker_id_map": {},
  "piper_version": "1.0.0"
}
```
This is the end of my training dataset's config.json, adjusted to 904 speakers to match the libritts_r config.
Should I be altering speaker_id_map as well? I've tried putting "p3922" and "3922" in there under each condition and still get the error above, but perhaps I'm not populating it correctly. I've also tried a map of 0 to "3922", but to no avail.
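In case I'm just getting the format wrong, here's roughly how I've been populating it; the mapping direction (speaker name to numeric ID) is my guess from the base model's config, and the path is a placeholder.

```python
import json

CONFIG = "my_dataset/config.json"  # placeholder: the config.json from preprocessing

with open(CONFIG, encoding="utf-8") as f:
    cfg = json.load(f)

# Guessing speaker_id_map maps speaker name -> numeric ID, as in the base model,
# so speaker #0 of the libritts_r checkpoint would be "3922": 0.
cfg["num_speakers"] = 904
cfg["speaker_id_map"] = {"3922": 0}

with open(CONFIG, "w", encoding="utf-8") as f:
    json.dump(cfg, f, ensure_ascii=False, indent=2)
```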
Why aren't there any x-low quality English checkpoints available? I've noticed that the low-quality models actually use the medium-sized architecture; the only difference is that they're trained on data preprocessed at 16 kHz, so they can only be trained with --quality 'medium'.
How was sharvard fine-tuned from Lessac without getting key mismatch errors? I'm trying to train a multi-speaker voice (3 speakers); however, I get the same errors as OP. @krones9000 mentioned having success when fine-tuning with Lessac, but I didn't.
> Why aren't there any x-low quality English checkpoints available? I've noticed that the low-quality models actually use the medium-sized architecture; the only difference is that they're trained on data preprocessed at 16 kHz, so they can only be trained with --quality 'medium'.
There wasn't enough of a difference in performance for me to spend time training x-low quality versions of all the voices.
> How was sharvard fine-tuned from Lessac without getting key mismatch errors? I'm trying to train a multi-speaker voice (3 speakers); however, I get the same errors as OP. @krones9000 mentioned having success when fine-tuning with Lessac, but I didn't.
You need to use --resume_from_single_speaker_checkpoint <checkpoint> when training a multi-speaker model that's based on a single-speaker checkpoint.
> You need to use --resume_from_single_speaker_checkpoint <checkpoint> when training a multi-speaker model that's based on a single-speaker checkpoint.
Thank you! That solved the problem.
> There wasn't enough of a difference in performance for me to spend time training x-low quality versions of all the voices.
There is no need to train all the voices, but it would be nice to have at least one English x-low voice checkpoint.