WaveGrad Finetuning Fails – Only 54/162 Layers Restored (Universal LibriTTS WaveGrad Vocoder) #795

@Ahrane-m

Description

I am working with the pretrained Universal LibriTTS WaveGrad vocoder in Mozilla TTS and encountered issues during finetuning:

- Running `tts --list_models` shows `vocoder_models/universal/libri-tts/wavegrad`.
- When finetuning from this checkpoint, only 54/162 layers are restored.
- The model trains, but the generated audio is unintelligible noise.

It looks like the WaveGrad architecture has changed in the current repo, so the pretrained checkpoint is not fully compatible anymore.
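For anyone hitting the same mismatch, here is a minimal sketch of how to diagnose it before training: compare the parameter names in the checkpoint against the current model's state dict to see which layers will actually be restored. The layer names below are hypothetical, purely to illustrate a rename between repo versions:

```python
def compare_state_dicts(checkpoint_keys, model_keys):
    """Partition parameter names into restorable, missing, and unexpected."""
    ckpt = set(checkpoint_keys)
    model = set(model_keys)
    return {
        "restored": sorted(ckpt & model),    # names that match and will load
        "missing": sorted(model - ckpt),     # layers left at random init
        "unexpected": sorted(ckpt - model),  # renamed/removed in current repo
    }

# Hypothetical WaveGrad layer names showing a rename between versions:
report = compare_state_dicts(
    checkpoint_keys=["ublock.0.conv.weight", "film.0.scale.weight"],
    model_keys=["ublock.0.conv.weight", "film_layers.0.scale.weight"],
)
print(len(report["restored"]), len(report["missing"]), len(report["unexpected"]))
# prints: 1 1 1
```

In practice you would pass `torch.load(checkpoint_path)["model"].keys()` and `model.state_dict().keys()`; note that even matching names can still fail to load if tensor shapes changed, so a full check would compare shapes too.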

Could you please share:

1. The original architecture/configuration used for the Universal LibriTTS WaveGrad pretrained vocoder,
2. The commit or repo version corresponding to that pretrained checkpoint,
3. Guidance on making the current implementation compatible so finetuning succeeds.

This would help ensure pretrained vocoders can be finetuned effectively without architecture-mismatch issues.

Thank you very much for your support!
