v0.2.0 #720
Thanks for the great work!
Just a suggestion: it would be nice to have some information on the models released in each version, such as the dataset used for training, the number of steps, and the GPU resources used (time taken is a bit irrelevant, but it gives some idea of what to expect). Sometimes this is self-explanatory from the model name, but it would still be good to have an official statement on it.
🐸 v0.2.0
🐞 Bug Fixes
💾 Code updates
Code uses TensorBoard by default. For W&B, you need to set the `log_dashboard` option in the config and define `project_name` and `wandb_entity`.
`make_symbols()`
`scheduler_after_epoch`
`do_amp_to_db_linear` and `do_amp_to_db_mel` options.
🗒️ Docs updates
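The W&B switch described under Code updates above can be sketched as a config fragment. This is a minimal sketch using a plain dict: the keys `log_dashboard`, `project_name`, and `wandb_entity` come from the note above, while the values shown are hypothetical placeholders, and the exact config class to set them on depends on your model.

```python
# Minimal sketch of the logging-related config fields (values are hypothetical).
config = {
    "log_dashboard": "wandb",       # TensorBoard is the default; set "wandb" for W&B
    "project_name": "my-tts-runs",  # hypothetical W&B project name
    "wandb_entity": "my-team",      # hypothetical W&B entity (user or team)
}
print(config["log_dashboard"])
```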
🤖 Model implementations
🚀 Model releases
vocoder_models--ja--kokoro--hifigan_v1 (👑 @kaiidams)
HiFiGAN model trained on the Kokoro dataset to complement the existing Japanese model.
Try it out:
tts --model_name tts_models/ja/kokoro/tacotron2-DDC --text "こんにちは、今日はいい天気ですか?"

tts_models--en--ljspeech--tacotronDDC_ph
TacotronDDC with phonemes trained on LJSpeech, released to fix the pronunciation errors caused by using raw text in the earlier TacotronDDC model.
Try it out:
tts --model_name tts_models/en/ljspeech/tacotronDDC_ph --text "hello, how are you today?"

tts_models--en--ljspeech--vits
VITS model trained on LJSpeech.
Try it out:
tts --model_name tts_models/en/ljspeech/vits --text "hello, how are you today?"

tts_models--en--vctk--vits
VITS model trained on VCTK with multi-speaker support.
Try it out:
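No sample command is given for this entry. A hedged sketch of what it might look like, following the pattern of the other commands in these notes: the `--speaker_idx` flag and the speaker ID `p225` are assumptions not confirmed by this release note.

```python
# Build a hypothetical CLI invocation for the multi-speaker VCTK VITS model,
# mirroring the other "Try it out" commands in these release notes.
speaker = "p225"  # hypothetical VCTK speaker ID
cmd = (
    "tts --model_name tts_models/en/vctk/vits "
    '--text "hello, how are you today?" '
    f"--speaker_idx {speaker}"
)
print(cmd)
```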
vocoder_models--en--ljspeech--univnet
UnivNet model trained on LJSpeech to complement the TacotronDDC model above.
Try it out:
tts --model_name tts_models/en/ljspeech/tacotronDDC_ph --text "hello, how are you today?"