Minimal re-implementation of HiFi-GAN training on discrete speech units.
The package is available on PyPI:
```bash
pip install unit-hifigan
```

It is based on the "Speech Resynthesis from Discrete Disentangled Self-Supervised Representations" repository, with a clean and minimal re-implementation of both training and inference. We follow their default hyperparameters and model configurations.
The vocoder is available via the UnitVocoder class. You can condition it on speakers and styles (support for f0 will come later).
Load a pretrained model from a local directory or a remote HuggingFace repository like this:
```python
from unit_hifigan import UnitVocoder

vocoder = UnitVocoder.from_pretrained("coml/hubert-phoneme-classification", revision="vocoder-base-l11")
```
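If the checkpoint lives on disk, pass the local directory instead. A minimal sketch, where the path is only a placeholder for a directory holding a compatible checkpoint:

```python
from unit_hifigan import UnitVocoder

# Hypothetical local directory containing a saved vocoder checkpoint.
vocoder = UnitVocoder.from_pretrained("/path/to/local/vocoder")
```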
You can also load models from the legacy implementation or from textlesslib with UnitVocoder.from_legacy_pretrained:

```python
import requests
from unit_hifigan import UnitVocoder

url = "https://dl.fbaipublicfiles.com/textless_nlp/expresso/checkpoints/hifigan_expresso_lj_vctk_hubert_base_ls960_L9_km500/"
vocoder = UnitVocoder.from_legacy_pretrained(url + "generator.pt")
vocoder.speakers = requests.get(url + "speakers.txt").text.splitlines()
vocoder.styles = requests.get(url + "styles.txt").text.splitlines()
```

You can then generate audio at 16 kHz with UnitVocoder.generate as below:
```python
import torch
from unit_hifigan import UnitVocoder

units, speaker, style = torch.randint(0, 500, (1, 100)), ["speaker-0"], ["reading"]
vocoder = UnitVocoder(500, speakers=["speaker-0", "speaker-1"], styles=["reading", "crying", "laughing"])
audio = vocoder.generate(units, speaker=speaker, style=style)
```
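To write the result to disk, a minimal sketch, assuming torchaudio is installed and that generate returns a single 16 kHz waveform tensor:

```python
import torchaudio

# Reshape to (channels, samples) as expected by torchaudio.save;
# adjust if the batch contains more than one waveform.
torchaudio.save("resynthesized.wav", audio.reshape(1, -1).cpu(), 16000)
```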
For training, you need manifest files for the training and validation datasets. The manifests are JSONL files with fields audio (string), units (list of integers), and optionally style (string) or speaker (string):

```json
{"audio":"audio-1.wav","units":[24,24,173,289,289,441,487,370,370],"speaker":"spkr02"}
```

- audio: full path to the audio file
- units: discrete units from the speech encoder (for example, HuBERT layer 11 with K-means, K=500)
- speaker: name of the speaker (for speaker conditioning)
- style: name of the style (for style conditioning)
You can launch training via the CLI:
```bash
# Minimal command with default configuration
python -m unit_hifigan.train --train $TRAIN_MANIFEST --val $VAL_MANIFEST --units $N_UNITS

# If you have a JSON config file
python -m unit_hifigan.train --config $CONFIG
```

You can also use the unit_hifigan.train.train function in your Python code if you prefer. Check out unit_hifigan.train.TrainConfig for the list of configuration options.
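In Python, that could look like the sketch below; the TrainConfig field names here simply mirror the CLI flags and are assumptions, so refer to unit_hifigan.train.TrainConfig for the actual options:

```python
from unit_hifigan.train import TrainConfig, train

# Field names mirror the CLI flags (--train, --val, --units) and are assumptions;
# check TrainConfig for the real configuration options.
config = TrainConfig(train="train_manifest.jsonl", val="val_manifest.jsonl", units=500)
train(config)
```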
The pipeline supports DDP out of the box when run with either torchrun or Slurm. Have a look at unit_hifigan.utils.init_distributed to see how distributed training is initialized.
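For example, a single-node multi-GPU run could be launched with torchrun as sketched below (the GPU count is illustrative):

```bash
# torchrun sets the RANK/WORLD_SIZE environment variables that
# unit_hifigan.utils.init_distributed is expected to read.
torchrun --nproc_per_node 4 -m unit_hifigan.train \
    --train $TRAIN_MANIFEST --val $VAL_MANIFEST --units $N_UNITS
```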
Run inference via the CLI:
```bash
python -m unit_hifigan.inference $GENERATIONS $PRETRAINED_MODEL $INFERENCE_MANIFEST
```

where the inference manifest has the same format as the training and validation ones.
To compute the word error rate (WER) of the generated audio with Whisper:

```bash
python -m unit_hifigan.wer.whisper $GENERATIONS $ASR_MANIFEST $JSONL_OUTPUT --model $MODEL
```

where $MODEL is the name of the Whisper variant (large-v3 if not provided).
The ASR manifest has the following fields:

- audio: relative path to the audio file from the $GENERATIONS directory
- transcript: ground-truth transcription
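For illustration, assuming the ASR manifest uses the same JSONL layout as the other manifests, an entry might look like this (values are placeholders):

```json
{"audio": "sample-001.wav", "transcript": "the birch canoe slid on the smooth planks"}
```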
To compute the WER with a torchaudio ASR pipeline instead:

```bash
python -m unit_hifigan.wer.torchaudio $GENERATIONS $ASR_MANIFEST $JSONL_OUTPUT --model $MODEL
```

where $MODEL is the name of the torchaudio ASR pipeline to use (WAV2VEC2_ASR_LARGE_LV60K_960H by default).
TODO:

- Add MCD evaluation
- Add support for F0