Faster-Whisper-XXL r245.4
Standalone Faster-Whisper implementation using optimized CTranslate2 models.
Includes all Standalone Faster-Whisper features plus some additional ones. Read here.
Last included commit: #245
Faster-Whisper-XXL includes all needed libs.
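For reference, a minimal sketch of a typical invocation; the executable name and the audio path are illustrative, and defaults may differ per platform:

```
# Transcribe a file with an explicit model and language,
# writing output files to the current directory.
faster-whisper-xxl "audio.mp3" --model medium --language en --output_dir .
```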
Some new stuff in r245.4:
Fixed: 'unload_model' error when using batched inference
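A hedged sketch of the kind of batched run affected by that fix; the --batched and --batch_size flag names are assumptions here, so check the program's --help for the exact spellings:

```
# Batched inference run: previously this code path could hit
# the 'unload_model' error that r245.4 fixes.
faster-whisper-xxl "audio.mp3" --model large-v3 --batched --batch_size 8
```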
Some new stuff in r245.3:
Fixed: Model didn't free VRAM when --model_preload False
Fixed: "cuda:1" didn't work with MDX
Fixed: Some issues in "One Click Transcribe.bat"
Fixed: Matplotlib backend error [Colab]
Fixed: cuDNN libs not found error [Linux]
Changed: --ff_mdx_kim2 to --ff_vocal_extract mdx_kim2 (usage example after this list)
Changed: --mdx_device to --voc_device
Added: distil-large-v3.5 model
Updated: pyinstaller to 6.12.0
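Putting the renamed options together, a sketch of a vocal-extraction run on a second GPU using the new model; the flag names and values come from the entries above, while the executable name and audio path are illustrative:

```
# Extract vocals with the MDX Kim2 model on the second GPU,
# transcribe with distil-large-v3.5, and skip model preloading.
faster-whisper-xxl "noisy_audio.mp3" --model distil-large-v3.5 --ff_vocal_extract mdx_kim2 --voc_device cuda:1 --model_preload False
```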
Link to the changelog.