
Releases: Purfview/whisper-standalone-win

Faster-Whisper-XXL Pro r3.256.1

07 Nov 19:21
491a815


Fixed: --realign "UnicodeDecodeError" error.
Fixed: --realign error if there is more than one line per sub.
Fixed: --realign "File not found" error.
Fixed: ImportError: libctranslate2-d3638643.so.4.4.0 [Linux]
Fixed: ffmpeg: error while loading shared libraries [Linux]
Hide: "not compiled with flash attention" warning.
Fixed: "cuda:1" didn't work with MDX [untested, "r245.3" fix was broken]
New VAD models: silero_v6_fw, silero_v6 [patched], nemo_v2, ten [its threshold is offset by +0.2]
Change: --vad_method default is set to ten
Change: Batched mode has a few improvements
Change: --compute_type defaults to "default" on CUDA, else to "auto".
Updated: torch to 2.8.0+cu128 [to support 50xx GPUs on torch models]
Updated: onnxruntime_gpu to 1.21.1 [cuDNN 9.x]
Updated: pyinstaller to 6.16.0
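The new VAD default and compute-type default can be overridden on the command line. A minimal sketch, assuming the executable is invoked as faster-whisper-xxl and audio.wav is a placeholder input; the flag names are those listed above:

```shell
# Use the new default "ten" VAD explicitly, or switch back to a Silero model:
faster-whisper-xxl audio.wav --vad_method silero_v6_fw --compute_type auto
```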

Faster-Whisper-XXL Pro

Faster-Whisper-XXL r245.4

27 Mar 01:04
3b611dc


Standalone Faster-Whisper implementation using optimized CTranslate2 models.

Includes all Standalone Faster-Whisper features + some additional ones. Read here.
Last included commit: #245
Faster-Whisper-XXL includes all needed libs.

Some new stuff in r245.4:

Fixed: 'unload_model' error when using batched inference

Some new stuff in r245.3:

Fixed: Model didn't free VRAM when --model_preload False
Fixed: "cuda:1" didn't work with MDX
Fixed: Some issues in "One Click Transcribe.bat"
Fixed: Matplotlib backend error [Colab]
Fixed: cuDNN libs not found error [Linux]
Changed: --ff_mdx_kim2 to --ff_vocal_extract mdx_kim2
Changed: --mdx_device to --voc_device
Added: distil-large-v3.5 model
Updated: pyinstaller to 6.12.0
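Scripts that used the old option names can be updated one-to-one. A hedged sketch, assuming the executable is invoked as faster-whisper-xxl and audio.wav is a placeholder input:

```shell
# Before r245.3:
#   faster-whisper-xxl audio.wav --ff_mdx_kim2 --mdx_device cuda:0
# From r245.3 on:
faster-whisper-xxl audio.wav --ff_vocal_extract mdx_kim2 --voc_device cuda:0
```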

Link to the changelog.

Faster-Whisper r192.3

15 Aug 16:04
a07e9de


Standalone Faster-Whisper implementation using optimized CTranslate2 models.

GPU execution requires cuBLAS and cuDNN 8.x libs for CUDA v11.x.

Last included commit: #192

Note:

This release branch is deprecated, use Faster-Whisper-XXL.

Link to the changelog.

Whisper-OpenAI r150

30 May 22:49
a07e9de


Standalone Whisper.
Last included commit: #150.

Whisper-OpenAI includes all needed libs.

cuBLAS and cuDNN

17 May 21:30
898f61a


Place the libs in the same folder as the Faster-Whisper executable, or in:
Windows: the System32 dir.
Linux: a dir listed in the LD_LIBRARY_PATH env variable.
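On Linux, the extracted lib dir can be prepended to the loader search path for the current shell session. A minimal sketch; "~/libs/cuda12" is a hypothetical extraction directory:

```shell
# Hypothetical dir holding the extracted cuBLAS/cuDNN .so files.
LIB_DIR="$HOME/libs/cuda12"
mkdir -p "$LIB_DIR"
# Prepend it to the dynamic loader search path (keeping any existing entries).
export LD_LIBRARY_PATH="$LIB_DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
# The first path the loader will search:
echo "$LD_LIBRARY_PATH" | cut -d: -f1
```

To make this persistent, the export line can go in the shell profile instead.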

CUDA11_v2 is the last with support for GPUs with Kepler chip.

CUDA11_v2: cuBLAS.and.cuDNN____v11.11.3.6__v8.7.0.84
CUDA11_v3: cuBLAS.and.cuDNN____v11.11.3.6__v8.9.6.50
CUDA11_v4: cuBLAS.and.cuDNN____v11.11.3.6__v8.9.7.29

CUDA12_v1: cuBLAS.and.cuDNN____v12.4.5.8___v8.9.7.29

CUDA12_v2: cuBLAS.and.cuDNN____v12.4.5.8___v9.5.1.17
CUDA12_v3: cuBLAS.and.cuDNN____v12.8.4.1___v9.8.0.87