- **WhisperX**: Selected by default. This method runs locally on your machine and works on both CPUs and CUDA GPUs, although it performs better on the latter. The transcriptions generated by **WhisperX** are generally **much more** accurate than those produced by the **Google API**, although this may vary depending on the [model size](#model-size) and [computation type](#compute-type) selected. In addition, **WhisperX** offers a wider range of features, including subtitle generation and translation into any other supported language. It's fast, especially when transcribing large files, and has no usage restrictions while remaining completely free. A minimal usage sketch is shown below.
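The snippet below is a minimal sketch of how WhisperX can be invoked at the library level to produce a transcription. The model size (`small`), compute type (`int8`), device (`cpu`), and file name (`audio.wav`) are illustrative assumptions, not values used by this project; see the [model size](#model-size) and [computation type](#compute-type) sections for the options the application actually exposes.

```python
import whisperx

# Illustrative values: these are assumptions for the example, not the project's defaults.
device = "cpu"          # use "cuda" if an NVIDIA GPU is available
model_size = "small"    # larger models are more accurate but slower
compute_type = "int8"   # lower-precision compute types reduce memory use on CPU

# Load the Whisper model through WhisperX.
model = whisperx.load_model(model_size, device, compute_type=compute_type)

# Load the audio file and transcribe it in batches.
audio = whisperx.load_audio("audio.wav")
result = model.transcribe(audio, batch_size=8)

# Print each transcribed segment with its start and end times in seconds.
for segment in result["segments"]:
    print(f'[{segment["start"]:.2f} -> {segment["end"]:.2f}] {segment["text"]}')
```

On a CUDA GPU, switching `device` to `"cuda"` and `compute_type` to `"float16"` is the usual way to trade the CPU-friendly settings above for speed.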