ArcSub is an end-to-end subtitle translation workstation that treats cloud services and local OpenVINO models as equal, first-class paths. It covers media intake, speech-to-text, subtitle translation, and playback of the finished subtitles with the video.
- English: README.md
- 繁體中文: README.zh-TW.md
- 日本語: README.ja.md
- Deutsch: README.de.md
- Français: README.fr.md
- English: docs/en/getting-started.md
- 繁體中文: docs/zh-TW/getting-started.md
- 日本語: docs/ja/getting-started.md
Dashboard overview: manage subtitle projects, monitor system resources, and move through the full workflow.
Video Fetcher: parse source metadata, select downloadable formats, and prepare assets for transcription.
Speech to Text: choose a cloud or local recognition model, configure advanced features, and generate transcript output.
Text Translation: choose a cloud or local translation model, configure language options, and compare source and translated subtitles.
Video Player: watch the finished subtitles with the video and fine-tune subtitle styling for the viewing page.
When packaged assets are published for this repository, download the latest archive for your operating system from Releases.
For normal end-user use, start ArcSub from the packaged release:
- Windows: run `deploy.ps1`, then `start.production.ps1`
- Linux: run `deploy.sh`, then `start.production.sh`
If you are working from this repository:
- Windows: run `npm install`, then `.\start.ps1`
- Linux: run `npm install`, then `./start.sh`
The `start.ps1` and `start.sh` helpers clean up stale dev processes and then launch `npm run dev`.
This repository contains the application source code and public documentation.
It does not include:
- local runtime data under `runtime/`
- downloaded local ASR or translation models
- portable bootstrap runtimes such as `.arcsub-bootstrap/`
- personal credentials such as `.env`
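Because `.env` is excluded from the repository, any cloud credentials must be supplied locally. ArcSub's actual variable names are not documented in this README, so the key name below is a hypothetical placeholder showing the general pattern of reading a credential from the environment:

```python
import os

# ARCSUB_ASR_API_KEY is a hypothetical name for illustration; check the
# in-app Settings for the real provider configuration ArcSub expects.
api_key = os.environ.get("ARCSUB_ASR_API_KEY", "")

if not api_key:
    # A missing key should degrade gracefully rather than crash:
    # cloud transcription is simply unavailable until one is provided.
    print("No cloud ASR key set; cloud transcription is unavailable.")
```

Keeping the read behind a default (`""`) means a missing credential never raises at startup, matching the README's note that missing tokens do not block normal startup.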
With ArcSub you can:
- import local media or download online media
- run speech to text with cloud ASR services or local OpenVINO ASR models
- use word alignment, VAD, and diarization-related helpers
- translate subtitles with cloud translation services or local OpenVINO translation models
- watch subtitle results with the video and tune styling for the viewing page
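The README does not specify ArcSub's on-disk subtitle format. As an illustration of what a transcription-and-translation pipeline ultimately emits, here is a small sketch that renders one cue in the widely used SRT format (timestamps and text are made up):

```python
def srt_timestamp(ms: int) -> str:
    """Format a millisecond offset as an SRT timestamp: HH:MM:SS,mmm."""
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, millis = divmod(rem, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{millis:03d}"

def srt_cue(index: int, start_ms: int, end_ms: int, text: str) -> str:
    """Render one numbered SRT cue block: index, time range, then text."""
    return f"{index}\n{srt_timestamp(start_ms)} --> {srt_timestamp(end_ms)}\n{text}\n"

print(srt_cue(1, 1_500, 4_000, "Hello, world."))
```

This prints a cue that plays from 1.5 s to 4.0 s; a full SRT file is just such blocks separated by blank lines.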
ArcSub is designed to let each project choose the most practical model path across cloud services and local OpenVINO runtimes:
- cloud ASR and translation models are configured in Settings with API endpoints, keys, and provider options
- local ASR and local translation models are installed from Settings and run through the bundled OpenVINO runtime path
- model order in Settings controls the default model shown in Speech to Text and Text Translation
- pyannote speaker diarization uses Hugging Face assets when enabled; a missing token does not block normal startup or cloud workflows
- Docs index: docs/README.md
- Releases: Releases
- Discussions: Discussions
- Contributing: CONTRIBUTING.md
- Code of Conduct: CODE_OF_CONDUCT.md
- Security: SECURITY.md
This project is licensed under the MIT License.