This repository is a fork of the open source VectCutAPI. This fork is maintained as a CapCut-focused variant for creating and saving local CapCut draft projects through a Python HTTP API and shell scripts.
This fork is intentionally narrower than the upstream project.
- CapCut is the only supported desktop editor documented here.
- The documented interface is the local HTTP API exposed by `capcut_server.py`.
- The README focuses on local draft generation, subtitle import, and scripted draft assembly.
This project lets you build CapCut drafts programmatically, then save those drafts into CapCut's local projects directory so they appear in the desktop app.
Core capabilities in this fork include:
- Create a new draft timeline
- Add video clips in sequence
- Add audio tracks
- Add subtitles from SRT
- Add text, images, stickers, effects, and keyframes
- Save the assembled project as a local CapCut draft
- `capcut_server.py`: Flask server that exposes the local HTTP API
- `config.json.example`: Example configuration for local setup
- `scripts/create_capcut_draft_from_videos.sh`: End-to-end script that assembles a CapCut draft from local video files
- `scripts/generate_srt_with_openai.py`: Optional helper that generates SRT captions with the OpenAI API
- `scripts/videos/`: Default input directory for the draft creation script
- Python 3.10 or newer
- `ffmpeg` and `ffprobe`
- CapCut desktop installed on the machine where drafts will be saved
Notes:
- `ffprobe` is required because the draft creation script reads each clip's duration before adding it to the timeline.
- If you want automatic captions, you also need an OpenAI API key.
```bash
git clone https://github.com/jamil-islam/VectCutAPI.git
cd VectCutAPI
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

Create a local config file:

```bash
cp config.json.example config.json
```

The example config includes:
- `is_capcut_env`: should remain `true` for this fork
- `draft_domain`: base domain used for generated draft URLs
- `port`: local Flask server port
- `preview_router`: route used when building draft preview URLs
- `openai_api_key`: optional, used by `scripts/generate_srt_with_openai.py`
If you do not want to store the OpenAI key in `config.json`, you can instead export `OPENAI_API_KEY` in your shell.
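Putting the keys above together, a `config.json` might look like the following. Every value here is a placeholder chosen for illustration (only the documented key names come from this repo), so adjust them for your setup:

```json
{
  "is_capcut_env": true,
  "draft_domain": "http://localhost",
  "port": 9001,
  "preview_router": "/draft/preview",
  "openai_api_key": ""
}
```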
```bash
python3 capcut_server.py
```

By default the server runs on port 9001.
Create a draft:
```bash
curl -s -X POST http://localhost:9001/create_draft \
  -H 'Content-Type: application/json' \
  -d '{"width":1080,"height":1920}'
```

You should receive a JSON response containing a `draft_id`.
The main endpoints exposed by capcut_server.py include:
- `POST /create_draft`
- `POST /add_video`
- `POST /add_audio`
- `POST /add_subtitle`
- `POST /add_text`
- `POST /add_image`
- `POST /add_sticker`
- `POST /add_effect`
- `POST /add_video_keyframe`
- `POST /save_draft`
- `POST /query_draft_status`
- `POST /query_script`
- `POST /generate_draft_url`
There are also multiple metadata endpoints for fonts, transitions, masks, text animations, audio effects, and video effects.
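Every endpoint takes a JSON POST body, so a single small wrapper can cover all of them. This is a sketch rather than part of the repo: `call_api` and `SERVER_URL` are names chosen here, and `requests` is assumed to be installed from the project's dependencies.

```python
import requests

SERVER_URL = "http://localhost:9001"


def call_api(endpoint: str, payload: dict) -> dict:
    """POST a JSON payload to one of the local API endpoints."""
    response = requests.post(f"{SERVER_URL}{endpoint}", json=payload)
    response.raise_for_status()
    return response.json()
```

With this helper, the examples below reduce to calls like `call_api("/create_draft", {"width": 1080, "height": 1920})`.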
Create a draft:
```python
import requests

create_response = requests.post(
    "http://localhost:9001/create_draft",
    json={"width": 1080, "height": 1920},
)
create_response.raise_for_status()
draft_id = create_response.json()["output"]["draft_id"]
print(draft_id)
```

Add a video clip:
```python
import requests

response = requests.post(
    "http://localhost:9001/add_video",
    json={
        "draft_id": "your-draft-id",
        "video_url": "/absolute/path/to/clip.mp4",
        "target_start": 0,
        "duration": 5.0,
        "track_name": "video_main",
    },
)
response.raise_for_status()
print(response.json())
```

Save the draft:
```python
import requests

response = requests.post(
    "http://localhost:9001/save_draft",
    json={
        "draft_id": "your-draft-id",
        "draft_folder": "/Users/your-user/Movies/CapCut/User Data/Projects/com.lveditor.draft",
        "draft_name": "example-project",
    },
)
response.raise_for_status()
print(response.json())
```

This script is the simplest complete workflow in the repo for turning a set of local clips into a CapCut draft.
The script:
- Scans a local video directory for supported files
- Creates a new draft through `POST /create_draft`
- Adds each clip sequentially through `POST /add_video`
- Optionally adds subtitles through `POST /add_subtitle`
- Saves the result into CapCut's local draft directory through `POST /save_draft`
- Prints the final draft path so you can open it in Finder
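The steps above can be sketched in Python. This is an illustration, not the script itself: `build_draft`, `clip_duration`, and `sequential_starts` are names invented here, only `.mp4` inputs are scanned, and the request payloads mirror the earlier examples.

```python
import glob
import json
import subprocess


def clip_duration(path: str) -> float:
    """Read a clip's duration in seconds with ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return float(json.loads(out)["format"]["duration"])


def sequential_starts(durations: list[float]) -> list[float]:
    """Timeline start time for each clip when appended back to back."""
    starts, t = [], 0.0
    for d in durations:
        starts.append(t)
        t += d
    return starts


def build_draft(video_dir: str, server_url: str = "http://localhost:9001") -> str:
    """Create a draft, append every clip in order, and return the draft id."""
    import requests  # project dependency; imported here to keep the pure helpers standalone

    draft_id = requests.post(
        f"{server_url}/create_draft",
        json={"width": 1080, "height": 1920},
    ).json()["output"]["draft_id"]

    clips = sorted(glob.glob(f"{video_dir}/*.mp4"))
    durations = [clip_duration(p) for p in clips]
    for path, start, duration in zip(clips, sequential_starts(durations), durations):
        requests.post(
            f"{server_url}/add_video",
            json={
                "draft_id": draft_id,
                "video_url": path,
                "target_start": start,
                "duration": duration,
                "track_name": "video_main",
            },
        )
    return draft_id
```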
Supported file extensions are:
`mp4`, `mov`, `m4v`, `mkv`, `avi`, `webm`
By default the script uses:
- Video input directory:
scripts/videos - Subtitle file:
scripts/captions.srt - CapCut draft root:
~/Movies/CapCut/User Data/Projects/com.lveditor.draft - Server URL:
http://localhost:9001
Start the API server in one terminal:
```bash
python3 capcut_server.py
```

In another terminal, activate your virtualenv if needed, place clips into `scripts/videos`, then run:

```bash
./scripts/create_capcut_draft_from_videos.sh
```

- The script creates `scripts/videos` if it does not already exist.
- It fails immediately if no supported video files are found.
- It uses `ffprobe` to calculate each clip's duration.
- It appends clips in order by increasing timeline start time.
- If `scripts/captions.srt` exists, it imports that file as subtitles.
- If a custom `DRAFT_NAME` is provided, the script sanitizes it before creating the final folder name.
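The exact sanitization rule lives in the script itself; purely for illustration, a hypothetical rule (not the script's actual code) could look like:

```python
import re


def sanitize_draft_name(name: str) -> str:
    """Replace characters unsafe for folder names (illustrative rule only)."""
    cleaned = re.sub(r"[^A-Za-z0-9._-]+", "-", name).strip("-")
    return cleaned or "untitled-draft"
```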
If you want the script to generate captions before building the draft, set:
```bash
AUTO_CAPTIONS=true ./scripts/create_capcut_draft_from_videos.sh
```

When `AUTO_CAPTIONS=true`, the script runs `scripts/generate_srt_with_openai.py` before creating the draft. That helper:

- extracts audio from each clip with `ffmpeg`
- sends the audio to the OpenAI transcription API
- combines the resulting segments into one timeline-aligned SRT file
- writes the final SRT to `CAPTIONS_SRT_PATH`
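"Timeline-aligned" means each clip's subtitle timestamps are shifted by that clip's start time on the overall timeline. A sketch of that logic follows; `combine_segments` and the `(start, end, text)` tuple shape are assumptions made here, not the helper's real interface.

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    total_ms = round(seconds * 1000)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"


def combine_segments(per_clip_segments, clip_starts):
    """Merge per-clip (start, end, text) segments into one SRT document,
    shifting each clip's timestamps by that clip's timeline start."""
    lines, index = [], 1
    for segments, offset in zip(per_clip_segments, clip_starts):
        for start, end, text in segments:
            lines.append(str(index))
            lines.append(
                f"{srt_timestamp(start + offset)} --> {srt_timestamp(end + offset)}"
            )
            lines.append(text)
            lines.append("")  # blank line separates SRT cues
            index += 1
    return "\n".join(lines)
```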
You must provide an API key through either:
- the `OPENAI_API_KEY` environment variable
- `openai_api_key` in `config.json`
You can customize the script without editing it by setting environment variables before running it.
General workflow variables:
- `SERVER_URL`: API base URL, default `http://localhost:9001`
- `VIDEO_DIR`: input clip directory, default `scripts/videos`
- `CAPTIONS_SRT_PATH`: subtitle file path, default `scripts/captions.srt`
- `CAPCUT_DRAFT_ROOT`: destination CapCut drafts directory
- `DRAFT_NAME`: optional final draft folder name
- `TRACK_NAME`: video track name, default `video_main`
- `AUTO_CAPTIONS`: `true` or `false`
- `OPENAI_TRANSCRIBE_MODEL`: default `whisper-1`
- `OPENAI_TRANSCRIBE_LANGUAGE`: optional language hint such as `en`
- `OPENAI_TRANSCRIBE_PROMPT`: optional transcription prompt
- `SUBTITLE_TRACK_NAME`: subtitle track name, default `subtitle`
- `SUBTITLE_PRESET`: subtitle preset selector, default `circuit_electric`
Subtitle styling variables:
`SUBTITLE_FONT`, `SUBTITLE_FONT_SIZE`, `SUBTITLE_BOLD`, `SUBTITLE_ITALIC`, `SUBTITLE_UNDERLINE`, `SUBTITLE_FONT_COLOR`, `SUBTITLE_ALPHA`, `SUBTITLE_VERTICAL`, `SUBTITLE_BORDER_COLOR`, `SUBTITLE_BORDER_WIDTH`, `SUBTITLE_BORDER_ALPHA`, `SUBTITLE_BACKGROUND_COLOR`, `SUBTITLE_BACKGROUND_STYLE`, `SUBTITLE_BACKGROUND_ALPHA`, `SUBTITLE_TRANSFORM_X`, `SUBTITLE_TRANSFORM_Y`, `SUBTITLE_SCALE_X`, `SUBTITLE_SCALE_Y`, `SUBTITLE_ROTATION`
```bash
DRAFT_NAME="launch-cut" \
VIDEO_DIR="$PWD/my_clips" \
CAPTIONS_SRT_PATH="$PWD/my_captions.srt" \
SUBTITLE_FONT_SIZE="12.0" \
SUBTITLE_TRANSFORM_Y="-0.78" \
./scripts/create_capcut_draft_from_videos.sh
```

On success, the script prints the final saved draft path under CapCut's local projects directory. That draft should then appear in the CapCut desktop application.
If draft creation fails:
- confirm `python3 capcut_server.py` is running
- confirm the server is reachable at `SERVER_URL`
- confirm `ffmpeg` and `ffprobe` are installed and available on `PATH`
- confirm CapCut is installed and `CAPCUT_DRAFT_ROOT` points to the correct local drafts directory
- confirm your clip paths are valid and readable
If auto captions fail:
- confirm `OPENAI_API_KEY` is set or `config.json` contains `openai_api_key`
- confirm the transcoded audio for each clip stays under the OpenAI upload size limit enforced by `scripts/generate_srt_with_openai.py`
This repository is based on the open source VectCutAPI project. If you need features or documentation that are not present in this fork, check the upstream repository:
https://github.com/sun-guannan/VectCutAPI