Cursor and Copilot are great for in-editor assistance. ctxeng focuses on a different problem: building a portable, budget-safe, reproducible context bundle you can use with any LLM (chat UI, API, CI).
ctxeng is especially useful when you need:
- deterministic selection (ranked evidence, not “whatever is open”)
- strict token budgeting (fits the model window)
- safety (redaction before output/traces)
- large-repo workflows (RAG chunk retrieval)
- automation (CI, snapshots, tracing)
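The "deterministic selection under a strict token budget" idea in the first two bullets can be sketched in a few lines. This is an illustration only — the keyword scoring, path tie-break, and 4-characters-per-token estimate are stand-ins, not ctxeng's actual algorithm:

```python
def estimate_tokens(text):
    # Crude stand-in: roughly 4 characters per token.
    return max(1, len(text) // 4)

def select_files(files, query, budget):
    # Rank files by query-word frequency; ties break deterministically by path.
    words = set(query.lower().split())
    scored = []
    for path, text in files.items():
        hits = sum(text.lower().count(w) for w in words)
        if hits:
            scored.append((-hits, path, text))
    scored.sort()  # best score first, then lexicographic path: same input, same output
    selected, used = [], 0
    for _neg_hits, path, text in scored:
        cost = estimate_tokens(text)
        if used + cost <= budget:  # greedily pack into the token budget
            selected.append(path)
            used += cost
    return selected

files = {
    "auth.py": "def login(): check auth token auth",
    "billing.py": "invoice code " * 200,
    "session.py": "token refresh for login session",
}
print(select_files(files, "auth login", budget=50))  # → ['auth.py', 'session.py']
```

The point is reproducibility: the same repo, query, and budget always yield the same bundle, unlike "whatever files happen to be open".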
No. ctxeng selects a subset of files (or chunks with `--rag`) based on your query and token budget.
Also: ctxeng itself does not call an LLM unless you use an optional integration function (e.g. `ask_claude()`).
When enabled (the default), ctxeng masks common secret and PII patterns before token counting, tracing, or output.
To disable:
```
ctxeng build "Your query" --no-redact
```

- Discovery + keyword/path scoring: works for many text/code files.
- Python-only features:
- import graph expansion
- skeleton mode
- JS/TS/Go symbols: supported via optional tree-sitter dependencies:

  ```
  pip install "ctxeng[ast]"
  ```

It is currently under development and disabled to avoid unstable activation in releases. Use the CLI/Python package.
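Going back to redaction for a moment: pattern-based masking of the kind described above can be sketched as follows. The three patterns here are common illustrative shapes, not ctxeng's actual rule set:

```python
import re

# Illustrative redaction pass: mask a few common secret/PII shapes before the
# text is counted, traced, or emitted. ctxeng's real patterns may differ.
PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),               # AWS access key id
    (re.compile(r"\bgh[pousr]_[A-Za-z0-9]{36}\b"), "[REDACTED_GH_TOKEN]"), # GitHub token
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),          # email address
]

def redact(text):
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(redact("contact: alice@example.com key=AKIAABCDEFGHIJKLMNOP"))
# → contact: [REDACTED_EMAIL] key=[REDACTED_AWS_KEY]
```

Because masking happens before token counting and tracing, the redacted form is what occupies budget and what lands in any trace output.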
That usually means Codecov requires a token for uploads on protected branches.
Fix:
- Add `CODECOV_TOKEN` in your GitHub repo secrets
- Or configure Codecov to allow tokenless uploads for your setup
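On GitHub Actions, passing the secret to the Codecov upload step typically looks something like this (fragment; the action version and step name are examples):

```yaml
# .github/workflows/ci.yml (fragment)
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v4
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
```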
Install it in the environment you’re using:
```
python -m pip install -U twine build
```

Most common causes:
- that version already exists on PyPI
- you're uploading old artifacts from `dist/`
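To rule out the first cause, you can check whether the version is already on PyPI. A hypothetical helper using PyPI's public JSON API (`version_exists` and the sample data are illustrative; the membership check is shown offline):

```python
import json
from urllib.request import urlopen

def version_exists(releases, version):
    # 'releases' is the "releases" mapping from PyPI's JSON API response.
    return version in releases

def fetch_releases(package):
    # Queries PyPI's public JSON API (network access required).
    with urlopen(f"https://pypi.org/pypi/{package}/json") as resp:
        return json.load(resp)["releases"]

# Offline illustration with a fake API response:
releases = {"0.1.0": [], "0.2.0": []}
print(version_exists(releases, "0.2.0"))  # True  → bump the version before uploading
print(version_exists(releases, "0.3.0"))  # False → safe to upload
```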
Recommended flow:
```
rm -rf dist build
python -m build
python -m twine check dist/*
python -m twine upload dist/*
```

Start with the smallest workflow:
```
pip install ctxeng
ctxeng build "Fix the auth bug" --git-diff --fmt markdown --output ctx.md
```

Paste `ctx.md` into your LLM. Add `--trace` once you want explainability, and `--rag` once your repo is large.
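As a rough picture of what `--rag`-style chunk retrieval involves — split files into overlapping line windows, score each window against the query, keep only the top hits instead of whole files — here is an illustrative sketch, not ctxeng's actual implementation:

```python
def chunk_lines(text, size=5, overlap=2):
    # Split text into overlapping windows of `size` lines each.
    lines = text.splitlines()
    step = size - overlap
    chunks = []
    for start in range(0, max(1, len(lines)), step):
        chunk = "\n".join(lines[start:start + size])
        if chunk:
            chunks.append((start + 1, chunk))  # record 1-based start line
        if start + size >= len(lines):
            break
    return chunks

def top_chunks(text, query, k=2):
    # Score each chunk by query-word frequency; keep the best k.
    words = set(query.lower().split())
    scored = [
        (sum(chunk.lower().count(w) for w in words), start, chunk)
        for start, chunk in chunk_lines(text)
    ]
    scored.sort(key=lambda t: (-t[0], t[1]))  # best first, ties by position
    return [(start, chunk) for score, start, chunk in scored[:k] if score > 0]

source = "\n".join(["x = 1"] * 4 + ["check_auth(token)"] + ["y = 2"] * 4)
print([start for start, _ in top_chunks(source, "auth token")])  # → [1, 4]
```

On a large repo this is why `--rag` matters: irrelevant regions of big files never enter the bundle, so the budget goes to the evidence that actually matches the query.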