## Parent
Part of #542 — Multi-provider LLM support

## Depends on
#545 (CLI flags), #544 (env var wiring)
## What
Allow per-workspace LLM configuration in `.codeframe/config.yaml` so teams can commit their provider/model choice alongside the project without requiring env vars on every machine.
## Target config format

```yaml
# .codeframe/config.yaml
llm:
  provider: openai
  model: qwen2.5-coder:7b
  base_url: http://localhost:11434/v1  # optional, for local models
  # optional per-purpose overrides (maps to Purpose enum)
  planning_model: gpt-4o
  execution_model: qwen2.5-coder:7b
  generation_model: qwen2.5-coder:1.5b
```
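A minimal sketch of what the `LLMConfig` dataclass and its loader could look like. Field names mirror the YAML keys above; the class shape, method names, and the `"planning"`/`"execution"`/`"generation"` purpose keys are assumptions about the eventual `codeframe/core/config.py`, not its actual API:

```python
from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, Optional


@dataclass
class LLMConfig:
    """Per-workspace LLM settings from .codeframe/config.yaml (hypothetical shape)."""
    provider: Optional[str] = None
    model: Optional[str] = None
    base_url: Optional[str] = None
    # per-purpose overrides, e.g. {"planning": "gpt-4o"}
    purpose_models: Dict[str, str] = field(default_factory=dict)

    @classmethod
    def from_mapping(cls, data: dict) -> "LLMConfig":
        """Build from an already-parsed config.yaml mapping; missing keys stay None."""
        llm = data.get("llm") or {}
        overrides = {
            purpose: llm[key]
            for purpose, key in (
                ("planning", "planning_model"),
                ("execution", "execution_model"),
                ("generation", "generation_model"),
            )
            if key in llm
        }
        return cls(
            provider=llm.get("provider"),
            model=llm.get("model"),
            base_url=llm.get("base_url"),
            purpose_models=overrides,
        )

    @classmethod
    def from_workspace(cls, root: Path) -> "LLMConfig":
        """Load .codeframe/config.yaml under root; a missing file means all defaults."""
        import yaml  # PyYAML, imported lazily; assumed available since config.yaml is parsed elsewhere

        path = root / ".codeframe" / "config.yaml"
        if not path.exists():
            return cls()
        return cls.from_mapping(yaml.safe_load(path.read_text()) or {})
```

Splitting parsing (`from_mapping`) from file I/O (`from_workspace`) keeps the priority-order tests free of filesystem setup.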
## Priority order (lowest → highest wins)
1. Config file (`.codeframe/config.yaml`)
2. Env var (`CODEFRAME_LLM_PROVIDER`, `CODEFRAME_LLM_MODEL`)
3. CLI flag (`--llm-provider`, `--llm-model`)
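The precedence above can be sketched as a small resolver. Function and parameter names here are illustrative, not the actual `ModelSelector` API:

```python
import os
from typing import Optional


def resolve_setting(
    cli_value: Optional[str],
    env_var: str,
    config_value: Optional[str],
) -> Optional[str]:
    """CLI flag beats env var beats config file; None means 'not set' at that layer."""
    if cli_value is not None:
        return cli_value
    env_value = os.environ.get(env_var)
    if env_value:
        return env_value
    return config_value


# Example: the provider comes from the config file only when neither the
# CLI flag nor the env var is set.
provider = resolve_setting(
    cli_value=None,                    # no --llm-provider passed
    env_var="CODEFRAME_LLM_PROVIDER",  # unset in this example
    config_value="openai",             # from .codeframe/config.yaml
)
```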
## Files to modify
- `codeframe/core/config.py` — add `LLMConfig` dataclass, load from config.yaml
- `codeframe/adapters/llm/base.py` — extend `ModelSelector` to read from `LLMConfig`
- `codeframe/core/runtime.py` — load workspace config before resolving provider
- `tests/core/test_config.py` — test config loading and priority order
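A sketch of how the priority-order tests in `tests/core/test_config.py` might look. The `resolve` helper is inlined here as a stand-in for the eventual resolution API, which is an assumption:

```python
import os


def resolve(cli, env_var, config):
    """Inlined stand-in for the real resolver: CLI > env var > config file."""
    return cli or os.environ.get(env_var) or config


def test_cli_flag_beats_env_and_config():
    os.environ["CODEFRAME_LLM_MODEL"] = "env-model"
    try:
        assert resolve("cli-model", "CODEFRAME_LLM_MODEL", "config-model") == "cli-model"
    finally:
        del os.environ["CODEFRAME_LLM_MODEL"]


def test_env_var_beats_config():
    os.environ["CODEFRAME_LLM_MODEL"] = "env-model"
    try:
        assert resolve(None, "CODEFRAME_LLM_MODEL", "config-model") == "env-model"
    finally:
        del os.environ["CODEFRAME_LLM_MODEL"]


def test_config_used_as_fallback():
    os.environ.pop("CODEFRAME_LLM_MODEL", None)
    assert resolve(None, "CODEFRAME_LLM_MODEL", "config-model") == "config-model"
```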
## Notes
- `.codeframe/config.yaml` already exists for other settings — add `llm:` as a new top-level key
- `base_url` in config lets teams point at a shared Ollama instance without env vars
- Config file should be committed to the repo (not gitignored) — document that API keys must still come from env vars, never the config file
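To illustrate the API-key rule in the last note, a sketch of a key lookup that only ever reads the environment. The provider-to-variable mapping is an assumption for illustration, not the actual adapter code:

```python
import os


def api_key_for(provider: str) -> str:
    """API keys come from env vars only — never from config.yaml (mapping is assumed)."""
    env_names = {
        "openai": "OPENAI_API_KEY",
        "anthropic": "ANTHROPIC_API_KEY",
    }
    var = env_names.get(provider, f"{provider.upper()}_API_KEY")
    key = os.environ.get(var)
    if key is None:
        raise RuntimeError(f"{var} is not set; API keys must come from the environment")
    return key
```

Because the committed config carries only provider/model/base_url, cloning the repo is safe: each machine supplies its own key via the environment.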
## Acceptance criteria
- `llm: {provider: openai, model: qwen2.5-coder:7b}` in config.yaml selects that provider
- `base_url` in config routes to a custom endpoint