[Phase 4.D] Multi-provider LLM support — connect any model to CodeFrame #542

@frankbria

Description

Overview

CodeFrame should be a delivery harness, not an Anthropic-specific tool. The agent execution layer should accept any LLM — Anthropic, OpenAI, Ollama, Qwen-Coder, GLM, DeepSeek, vLLM, LM Studio, or any OpenAI-compatible endpoint — chosen by the user at runtime.

Current State

The right abstraction already exists:

  • LLMProvider ABC in codeframe/adapters/llm/base.py with complete() / stream(), Purpose enum, and env var overrides
  • AnthropicProvider implements it correctly
  • ReactAgent uses it correctly — never imports anthropic directly

The gap is in the worker agents and CLI plumbing:

  • worker_agent.py, backend_worker_agent.py, frontend_worker_agent.py, test_worker_agent.py all hardcode AsyncAnthropic() directly
  • streaming_chat.py is wired to anthropic.AsyncAnthropic().messages.stream()
  • No --llm-provider / --llm-model CLI flags
  • get_provider() factory only supports "anthropic" and "mock"
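Closing the worker-agent gap is a dependency-injection change: each worker should receive an `LLMProvider` at construction instead of instantiating `AsyncAnthropic()` itself. A minimal sketch of the pattern, assuming a simplified provider interface — the class and method names below (`WorkerAgent`, `complete(prompt)`) are illustrative stand-ins, not the actual CodeFrame signatures:

```python
import asyncio
from abc import ABC, abstractmethod

# Illustrative stand-in for the LLMProvider ABC in
# codeframe/adapters/llm/base.py (real interface differs).
class LLMProvider(ABC):
    @abstractmethod
    async def complete(self, prompt: str) -> str: ...

class MockProvider(LLMProvider):
    async def complete(self, prompt: str) -> str:
        return f"mock response to: {prompt}"

# Hypothetical worker: the provider is injected, so the agent
# never imports anthropic (or openai) directly.
class WorkerAgent:
    def __init__(self, provider: LLMProvider):
        self.provider = provider

    async def run(self, task: str) -> str:
        return await self.provider.complete(task)

if __name__ == "__main__":
    agent = WorkerAgent(provider=MockProvider())
    print(asyncio.run(agent.run("refactor module X")))
```

With this shape, swapping Anthropic for an OpenAI-compatible backend touches only the construction site (CLI plumbing), never the worker logic.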

Architecture

Because the OpenAI chat completions API has become a de facto industry standard (Ollama, vLLM, LM Studio, Qwen, GLM, and DeepSeek all expose OpenAI-compatible endpoints), two adapters cover the entire ecosystem:

codeframe/adapters/llm/
├── base.py          ✅ LLMProvider ABC, Purpose enum, ModelSelector
├── anthropic.py     ✅ Anthropic implementation
├── openai.py        ❌ OpenAI-compatible (covers OpenAI + any base_url override)
└── mock.py          ✅ Test mock
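The `openai.py` adapter would most likely wrap the official `openai` SDK with a `base_url` override, but the wire format it targets is worth seeing concretely: every compatible server accepts the same `POST /v1/chat/completions` payload, and only the base URL changes. A stdlib-only sketch of that request construction (`build_chat_request` is an illustrative helper, not part of CodeFrame):

```python
import json
from urllib.request import Request

def build_chat_request(base_url: str, api_key: str, model: str,
                       messages: list[dict]) -> Request:
    """Build a POST to an OpenAI-compatible chat completions endpoint.

    OpenAI, Ollama, vLLM, and LM Studio all accept this payload
    shape; switching backends only changes base_url and model.
    """
    payload = {"model": model, "messages": messages}
    return Request(
        url=base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Pointing at a local Ollama server instead of api.openai.com:
req = build_chat_request(
    "http://localhost:11434/v1",
    "ollama",  # Ollama accepts any non-empty key
    "qwen2.5-coder:7b",
    [{"role": "user", "content": "hello"}],
)
```

This is why one adapter covers the long tail: the `base_url` override is the only provider-specific knob.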

Runtime selection:

cf work start <task> --execute --llm-provider openai --llm-model qwen2.5-coder:7b
cf work start <task> --execute --llm-provider openai --llm-model gpt-4o
cf work start <task> --execute  # default: anthropic, claude-sonnet-4-5

Subissues (implement in order)

Acceptance Criteria

  • cf work start <task> --execute --llm-provider openai --llm-model gpt-4o works end-to-end
  • OPENAI_BASE_URL=http://localhost:11434/v1 routes to local Ollama
  • Default behavior (no flags) unchanged — still uses Anthropic
  • All existing tests pass
  • Worker agents use LLMProvider abstraction, not AsyncAnthropic directly
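The Ollama criterion above, sketched as a shell session — the `cf` invocation assumes the flags proposed in this issue, and the key value is arbitrary since Ollama ignores it:

```shell
# Ollama serves an OpenAI-compatible API on port 11434 by default.
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_API_KEY=ollama   # any non-empty value works

cf work start my-task --execute --llm-provider openai --llm-model qwen2.5-coder:7b
```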

Metadata



Labels

  • architecture — System architecture and design patterns
  • enhancement — New feature or request
  • phase-4 — Phase 4: Multi-Agent Coordination
