[4.D.1] OpenAI-compatible LLM adapter (adapters/llm/openai.py) #543
Closed
Labels: enhancement (New feature or request), phase-4 (Phase 4: Multi-Agent Coordination)
Description
Parent
Part of #542 — Multi-provider LLM support
What
Create `codeframe/adapters/llm/openai.py` implementing the `LLMProvider` ABC.
A single implementation with a configurable base_url covers the entire OpenAI-compatible ecosystem: OpenAI, Ollama, vLLM, LM Studio, Qwen, GLM, Deepseek, Groq, Together, etc.
Files to create/modify
- Create `codeframe/adapters/llm/openai.py` — `OpenAIProvider(LLMProvider)`
- Create `tests/adapters/test_llm_openai.py`
Implementation notes
- Use the `openai` SDK (`pip install openai`) — already OpenAI-compatible
- Constructor: `OpenAIProvider(api_key, model, base_url=None)`; `base_url=None` → OpenAI production; `base_url="http://localhost:11434/v1"` → Ollama
- `complete()`: translate the `Message` list → `openai.chat.completions.create()`, return `LLMResponse`
- `stream()`: use `stream=True`, yield `LLMResponse` chunks
- Tool use: OpenAI uses the `tool_calls`/`function` format; translate to/from `ToolCall`/`ToolResult` from `base.py`
- Map Anthropic stop reasons (`end_turn`, `tool_use`) ↔ OpenAI stop reasons (`stop`, `tool_calls`)
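Two of the translation steps above can be sketched as plain functions. `Message` here is a hypothetical stand-in for the type in `base.py`, and the stop-reason table covers only the pairs named in the last bullet:

```python
from dataclasses import dataclass

# Hypothetical stand-in for the Message type in base.py.
@dataclass
class Message:
    role: str     # "system" | "user" | "assistant"
    content: str

def to_openai_messages(messages: list[Message]) -> list[dict]:
    """Translate the internal Message list into the shape
    openai.chat.completions.create(messages=...) expects."""
    return [{"role": m.role, "content": m.content} for m in messages]

# Stop-reason mapping from the note above; unknown reasons pass through.
OPENAI_TO_ANTHROPIC_STOP = {"stop": "end_turn", "tool_calls": "tool_use"}

def normalize_stop_reason(reason: str) -> str:
    return OPENAI_TO_ANTHROPIC_STOP.get(reason, reason)
```

The reverse mapping (for replaying Anthropic-style history to an OpenAI endpoint) is just the inverted dict.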
Tests
- Unit tests with `respx` or `pytest-httpx` mocking the OpenAI endpoint
- Test `complete()` with and without tools
- Test `stream()`
- Test `base_url` override routes to custom endpoint
- Test error handling (rate limit, auth error, model not found)
Acceptance criteria
- `OpenAIProvider` passes the same interface contract as `AnthropicProvider`
- Works with a real OpenAI API key in integration test
- Works with Ollama via `base_url` override
- Tool use round-trip works (tool call → tool result → final answer)
- All new tests pass, existing tests unaffected
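The tool-use round-trip hinges on the format translation noted earlier: OpenAI nests the call under `"function"` with JSON-encoded arguments, and the result goes back as a `role="tool"` message tied to the call id. A minimal sketch, with `ToolCall` as a hypothetical stand-in for the type in `base.py`:

```python
import json
from dataclasses import dataclass

# Hypothetical stand-in for the ToolCall type in base.py.
@dataclass
class ToolCall:
    id: str
    name: str
    arguments: dict

def from_openai_tool_call(tc: dict) -> ToolCall:
    """OpenAI nests name/arguments under "function"; arguments arrive
    as a JSON string and are decoded here."""
    return ToolCall(id=tc["id"],
                    name=tc["function"]["name"],
                    arguments=json.loads(tc["function"]["arguments"]))

def to_openai_tool_result(call_id: str, result: str) -> dict:
    """A tool result is sent back as a role="tool" message referencing
    the originating tool_call_id."""
    return {"role": "tool", "tool_call_id": call_id, "content": result}
```

Appending the tool-result message to the conversation and calling `complete()` again yields the final answer, closing the loop.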