[4.D.1] OpenAI-compatible LLM adapter (adapters/llm/openai.py) #543

@frankbria

Description

Parent

Part of #542 — Multi-provider LLM support

What

Create codeframe/adapters/llm/openai.py implementing the LLMProvider ABC.

A single implementation with a configurable base_url covers the entire OpenAI-compatible ecosystem: OpenAI, Ollama, vLLM, LM Studio, Qwen, GLM, Deepseek, Groq, Together, etc.

Files to create/modify

  • Create codeframe/adapters/llm/openai.py — OpenAIProvider(LLMProvider)
  • Create tests/adapters/test_llm_openai.py

Implementation notes

  • Use the openai SDK (pip install openai) — the same client works against any OpenAI-compatible server
  • Constructor: OpenAIProvider(api_key, model, base_url=None)
    • base_url=None → OpenAI production
    • base_url="http://localhost:11434/v1" → Ollama
  • complete(): translate Message list → openai.chat.completions.create(), return LLMResponse
  • stream(): use stream=True, yield LLMResponse chunks
  • Tool use: OpenAI uses tool_calls / function format; translate to/from ToolCall / ToolResult from base.py
  • Map Anthropic stop reasons (end_turn, tool_use) ↔ OpenAI finish reasons (stop, tool_calls)

Tests

  • Unit tests with respx or pytest-httpx mocking the OpenAI endpoint
  • Test complete() with and without tools
  • Test stream()
  • Test base_url override routes to custom endpoint
  • Test error handling (rate limit, auth error, model not found)

Acceptance criteria

  • OpenAIProvider passes the same interface contract as AnthropicProvider
  • Works with real OpenAI API key in integration test
  • Works with Ollama via base_url override
  • Tool use round-trip works (tool call → tool result → final answer)
  • All new tests pass, existing tests unaffected
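The tool-use round-trip in OpenAI's wire format looks like this (plain dicts, field names per OpenAI's chat API; the weather tool is a made-up example):

```python
import json

messages = [
    {"role": "user", "content": "What is the weather in Paris?"},
    # 1. Assistant turn ends with finish_reason="tool_calls" and a tool_calls array.
    {"role": "assistant", "content": None, "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_weather", "arguments": json.dumps({"city": "Paris"})},
    }]},
    # 2. The tool result goes back as a role="tool" message keyed by tool_call_id.
    {"role": "tool", "tool_call_id": "call_1", "content": json.dumps({"temp_c": 18})},
    # 3. A second completions call on this list yields the final answer
    #    with finish_reason="stop" (mapped back to end_turn).
]
```

The adapter's job is translating this shape to and from ToolCall / ToolResult without the caller noticing the provider switch.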

Metadata

Labels: enhancement (New feature or request), phase-4 (Phase 4: Multi-Agent Coordination)
