
[BUG] Ollama provider works only with OpenAI compatible endpoint #581

@altxtech

Description

Basic checks

  • I searched existing issues - this hasn't been reported
  • I can reproduce this consistently
  • This is a RubyLLM bug, not my application code

What's broken?

Context

The native API base URL is http://localhost:11434/api
The OpenAI-compatible API base URL is http://localhost:11434/v1
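
The two surfaces are not just different paths; they take different request bodies and return differently shaped JSON. A minimal sketch with Net::HTTP (model name and prompt are just examples):

require "net/http"
require "json"
require "uri"

def post_json(url, payload)
  uri = URI(url)
  res = Net::HTTP.post(uri, payload.to_json, "Content-Type" => "application/json")
  JSON.parse(res.body)
end

messages = [{ role: "user", content: "Which model am I talking to?" }]

# Native API: POST /api/chat, reply nested under "message" => "content"
native = post_json("http://127.0.0.1:11434/api/chat",
                   { model: "qwen3:4b", messages: messages, stream: false })
puts native.dig("message", "content")

# OpenAI-compatible API: POST /v1/chat/completions, reply under "choices"
openai = post_json("http://127.0.0.1:11434/v1/chat/completions",
                   { model: "qwen3:4b", messages: messages })
puts openai.dig("choices", 0, "message", "content")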

Problem

There are no problems using the OpenAI-compatible endpoint. In fact, there are two ways to do it.

  1. With the openai provider, as expected:
require "ruby_llm"

RubyLLM.configure do |config|
  config.openai_api_base = 'http://127.0.0.1:11434/v1'
  config.openai_api_key = 'ollama'
end

chat = RubyLLM.chat(
  provider: :openai,
  model: "qwen3:4b",
  assume_model_exists: true
)

response = chat.ask "Which model am I talking to?"
puts response.content
  2. With the ollama provider (this probably shouldn't work):
require "ruby_llm"

RubyLLM.configure do |config|
  config.ollama_api_base = 'http://127.0.0.1:11434/v1' # Wrong! should be `/api`
end

chat = RubyLLM.chat(
  provider: :ollama,
  model: "qwen3:4b"
)

response = chat.ask "Which model am I talking to?"
puts response.content

But if you try to use the proper endpoint with the ollama provider, you get 404 errors:

require "ruby_llm"

RubyLLM.configure do |config|
  config.ollama_api_base = 'http://127.0.0.1:11434/api' # Should work, but yields 404 errors
end

chat = RubyLLM.chat(
  provider: :ollama,
  model: "qwen3:4b"
)

response = chat.ask "Which model am I talking to?"
puts response.content

Error:

/home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/ruby_llm-1.9.1/lib/ruby_llm/error.rb:69:in 'RubyLLM::ErrorMiddleware.parse_error': 404 page not found (RubyLLM::Error)
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/ruby_llm-1.9.1/lib/ruby_llm/error.rb:40:in 'block in RubyLLM::ErrorMiddleware#call'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/faraday-2.14.0/lib/faraday/response.rb:46:in 'Faraday::Response#on_complete'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/ruby_llm-1.9.1/lib/ruby_llm/error.rb:39:in 'RubyLLM::ErrorMiddleware#call'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/faraday-2.14.0/lib/faraday/middleware.rb:56:in 'Faraday::Middleware#call'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/faraday-2.14.0/lib/faraday/middleware.rb:56:in 'Faraday::Middleware#call'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/faraday-multipart-1.2.0/lib/faraday/multipart/middleware.rb:27:in 'Faraday::Multipart::Middleware#call'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/faraday-retry-2.3.2/lib/faraday/retry/middleware.rb:171:in 'block in Faraday::Retry::Middleware#call'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/faraday-retry-2.3.2/lib/faraday/retry/retryable.rb:7:in 'Faraday::Retryable#with_retries'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/faraday-retry-2.3.2/lib/faraday/retry/middleware.rb:167:in 'Faraday::Retry::Middleware#call'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/faraday-2.14.0/lib/faraday/middleware.rb:56:in 'Faraday::Middleware#call'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/faraday-2.14.0/lib/faraday/response/logger.rb:25:in 'Faraday::Response::Logger#call'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/faraday-2.14.0/lib/faraday/rack_builder.rb:153:in 'Faraday::RackBuilder#build_response'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/faraday-2.14.0/lib/faraday/connection.rb:452:in 'Faraday::Connection#run_request'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/faraday-2.14.0/lib/faraday/connection.rb:280:in 'Faraday::Connection#post'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/ruby_llm-1.9.1/lib/ruby_llm/connection.rb:37:in 'RubyLLM::Connection#post'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/ruby_llm-1.9.1/lib/ruby_llm/provider.rb:237:in 'RubyLLM::Provider#sync_response'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/ruby_llm-1.9.1/lib/ruby_llm/provider.rb:58:in 'RubyLLM::Provider#complete'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/ruby_llm-1.9.1/lib/ruby_llm/chat.rb:125:in 'RubyLLM::Chat#complete'
	from /home/andre/.rbenv/versions/3.4.7/lib/ruby/gems/3.4.0/gems/ruby_llm-1.9.1/lib/ruby_llm/chat.rb:35:in 'RubyLLM::Chat#ask'
	from ruby_ralph.rb:12:in '<main>'

The Ollama server logs show that RubyLLM is still trying to use it as an OpenAI-compatible API:

[GIN] 2026/01/25 - 17:47:24 | 404 |       4.358µs |       127.0.0.1 | POST     "/api/chat/completions"

This is very counterintuitive. As a user, I would expect the ollama provider to work with Ollama's native API.
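
Presumably the provider always requests the OpenAI-style relative chat path, so the configured base only changes the prefix. A sketch (the relative path is inferred from the GIN log above, not from RubyLLM's source):

base = "http://127.0.0.1:11434/api" # native base from the failing script
path = "chat/completions"           # OpenAI-style chat path (assumed)
puts "#{base}/#{path}"              # => http://127.0.0.1:11434/api/chat/completions -> 404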

Solution Ideas

I think there are three ways to handle this:

  1. Adding some documentation on this behavior could help
  2. Deprecate the ollama provider and guide users to the OpenAI-compatible endpoint instead. This is a perfectly acceptable way to interact with Ollama, and I think it is more intuitive than having two separate, redundant providers. It should also be easier to maintain.
  3. Patch the ollama provider to use the native API (a rough sketch of the needed translation follows this list).
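
For option 3, the core change is the chat path and the response shape. A rough sketch of the translation a native-API provider would need; the module and method names here are hypothetical illustrations, not RubyLLM's actual provider internals:

require "json"

# Hypothetical helpers, not RubyLLM's provider API.
module NativeOllamaSketch
  CHAT_PATH = "api/chat" # native endpoint, relative to http://host:11434/

  # OpenAI-style messages already match the native "messages" field,
  # so the request change is mostly the path plus stream: false.
  def self.render_payload(model, messages)
    { model: model, messages: messages, stream: false }
  end

  # Native responses nest the reply under "message", not "choices".
  def self.parse_content(body)
    JSON.parse(body).dig("message", "content")
  end
end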

How to reproduce

Sample scripts are in the issue description above.

Expected behavior

The ollama provider should work with the native API endpoint.

What actually happened

It tries to talk to the OpenAI-compatible endpoint instead.

Environment

ruby 3.4.7
ruby_llm 1.9.1
Provider ollama
Arch Linux btw
