fix(llmobs): skip downstream OpenAI span check for litellm_proxy/ models #17237
Draft
ZStriker19 wants to merge 1 commit into main from
Models with the litellm_proxy/ prefix route through a proxy and never invoke the OpenAI integration directly, so the downstream span check was incorrectly suppressing LLMObs span submission for these requests. Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
Problem

The ddtrace litellm integration produces empty LLMObs spans when using a `litellm_proxy/` model (e.g. `litellm_proxy/azure-gpt-5-nano`). The request routes through a proxy, so the OpenAI integration never fires, but the span was being suppressed as if it would.

Root Cause

`_has_downstream_openai_span()` does a substring check for `"azure"`, `"gpt"`, and `"openai"` in the model name. A model like `litellm_proxy/azure-gpt-5-nano` matches `"azure"`, so `submit_to_llmobs=False` is set and the span is dropped entirely.

Fix

Short-circuit in `_has_downstream_openai_span()` when the model name starts with `litellm_proxy/`: these requests always go through a proxy, never directly to OpenAI/Azure.

Testing

Added 9 parametrized unit tests for `_has_downstream_openai_span()` covering: proxy suppression (the fix), normal OpenAI/Azure passthrough, streaming, disabled OpenAI integration, and non-OpenAI models. No VCR cassette is needed; the method is pure logic.

Closes MLOB-6958
🤖 Generated with Claude Code