chore(llmobs): dac strip io from OpenAI #13791
Conversation
Bootstrap import analysis

Comparison of import times between this PR and base.

Summary
The average import time from this PR is: 277 ± 3 ms.
The average import time from base is: 279 ± 3 ms.
The import time difference between this PR and base is: -1.8 ± 0.1 ms.

Import time breakdown
The following import paths have shrunk:
Benchmarks

Benchmark execution time: 2025-06-26 21:47:55
Comparing candidate commit 06e2b01 in PR branch
Found 0 performance improvements and 0 performance regressions! Performance is the same for 561 metrics, 3 unstable metrics.
@@ -164,7 +178,7 @@ def _llmobs_set_meta_tags_from_embedding(span: Span, kwargs: Dict[str, Any], res
        span._set_ctx_item(OUTPUT_VALUE, "[{} embedding(s) returned]".format(len(resp.data)))

    @staticmethod
    def _extract_llmobs_metrics_tags(span: Span, resp: Any, span_kind: str) -> Dict[str, Any]:
    def _extract_llmobs_metrics_tags(span: Span, resp: Any, span_kind: str) -> Optional[Dict[str, Any]]:
🟠 Code Quality Violation
do not use Any, use a concrete type
Use the Any type very carefully. Most of the time, the Any type is used because we do not know exactly what type is being used. If you want to specify that a value can be of any type, use object instead of Any.
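For context on the violation, a minimal sketch of the difference between Any and object (purely illustrative, not code from this PR):

```python
from typing import Any


def total_tokens_from_any(resp: Any) -> int:
    # `Any` turns off type checking entirely: a misspelled attribute here
    # would only surface at runtime.
    return resp.usage.total_tokens


def total_tokens_from_object(resp: object) -> int:
    # `object` forces an explicit narrowing step before attribute access,
    # which is the direction the linter is nudging toward.
    usage = getattr(resp, "usage", None)
    total = getattr(usage, "total_tokens", None)
    return total if isinstance(total, int) else 0
```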
Looking good, left some comments / questions! Lmk when you need another review!
"engine", | ||
"suffix", | ||
"max_tokens", | ||
"temperature", | ||
"top_p", | ||
"n", | ||
"stream", | ||
"logprobs", | ||
"echo", |
Do you mind describing why we are leaving only these parameters? I guess echo seems to be related to audio models only, which seems fine to leave, but what about engine and suffix? I am a bit confused as to what engine refers to, as I do not see it on the list of request arguments in the OpenAI API docs.
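For reference, a rough sketch of how an allow-list like this is usually consumed; the helper name and tag format below are assumptions, not the hook's actual code:

```python
# Hypothetical helper showing how an allow-list like _request_kwarg_params
# might be applied; the real _EndpointHook._record_request may differ.
_request_kwarg_params = ("engine", "suffix", "max_tokens", "temperature", "top_p", "n", "stream", "logprobs", "echo")


def _tag_request_kwargs(span, kwargs):
    # Only allow-listed parameters are copied onto the APM span, so free-form
    # fields such as `prompt` or `messages` never land there.
    for name in _request_kwarg_params:
        if name in kwargs:
            span.set_tag_str("openai.request.%s" % name, str(kwargs[name]))
```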
The base _EndpointHook class has its own _record_request method which does some request-specific tagging on the APM span. Are we OK leaving all of those tags on the APM span? For other providers, I do not think we have any of this information (besides the model and provider), so it would be more consistent to remove this tagging; however, is the idea to keep it because we do not have this information on the LLMObs span?

I also noticed that we seem to tag the provider as "openai.request.client" here, which seems inconsistent with other integrations, where we refer to this as the provider.
    def _record_request(self, pin, integration, instance, span, args, kwargs):
        super()._record_request(pin, integration, instance, span, args, kwargs)
Do we need these lines? If we remove this, won't the base class's _record_request method be called automatically? (Possibly other examples below.)
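A minimal illustration of the Python behaviour this comment relies on; the class and method names here are made up, not the real hooks:

```python
class _BaseHook:
    def _record_request(self, span, kwargs):
        print("base _record_request ran")


class _CompletionHook(_BaseHook):
    # No override here: method resolution falls through to the base class,
    # so an override whose entire body is super()._record_request(...) can
    # simply be deleted.
    pass


_CompletionHook()._record_request(span=None, kwargs={})  # prints the base message
```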
@@ -294,7 +233,7 @@ class _ChatCompletionWithRawResponseHook(_ChatCompletionHook):


class _EmbeddingHook(_EndpointHook):
    _request_arg_params = ("api_key", "api_base", "api_type", "request_id", "api_version", "organization")
    _request_kwarg_params = ("model", "engine", "user")
Why do we remove user here?
span.set_tag_str("openai.response.choices.%d.finish_reason" % choice.index, str(choice.finish_reason)) | ||
if integration.is_pc_sampled_span(span): | ||
span.set_tag_str("openai.response.choices.%d.text" % choice.index, integration.trunc(choice.text)) | ||
integration.record_usage(span, resp.usage) | ||
return resp |
Is it just me or is the logic here extremely convoluted 🤣 There are two separate conditional checks for if not resp. I know this isn't in scope of this PR but I wonder if we can refactor this a bit to make this more readable while we're already working on this part of the code! Lmk if you think it makes sense to do this in a different PR though.
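A speculative sketch of the kind of refactor being suggested: collapse the duplicated `if not resp` branches into a single early return. This is not the integration's actual code, just the shape of it:

```python
def _record_response(span, integration, resp):
    # Handle the empty/streamed case once, up front, instead of in two places.
    if not resp:
        return resp
    for choice in resp.choices:
        span.set_tag_str("openai.response.choices.%d.finish_reason" % choice.index, str(choice.finish_reason))
        if integration.is_pc_sampled_span(span):
            span.set_tag_str("openai.response.choices.%d.text" % choice.index, integration.trunc(choice.text))
    integration.record_usage(span, resp.usage)
    return resp
```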
@@ -355,11 +333,15 @@ def _set_token_metrics_from_streamed_response(span, response, prompts, messages,
    estimated, prompt_tokens = _compute_prompt_tokens(model_name, prompts, messages)
    estimated, completion_tokens = _compute_completion_tokens(response, model_name)
    total_tokens = prompt_tokens + completion_tokens
    span.set_metric("openai.response.usage.prompt_tokens", prompt_tokens)
    span.set_metric("openai.request.prompt_tokens_estimated", int(estimated))
The estimated variable is no longer being used; was this intentional? Is there any downstream impact? It might be worth checking with @Yun-Kim, who may know more about what this is used for, if anything!
@@ -133,7 +147,7 @@ def _llmobs_set_tags(
    elif operation == "response":
        openai_set_meta_tags_from_response(span, kwargs, response)
    update_proxy_workflow_input_output_value(span, span_kind)
    metrics = self._extract_llmobs_metrics_tags(span, response, span_kind)
    metrics = self._extract_llmobs_metrics_tags(span, response, span_kind) or span._get_ctx_item(METRICS)
Could you confirm my understanding -- if the response is streamed, we expect the metrics to be on the span context and if the response is not streamed, then we need to extract the token usage from the response itself?
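If that reading is right, the `or` on the changed line expands to roughly the following; this is a hypothetical paraphrase using the same names as the diff above (METRICS and the helpers come from the integration, so it is not standalone code):

```python
def _resolve_metrics(self, span, response, span_kind):
    # Non-streamed responses: token usage is extracted from the response object.
    metrics = self._extract_llmobs_metrics_tags(span, response, span_kind)
    if not metrics:
        # Streamed responses: token counts were accumulated onto the span
        # context while the stream was consumed, so fall back to those.
        metrics = span._get_ctx_item(METRICS)
    return metrics
```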
@pytest.mark.skipif(
    parse_version(openai_module.version.VERSION) < (1, 26), reason="Stream options only available openai >= 1.26"
)
def test_chat_completion_stream_explicit_no_tokens(openai, openai_vcr, mock_tracer):
Are we removing this test because we no longer include token metrics on the APM span itself? I am curious, do we test in the llmobs tests that we do not include the token metrics on the LLMObs span in this case?
Remove potentially sensitive I/O data from APM spans. This way, prompt and completion data will only appear on the LLM Obs spans, which are (or will be) subject to data access controls.
Mostly, this just removes I/O tag sets. A few things (mostly metrics) have LLM Obs tags that depend on span tags, so there is a bit more refactoring there.
Let me know if I removed anything that should really stay, or if I missed something that should be restricted.
This one does a lot that the others don't. I've left things like audio transcripts and image/file retrieval that we don't duplicate.
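As a rough before/after sketch of what "removing I/O tag sets" means here; names and ctx keys are illustrative placeholders, not the actual diff:

```python
# Placeholders standing in for ddtrace's internal LLM Obs ctx keys.
INPUT_VALUE = "input.value"    # placeholder
OUTPUT_VALUE = "output.value"  # placeholder


def _tag_completion(span, integration, prompt, text, model):
    # Before this PR, the APM span also carried the raw I/O, e.g.:
    #   span.set_tag_str("openai.request.prompt", integration.trunc(prompt))
    #   span.set_tag_str("openai.response.choices.0.text", integration.trunc(text))
    # After, the APM span keeps only non-sensitive metadata ...
    span.set_tag_str("openai.request.model", model)
    # ... and the prompt/completion text is recorded only through the
    # LLM Obs tagging path, which is what data access controls cover.
    span._set_ctx_item(INPUT_VALUE, prompt)
    span._set_ctx_item(OUTPUT_VALUE, text)
```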