docs: add comprehensive metrics documentation #321
base: main
Conversation
**Walkthrough:** Adds a new detailed Metrics Reference document.

**Changes**
**Estimated code review effort:** 🎯 3 (Moderate) | ⏱️ ~25 minutes

**Potential review focus:**
**Pre-merge checks:** ✅ Passed checks (3 passed)
**Codecov Report:** ✅ All modified and coverable lines are covered by tests.
Actionable comments posted: 7
🧹 Nitpick comments (4)
docs/metrics_reference.md (3)
**173-176: Ensure ITL is in seconds for this inverse relationship.**

The formula assumes `inter_token_latency_seconds`; make sure the prior section defines ITL in seconds (not ns/ms) to keep this correct.
**238-246: Filter to valid records in aggregate sums.**

Exclude failed/invalid records from totals to match the description.
Apply this diff:
```diff
-total_output_tokens = sum(output_token_count for record in records)
+total_output_tokens = sum(r.output_token_count for r in records if r.valid)
-total_osl = sum(output_sequence_length for record in records)
+total_osl = sum(r.output_sequence_length for r in records if r.valid)
-total_isl = sum(input_sequence_length for record in records)
+total_isl = sum(r.input_sequence_length for r in records if r.valid)
```

Also applies to: 254-260, 268-275
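To illustrate the suggested filtering, here is a minimal runnable sketch. The `Record` class and its `valid` flag are hypothetical stand-ins for the real record type; only the field names come from the diff above.

```python
from dataclasses import dataclass

@dataclass
class Record:
    # Hypothetical record shape; field names taken from the suggested diff.
    output_token_count: int
    output_sequence_length: int
    input_sequence_length: int
    valid: bool

records = [
    Record(10, 12, 5, True),
    Record(0, 0, 7, False),  # failed request, excluded from totals
    Record(20, 22, 9, True),
]

# Only valid records contribute to the aggregate sums.
total_output_tokens = sum(r.output_token_count for r in records if r.valid)
total_osl = sum(r.output_sequence_length for r in records if r.valid)
total_isl = sum(r.input_sequence_length for r in records if r.valid)
```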
**338-341: Use consistent variable naming with `request.start_perf_ns`.**

Align with earlier formulas to avoid ambiguity.
Apply this diff:
```diff
-request_latency = responses[-1].perf_ns - start_perf_ns
+request_latency_ns = responses[-1].perf_ns - request.start_perf_ns
```

README.md (1)
**280-283: Grammar nit: “single values” → “single value”.**

Minor text cleanup for clarity.
Apply this diff:
```diff
-> [!IMPORTANT]
-> This metric is computed as a single values across all requests, and it includes the TTFT in the equation, so it is **not** directly comparable to the [Output Token Throughput Per User](#output-token-throughput-per-user) metric.
+> [!IMPORTANT]
+> This metric is computed as a single value across all requests and includes TTFT in the equation, so it is **not** directly comparable to the [Output Token Throughput Per User](#output-token-throughput-per-user) metric.
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- README.md (2 hunks)
- docs/metrics_reference.md (1 hunk)
🧰 Additional context used
🪛 markdownlint-cli2 (0.18.1)
docs/metrics_reference.md
54-54: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4
(MD001, heading-increment)
67-67: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4
(MD001, heading-increment)
80-80: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4
(MD001, heading-increment)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build (ubuntu-latest, 3.10)
🔇 Additional comments (1)
README.md (1)
**15-15: Navigation update LGTM.**

Good addition of Metrics Reference link.
debermudez
left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Approved but I would like another set of eyes on it before we publish.
So only commenting for now since we didn't set it up for multiple approvals.
docs/metrics_reference.md
Outdated
```markdown
- [Record Metrics](#record-metrics)
- [Aggregate Metrics](#aggregate-metrics)
- [Derived Metrics](#derived-metrics)
- [Quick Reference](#quick-reference)
```
I think this should be the first section in this list.
can you help clarify? you want me to move the quick reference content up in the document? Or you want me to do something different with the ToC?
definitely. I think move this up above understanding metric types
README.md
Outdated
```markdown
| Metric | Tag | Formula | Unit |
|--------|-----|---------|------|
| [**Output Token Count**](docs/metrics_reference.md#output-token-count) | `output_token_count` | `len(tokenizer.encode(content))` | `tokens` |
```
I assume we have add_special_tokens=False
yes. could be good to add a note
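For context on the note being requested: a toy illustration of why `add_special_tokens=False` matters for token counts. The `ToyTokenizer` below is a made-up stand-in, not the real Hugging Face API surface; it only mimics the fact that special tokens (BOS/EOS) inflate the count when the flag is left on.

```python
class ToyTokenizer:
    """Illustrative stand-in for a tokenizer; not a real library class."""
    BOS, EOS = 1, 2

    def encode(self, text: str, add_special_tokens: bool = True) -> list[int]:
        # One fake token id per whitespace-separated word.
        ids = [hash(w) % 1000 + 10 for w in text.split()]
        if add_special_tokens:
            return [self.BOS] + ids + [self.EOS]
        return ids

tok = ToyTokenizer()
prompt = "hello world"
with_special = len(tok.encode(prompt))                               # BOS + 2 words + EOS = 4
without_special = len(tok.encode(prompt, add_special_tokens=False))  # just the 2 words
```

Counting with the default (`add_special_tokens=True`) would overstate the sequence length by the number of special tokens added.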
README.md
Outdated
```markdown
|--------|-----|---------|------|
| [**Output Token Count**](docs/metrics_reference.md#output-token-count) | `output_token_count` | `len(tokenizer.encode(content))` | `tokens` |
| [**Output Sequence Length (OSL)**](docs/metrics_reference.md#output-sequence-length-osl) | `output_sequence_length` | `(output_token_count or 0) + (reasoning_token_count or 0)` | `tokens` |
| [**Input Sequence Length (ISL)**](docs/metrics_reference.md#input-sequence-length-isl) | `input_sequence_length` | `len(tokenizer.encode(prompt))` | `tokens` |
```
Same as output_token_count
@IzzyPutterman can you explain what is the same as output_token_count? Are you referring to the ISL? Is it the wording on prompt?
I think @IzzyPutterman is intending that his feedback here is the same as his feedback in #321 (comment)
```markdown
| Metric | Tag | Formula | Unit |
|--------|-----|---------|------|
| [**Time to First Token (TTFT)**](docs/metrics_reference.md#time-to-first-token-ttft) | `ttft` | `responses[0].perf_ns - request.start_perf_ns` | `ms` |
```
Perhaps a mention that responses are "chunks with non-empty content"
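As a hypothetical illustration of that convention: streaming chunks with empty content (e.g., keep-alives) would be filtered out before indexing `responses[0]`. The dict-based chunk shape here is made up for the sketch, not the actual parser types.

```python
# Hypothetical chunk shape: streamed chunks may carry empty content.
chunks = [
    {"perf_ns": 1_000, "content": ""},       # empty chunk, not a "response"
    {"perf_ns": 2_000, "content": "Hello"},  # first chunk with real content
    {"perf_ns": 3_000, "content": " world"},
]

# "responses" = chunks with non-empty content.
responses = [c for c in chunks if c["content"]]

request_start_perf_ns = 500
ttft_ns = responses[0]["perf_ns"] - request_start_perf_ns  # measured to the first real content
```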
docs/metrics_reference.md
Outdated
**Formula:**
```python
ttft = responses[0].perf_ns - request.start_perf_ns
```
Small nitpick: I get what this is telling me, but I was wondering why `request` wasn't indexed. It might be clearer if there was a pointer to the class or structure where this is used? My initial expectation was that the i-th response would map to the i-th request.
This isn't a gating comment, just something that on initial impression was a little confusing.
I called out something like that out here: #321 (comment)
so I think this would be helpful, especially for someone looking to contribute.
@FrankD412 yeah, it's hard to provide values that are both easy to understand and true to life, when the real formula is longer than a single line and you want it easy to understand.
Technically everywhere you see `responses[x]` it is really `request.responses[x]`, but that was kinda wordy. One option is to drop the `request` from `start_perf`, or to add `request` back in the first part.
@debermudez I agree that the links would be great.
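One way to resolve the ambiguity discussed in this thread is to show the assumed container shape once. The sketch below is illustrative only — the class names and fields are hypothetical, chosen to match the formulas, not the actual aiperf classes.

```python
from dataclasses import dataclass, field

@dataclass
class Response:
    perf_ns: int  # perf-counter timestamp of this response chunk

@dataclass
class Request:
    start_perf_ns: int
    responses: list[Response] = field(default_factory=list)

# Each formula is evaluated per request: `responses[i]` means `request.responses[i]`.
request = Request(start_perf_ns=100, responses=[Response(250), Response(400)])
ttft = request.responses[0].perf_ns - request.start_perf_ns             # first response
request_latency = request.responses[-1].perf_ns - request.start_perf_ns  # last response
```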
Adding to my first point: most `sum(...)`-type metrics do not actually use `sum` at all; that is to make it easier for the user to understand. Instead they are computed in 2 stages, as mentioned in the above sections:
**Example Scenario**

`request_count` increments by 1 for each successful request. At the end of a benchmark with 100 successful requests, this metric equals 100 (a single value, not a distribution).
```python
import sys

# (BaseAggregateMetric, MetricTimeUnit, MetricDateTimeUnit, MetricFlags,
#  ParsedResponseRecord, and MetricRecordDict are aiperf-internal imports.)

class MinRequestTimestampMetric(BaseAggregateMetric[int]):
    """
    Post-processor for calculating the minimum request time stamp metric from records.

    Formula:
        Minimum Request Timestamp = Min(Request Timestamps)
    """

    tag = "min_request_timestamp"
    header = "Minimum Request Timestamp"
    short_header = "Min Req"
    short_header_hide_unit = True
    unit = MetricTimeUnit.NANOSECONDS
    display_unit = MetricDateTimeUnit.DATE_TIME
    flags = MetricFlags.HIDDEN
    required_metrics = None

    def __init__(self) -> None:
        # Default to a large value, so that any request timestamp will be smaller.
        super().__init__(default_value=sys.maxsize)

    def _parse_record(
        self,
        record: ParsedResponseRecord,
        record_metrics: MetricRecordDict,
    ) -> int:
        """Return the request timestamp."""
        # NOTE: Use the request timestamp_ns, not the start_perf_ns, because we want wall-clock timestamps.
        return record.timestamp_ns

    def _aggregate_value(self, value: int) -> None:
        """Aggregate the metric value. For this metric, we just take the min of the values from the different processes."""
        if value < self._value:
            self._value = value
```
If it's more complicated to try and map to implementation, a thought in my mind is maybe move to make the first part of this document the formal definition of the metrics. Like, define a number of pseudo variables -- then once the "theory" is laid out you can have a section or a link to another guide that explains the metric implementation?
Sometimes the implementation gets in the way of clear expression.
```markdown
> [!NOTE]
> Metrics in this section are available for all benchmark runs with no special requirements.

### Request Latency
```
Isn't this the same as the benchmark duration below? Is this intended to be a per-request metric?
Same issue as my response at #321 (comment), I guess. `start_perf_ns` is per request.
Also, initially I had listed the metric type for each one, but I was trying my best not to make the doc too long. I think it may be good to have it, especially since I directly explain what the differences are up above.
The other thing is that I originally grouped the metrics by type (record, aggregate, derived), but felt it flowed better to group by use-case, especially to help people understand why they are or are not seeing certain metrics. (It also removed my need to explain `--streaming` under each streaming metric.)
Hmm -- that's fair. Might it be worth explaining what a per-request statistic is in a central place and then labeling specific metrics as per-request (then linking to the per-request definition)?
```markdown
**Notes:**
- Error rate can be computed as `error_request_count / (request_count + error_request_count)`.
```
Non-gating comment, but might be worth just defining a total requests as the summation of valid + invalid requests?
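That definition could be sketched as follows (variable names follow the note above; the `total_request_count` name is this sketch's suggestion, not an existing metric tag):

```python
request_count = 95        # valid (successful) requests
error_request_count = 5   # failed requests

# Total requests = valid + invalid, as suggested above.
total_request_count = request_count + error_request_count

# Error rate expressed against the total, per the note.
error_rate = error_request_count / total_request_count
```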
Force-pushed `c25567e` to `aa971a6`.

Signed-off-by: Anthony Casagrande <[email protected]>

Force-pushed `aa971a6` to `4fb3771`.
@coderabbitai recheck
@ajcasagrande: I'll review the changes in this PR now.

✅ Actions performed: Review triggered.
Actionable comments posted: 1
♻️ Duplicate comments (2)
README.md (2)
**259-270: General Metrics table has formula inconsistencies and unit-conversion gaps.**

Multiple issues need addressing:

- **Request Latency (line 265):** Formula shows `responses[-1].perf_ns - request.start_perf_ns` (incomplete reference; should be `request.responses[-1]`), with `ms` unit but an ns-based formula without conversion.
- **Timestamp columns (lines 268–269):** Formulas use nanoseconds (`timestamp_ns`) but units list `datetime`. This mismatch is confusing; clarify whether these are raw ns values or converted datetime strings.
- **Benchmark Duration (line 270):** Formula subtracts two timestamps without showing the nanosecond-to-seconds conversion required to produce the `sec` unit.

Apply consistent fixes:

```diff
-| [**Request Latency**](docs/metrics_reference.md#request-latency) | `request_latency` | `responses[-1].perf_ns - request.start_perf_ns` | `ms` |
+| [**Request Latency**](docs/metrics_reference.md#request-latency) | `request_latency_ms` | `(request.responses[-1].perf_ns - request.start_perf_ns) / 1e6` | `ms` |
-| [**Benchmark Duration**](docs/metrics_reference.md#benchmark-duration) | `benchmark_duration` | `max_response_timestamp - min_request_timestamp` | `sec` |
+| [**Benchmark Duration**](docs/metrics_reference.md#benchmark-duration) | `benchmark_duration_seconds` | `(max_response_timestamp_ns - min_request_timestamp_ns) / 1e9` | `sec` |
```
**179-190: Fix formula-unit mismatch: nanosecond operations shown with millisecond units.**

README formulas show `perf_ns` values (nanoseconds) but specify `ms` units without conversion. This conflicts with the detailed metrics_reference.md, which shows explicit nanosecond→millisecond conversions. Either show conversions in the formula (e.g., `/ 1e6` for ms) or update units to `ns` to match the raw formula.

Example discrepancy:

```diff
-| [**Time to First Token (TTFT)**](docs/metrics_reference.md#time-to-first-token-ttft) | `ttft` | `responses[0].perf_ns - request.start_perf_ns` | `ms` |
+| [**Time to First Token (TTFT)**](docs/metrics_reference.md#time-to-first-token-ttft) | `ttft` | `(responses[0].perf_ns - request.start_perf_ns) / 1e6` | `ms` |
```

Apply similar fixes for TTST (line 186) and ICL (line 188).
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- README.md (2 hunks)
- docs/metrics_reference.md (1 hunk)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-10-02T19:31:57.859Z
Learnt from: ajcasagrande
Repo: ai-dynamo/aiperf PR: 321
File: docs/metrics_reference.md:54-63
Timestamp: 2025-10-02T19:31:57.859Z
Learning: In the aiperf repository's docs/metrics_reference.md file, the maintainer prefers using h4 headings (####) for subsections under h2 headings instead of h3 (###) for better visual sizing and readability, even though this violates markdownlint rule MD001.
Applied to files:
- docs/metrics_reference.md
- README.md
📚 Learning: 2025-10-18T16:31:22.126Z
Learnt from: ajcasagrande
Repo: ai-dynamo/aiperf PR: 380
File: docs/tutorials/request-rate-concurrency.md:27-40
Timestamp: 2025-10-18T16:31:22.126Z
Learning: In the aiperf repository documentation, blank lines between different alert blockquotes (e.g., [!IMPORTANT], [!TIP], [!NOTE]) are intentional for visual separation, even if they trigger MD028 linting warnings. The separate blockquotes improve readability by clearly distinguishing between different types of information.
Applied to files:
docs/metrics_reference.md
📚 Learning: 2025-10-15T03:24:10.758Z
Learnt from: ajcasagrande
Repo: ai-dynamo/aiperf PR: 359
File: aiperf/metrics/types/time_to_first_output_metric.py:0-0
Timestamp: 2025-10-15T03:24:10.758Z
Learning: In TimeToFirstOutputMetric and similar metrics, invalid timestamp scenarios (where response timestamps precede request start) are automatically caught by the base class validation through the record.valid property, which checks that start_perf_ns < end_perf_ns. This validation happens in _require_valid_record before _parse_record is called, so explicit timestamp validation in _parse_record may be redundant.
Applied to files:
- docs/metrics_reference.md
- README.md
🪛 LanguageTool
README.md
[grammar] ~192-~192: Use a hyphen to join words.
Context: ...tft_seconds|tokens/sec` | ### Token Based Metrics Metrics for token-produci...
(QB_NEW_EN_HYPHEN)
🔇 Additional comments (7)
docs/metrics_reference.md (6)
**1-72: Well-organized and comprehensive structure.**

The file layout with clear table of contents, proper linking, and introductory guidance is excellent for documentation navigation.
**74-117: Clear explanation of metric computation phases.**

The section effectively distinguishes Record, Aggregate, and Derived metrics with concrete examples. Heading hierarchy (h2 → h3) is correct per MD001.
**122-254: Streaming metrics section is comprehensive and well-documented.**

Formulas correctly show nanosecond-to-millisecond and nanosecond-to-second conversions. Dependencies, requirements, and important distinctions (e.g., per-user vs aggregate throughput) are clearly explained with helpful alerts.
**255-381: Token-based metrics are clearly documented with proper formula syntax.**

Generator expressions correctly use `for r in records`, and notes thoroughly address reasoning token handling for both separate `reasoning_content` fields and embedded `<think>` blocks. This resolves the prior confusion about which tokens are excluded.
**699-809: General metrics are properly documented with correct formula syntax and wall-clock handling.**

Generator expressions and timestamp calculations follow the correct approach. The wall-clock timestamp + duration method (line 788) correctly implements the approach clarified in previous feedback.
One clarity suggestion: the `request_latency` formula (line 712) uses nanoseconds but lacks a unit-conversion comment, and `benchmark_duration` (line 801) doesn't show the nanosecond-to-seconds conversion. While the raw formulas are correct, adding explicit conversion comments (e.g., `/ 1e9` for nanosecond-to-seconds) would improve clarity for readers implementing these metrics, especially when comparing to the README.md quick-reference tables.
**810-840: Metric flags reference is comprehensive and well-organized.**

Clear descriptions of individual and composite flags with impact information provide good reference material for understanding metric computation and display behavior.
README.md (1)
**15-15: Navigation and section header are properly integrated.**

The Metrics Reference link fits naturally into the navigation bar, and the intro correctly directs readers to the detailed documentation.
Also applies to: 175-177
```markdown
| [**Output Token Throughput Per User**](docs/metrics_reference.md#output-token-throughput-per-user) | `output_token_throughput_per_user` | `1.0 / inter_token_latency_seconds` | `tokens/sec/user` |
| [**Prefill Throughput**](docs/metrics_reference.md#prefill-throughput) | `prefill_throughput` | `input_sequence_length / ttft_seconds` | `tokens/sec` |

### Token Based Metrics
```
Use hyphen for compound adjective.
Line 192 should use "Token-Based Metrics" (hyphenated) for proper grammar.
```diff
-### Token Based Metrics
+### Token-Based Metrics
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
### Token-Based Metrics
```
🤖 Prompt for AI Agents
In README.md around line 192, the heading "Token Based Metrics" uses an
unhyphenated compound adjective; change it to "Token-Based Metrics" by inserting
a hyphen between Token and Based so the heading follows standard grammar for
compound modifiers.
Add formulas for each metric on the main README.
Add separate metric docs to explain everything in detail.
Summary by CodeRabbit