
Conversation

@dongfengy
Collaborator

@dongfengy dongfengy commented Nov 19, 2025

Summary by CodeRabbit

  • Tests

    • Enhanced Eagle3 test coverage with new parameters for one-model and overlap scheduling configurations.
    • Optimized KV cache memory allocation settings for improved resource management during tests.
    • Added runtime safety check to skip tests when specific hardware accuracy issues are detected.
  • Bug Fixes

    • Removed a test waiver entry; the previously skipped test is now expected to pass.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
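For example, a typical invocation that limits the run to a single test stage and disables fail-fast (the stage name here is illustrative) is posted as a PR comment:

    /bot run --stage-list "A10-PyTorch-1" --disable-fail-fast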

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@dongfengy dongfengy marked this pull request as ready for review November 19, 2025 19:02
@dongfengy
Collaborator Author

/bot run

@coderabbitai
Contributor

coderabbitai bot commented Nov 19, 2025

📝 Walkthrough

Walkthrough

Expands the test_eagle3 test with parameterization for one_model and overlap_scheduler options, introduces a runtime skip for known Hopper/CUTLASS accuracy issues, adjusts Eagle3 decoding and KV cache configuration values, and removes a corresponding waived test entry.

Changes

  • test_eagle3 Parameterization and Configuration (tests/integration/defs/accuracy/test_llm_api_pytorch.py): Added @pytest.mark.parametrize decorators for one_model ([True, False]) and overlap_scheduler ([True, False]). Updated the test signature to accept these parameters alongside the existing moe_backend and mocker. Added a conditional skip when SM=90 and moe_backend=CUTLASS. Modified the Eagle3 config to use a dynamic eagle3_one_model=one_model. Changed disable_overlap_scheduler from a fixed True to not overlap_scheduler. Reduced KvCacheConfig.free_gpu_memory_fraction from 0.6 to 0.4.
  • Waived Test Removal (tests/integration/test_lists/waives.txt): Removed the waived test entry accuracy/test_llm_api_pytorch.py::TestGPTOSS::test_eagle3[cutlass].
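To make the new test matrix concrete, here is a minimal sketch of how these pieces fit together, based only on the summary above; the import path, the moe_backend values, and the commented-out config wiring are assumptions, not the actual diff:

    import pytest
    from defs.conftest import get_sm_version  # helper from tests/integration/defs/conftest.py; exact import path assumed

    class TestGPTOSS:
        @pytest.mark.parametrize("moe_backend", ["CUTLASS", "TRTLLM"])  # pre-existing axis; values assumed
        @pytest.mark.parametrize("one_model", [True, False])
        @pytest.mark.parametrize("overlap_scheduler", [True, False])
        def test_eagle3(self, moe_backend, one_model, overlap_scheduler, mocker):
            # Runtime guard: skip the known Hopper (SM 90) accuracy issue with the CUTLASS MoE backend.
            if get_sm_version() == 90 and moe_backend == "CUTLASS":
                pytest.skip("Known accuracy issue with the CUTLASS MoE backend on SM 90")

            # Per the walkthrough, the remaining configuration is then wired roughly as:
            #   KvCacheConfig(free_gpu_memory_fraction=0.4)          # reduced from 0.6
            #   Eagle3 decoding config: eagle3_one_model=one_model
            #   PyTorch config: disable_overlap_scheduler=not overlap_scheduler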

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Focus areas:
    • Verify parameterization logic correctly wires one_model and overlap_scheduler through to Eagle3 config and pytorch_config respectively
    • Confirm the SM=90 and CUTLASS skip condition matches the intended Hopper accuracy workaround
    • Validate that KvCacheConfig value change (0.6 → 0.4) aligns with intended memory management strategy
    • Ensure waived test removal corresponds correctly to the new parameterized test matrix

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: you can run @coderabbitai generate docstrings to improve docstring coverage.
  • Description check (⚠️ Warning): The PR description is entirely template placeholder text with no substantive explanation of the changes, rationale, or test coverage. All required sections (Description, Test Coverage) are empty. Resolution: fill in the Description section explaining what Eagle test enhancements are made and why, and the Test Coverage section documenting which tests validate these changes.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The title clearly and specifically summarizes the main changes: adding one-model and overlap-scheduling parameters to Eagle tests for GPTOSS, which aligns with the actual modifications to test_eagle3.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)

3961-3985: Skip guard and config wiring are sound; consider clarifying the nvbugs comment or gating the known‑bad combo

  • The new get_sm_version() == 90 and moe_backend == "CUTLASS" skip is a reasonable, targeted guard for the referenced Hopper accuracy issue and should prevent flaky runs on that configuration.
  • Wiring disable_overlap_scheduler=not overlap_scheduler and eagle3_one_model=one_model is consistent with how other Eagle3 tests control these knobs and correctly exercises all four combinations.
  • Minor suggestion: the comment # https://nvbugs/5590408: 2-Model overlap scheduling has accuracy issue now sits above a test that actively runs the 2‑model + overlap‑scheduler case. If that nvbug is still expected to manifest for that specific combination, you may want to either:
    • add a conditional pytest.skip for not one_model and overlap_scheduler (a minimal sketch follows this list), or
    • update the comment to reflect the current status/scope of the issue so future readers aren’t confused.
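A minimal sketch of the first option, assuming the nvbug still applies to the 2-model + overlap-scheduler combination; placing it next to the existing SM 90 / CUTLASS guard is likewise an assumption:

    # Hypothetical guard for the combination tracked by https://nvbugs/5590408.
    if not one_model and overlap_scheduler:
        pytest.skip("https://nvbugs/5590408: 2-model overlap scheduling accuracy issue")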
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1eae941 and f0694a8.

📒 Files selected for processing (2)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (2 hunks)
  • tests/integration/test_lists/waives.txt (0 hunks)
💤 Files with no reviewable changes (1)
  • tests/integration/test_lists/waives.txt
🧰 Additional context used
🧬 Code graph analysis (1)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (3)
tests/integration/defs/accuracy/test_disaggregated_serving.py (1)
  • test_eagle3 (461-515)
tests/integration/defs/conftest.py (2)
  • get_sm_version (1892-1895)
  • llm_models_root (80-94)
tensorrt_llm/llmapi/llm_args.py (3)
  • CudaGraphConfig (102-159)
  • KvCacheConfig (1426-1570)
  • speculative_model_dir (1940-1941)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
tests/integration/defs/accuracy/test_llm_api_pytorch.py (1)

3947-3956: Eagle3 parametrization and signature wiring look consistent and correct

The added overlap_scheduler / one_model parametrizations and the updated test_eagle3(self, moe_backend, one_model, overlap_scheduler, mocker) signature align with the patterns used by other Eagle3 tests in this file and correctly expose the new configuration space without changing behavior for existing moe_backend values.

@tensorrt-cicd
Collaborator

PR_Github #25082 [ run ] triggered by Bot. Commit: f0694a8

@tensorrt-cicd
Collaborator

PR_Github #25082 [ run ] completed with state FAILURE. Commit: f0694a8
/LLM/main/L0_MergeRequest_PR pipeline #18960 completed with status: 'FAILURE'

@dongfengy
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #25094 [ run ] triggered by Bot. Commit: 93bbe96

@tensorrt-cicd
Collaborator

PR_Github #25094 [ run ] completed with state SUCCESS. Commit: 93bbe96
/LLM/main/L0_MergeRequest_PR pipeline #18970 completed with status: 'FAILURE'

@dongfengy dongfengy force-pushed the user/dongfengy/egale-test-enhance branch from 93bbe96 to 5c958f6 on November 21, 2025 01:14
@dongfengy
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #25263 [ run ] triggered by Bot. Commit: 5c958f6

@tensorrt-cicd
Collaborator

PR_Github #25263 [ run ] completed with state SUCCESS. Commit: 5c958f6
/LLM/main/L0_MergeRequest_PR pipeline #19110 completed with status: 'FAILURE'

@dongfengy dongfengy changed the title from "[None][test] Enhance Eagle Tests for GPTOSS" to "[None][test] Add one-model and overlap-scheduling to eagle tests for GPTOSS" on Nov 21, 2025
Signed-off-by: Dongfeng Yu <[email protected]>
Signed-off-by: Dongfeng Yu <[email protected]>
Signed-off-by: Dongfeng Yu <[email protected]>
@dongfengy dongfengy force-pushed the user/dongfengy/egale-test-enhance branch from 5c958f6 to bcfa67c on November 21, 2025 18:28
@dongfengy
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #25388 [ run ] triggered by Bot. Commit: bcfa67c
