
Conversation

@ixlmar
Collaborator

@ixlmar ixlmar commented Nov 25, 2025

Description

Preparation for enabling FlashInfer.sampling by default.

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
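
For example, the invocations used later in this conversation combine the base command with at most one option:

    /bot run
    /bot run --disable-fail-fast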

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping without care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing a pipeline without care and validation can break the top of tree.

Summary by CodeRabbit

  • Refactor
    • Standardized parameter naming convention for FlashInfer sampling configuration across PyTorch executor and LLM APIs to improve consistency
    • Promoted FlashInfer sampling control from private to public configuration option in TorchLlmArgs for external customization
    • Updated all related components to align with the new naming convention


@ixlmar ixlmar requested a review from Funatiq November 25, 2025 13:26
@ixlmar ixlmar marked this pull request as ready for review November 25, 2025 13:26
@ixlmar ixlmar requested review from a team as code owners November 25, 2025 13:26
@ixlmar ixlmar requested a review from pcastonguay November 25, 2025 13:26
@ixlmar
Collaborator Author

ixlmar commented Nov 25, 2025

/bot run

@coderabbitai
Contributor

coderabbitai bot commented Nov 25, 2025

📝 Walkthrough

Parameter renaming refactor across the PyTorch executor stack: disable_flash_infer_sampling is renamed to disable_flashinfer_sampling in function signatures, method calls, and configuration classes. Additionally, a private attribute in TorchLlmArgs is replaced with a public field using the new name.

Changes

  • Utility and Executor Function Signatures (tensorrt_llm/_torch/pyexecutor/_util.py): Parameter disable_flash_infer_sampling renamed to disable_flashinfer_sampling in the create_torch_sampler_args and instantiate_sampler function signatures.
  • Executor Creator Call Site (tensorrt_llm/_torch/pyexecutor/py_executor_creator.py): Argument name updated from disable_flash_infer_sampling to disable_flashinfer_sampling in the call to instantiate_sampler.
  • Sampler Configuration (tensorrt_llm/_torch/pyexecutor/sampler.py): Field in TorchSampler.Args renamed from disable_flash_infer_sampling to disable_flashinfer_sampling; conditional logic updated to reference the new field name.
  • Public LLM Arguments API (tensorrt_llm/llmapi/llm_args.py): Private attribute _disable_flash_infer_sampling replaced with a public Field named disable_flashinfer_sampling (default: True, with description and prototype status).
  • Unit Tests (tests/unittest/_torch/sampler/test_torch_sampler.py): Test code updated to use disable_flashinfer_sampling when constructing TorchSampler.Args.
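
As a rough illustration of the rename, here is a minimal self-contained sketch (the Args dataclass below is a stand-in for TorchSampler.Args, which has more fields):

    from dataclasses import dataclass

    @dataclass
    class Args:  # stand-in for TorchSampler.Args
        disable_flashinfer_sampling: bool = True  # was: disable_flash_infer_sampling

    def create_torch_sampler_args(*, disable_flashinfer_sampling: bool) -> Args:
        # The kw-only parameter is forwarded unchanged under the new name.
        return Args(disable_flashinfer_sampling=disable_flashinfer_sampling)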

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Homogeneous refactor with consistent naming pattern applied across multiple files
  • Changes are primarily mechanical parameter/field renames with no logic modifications
  • Public API change in TorchLlmArgs requires verification that all call sites are updated (appears consistent across the diff)
  • Test updates align with the production code changes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Description check (❓ Inconclusive): The PR description provides a brief explanation of the purpose (preparation for enabling FlashInfer.sampling by default) but lacks detailed explanations of what was changed, why, and test coverage details as requested in the template. Resolution: add specific details about the configuration option added, explain the rationale for the changes, and clearly list which tests verify the new functionality.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The title clearly and specifically describes the main change: adding a new configuration option 'disable_flashinfer_sampling', which directly aligns with the PR's objective of preparing for enabling FlashInfer.sampling by default.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c36f144 and f567f30.

📒 Files selected for processing (5)
  • tensorrt_llm/_torch/pyexecutor/_util.py (3 hunks)
  • tensorrt_llm/_torch/pyexecutor/py_executor_creator.py (1 hunks)
  • tensorrt_llm/_torch/pyexecutor/sampler.py (2 hunks)
  • tensorrt_llm/llmapi/llm_args.py (1 hunks)
  • tests/unittest/_torch/sampler/test_torch_sampler.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use from package.subpackage import foo and then foo.SomeClass() instead of from package.subpackage.foo import SomeClass)
Python filenames should use snake_case (e.g., some_file.py)
Python class names should use PascalCase (e.g., class SomeClass)
Python function and method names should use snake_case (e.g., def my_awesome_function():)
Python local variable names should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile = ...)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL = ...)
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., self.x = 5 followed by """<type>: Description of 'x'""" )
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic

Files:

  • tests/unittest/_torch/sampler/test_torch_sampler.py
  • tensorrt_llm/_torch/pyexecutor/sampler.py
  • tensorrt_llm/llmapi/llm_args.py
  • tensorrt_llm/_torch/pyexecutor/py_executor_creator.py
  • tensorrt_llm/_torch/pyexecutor/_util.py
**/*.{cpp,h,cu,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top

Files:

  • tests/unittest/_torch/sampler/test_torch_sampler.py
  • tensorrt_llm/_torch/pyexecutor/sampler.py
  • tensorrt_llm/llmapi/llm_args.py
  • tensorrt_llm/_torch/pyexecutor/py_executor_creator.py
  • tensorrt_llm/_torch/pyexecutor/_util.py
🧠 Learnings (4)
📓 Common learnings
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6029
File: .github/pull_request_template.md:45-53
Timestamp: 2025-08-27T17:50:13.264Z
Learning: For PR templates in TensorRT-LLM, avoid suggesting changes that would increase developer overhead, such as converting plain bullets to mandatory checkboxes. The team prefers guidance-style bullets that don't require explicit interaction to reduce friction in the PR creation process.
📚 Learning: 2025-08-27T15:03:57.149Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/pyexecutor/sampler.py:368-392
Timestamp: 2025-08-27T15:03:57.149Z
Learning: In TensorRT-LLM's sampler.py, int32 usage for softmax_indices and related tensor indexing is intentional and should not be changed to int64. The torch.IntTensor type hint is correct for the sample() function's softmax_indices parameter.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/sampler.py
  • tensorrt_llm/_torch/pyexecutor/_util.py
📚 Learning: 2025-08-28T10:22:02.288Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/pyexecutor/sampler.py:1191-1197
Timestamp: 2025-08-28T10:22:02.288Z
Learning: In tensorrt_llm/_torch/pyexecutor/sampler.py, the object identity comparison `softmax_req_indices is not group_req_indices_cuda` on line ~1191 is intentional and used as an optimization to determine whether to reuse an existing indexer or create a new one, based on which code path was taken during tensor assignment.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/py_executor_creator.py
  • tensorrt_llm/_torch/pyexecutor/_util.py
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/py_executor_creator.py
  • tensorrt_llm/_torch/pyexecutor/_util.py
🧬 Code graph analysis (1)
tensorrt_llm/llmapi/llm_args.py (1)
tensorrt_llm/builder.py (1)
  • default (45-50)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (4)
tests/unittest/_torch/sampler/test_torch_sampler.py (1)

1072-1082: Flag wiring for disable_flashinfer_sampling in tests is consistent.

disable_flashinfer_sampling=(not use_flashinfer) correctly mirrors the test parameter and matches the expected backend selection logic in TorchSampler (FlashInfer when available and not disabled, otherwise Torch backend). No further changes needed here.

tensorrt_llm/_torch/pyexecutor/sampler.py (1)

612-619: New disable_flashinfer_sampling flag is correctly threaded into backend selection.

The TorchSampler.Args field and the IS_FLASHINFER_AVAILABLE and not args.disable_flashinfer_sampling check cleanly implement an opt‑out flag: FlashInfer sampling is used when available unless explicitly disabled. This matches the updated tests and the new config name; no issues from this refactor.

Also applies to: 653-660
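
In sketch form, the opt-out selection described above reads roughly as follows (the function name and module-level flag are assumptions for illustration):

    IS_FLASHINFER_AVAILABLE = True  # assumed import-time availability probe

    def select_sampling_backend(disable_flashinfer_sampling: bool) -> str:
        # FlashInfer sampling is used when available unless explicitly disabled.
        if IS_FLASHINFER_AVAILABLE and not disable_flashinfer_sampling:
            return "flashinfer"
        return "torch"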

tensorrt_llm/llmapi/llm_args.py (1)

2778-2781: Public TorchLlmArgs knob for FlashInfer sampling looks consistent

The new disable_flashinfer_sampling field is wired as a regular Pydantic field with a sane default (True, preserving existing behavior) and status="prototype" for schema/telemetry. Given the rest of the stack is being renamed to this identifier, this addition is consistent and low‑risk.
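
A minimal Pydantic sketch of such a field (assuming the prototype status is carried as schema extra; TensorRT-LLM's actual Field helper may differ):

    from pydantic import BaseModel, Field

    class TorchLlmArgsSketch(BaseModel):  # hypothetical stand-in for TorchLlmArgs
        disable_flashinfer_sampling: bool = Field(
            default=True,  # True preserves the current default behavior
            description="Disable the FlashInfer sampling backend.",
            json_schema_extra={"status": "prototype"},
        )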

tensorrt_llm/_torch/pyexecutor/_util.py (1)

823-841: Sampler flag rename & plumbing are consistent

The refactor from disable_flash_infer_sampling to disable_flashinfer_sampling in both create_torch_sampler_args and instantiate_sampler is mechanically correct: the kw‑only parameter is forwarded directly into TorchSampler.Args with the same name, and no control‑flow changes were introduced.

Given the module is internal (_util) and the AI summary notes all call sites/tests were updated, this looks safe.

If you haven’t already, a quick grep for disable_flash_infer_sampling across the repo would confirm there are no lingering uses of the old name.

Also applies to: 844-865
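
For example, one way to run the suggested check from the repository root:

    grep -rn "disable_flash_infer_sampling" .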


@tensorrt-cicd
Collaborator

PR_Github #25744 [ run ] triggered by Bot. Commit: f567f30

@tensorrt-cicd
Collaborator

PR_Github #25744 [ run ] completed with state SUCCESS. Commit: f567f30
/LLM/main/L0_MergeRequest_PR pipeline #19521 completed with status: 'FAILURE'

@ixlmar
Collaborator Author

ixlmar commented Nov 25, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #25754 [ run ] triggered by Bot. Commit: 373f8fe

@tensorrt-cicd
Collaborator

PR_Github #25754 [ run ] completed with state FAILURE. Commit: 373f8fe
/LLM/main/L0_MergeRequest_PR pipeline #19529 completed with status: 'FAILURE'

@ixlmar
Collaborator Author

ixlmar commented Nov 26, 2025

/bot run

@ixlmar ixlmar requested a review from a team as a code owner November 26, 2025 08:45
@ixlmar ixlmar requested a review from netanel-haber November 26, 2025 08:45
@ixlmar
Collaborator Author

ixlmar commented Nov 26, 2025

/bot run

@ixlmar
Collaborator Author

ixlmar commented Nov 26, 2025

/bot run

2 similar comments
@ixlmar
Collaborator Author

ixlmar commented Nov 26, 2025

/bot run

@ixlmar
Collaborator Author

ixlmar commented Nov 26, 2025

/bot run

@ixlmar
Collaborator Author

ixlmar commented Nov 26, 2025

/bot run

3 similar comments
@ixlmar
Collaborator Author

ixlmar commented Nov 26, 2025

/bot run

@ixlmar
Collaborator Author

ixlmar commented Nov 26, 2025

/bot run

@ixlmar
Collaborator Author

ixlmar commented Nov 26, 2025

/bot run

@ixlmar ixlmar force-pushed the feat/flashinfer-sampling-toggle branch from 9462c2c to 9119f98 on November 26, 2025 15:32
@ixlmar
Collaborator Author

ixlmar commented Nov 26, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #25858 [ run ] triggered by Bot. Commit: 9119f98

@tensorrt-cicd
Collaborator

PR_Github #25859 [ run ] triggered by Bot. Commit: 9119f98

@tensorrt-cicd
Collaborator

PR_Github #25858 [ run ] completed with state ABORTED. Commit: 9119f98

@ixlmar
Collaborator Author

ixlmar commented Nov 26, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #25874 [ run ] triggered by Bot. Commit: 9119f98

@tensorrt-cicd
Collaborator

PR_Github #25859 [ run ] completed with state ABORTED. Commit: 9119f98
/LLM/main/L0_MergeRequest_PR pipeline #19605 completed with status: 'FAILURE'

@tensorrt-cicd
Collaborator

PR_Github #25874 [ run ] completed with state SUCCESS. Commit: 9119f98
/LLM/main/L0_MergeRequest_PR pipeline #19620 completed with status: 'FAILURE'

@ixlmar
Collaborator Author

ixlmar commented Nov 27, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #25993 [ run ] triggered by Bot. Commit: 9119f98

@ixlmar ixlmar force-pushed the feat/flashinfer-sampling-toggle branch from 9119f98 to f601dad on November 27, 2025 09:02
@ixlmar
Collaborator Author

ixlmar commented Nov 27, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #26010 [ run ] triggered by Bot. Commit: f601dad

@tensorrt-cicd
Collaborator

PR_Github #25993 [ run ] completed with state ABORTED. Commit: 9119f98
LLM/main/L0_MergeRequest_PR #19716 (Blue Ocean) completed with status: ABORTED

@ixlmar ixlmar requested review from DomBrown and removed request for netanel-haber November 27, 2025 09:15
@ixlmar ixlmar requested review from juney-nvidia and removed request for pcastonguay November 27, 2025 14:01
@tensorrt-cicd
Collaborator

PR_Github #26010 [ run ] completed with state SUCCESS. Commit: f601dad
/LLM/main/L0_MergeRequest_PR pipeline #19734 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

disable_flashinfer_sampling:
    annotation: bool
    default: False
    status: deprecated
Collaborator

Why do we add a new flag and directly mark it as "deprecated"?

Collaborator

Maybe it would be better to mark it as "prototype"?
https://nvidia.github.io/TensorRT-LLM/llm-api/reference.html
