
Conversation

@govind-ramnarayan (Collaborator) commented Dec 3, 2025

Summary by CodeRabbit

  • New Features

    • Added configurable sampler selection to AutoDeploy, allowing users to choose between TorchSampler and TRTLLMSampler for generation workflows.
    • Exposed model configuration properties for improved model introspection and metadata access.
    • Implemented TRTLLMSampler pathway with enhanced decoding capabilities.
  • Tests

    • Added unit tests validating TRTLLMSampler functionality in AutoDeploy execution pipeline.


Description

Support for TRTLLMSampler in AutoDeploy. Fixes: #9602

The main issue is that logits must be upcast to FP32 for the TRTLLMSampler to run without error. This cast may be fairly inefficient; performance still needs to be measured and validated.
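The upcast amounts to a small guard before sampling. A minimal stand-in of that logic, with no torch dependency and hypothetical names (the real code casts the logits tensor itself):

```python
from enum import Enum


class SamplerType(str, Enum):
    """Stand-in mirroring tensorrt_llm.llmapi.llm_args.SamplerType (subset)."""

    TorchSampler = "TorchSampler"
    TRTLLMSampler = "TRTLLMSampler"


def logits_dtype_for_sampler(sampler_type: SamplerType, model_dtype: str) -> str:
    """Return the dtype logits should have before sampling.

    TRTLLMSampler only accepts float32 logits, so lower-precision model
    outputs (e.g. bfloat16) are upcast; TorchSampler keeps the model dtype.
    """
    if sampler_type == SamplerType.TRTLLMSampler and model_dtype != "float32":
        return "float32"  # upcast required by TRTLLMSampler
    return model_dtype
```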

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since careless use without validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since careless use without validation can break the top of tree.

@govind-ramnarayan govind-ramnarayan requested a review from a team as a code owner December 3, 2025 00:11
@govind-ramnarayan govind-ramnarayan marked this pull request as draft December 3, 2025 00:11
@coderabbitai coderabbitai bot (Contributor) commented Dec 3, 2025

📝 Walkthrough

This change introduces support for an alternative sampling strategy (TRTLLMSampler) in the AutoDeploy framework. A new configuration field allows users to select between TorchSampler and TRTLLMSampler. Helper functions and classes manage sampler instantiation, model configuration exposure, and dtype handling. A unit test validates the TRTLLMSampler pathway.

Changes

  • Configuration & Type Exposure — tensorrt_llm/_torch/auto_deploy/llm_args.py: Added a sampler_type field to AutoDeployConfig with type Union[str, SamplerType], defaulting to SamplerType.TorchSampler. Exported SamplerType on the public import surface.
  • Model Factory Properties — tensorrt_llm/_torch/auto_deploy/models/hf.py: Added four new accessor properties to AutoModelForCausalLMFactory: dtype, num_hidden_layers, hidden_size, and num_attention_heads. Each retrieves its value from the model config via _get_model_config().
  • Sampler Instantiation & Engine Integration — tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py: Introduced sampler selection logic: added a TRTLLMSamplerModelConfig class plus get_torch_dtype() and instantiate_sampler() functions to conditionally create TorchSampler or TRTLLMSampler. Modified the ADEngine API to accept and store a sampler_type parameter. Updated build_from_config() and create_autodeploy_executor() to propagate the sampler type. Added dtype casting for logits in the TRTLLMSampler path.
  • Testing — tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py: Added a new smoke test for the TRTLLMSampler pathway. Configures ExperimentConfig with sampler_type: SamplerType.TRTLLMSampler, executes generation, and validates output.
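The new field accepts either the enum member or its string name. A minimal sketch of that coercion, using a hypothetical stand-in class rather than the actual pydantic model:

```python
from enum import Enum
from typing import Union


class SamplerType(str, Enum):
    """Stand-in for tensorrt_llm.llmapi.llm_args.SamplerType (subset)."""

    TorchSampler = "TorchSampler"
    TRTLLMSampler = "TRTLLMSampler"


class AutoDeployConfig:
    """Hypothetical minimal model of the new sampler_type field."""

    def __init__(self, sampler_type: Union[str, SamplerType] = SamplerType.TorchSampler):
        # Accept either the enum member or its string name, as the
        # Union[str, SamplerType] annotation suggests.
        self.sampler_type = SamplerType(sampler_type)


cfg = AutoDeployConfig(sampler_type="TRTLLMSampler")
```

The same shape is what the smoke test exercises when it sets sampler_type on the experiment config.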

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Areas requiring attention:
    • ad_executor.py: Verify sampler instantiation logic correctly handles both TorchSampler and TRTLLMSampler paths; check dtype casting and error handling for unsupported sampler types
    • ad_executor.py: Confirm ADEngine signature changes and parameter threading through build_from_config() and create_autodeploy_executor() are correct and backward compatible
    • test_ad_trtllm_sampler.py: Validate test coverage adequately exercises the TRTLLMSampler pathway

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings, 1 inconclusive)
  • Linked Issues check — ⚠️ Warning: the PR description does not reference any JIRA ticket, NVBugs ID, or GitHub issue, despite the template requiring one in a format like "[TRTLLM-1234]" or "[None]" if no ticket exists. Resolution: add a ticket reference to the PR title/description in the required format (e.g., "[TRTLLM-xxxx][feat]" or "[None][feat]") to indicate whether this addresses a tracked issue.
  • Docstring Coverage — ⚠️ Warning: docstring coverage is 40.00%, below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
  • Description check — ❓ Inconclusive: the PR description provides a clear issue reference (#9602) and explains the main technical change (upcasting logits to FP32), but the Test Coverage section is empty and checklist items are incomplete. Resolution: fill in the Test Coverage section with the specific test names/paths that validate the TRTLLMSampler integration, and verify all checklist items are properly addressed before merging.
✅ Passed checks (2 passed)
  • Out of Scope Changes check — ✅ Passed: the PR introduces a new configuration option and exposes model properties that alter the public API surface; the scope is appropriate for the stated objective of adding TRTLLMSampler support.
  • Title check — ✅ Passed: the title clearly and concisely describes the main feature addition, support for the TRTLLM Sampler in AutoDeploy, which aligns with the core changes across multiple files.

@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (5)
tensorrt_llm/_torch/auto_deploy/llm_args.py (1)

133-136: Consider documenting the "auto" option in the description.

The SamplerType enum (from tensorrt_llm/llmapi/llm_args.py lines 2474-2478) includes an auto option in addition to TRTLLMSampler and TorchSampler. The field description only mentions the latter two. If auto is a valid selection for AutoDeploy, consider updating the description to include it. Otherwise, the field definition looks good.

     sampler_type: Union[str, SamplerType] = Field(
         default=SamplerType.TorchSampler,
-        description="The type of sampler to use. Options are TRTLLMSampler or TorchSampler. Defaults to TorchSampler.",
+        description="The type of sampler to use. Options are TRTLLMSampler, TorchSampler, or auto. Defaults to TorchSampler.",
     )
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py (1)

44-48: Remove debug print statements.

The print statements on lines 44 and 47 appear to be debug artifacts. Consider removing them or replacing with proper test logging if debug output is needed.

-    print(f"Experiment config: {experiment_config}")
     cfg = ExperimentConfig(**experiment_config)
 
-    print("Running smoke test with TRTLLMSampler...")
     results = main(cfg)
tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py (3)

312-318: Logging inside forward path will be noisy; consider logging once or using debug level.

The ad_logger.info() call is inside _compute_logits(), which is called on every forward pass. This will produce repeated log messages. Consider either:

  1. Logging once during initialization
  2. Using ad_logger.debug() for per-iteration logging
  3. Using ad_logger.info_once() if available

Also, as noted in the TODO on line 313, integrating this cast into the AD graph would be more efficient.

         # Ensure logits are float32 as TRTLLMSampler expects float32
         # TODO(govind): Should this be put into the AD graph so it can be fused with other operations?
         if self.sampler_type == SamplerType.TRTLLMSampler and logits.dtype != torch.float32:
-            ad_logger.info(
-                f"Logits type {logits.dtype} is not supported by TRTLLMSampler. Casting to float32."
-            )
             logits = logits.to(torch.float32)

Alternatively, log once during __init__ after checking if the sampler type is TRTLLMSampler:

if self.sampler_type == SamplerType.TRTLLMSampler:
    ad_logger.info("TRTLLMSampler requires float32 logits; casting will be applied if needed.")

354-363: Add docstring to TRTLLMSamplerModelConfig class.

Per coding guidelines, Python interfaces should have docstrings that can be parsed by Sphinx. This class exposes model configuration for the TRTLLMSampler.

 class TRTLLMSamplerModelConfig:
+    """Configuration wrapper exposing model attributes required by TRTLLMSampler.
+
+    Args:
+        ad_config: The LlmArgs configuration containing model settings.
+    """
+
     def __init__(self, ad_config: LlmArgs):
         factory = ad_config.create_factory()
 
         self.config = SimpleNamespace()
         self.config.vocab_size = factory.vocab_size_padded
         self.config.num_hidden_layers = factory.num_hidden_layers
         self.config.hidden_size = factory.hidden_size
         self.config.num_attention_heads = factory.num_attention_heads

365-372: Potential redundant factory creation; add docstring.

The get_torch_dtype function may call ad_config.create_factory() even when model_dtype is not "auto". While the factory creation is internally cached via prefetch, consider:

  1. Adding a docstring for clarity
  2. Verifying that factory creation overhead is acceptable
 def get_torch_dtype(ad_config: LlmArgs):
-    # if the model dtype is "auto", we infer it from the model config
+    """Get the torch dtype from the AutoDeploy configuration.
+
+    If the model dtype is "auto", it is inferred from the model config via the factory.
+
+    Args:
+        ad_config: The LlmArgs configuration.
+
+    Returns:
+        The torch.dtype for the model.
+    """
     model_dtype = ad_config.dtype
     if model_dtype == "auto":
         model_dtype = ad_config.create_factory().dtype
     if isinstance(model_dtype, str):
         model_dtype = str_dtype_to_torch(model_dtype)
     return model_dtype
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL


📥 Commits

Reviewing files that changed from the base of the PR and between 642dfae and 1257e29.

📒 Files selected for processing (4)
  • tensorrt_llm/_torch/auto_deploy/llm_args.py (2 hunks)
  • tensorrt_llm/_torch/auto_deploy/models/hf.py (1 hunks)
  • tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py (8 hunks)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use from package.subpackage import foo and then foo.SomeClass() instead of from package.subpackage.foo import SomeClass)
Python filenames should use snake_case (e.g., some_file.py)
Python class names should use PascalCase (e.g., class SomeClass)
Python function and method names should use snake_case (e.g., def my_awesome_function():)
Python local variable names should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile = ...)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL = ...)
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., self.x = 5 followed by """<type>: Description of 'x'""" )
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic

Files:

  • tensorrt_llm/_torch/auto_deploy/models/hf.py
  • tensorrt_llm/_torch/auto_deploy/llm_args.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
  • tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
**/*.{cpp,h,cu,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top

Files:

  • tensorrt_llm/_torch/auto_deploy/models/hf.py
  • tensorrt_llm/_torch/auto_deploy/llm_args.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
  • tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
🧠 Learnings (11)
📓 Common learnings
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6029
File: .github/pull_request_template.md:45-53
Timestamp: 2025-08-27T17:50:13.264Z
Learning: For PR templates in TensorRT-LLM, avoid suggesting changes that would increase developer overhead, such as converting plain bullets to mandatory checkboxes. The team prefers guidance-style bullets that don't require explicit interaction to reduce friction in the PR creation process.
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/pyexecutor/sampler.py:368-392
Timestamp: 2025-08-27T15:03:57.149Z
Learning: In TensorRT-LLM's sampler.py, int32 usage for softmax_indices and related tensor indexing is intentional and should not be changed to int64. The torch.IntTensor type hint is correct for the sample() function's softmax_indices parameter.
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
📚 Learning: 2025-08-09T02:04:49.623Z
Learnt from: Fridah-nv
Repo: NVIDIA/TensorRT-LLM PR: 6760
File: tensorrt_llm/_torch/auto_deploy/models/quant_config_reader.py:81-98
Timestamp: 2025-08-09T02:04:49.623Z
Learning: In TensorRT-LLM's auto_deploy module, torch.dtype values in configuration dictionaries must be stored as string representations (e.g., "float16" instead of torch.float16) because OmegaConf.merge does not support torch.dtype types. These string representations are converted to actual torch.dtype objects in downstream code.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/models/hf.py
  • tensorrt_llm/_torch/auto_deploy/llm_args.py
  • tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
📚 Learning: 2025-08-27T15:03:57.149Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/pyexecutor/sampler.py:368-392
Timestamp: 2025-08-27T15:03:57.149Z
Learning: In TensorRT-LLM's sampler.py, int32 usage for softmax_indices and related tensor indexing is intentional and should not be changed to int64. The torch.IntTensor type hint is correct for the sample() function's softmax_indices parameter.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/llm_args.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/llm_args.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/llm_args.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/llm_args.py
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/llm_args.py
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.

Applied to files:

  • tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
🧬 Code graph analysis (2)
tensorrt_llm/_torch/auto_deploy/llm_args.py (2)
tensorrt_llm/llmapi/llm_args.py (2)
  • SamplerType (2475-2479)
  • Field (63-90)
tensorrt_llm/builder.py (1)
  • default (45-50)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py (3)
tests/unittest/_torch/auto_deploy/_utils_test/_model_test_utils.py (1)
  • get_small_model_config (514-553)
examples/auto_deploy/build_and_run_ad.py (1)
  • ExperimentConfig (126-239)
tensorrt_llm/llmapi/llm_args.py (1)
  • SamplerType (2475-2479)
🪛 Ruff (0.14.7)
tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py

411-411: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (8)
tensorrt_llm/_torch/auto_deploy/llm_args.py (1)

11-11: LGTM on import addition.

The import of SamplerType follows the coding guideline to maintain namespace when importing.

tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py (1)

50-53: LGTM on test assertions.

The assertions verify that the TRTLLMSampler path produces output, which is appropriate for a smoke test.

tensorrt_llm/_torch/auto_deploy/models/hf.py (2)

149-162: LGTM on new config accessor properties.

The num_hidden_layers, hidden_size, and num_attention_heads properties follow the established pattern used by vocab_size_padded and provide a consistent interface for accessing model configuration attributes.


144-148: Return type annotation may not match actual behavior from model_config.

The dtype property is typed as Optional[torch.dtype], but getattr(model_config, "dtype", None) retrieves the raw configuration value. Based on learnings from this codebase, torch.dtype values in configuration dictionaries are stored as string representations (e.g., "float16") due to OmegaConf.merge limitations, with conversion happening in downstream code. Either the return type annotation should reflect Optional[Union[str, torch.dtype]], or verify that model_config.dtype is already converted to torch.dtype before retrieval.

tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py (4)

21-26: LGTM on import additions.

The new imports follow the coding guidelines and are necessary for the TRTLLMSampler integration.


147-154: LGTM on ADEngine constructor changes for sampler_type.

The sampler_type parameter is properly passed through build_from_config to __init__ and stored on the instance for later use in _compute_logits.

Also applies to: 163-179


512-518: LGTM on instantiate_sampler usage.

The refactored sampler instantiation correctly passes all required parameters and uses the new instantiate_sampler helper function.


375-413: Consider handling SamplerType.auto explicitly if it exists in the enum.

The SamplerType enum may include an auto option, but instantiate_sampler will raise a ValueError for it. If auto should be a valid option (perhaps defaulting to TorchSampler), consider adding explicit handling. If auto is intentionally unsupported in AutoDeploy, the error message could be more descriptive.
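One way to make the auto case explicit, sketched with hypothetical stand-in classes under the assumption that auto should fall back to the default sampler (the real instantiate_sampler takes additional engine and config parameters):

```python
from enum import Enum


class SamplerType(str, Enum):
    """Stand-in for tensorrt_llm.llmapi.llm_args.SamplerType."""

    auto = "auto"
    TorchSampler = "TorchSampler"
    TRTLLMSampler = "TRTLLMSampler"


class TorchSampler:
    """Placeholder for the real TorchSampler."""


class TRTLLMSampler:
    """Placeholder for the real TRTLLMSampler."""


def instantiate_sampler(sampler_type: SamplerType):
    # Resolve "auto" up front so the dispatch below stays exhaustive.
    if sampler_type == SamplerType.auto:
        sampler_type = SamplerType.TorchSampler
    if sampler_type == SamplerType.TorchSampler:
        return TorchSampler()
    if sampler_type == SamplerType.TRTLLMSampler:
        return TRTLLMSampler()
    raise ValueError(f"Unsupported sampler_type: {sampler_type}")
```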

@govind-ramnarayan govind-ramnarayan changed the title TRTLLMSampler in AD [#9602][feat] AutoDeploy: Support TRTLLM Sampler Dec 3, 2025
@govind-ramnarayan govind-ramnarayan marked this pull request as ready for review December 3, 2025 06:19
@govind-ramnarayan govind-ramnarayan force-pushed the gramnarayan/trtllm-sampler branch from 1257e29 to 3bfe777 Compare December 3, 2025 06:23
@lucaslie lucaslie linked an issue Dec 3, 2025 that may be closed by this pull request
1 task
@govind-ramnarayan govind-ramnarayan force-pushed the gramnarayan/trtllm-sampler branch 2 times, most recently from b17b2b9 to 4627ad1 Compare December 4, 2025 20:44
@govind-ramnarayan (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #27016 [ run ] triggered by Bot. Commit: 6cd9d38

@govind-ramnarayan (Collaborator Author)

/bot kill

@govind-ramnarayan (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #27017 [ run ] triggered by Bot. Commit: 6cd9d38

@tensorrt-cicd (Collaborator)

PR_Github #27016 [ run ] completed with state ABORTED. Commit: 6cd9d38
LLM/main/L0_MergeRequest_PR #20600 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #27018 [ kill ] triggered by Bot. Commit: 6cd9d38

@tensorrt-cicd (Collaborator)

PR_Github #27017 [ run ] completed with state ABORTED. Commit: 6cd9d38

@tensorrt-cicd (Collaborator)

PR_Github #27018 [ kill ] completed with state SUCCESS. Commit: 6cd9d38
Successfully killed previous jobs for commit 6cd9d38

@govind-ramnarayan (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #27023 [ run ] triggered by Bot. Commit: 6cd9d38

@govind-ramnarayan govind-ramnarayan force-pushed the gramnarayan/trtllm-sampler branch from ae173b4 to 7e69d70 Compare December 4, 2025 23:43
Signed-off-by: Govind Ramnarayan <[email protected]>
@govind-ramnarayan govind-ramnarayan force-pushed the gramnarayan/trtllm-sampler branch from 7e69d70 to cdf7652 Compare December 4, 2025 23:47
@govind-ramnarayan (Collaborator Author)

/bot run --disable-fail-fast

@govind-ramnarayan govind-ramnarayan enabled auto-merge (squash) December 4, 2025 23:47
@tensorrt-cicd (Collaborator)

PR_Github #27029 [ run ] triggered by Bot. Commit: cdf7652

@tensorrt-cicd (Collaborator)

PR_Github #27023 [ run ] completed with state ABORTED. Commit: 6cd9d38
LLM/main/L0_MergeRequest_PR #20605 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #27029 [ run ] completed with state SUCCESS. Commit: cdf7652
/LLM/main/L0_MergeRequest_PR pipeline #20611 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@govind-ramnarayan govind-ramnarayan merged commit 74df9b1 into NVIDIA:main Dec 5, 2025
5 checks passed


Development

Successfully merging this pull request may close these issues.

[Feature]: Enable the use of TRTLLMSampler in AutoDeploy

3 participants