
Conversation

@venkywonka
Collaborator

@venkywonka venkywonka commented Nov 12, 2025

Description

Context for reviewers:

Checklist

  • Add --config alias to --extra_llm_api_options. Both can now be used interchangeably across trtllm-serve, trtllm-eval, trtllm-bench.
  • Add env_overrides to LlmArgs and update api_stability/references/llm.yml accordingly. The overrides are applied in BaseLLM.__init__ for the parent process, with a sanity re-sync inside worker_main so that spawned workers don't inherit a stale env snapshot. This automatically makes env_overrides specifiable through the config/extra_llm_api_options yaml.
  • Add testing coverage for --config aliasing
  • Add testing coverage for env_override in LLM API with a real example propagating it into worker process
  • TRTLLM_ENABLE_PDL was cached at import time. Change it to read the environment on demand so the override is respected.
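For context on the last bullet, the difference between import-time caching and on-demand reads can be sketched as follows (a simplified sketch; the real helper is get_env_enable_pdl in tensorrt_llm/_torch/flashinfer_utils.py, and the exact parsing is an assumption here):

```python
import os

# Before this PR (anti-pattern): the value is read once at import time,
# so env_overrides applied later in BaseLLM.__init__ are silently ignored.
ENABLE_PDL = os.environ.get("TRTLLM_ENABLE_PDL", "0") == "1"

# After this PR: read the environment on demand at each call site, so the
# override is respected. (Simplified sketch of get_env_enable_pdl.)
def get_env_enable_pdl() -> bool:
    return os.environ.get("TRTLLM_ENABLE_PDL", "0") == "1"
```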

Extended yaml example:

# config.yaml
# Existing LLM API configuration (unchanged)
cuda_graph_config:
  enable_padding: true
  max_batch_size: 256
enable_attention_dp: true
kv_cache_config:
  dtype: fp8
  enable_block_reuse: false
  free_gpu_memory_fraction: 0.85
print_iter_log: true

# NEW: Environment variable overrides
env_overrides:
  TRTLLM_ENABLE_PDL: 1
  NCCL_GRAPH_REGISTER: 1

# Shell usage: an exported variable takes precedence over the config value
export TRTLLM_ENABLE_PDL=0
trtllm-serve MODEL --config config.yaml  # runs with TRTLLM_ENABLE_PDL=0, overriding the config's 1
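Semantically, applying the overrides amounts to the following simplified sketch (the real apply_env_overrides in tensorrt_llm/llmapi/llm_args.py also logs and type-checks; this version only illustrates the precedence and key-stripping behavior, and its details are assumptions):

```python
import os
from typing import Dict

def apply_env_overrides_sketch(config_dict: Dict) -> None:
    # Pop the key so downstream pydantic validation never sees it.
    env_overrides = config_dict.pop("env_overrides", None)
    if not env_overrides:
        return
    for key, value in env_overrides.items():
        # The shell environment takes precedence over config file values,
        # so `export TRTLLM_ENABLE_PDL=0` beats the config's 1.
        if key not in os.environ:
            os.environ[key] = str(value)
```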

Known limitations / Future Work

  • Some env vars are cached at import time of tensorrt_llm, so overriding them before LLM launch has no effect. They should be read on demand at their point of use, which happens after our overrides are applied, so the overrides can take effect.
  • The logger's env var (TLLM_LOG_LEVEL) is a special case: it binds to a singleton at import time, and after that it can only be changed via logger.set_level().
  • The current env_overrides section does not validate env var names, so it will set arbitrary env vars without warning or raising when a given var is unused or misspelled. Fixing this requires a global registry of all recognized env vars, which is beyond the scope of this initial PR.
  • A global env manager would solve most of the above by tracking all env vars, handling overrides centrally, and ensuring values are read lazily.
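One possible shape for such a manager is a lazily-read env var descriptor. Everything below (class name, API) is hypothetical, sketched only to illustrate the "read lazily" idea:

```python
import os
from typing import Callable

class LazyEnvVar:
    """Hypothetical building block for a global env manager: the value is
    re-read from os.environ on every access, so post-import overrides
    (e.g. from env_overrides) always take effect."""

    def __init__(self, name: str, default: str,
                 parse: Callable[[str], object] = str):
        self.name = name
        self.default = default
        self.parse = parse

    @property
    def value(self):
        # Read from the environment on each access, never cached.
        return self.parse(os.environ.get(self.name, self.default))

# Example registration; a real manager would also track and validate names.
ENABLE_PDL = LazyEnvVar("TRTLLM_ENABLE_PDL", "0", lambda v: v == "1")
```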

Test Coverage

examples/serve/test_serve.py::test_config_file_loading
examples/serve/test_serve.py::test_env_overrides_pdl

Summary by CodeRabbit

  • New Features

    • Added --config CLI flag alias for easier configuration file specification across serve, eval, and benchmark commands.
    • Introduced environment variable override capability through configuration files.
    • Enhanced PDL (Programmatic Dependent Launch) logging with runtime enablement status visibility.
  • Tests

    • Added environment variable override validation tests and expanded config loading test coverage.

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@venkywonka venkywonka marked this pull request as ready for review November 13, 2025 00:16
@venkywonka venkywonka requested review from a team as code owners November 13, 2025 00:16
@coderabbitai
Contributor

coderabbitai bot commented Nov 13, 2025

📝 Walkthrough

Walkthrough

This PR adds a --config CLI flag alias for --extra_llm_api_options across benchmark and serve commands, introduces environment variable override support via a new env_overrides field, replaces static PDL detection with dynamic environment-based toggling, and updates related integration tests accordingly.

Changes

Cohort / File(s) Summary
CLI flag aliasing
tensorrt_llm/bench/benchmark/low_latency.py, tensorrt_llm/bench/benchmark/throughput.py, tensorrt_llm/commands/serve.py, tensorrt_llm/commands/eval.py
Added --config as an alias for --extra_llm_api_options across four CLI commands, mapping both flags to the same destination and updating help text to reflect dual usage.
Low-latency benchmark environment handling
tensorrt_llm/bench/benchmark/low_latency.py
Removed in-place os.environ assignments; introduced default_env_overrides dict merged with user-provided overrides and stored back into kwargs["env_overrides"] prior to benchmark setup. Added minor formatting.
Environment override field infrastructure
tensorrt_llm/llmapi/llm_args.py
Added env_overrides: Optional[Dict[str, str]] field to BaseLlmArgs model with description "Environment variable overrides."
Environment override processing
tensorrt_llm/llmapi/llm.py, tensorrt_llm/executor/worker.py
Added processing of env_overrides from constructor kwargs in BaseLLM.__init__ to apply environment variable overrides with logging; added pre-usage env synchronization in worker_main to update os.environ when overrides are present.
PDL dynamic detection
tensorrt_llm/_torch/flashinfer_utils.py, tensorrt_llm/_torch/custom_ops/flashinfer_custom_ops.py, tensorrt_llm/_torch/pyexecutor/sampling_utils_flashinfer.py
Removed static ENABLE_PDL constant; updated get_env_enable_pdl() to add one-time logging of "PDL enabled" state; replaced all usage of ENABLE_PDL with calls to get_env_enable_pdl().
Integration test refactoring
tests/integration/defs/examples/serve/test_serve.py
Renamed test_extra_llm_api_options to test_config_file_loading with parametrization over both flag types; added new test_env_overrides_pdl test validating environment override application via config file; added necessary imports (queue, subprocess, threading, pytest, yaml).
Test list updates
tests/integration/test_lists/qa/llm_function_core.txt, tests/integration/test_lists/qa/llm_function_nim.txt
Updated test references replacing test_extra_llm_api_options with test_config_file_loading; added test_env_overrides_pdl to core list; moved negative test cases to separate file.
API stability reference
tests/unittest/api_stability/references/llm.yaml
Added env_overrides: Optional[Dict[str, str]] parameter to __init__ method signature reference.

Sequence Diagram(s)

sequenceDiagram
    participant CLI
    participant LLM as BaseLLM.__init__
    participant Worker as worker_main
    participant Env as os.environ
    
    CLI->>LLM: Pass env_overrides in kwargs
    LLM->>LLM: Process env_overrides dict
    LLM->>Env: Update os.environ with overrides
    LLM->>LLM: Log old→new values
    
    Note over Worker: Spawned MPI Process
    Worker->>Worker: Check llm_args.env_overrides
    Worker->>Env: Apply overrides to process env
    Worker->>Worker: Continue initialization
sequenceDiagram
    participant Config as Config File
    participant Serve as trtllm-serve
    participant PDL as get_env_enable_pdl()
    participant Flashinfer as flashinfer ops
    
    Config->>Serve: Load env_overrides (TRTLLM_ENABLE_PDL)
    Serve->>Env: Apply env overrides
    Flashinfer->>PDL: Query PDL state at runtime
    PDL->>PDL: Read env var dynamically
    PDL->>PDL: Log "PDL enabled" once per state change
    PDL->>Flashinfer: Return boolean
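The worker-side step in the first diagram (re-applying overrides so spawned MPI workers don't keep a stale env snapshot) can be sketched as below. The function name and signature are assumptions; the actual logic lives in tensorrt_llm/executor/worker.py:

```python
import os

def sync_worker_env(env_overrides) -> None:
    # Re-apply overrides inside the spawned worker process: the worker may
    # have been spawned before the parent finished applying env_overrides,
    # so the os.environ snapshot it inherited can be stale.
    for key, value in (env_overrides or {}).items():
        if os.environ.get(key) != str(value):
            os.environ[key] = str(value)
```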

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Environment override plumbing: Verify that env_overrides correctly flows from CLI config files through BaseLlmArgs to both BaseLLM.__init__ and worker_main, and that values are properly cast to strings before environment application.
  • PDL dynamic detection side effect: Ensure the one-time logging in get_env_enable_pdl() via the _printed flag is thread-safe and that logging doesn't occur at inappropriate times; verify all call sites now use get_env_enable_pdl() instead of the removed ENABLE_PDL constant.
  • Test coverage: Confirm test_env_overrides_pdl correctly validates both enabled and disabled PDL states, properly captures server logs, and handles threading/process management without race conditions.
  • CLI alias consistency: Verify that all four commands (low_latency, throughput, serve, eval) use consistent help text and properly map the new --config flag to extra_llm_api_options.
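On the second point, one thread-safe shape for the one-time logging flag is double-checked locking. This is a sketch of the pattern, not the PR's actual code:

```python
import os
import threading

_pdl_log_lock = threading.Lock()
_pdl_logged = False

def get_env_enable_pdl() -> bool:
    global _pdl_logged
    # Read the env var dynamically on every call.
    enabled = os.environ.get("TRTLLM_ENABLE_PDL", "0") == "1"
    if not _pdl_logged:
        with _pdl_log_lock:
            if not _pdl_logged:  # re-check under the lock: log exactly once
                print(f"PDL enabled: {enabled}")
                _pdl_logged = True
    return enabled
```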

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 28.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Title check (✅ Passed): The title clearly and concisely describes the main feature: environment variable override support in the LLM API, with proper JIRA ticket format and [feat] type.
  • Description check (✅ Passed): The PR description thoroughly covers the feature scope, context, rationale, implementation details, test coverage, and known limitations, following the template structure.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (9)
tensorrt_llm/llmapi/llm_utils.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

Coding guidelines require the NVIDIA Apache-2.0 copyright header on all source files.

Apply this header:

+# Copyright (c) 2025, NVIDIA CORPORATION.  All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.

As per coding guidelines.

tests/integration/defs/examples/serve/test_serve.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

+# Copyright (c) 2025, NVIDIA CORPORATION.  All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...

As per coding guidelines.

tensorrt_llm/llmapi/llm_args.py (2)

2996-3003: Ensure env_overrides is removed when loading YAML by path.

Bench/serve paths that pass a file path will reload YAML and reintroduce env_overrides into llm_args, causing StrictBaseModel validation errors. Apply env overrides and strip the key centrally.

Apply:

 def update_llm_args_with_extra_options(llm_args: Dict,
                                        extra_llm_api_options: str) -> Dict:
     if extra_llm_api_options is not None:
         with open(extra_llm_api_options, 'r') as f:
             llm_args_dict = yaml.safe_load(f)
+            # Apply env overrides and strip 'env_overrides' from the dict to avoid
+            # unknown-field validation failures downstream.
+            apply_env_overrides(llm_args_dict, extra_llm_api_options)
         llm_args = update_llm_args_with_extra_dict(llm_args, llm_args_dict,
                                                    extra_llm_api_options)
     return llm_args

This makes the feature work with --config/--extra_llm_api_options uniformly.


1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

+# Copyright (c) 2025, NVIDIA CORPORATION.  All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...

As per coding guidelines.

tensorrt_llm/bench/benchmark/low_latency.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

+# Copyright (c) 2025, NVIDIA CORPORATION.  All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...

As per coding guidelines.

tensorrt_llm/bench/benchmark/throughput.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

+# Copyright (c) 2025, NVIDIA CORPORATION.  All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...

As per coding guidelines.

tests/unittest/llmapi/test_llm_args.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

+# Copyright (c) 2025, NVIDIA CORPORATION.  All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...

As per coding guidelines.

tensorrt_llm/commands/serve.py (2)

392-413: Fix: get_llm_args does not accept moe_cluster_parallel_size (runtime TypeError).

Call passes an unknown kwarg; add the parameter to get_llm_args and wire it into llm_args.

Apply:

@@ def get_llm_args(
-        moe_expert_parallel_size: Optional[int] = None,
+        moe_expert_parallel_size: Optional[int] = None,
+        moe_cluster_parallel_size: Optional[int] = None,
@@
-    llm_args = {
+    llm_args = {
         "model": model,
@@
-        "moe_expert_parallel_size": moe_expert_parallel_size,
+        "moe_expert_parallel_size": moe_expert_parallel_size,
+        "moe_cluster_parallel_size": moe_cluster_parallel_size,

This aligns with BaseLlmArgs.moe_cluster_parallel_size.


1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

+# Copyright (c) 2025, NVIDIA CORPORATION.  All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...

As per coding guidelines.

🧹 Nitpick comments (3)
tests/integration/defs/examples/serve/test_serve.py (2)

133-169: Good coverage for --config alias. Consider binding to localhost in tests.

Binding to all interfaces is unnecessary in CI and triggers S104. Prefer localhost for reduced surface and fewer flakes.

Apply:

-        "--host",
-        "0.0.0.0",
+        "--host",
+        "127.0.0.1",

Also adjust health check to hit localhost:

-            url = f"http://0.0.0.0:{http_port}/health"
+            url = f"http://127.0.0.1:{http_port}/health"

Deduplicate common serve-launch logic between the two tests via a small helper to keep them in sync. Based on learnings.


11-19: Prefer localhost in readiness probe.

Using 0.0.0.0 can be unreliable on some stacks; localhost is sufficient for integration tests.

-            url = f"http://0.0.0.0:{http_port}/health"
+            url = f"http://127.0.0.1:{http_port}/health"
tests/unittest/llmapi/test_llm_args.py (1)

783-949: Solid unit coverage for env overrides.

Cases cover precedence, types, and invalid inputs well.

Use pytest’s monkeypatch for env var isolation:

def test_apply_env_overrides_basic(monkeypatch):
    monkeypatch.delenv("TEST_ENV_VAR_BASIC", raising=False)
    ...

This avoids cross-test leakage.
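A fuller, self-contained version of that isolation pattern might look like the following. The apply_env_overrides under test is stood in by a local sketch so the example runs on its own; the real function lives in tensorrt_llm/llmapi/llm_args.py:

```python
import os
import pytest

def apply_env_overrides_sketch(config_dict):
    # Local stand-in for tensorrt_llm.llmapi.llm_args.apply_env_overrides:
    # config values are applied only when the shell has not already set the var.
    for key, value in config_dict.pop("env_overrides", {}).items():
        os.environ.setdefault(key, str(value))

def test_apply_env_overrides_basic(monkeypatch):
    # delenv first so the test starts from a clean state regardless of
    # whatever the surrounding shell or earlier tests have set.
    monkeypatch.delenv("TEST_ENV_VAR_BASIC", raising=False)
    apply_env_overrides_sketch({"env_overrides": {"TEST_ENV_VAR_BASIC": "1"}})
    assert os.environ["TEST_ENV_VAR_BASIC"] == "1"
```

Outside a pytest run, the same fixture behavior is available via pytest.MonkeyPatch().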

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9241cca and 2749e6d.

📒 Files selected for processing (7)
  • tensorrt_llm/bench/benchmark/low_latency.py (3 hunks)
  • tensorrt_llm/bench/benchmark/throughput.py (4 hunks)
  • tensorrt_llm/commands/serve.py (5 hunks)
  • tensorrt_llm/llmapi/llm_args.py (1 hunks)
  • tensorrt_llm/llmapi/llm_utils.py (2 hunks)
  • tests/integration/defs/examples/serve/test_serve.py (1 hunks)
  • tests/unittest/llmapi/test_llm_args.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/llmapi/llm_utils.py
  • tensorrt_llm/bench/benchmark/low_latency.py
  • tensorrt_llm/llmapi/llm_args.py
  • tests/integration/defs/examples/serve/test_serve.py
  • tests/unittest/llmapi/test_llm_args.py
  • tensorrt_llm/bench/benchmark/throughput.py
  • tensorrt_llm/commands/serve.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/llmapi/llm_utils.py
  • tensorrt_llm/bench/benchmark/low_latency.py
  • tensorrt_llm/llmapi/llm_args.py
  • tests/integration/defs/examples/serve/test_serve.py
  • tests/unittest/llmapi/test_llm_args.py
  • tensorrt_llm/bench/benchmark/throughput.py
  • tensorrt_llm/commands/serve.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/llmapi/llm_utils.py
  • tensorrt_llm/bench/benchmark/low_latency.py
  • tensorrt_llm/llmapi/llm_args.py
  • tests/integration/defs/examples/serve/test_serve.py
  • tests/unittest/llmapi/test_llm_args.py
  • tensorrt_llm/bench/benchmark/throughput.py
  • tensorrt_llm/commands/serve.py
🧠 Learnings (4)
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.

Applied to files:

  • tensorrt_llm/bench/benchmark/low_latency.py
  • tensorrt_llm/bench/benchmark/throughput.py
  • tensorrt_llm/commands/serve.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.

Applied to files:

  • tensorrt_llm/bench/benchmark/low_latency.py
  • tensorrt_llm/bench/benchmark/throughput.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tensorrt_llm/bench/benchmark/low_latency.py
  • tests/integration/defs/examples/serve/test_serve.py
  • tensorrt_llm/bench/benchmark/throughput.py
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.

Applied to files:

  • tensorrt_llm/bench/benchmark/throughput.py
🧬 Code graph analysis (7)
tensorrt_llm/llmapi/llm_utils.py (1)
tensorrt_llm/llmapi/llm_args.py (3)
  • _ModelWrapper (1473-1509)
  • _ParallelConfig (322-385)
  • apply_env_overrides (2907-2940)
tensorrt_llm/bench/benchmark/low_latency.py (1)
tensorrt_llm/llmapi/llm_args.py (1)
  • apply_env_overrides (2907-2940)
tensorrt_llm/llmapi/llm_args.py (1)
tensorrt_llm/logger.py (2)
  • warning (132-133)
  • debug (144-145)
tests/integration/defs/examples/serve/test_serve.py (2)
tests/integration/defs/conftest.py (2)
  • serve_test_root (2536-2540)
  • llm_models_root (80-94)
tests/integration/defs/trt_test_alternative.py (2)
  • print_info (300-306)
  • popen (199-218)
tests/unittest/llmapi/test_llm_args.py (1)
tensorrt_llm/llmapi/llm_args.py (1)
  • apply_env_overrides (2907-2940)
tensorrt_llm/bench/benchmark/throughput.py (1)
tensorrt_llm/llmapi/llm_args.py (1)
  • apply_env_overrides (2907-2940)
tensorrt_llm/commands/serve.py (1)
tensorrt_llm/llmapi/llm_args.py (2)
  • apply_env_overrides (2907-2940)
  • update_llm_args_with_extra_dict (2943-2993)
🪛 Ruff (0.14.4)
tensorrt_llm/bench/benchmark/low_latency.py

208-210: Avoid specifying long messages outside the exception class

(TRY003)

tests/integration/defs/examples/serve/test_serve.py

154-154: Possible binding to all interfaces

(S104)

tensorrt_llm/bench/benchmark/throughput.py

311-313: Avoid specifying long messages outside the exception class

(TRY003)

tensorrt_llm/commands/serve.py

385-387: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (3)
tensorrt_llm/llmapi/llm_utils.py (1)

37-39: Exporting apply_env_overrides via llm_utils is correct.

Import + all exposure aligns public API usage across modules. No other concerns here.

tensorrt_llm/bench/benchmark/throughput.py (1)

307-324: Mutual exclusivity validation and early environment overrides correctly implemented.

The code validates mutual exclusivity of --config and --extra_llm_api_options, loads the YAML early, applies environment overrides immediately, and stores the config file path. The suggestion to centralize apply_env_overrides within update_llm_args_with_extra_options follows a pattern already established in the codebase (the function is called consistently across eval.py, configuration.py, and the benchmarking modules), and throughput.py follows that centralization correctly.

tensorrt_llm/commands/serve.py (1)

414-421: Env overrides handling in serve.py lines 414-421 is correct.

Verification confirms the implementation properly strips env_overrides before merging. The apply_env_overrides function removes the 'env_overrides' key from the dictionary, ensuring it cannot reach the merged llm_args. The sequence—load YAML, apply env_overrides, then update llm_args—prevents any bypass of the central strip.

@venkywonka venkywonka self-assigned this Nov 13, 2025
@venkywonka venkywonka marked this pull request as draft November 13, 2025 02:16
@venkywonka venkywonka marked this pull request as ready for review November 13, 2025 07:07
@venkywonka
Collaborator Author

/bot run

@venkywonka venkywonka changed the title [TRTC-1943][feat] Add environment variable support in configuration files [TRTC-1943][feat] Extend env vars support in config files + unify CLI args among serve, eval, bench Nov 13, 2025
@tensorrt-cicd
Collaborator

PR_Github #24419 [ run ] triggered by Bot. Commit: 4061c65

@tensorrt-cicd
Collaborator

PR_Github #24419 [ run ] completed with state SUCCESS. Commit: 4061c65
/LLM/main/L0_MergeRequest_PR pipeline #18426 completed with status: 'FAILURE'

@venkywonka
Collaborator Author

@coderabbitai help

@coderabbitai
Contributor

coderabbitai bot commented Nov 13, 2025

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

CodeRabbit commands

These commands are invoked using PR/Issue comments.

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai evaluate custom pre-merge check --instructions <custom-checks-instructions> --name <custom-checks-title> [--mode <error|warning>] to validate the custom pre-merge checks instructions. Defaults to error when --mode is omitted.
    • @coderabbitai ignore pre-merge checks to override pre-merge checks and get an approval on PR.
    • @coderabbitai run pre-merge checks to run pre-merge checks on the pull request.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore or @coderabbit ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai or @coderabbitai title anywhere in the PR title to generate the title automatically.

Status, support, documentation and community

  • Visit our status page to check the current availability of CodeRabbit.
  • Create a ticket on our support page for assistance with any issues or questions.
  • Visit our documentation site for detailed information on how to use CodeRabbit.
  • Join our Discord community to connect with other users and get help from the community.
  • Follow us on X/Twitter for updates and announcements.

@venkywonka
Collaborator Author

@coderabbitai full review

@coderabbitai
Contributor

coderabbitai bot commented Nov 13, 2025

✅ Actions performed

Full review triggered.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
tensorrt_llm/llmapi/llm_args.py (1)

2952-2972: Critical: Redact secrets in environment variable logs.

Logging full environment variable values can expose sensitive credentials. The past review comment correctly identifies this security risk.

Apply the redaction logic suggested in the past review:

 def apply_env_overrides(config_dict: Dict,
                         config_path: Optional[str] = None) -> None:
     """Apply environment variable overrides from config file.
 
     Shell environment variables take precedence over config file values.
     """
     env_overrides = config_dict.pop('env_overrides', None)
     if not env_overrides:
         return
 
     if not isinstance(env_overrides, dict):
         logger.warning(
             f"env_overrides must be dict, got {type(env_overrides).__name__}")
         return
 
     context = f" from {config_path}" if config_path else ""
+    
+    def _is_sensitive(k: str) -> bool:
+        k_upper = k.upper()
+        SENSITIVE_MARKERS = ("SECRET", "TOKEN", "KEY", "PASS", "PASSWORD", "CREDENTIAL", "AUTH", "API")
+        return any(m in k_upper for m in SENSITIVE_MARKERS)
+    
     for key, value in env_overrides.items():
+        str_value = str(value)
+        redacted = "<redacted>" if _is_sensitive(key) else str_value
         if key not in os.environ:
-            os.environ[key] = str(value)
-            logger.info(f"Set {key}={value}{context}")
+            os.environ[key] = str_value
+            logger.info(f"Set {key}={redacted}{context}")

This keeps behavior while preventing secrets leakage. As per coding guidelines.
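To make the suggested behavior concrete, here is a minimal standalone sketch of the config-driven override flow with redaction. The names mirror the diff above, but this is an illustrative version, not the repository's exact code; the marker list and `print` logging are simplifying assumptions.

```python
import os
from typing import Dict, Optional

# Heuristic marker list (assumed): redact anything whose name suggests a credential.
SENSITIVE_MARKERS = ("SECRET", "TOKEN", "KEY", "PASS", "CREDENTIAL", "AUTH")


def is_sensitive(key: str) -> bool:
    upper = key.upper()
    return any(marker in upper for marker in SENSITIVE_MARKERS)


def apply_env_overrides(config_dict: Dict, config_path: Optional[str] = None) -> None:
    """Set env vars from an ``env_overrides`` section; shell values win; log redacted."""
    env_overrides = config_dict.pop("env_overrides", None)
    if not isinstance(env_overrides, dict):
        return
    context = f" from {config_path}" if config_path else ""
    for key, value in env_overrides.items():
        if key not in os.environ:  # shell environment takes precedence
            os.environ[key] = str(value)
            shown = "<redacted>" if is_sensitive(key) else str(value)
            print(f"Set {key}={shown}{context}")
```

Note that the unredacted value is still written to `os.environ`; only the log line is masked, and `env_overrides` is popped so it never leaks into the merged LLM args.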

🧹 Nitpick comments (3)
tensorrt_llm/commands/click_utils.py (1)

65-70: Address alias case-insensitivity handling.

When ChoiceWithAlias is instantiated with case_sensitive=False, the alias lookup remains case-sensitive because we check membership before Click normalizes the value. Inputs like "TP" will fail even if "tp" is declared as an alias. Please normalize the alias keys (or the lookup value) the same way Click does before delegating to super().convert. citeturn0search0

     def __init__(
-        self, choices: Sequence[str], aliases: Mapping[str, str], case_sensitive: bool = True
+        self, choices: Sequence[str], aliases: Mapping[str, str], case_sensitive: bool = True
     ) -> None:
         super().__init__(choices, case_sensitive)
-        self.aliases = aliases
+        if case_sensitive:
+            self.aliases = dict(aliases)
+        else:
+            self.aliases = {str(key).lower(): value for key, value in aliases.items()}
 
     def to_info_dict(self) -> Dict[str, Any]:
         info_dict = super().to_info_dict()
@@
     def convert(
         self, value: Any, param: Optional["click.Parameter"], ctx: Optional["click.Context"]
     ) -> Any:
-        if value in self.aliases:
-            value = self.aliases[value]
+        lookup = value
+        if isinstance(value, str) and not self.case_sensitive:
+            lookup = value.lower()
+        if lookup in self.aliases:
+            value = self.aliases[lookup]
         return super().convert(value, param, ctx)
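The normalization pattern can be illustrated without Click at all. The hypothetical `AliasResolver` below captures the same invariant as the suggested diff: alias keys and the lookup value must be lowered by the same rule, or mixed-case input silently misses the table.

```python
from typing import Mapping


class AliasResolver:
    """Resolve user input through an alias table, optionally case-insensitively."""

    def __init__(self, aliases: Mapping[str, str], case_sensitive: bool = True) -> None:
        self.case_sensitive = case_sensitive
        if case_sensitive:
            self.aliases = dict(aliases)
        else:
            # Normalize keys with the same rule the lookup will apply to values.
            self.aliases = {k.lower(): v for k, v in aliases.items()}

    def resolve(self, value: str) -> str:
        lookup = value if self.case_sensitive else value.lower()
        # Fall through to the raw value so unknown inputs reach normal validation.
        return self.aliases.get(lookup, value)
```

In the real class, `resolve` would feed its result into `super().convert(...)` so Click still validates it against the declared choices.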
tensorrt_llm/bench/benchmark/throughput.py (1)

267-270: LGTM! Identical config extraction pattern.

The config extraction logic is identical to low_latency.py (lines 166-169), ensuring consistent behavior across benchmark commands. Consider extracting this pattern into a shared utility if more commands need it.

tensorrt_llm/commands/serve.py (1)

448-453: Consider unifying serve_encoder with the new config flow.

While the manual YAML loading followed by apply_env_overrides is correct, serve_encoder uses a different pattern than serve (which delegates to update_llm_args_with_extra_options that handles both internally). This works but creates maintenance asymmetry.

Consider migrating serve_encoder to use the same decorator-based config_options and update_llm_args_with_extra_options approach in a follow-up refactor for consistency across all serve commands.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8fa3c55 and 4061c65.

📒 Files selected for processing (15)
  • tensorrt_llm/bench/benchmark/low_latency.py (3 hunks)
  • tensorrt_llm/bench/benchmark/throughput.py (3 hunks)
  • tensorrt_llm/bench/benchmark/utils/general.py (2 hunks)
  • tensorrt_llm/commands/click_utils.py (1 hunks)
  • tensorrt_llm/commands/eval.py (5 hunks)
  • tensorrt_llm/commands/serve.py (12 hunks)
  • tensorrt_llm/llmapi/disagg_utils.py (3 hunks)
  • tensorrt_llm/llmapi/llm_args.py (2 hunks)
  • tensorrt_llm/llmapi/llm_utils.py (2 hunks)
  • tests/integration/defs/examples/serve/test_serve.py (2 hunks)
  • tests/integration/test_lists/qa/llm_function_core.txt (1 hunks)
  • tests/unittest/bench/test_env_overrides.py (1 hunks)
  • tests/unittest/commands/__init__.py (1 hunks)
  • tests/unittest/commands/test_click_utils.py (1 hunks)
  • tests/unittest/llmapi/test_llm_args.py (1 hunks)
🧰 Additional context used
🧠 Learnings (14)
📚 Learning: 2025-08-21T02:39:12.009Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1475-1480
Timestamp: 2025-08-21T02:39:12.009Z
Learning: The min latency mode functionality in TensorRT-LLM MOE kernels (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu) is deprecated and no longer being maintained/updated, as confirmed by djns99. Bug reports and optimization suggestions for the computeStridesTmaWarpSpecializedLowLatencyKernel and related min latency code paths should be deprioritized.

Applied to files:

  • tensorrt_llm/bench/benchmark/low_latency.py
📚 Learning: 2025-08-14T23:23:27.449Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.

Applied to files:

  • tensorrt_llm/bench/benchmark/low_latency.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tensorrt_llm/bench/benchmark/low_latency.py
  • tensorrt_llm/commands/serve.py
  • tests/unittest/commands/test_click_utils.py
  • tests/integration/defs/examples/serve/test_serve.py
  • tensorrt_llm/bench/benchmark/throughput.py
  • tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.

Applied to files:

  • tensorrt_llm/bench/benchmark/low_latency.py
  • tensorrt_llm/bench/benchmark/throughput.py
  • tensorrt_llm/commands/eval.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.

Applied to files:

  • tensorrt_llm/bench/benchmark/low_latency.py
  • tensorrt_llm/commands/serve.py
  • tensorrt_llm/llmapi/llm_args.py
  • tensorrt_llm/commands/eval.py
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.

Applied to files:

  • tensorrt_llm/bench/benchmark/utils/general.py
📚 Learning: 2025-09-23T15:13:48.819Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/multimem.h:20-30
Timestamp: 2025-09-23T15:13:48.819Z
Learning: TRT-LLM targets modern CUDA toolkits that support FP8 datatypes, so cuda_fp8.h can be included unconditionally without version guards in TRT-LLM code.

Applied to files:

  • tests/unittest/commands/__init__.py
📚 Learning: 2025-10-22T06:53:47.017Z
Learnt from: xinhe-nv
Repo: NVIDIA/TensorRT-LLM PR: 8534
File: scripts/format_test_list.py:1-6
Timestamp: 2025-10-22T06:53:47.017Z
Learning: The file `scripts/format_test_list.py` in the TensorRT-LLM repository does not require the NVIDIA Apache-2.0 copyright header.

Applied to files:

  • tests/unittest/commands/__init__.py
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.

Applied to files:

  • tensorrt_llm/commands/serve.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • tensorrt_llm/commands/serve.py
  • tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-29T14:07:45.863Z
Learnt from: EmmaQiaoCh
Repo: NVIDIA/TensorRT-LLM PR: 7370
File: tests/unittest/trt/model_api/test_model_quantization.py:24-27
Timestamp: 2025-08-29T14:07:45.863Z
Learning: In TensorRT-LLM's CI infrastructure, pytest skip markers (pytest.mark.skip) are properly honored even when test files have __main__ blocks that call test functions directly. The testing system correctly skips tests without requiring modifications to the __main__ block execution pattern.

Applied to files:

  • tests/integration/defs/examples/serve/test_serve.py
📚 Learning: 2025-09-17T02:48:52.732Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7781
File: tests/integration/test_lists/waives.txt:313-313
Timestamp: 2025-09-17T02:48:52.732Z
Learning: In TensorRT-LLM, `tests/integration/test_lists/waives.txt` is specifically for waiving/skipping tests, while other test list files like those in `test-db/` and `qa/` directories are for different test execution contexts (pre-merge, post-merge, QA tests). The same test appearing in both waives.txt and execution list files is intentional - the test is part of test suites but will be skipped due to the waiver.

Applied to files:

  • tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").

Applied to files:

  • tests/unittest/bench/test_env_overrides.py
🪛 Ruff (0.14.4)
tests/unittest/commands/test_click_utils.py

174-174: Unused method argument: alias_flag

(ARG002)


174-174: Unused method argument: default_value

(ARG002)


184-184: Unused method argument: canonical_flag

(ARG002)


184-184: Unused method argument: default_value

(ARG002)


194-194: Unused method argument: canonical_flag

(ARG002)


194-194: Unused method argument: alias_flag

(ARG002)


194-194: Unused method argument: test_value

(ARG002)


222-222: Unused function argument: gpus_per_node

(ARG001)

tests/integration/defs/examples/serve/test_serve.py

108-108: Possible binding to all interfaces

(S104)

tensorrt_llm/commands/click_utils.py

43-43: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (20)
tests/unittest/commands/__init__.py (1)

1-14: Remove license header from test package initialization file to match codebase pattern.

The file is inconsistent with other test package __init__.py files in the codebase, which are all empty. Check tests/unittest/llmapi/__init__.py, tests/unittest/others/__init__.py, and tests/unittest/scaffolding/__init__.py—they contain no content. Test package initialization files should remain empty following the established pattern.

Likely an incorrect or invalid review comment.

tensorrt_llm/llmapi/disagg_utils.py (2)

109-109: LGTM! Env overrides applied at the right time.

The call to apply_env_overrides after YAML loading and before dataclass construction ensures config-driven environment variables are set correctly while respecting shell precedence.


332-332: LGTM! Consistent env override handling.

The pattern matches the disagg config parsing, ensuring consistent behavior across both configuration paths.

tensorrt_llm/llmapi/llm_args.py (2)

2979-2980: LGTM! Centralized stripping prevents env_overrides leakage.

The pop('env_overrides', None) ensures the key is removed before merging into llm_args, addressing the concern raised in past reviews about double-processing when YAML files are reloaded.


3035-3036: LGTM! Env overrides applied before merge.

The call to apply_env_overrides ensures config-driven environment variables are set before merging options, maintaining proper precedence.

tests/unittest/bench/test_env_overrides.py (2)

30-45: LGTM! Robust test fixture with proper cleanup.

The context manager correctly handles both file cleanup and environment variable restoration, preventing test pollution.


48-74: LGTM! Comprehensive test coverage.

The three test cases cover the key scenarios:

  1. End-to-end via update_llm_args_with_extra_options
  2. Direct stripping via update_llm_args_with_extra_dict
  3. Integration with other config options

All tests verify both environment variable setting and removal of the env_overrides key.
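The fixture pattern being praised here can be sketched with `contextlib`; this is a hypothetical reconstruction of the cleanup discipline (snapshot env vars on entry, restore and unlink on exit), not the test file's actual helper.

```python
import os
import tempfile
from contextlib import contextmanager


@contextmanager
def temp_config(content: str, env_keys):
    """Write a temp config file and snapshot/restore the given env vars."""
    snapshot = {k: os.environ.get(k) for k in env_keys}
    path = None
    try:
        with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
            f.write(content)
            path = f.name
        yield path
    finally:
        if path is not None:
            os.unlink(path)  # remove the temp config file
        for k, old in snapshot.items():
            if old is None:
                os.environ.pop(k, None)  # was unset before the test: unset again
            else:
                os.environ[k] = old  # restore the original value
```

Because restoration runs in `finally`, a failing assertion inside the `with` body still cannot leak overridden variables into later tests.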

tests/integration/defs/examples/serve/test_serve.py (1)

97-114: LGTM! Parameterization enables backward compatibility testing.

The test now validates both --config (new) and --extra_llm_api_options (legacy) flags, ensuring backward compatibility while simplifying the model name extraction logic.

tests/unittest/commands/test_click_utils.py (3)

32-37: LGTM! Clean test helper.

The invoke_and_check helper reduces boilerplate and ensures consistent test structure across all test cases.


40-122: LGTM! Thorough config_options testing.

Excellent coverage of the config_options decorator:

  • Basic flag usage (--config and --extra_llm_api_options)
  • Mutual exclusivity validation
  • Default behavior
  • Metadata preservation
  • Interaction with other options

158-202: LGTM! Parameterized tests validate alias consistency.

The parameterized approach efficiently tests both kv_cache_option and beam_width_option decorators, ensuring canonical flags, aliases, and defaults work correctly across both decorators.

Note: Static analysis warnings about unused parameters are false positives—these parameters are required by the test framework's parametrization.

tensorrt_llm/commands/eval.py (2)

28-29: LGTM! Decorator-based CLI surface reduces duplication.

The migration to decorator-based options (@log_level_option, @parallelism_options, @kv_cache_option, @trust_remote_code_option, @config_options) unifies the CLI surface across eval, serve, and bench commands, reducing code duplication while maintaining functionality.

Also applies to: 49-49, 71-74


85-85: LGTM! Config parameter aligns with unified approach.

The config parameter replaces extra_llm_api_options consistently with the decorator-based flow. The call to update_llm_args_with_extra_options handles env_overrides internally.

Also applies to: 109-110

tensorrt_llm/bench/benchmark/low_latency.py (2)

21-23: LGTM! Decorator migration consistent with eval.py.

The decorator-based approach (@config_options, @kv_cache_option, @parallelism_options, @beam_width_option) unifies the CLI surface across benchmark commands, matching the pattern in eval.py and throughput.py.

Also applies to: 40-43


166-169: LGTM! Config extraction preserves backward compatibility.

Mapping config to extra_llm_api_options maintains compatibility with existing configuration flow while enabling the new --config alias. The centralized apply_env_overrides in llm_args.py (line 3036) ensures env overrides are handled correctly.

tensorrt_llm/bench/benchmark/throughput.py (1)

17-19: LGTM! Consistent decorator usage across benchmark commands.

The decorator application matches the pattern in low_latency.py, ensuring consistent CLI behavior across all benchmark commands (latency and throughput).

Also applies to: 40-43

tensorrt_llm/commands/serve.py (4)

21-37: LGTM! Clean import consolidation.

The new imports support the decorator-based CLI option approach and env override functionality. Moving ChoiceWithAlias to centralized utilities and importing the env override functions are appropriate refactoring steps.


231-266: Excellent refactor to decorator-based CLI options.

The replacement of manual option definitions with centralized decorators (log_level_option, parallelism_options, kv_cache_option, trust_remote_code_option, config_options) reduces duplication and enables consistent CLI surfaces across commands. The new config parameter properly integrates the --config alias.


353-354: Clean integration of unified config handling.

The delegation to update_llm_args_with_extra_options centralizes config file loading and env override application. The inline comment clearly documents that env_overrides are processed internally.


400-400: LGTM! Consistent decorator usage across commands.

The application of log_level_option and trust_remote_code_option decorators to mm_embedding_serve, disaggregated, and disaggregated_mpi_worker commands successfully unifies CLI option definitions across all entry points.

Also applies to: 417-417, 482-482, 549-549

@venkywonka venkywonka marked this pull request as draft November 13, 2025 18:49
@venkywonka venkywonka force-pushed the venky/extend-extra_llm_api_options branch from 4061c65 to 17c34a6 on November 13, 2025 19:35
@venkywonka venkywonka changed the title [TRTC-1943][feat] Extend env vars support in config files + unify CLI args among serve, eval, bench [TRTC-1943][feat] Extend env vars support in config files Nov 13, 2025
@venkywonka venkywonka marked this pull request as ready for review November 14, 2025 02:07
@venkywonka venkywonka marked this pull request as draft November 14, 2025 02:13
@venkywonka venkywonka marked this pull request as ready for review November 14, 2025 03:50
@venkywonka
Collaborator Author

/bot run

@venkywonka
Collaborator Author

@coderabbitai full review

@coderabbitai
Contributor

coderabbitai bot commented Nov 14, 2025

✅ Actions performed

Full review triggered.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
tensorrt_llm/llmapi/llm_args.py (1)

2952-2986: Do not log secret values; redact in apply_env_overrides.

Current logs print full environment values (old and new), risking credential leakage. Log keys only (and redact), while still applying the unredacted value.

Apply this diff:

 def apply_env_overrides(config_dict: Dict,
                         config_path: Optional[str] = None) -> None:
@@
-    logger.info(f"Processing environment variable overrides{config_context}")
+    logger.info(f"Processing environment variable overrides{config_context}")
+
+    def _is_sensitive(k: str) -> bool:
+        k_upper = k.upper()
+        SENSITIVE_MARKERS = ("SECRET", "TOKEN", "KEY", "PASS", "PASSWORD", "CREDENTIAL", "AUTH", "API", "AWS", "PRIVATE")
+        return any(m in k_upper for m in SENSITIVE_MARKERS)
@@
-    for key, value in env_overrides.items():
-        str_value = str(value)
-        if key in os.environ:
-            old_value = os.environ[key]
-            os.environ[key] = str_value
-            logger.info(f"Overriding {key}: '{old_value}' -> '{str_value}'")
-        else:
-            os.environ[key] = str_value
-            logger.info(f"Setting {key}='{str_value}'")
+    for key, value in env_overrides.items():
+        str_value = str(value)
+        redacted = "<redacted>" if _is_sensitive(key) else str_value
+        # Decide the action before mutating os.environ, or it always reads "Overriding"
+        action = "Overriding" if key in os.environ else "Setting"
+        os.environ[key] = str_value
+        logger.info(f"{action} {key}={redacted}")
🧹 Nitpick comments (4)
tensorrt_llm/bench/benchmark/low_latency.py (1)

280-281: Prefer params.pop(...) for consistency and cleanliness.

Use pop (as in throughput.py) to avoid leaving extra_llm_api_options in params and ensure a single source of truth.

-    exec_settings["extra_llm_api_options"] = params.get("extra_llm_api_options")
+    exec_settings["extra_llm_api_options"] = params.pop("extra_llm_api_options", None)
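The distinction is easy to see on a plain dict: `get` leaves the key behind for later consumers to stumble over, while `pop` consumes it, leaving a single source of truth in `exec_settings`.

```python
params = {"extra_llm_api_options": "config.yaml", "max_batch_size": 256}

via_get = params.get("extra_llm_api_options")        # key stays in params
assert "extra_llm_api_options" in params

via_pop = params.pop("extra_llm_api_options", None)  # key is consumed
assert "extra_llm_api_options" not in params
assert via_get == via_pop == "config.yaml"
```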
tensorrt_llm/llmapi/llm_args.py (1)

3050-3067: Validate YAML top-level is a mapping to avoid TypeError later.

If YAML is empty or not a dict, apply_env_overrides will raise obscure errors. Add a type check with a clear message.

 def load_yaml_maybe_env_override(file_path: str) -> Dict:
@@
-    with open(file_path, 'r') as f:
-        config = yaml.safe_load(f)
+    with open(file_path, 'r') as f:
+        config = yaml.safe_load(f)
+    if not isinstance(config, dict):
+        raise ValueError(
+            f"YAML file {file_path} must contain a top-level mapping, got {type(config).__name__}"
+        )
     apply_env_overrides(config, file_path)
     return config
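The guard itself is independent of the parser, since `yaml.safe_load` returns `None` for an empty file and a plain object for non-mapping documents. A sketch of the check on an already-parsed value (the function name here is illustrative):

```python
from typing import Any, Dict


def require_top_level_mapping(config: Any, file_path: str) -> Dict:
    """Reject empty or non-mapping YAML payloads with a clear error message."""
    if not isinstance(config, dict):
        raise ValueError(
            f"YAML file {file_path} must contain a top-level mapping, "
            f"got {type(config).__name__}")
    return config
```

With this in place, an empty config file fails fast with "got NoneType" instead of raising an obscure `TypeError` deep inside `apply_env_overrides`.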
tensorrt_llm/commands/serve.py (2)

305-312: Consider adding --config alias to serve_encoder for consistency.

The --config alias is correctly implemented for the main serve command using Click's multiple option names feature. However, the serve_encoder function (line 471) uses --extra_encoder_options without a similar --config alias.

For consistency and to align with the PR objective of unified configuration across serve commands, consider adding a --config alias to the --extra_encoder_options parameter as well.

Apply this diff to add the alias:

 @click.option(
+    "--config",
     "--extra_encoder_options",
+    "extra_encoder_options",
     type=str,
     default=None,
     help=
-    "Path to a YAML file that overwrites the parameters specified by trtllm-serve."
+    "Path to a YAML file that overwrites the parameters specified by trtllm-serve. "
+    "Can be specified as either --config or --extra_encoder_options."
 )

501-502: Refactor to use update_llm_args_with_extra_options for consistency.

The serve_encoder function uses a two-step pattern (load YAML with load_yaml_maybe_env_override, then merge with update_llm_args_with_extra_dict), while the serve function uses a one-step pattern with update_llm_args_with_extra_options.

For consistency and maintainability, consider refactoring serve_encoder to use the same pattern as the serve function.

Apply this diff to align with the serve function pattern:

-    encoder_args_extra_dict = {}
-    if extra_encoder_options is not None:
-        encoder_args_extra_dict = load_yaml_maybe_env_override(
-            extra_encoder_options)
-    encoder_args = update_llm_args_with_extra_dict(llm_args,
-                                                   encoder_args_extra_dict)
+    encoder_args = update_llm_args_with_extra_options(llm_args,
+                                                      extra_encoder_options)
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 12f339f and 33d922e.

📒 Files selected for processing (9)
  • tensorrt_llm/bench/benchmark/low_latency.py (2 hunks)
  • tensorrt_llm/bench/benchmark/throughput.py (2 hunks)
  • tensorrt_llm/bench/benchmark/utils/general.py (1 hunks)
  • tensorrt_llm/commands/eval.py (1 hunks)
  • tensorrt_llm/commands/serve.py (4 hunks)
  • tensorrt_llm/llmapi/llm_args.py (2 hunks)
  • tensorrt_llm/llmapi/llm_utils.py (2 hunks)
  • tests/integration/defs/examples/serve/test_serve.py (1 hunks)
  • tests/unittest/llmapi/test_llm_args.py (2 hunks)
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/unittest/llmapi/test_llm_args.py
  • tensorrt_llm/bench/benchmark/throughput.py
  • tensorrt_llm/commands/eval.py
  • tensorrt_llm/bench/benchmark/low_latency.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • tests/unittest/llmapi/test_llm_args.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.

Applied to files:

  • tensorrt_llm/bench/benchmark/utils/general.py
  • tensorrt_llm/llmapi/llm_args.py
  • tensorrt_llm/commands/serve.py
  • tensorrt_llm/bench/benchmark/low_latency.py
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.

Applied to files:

  • tensorrt_llm/bench/benchmark/utils/general.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.

Applied to files:

  • tensorrt_llm/bench/benchmark/utils/general.py
  • tensorrt_llm/bench/benchmark/low_latency.py
📚 Learning: 2025-08-14T23:23:27.449Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.

Applied to files:

  • tensorrt_llm/bench/benchmark/low_latency.py
🪛 Ruff (0.14.4)
tests/unittest/llmapi/test_llm_args.py

  • F405 os may be undefined, or defined from star imports: lines 805, 806, 812, 813, 832, 845, 850, 867, 868, 869, 886, 887, 888, 915, 927, 955, 956, 957
  • F405 apply_env_overrides may be undefined, or defined from star imports: lines 848, 865, 884, 893, 899, 923
  • F405 update_llm_args_with_extra_options may be undefined, or defined from star imports: line 952
  • ARG002 Unused method argument: enable_tllm_logger_propagation: line 907

tensorrt_llm/llmapi/llm_utils.py

  • F822 Undefined name apply_env_overrides in __all__: line 928
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (7)
tests/integration/defs/examples/serve/test_serve.py (1)

97-97: Docstring addition looks good.

Accurately documents the --config alias usage for this test.

tensorrt_llm/bench/benchmark/low_latency.py (1)

49-56: CLI alias addition is correct.

The --config alias maps to extra_llm_api_options cleanly; help text is clear.
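For readers unfamiliar with the aliasing pattern, here is a minimal standalone sketch using argparse (the project itself wires this up through its CLI framework; the option names mirror the PR but everything else is illustrative): two option strings share one destination, so either flag sets the same value.

```python
import argparse

parser = argparse.ArgumentParser()
# Both option strings write to the same destination, so --config acts
# as an interchangeable alias for --extra_llm_api_options.
parser.add_argument(
    "--extra_llm_api_options",
    "--config",
    dest="extra_llm_api_options",
    default=None,
    help="Path to a YAML file of extra LLM API options (--config is an alias).",
)

args = parser.parse_args(["--config", "config.yaml"])
print(args.extra_llm_api_options)  # config.yaml
```

Parsing `--extra_llm_api_options config.yaml` would populate the same attribute.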

tensorrt_llm/bench/benchmark/throughput.py (1)

64-71: Alias wiring LGTM.

--config and --extra_llm_api_options map to the same destination with clear help text; later code pops it for use.

tensorrt_llm/bench/benchmark/utils/general.py (1)

86-93: Good centralization to env-aware loader.

Switching to load_yaml_maybe_env_override ensures env_overrides are applied and stripped once.
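As a rough sketch of what such an env-aware loader does (the real load_yaml_maybe_env_override lives in the repo and also reads the YAML file; this standalone version works on an already-parsed dict and is illustrative only):

```python
import os

def apply_env_overrides_sketch(config: dict) -> dict:
    """Illustrative only: pop the env_overrides section from a parsed
    config and export each entry before the rest of the config is used."""
    for key, value in (config.pop("env_overrides", None) or {}).items():
        os.environ[key] = str(value)  # YAML values may be ints/bools; environ needs str
    return config

config = {
    "print_iter_log": True,
    "env_overrides": {"TRTLLM_ENABLE_PDL": 1},
}
config = apply_env_overrides_sketch(config)
print(os.environ["TRTLLM_ENABLE_PDL"])  # "1"
print("env_overrides" in config)        # False
```

Popping the section once means downstream consumers see only plain LLM API options, matching the "applied and stripped once" behavior noted above.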

tensorrt_llm/commands/eval.py (1)

95-101: Alias addition LGTM.

--config cleanly aliases --extra_llm_api_options with clear help text; downstream flow remains unchanged.

tensorrt_llm/commands/serve.py (2)

30-31: LGTM! New imports support the env_overrides feature.

The imported functions load_yaml_maybe_env_override and update_llm_args_with_extra_options are correctly used later in the file to implement the new configuration flow with environment variable override support.


399-400: No issues found; None handling is correct.

The function update_llm_args_with_extra_options at tensorrt_llm/llmapi/llm_args.py:3041-3047 explicitly checks whether extra_llm_api_options is None before processing. When None is passed, the function safely returns the original llm_args unchanged. The implementation is covered by unit tests and works correctly.
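The guard being verified can be sketched as follows (simplified and hypothetical; the real function parses a YAML file path, while this version takes a dict and uses a plain merge for illustration):

```python
from typing import Optional

def update_llm_args_sketch(llm_args: dict, extra: Optional[dict]) -> dict:
    """Simplified sketch of the None guard: with no extra options,
    the original args object is returned untouched."""
    if extra is not None:
        llm_args = {**llm_args, **extra}
    return llm_args

base = {"max_batch_size": 256}
print(update_llm_args_sketch(base, None) is base)  # True: passthrough on None
print(update_llm_args_sketch(base, {"kv_cache_config": {"dtype": "fp8"}}))
```

The identity check on the None path is what makes passing None from callers like serve.py safe.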

@tensorrt-cicd (Collaborator)

PR_Github #25655 [ run ] completed with state FAILURE. Commit: ae41c3b
LLM/main/L0_MergeRequest_PR #19443 (Blue Ocean) completed with status: ABORTED

@venkywonka (Collaborator Author)

/bot run --disable-multi-gpu-test

@tensorrt-cicd (Collaborator)

PR_Github #25761 [ run ] triggered by Bot. Commit: ae41c3b

@tensorrt-cicd (Collaborator)

PR_Github #25761 [ run ] completed with state SUCCESS. Commit: ae41c3b
/LLM/main/L0_MergeRequest_PR pipeline #19537 (Partly Tested) completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@venkywonka (Collaborator Author)

/bot run --disable-multi-gpu-test

@tensorrt-cicd (Collaborator)

PR_Github #25774 [ run ] triggered by Bot. Commit: 87af549

@tensorrt-cicd (Collaborator)

PR_Github #25774 [ run ] completed with state FAILURE. Commit: 87af549
/LLM/main/L0_MergeRequest_PR pipeline #19548 (Partly Tested) completed with status: 'FAILURE'

@venkywonka (Collaborator Author)

/bot run --disable-multi-gpu-test

@tensorrt-cicd (Collaborator)

PR_Github #25783 [ run ] triggered by Bot. Commit: f21bb54

@tensorrt-cicd (Collaborator)

PR_Github #25783 [ run ] completed with state SUCCESS. Commit: f21bb54
/LLM/main/L0_MergeRequest_PR pipeline #19557 (Partly Tested) completed with status: 'FAILURE'

@venkywonka (Collaborator Author)

/bot run --disable-multi-gpu-test

@tensorrt-cicd (Collaborator)

PR_Github #25800 [ run ] triggered by Bot. Commit: b1767a5

@yizhang-nv (Member) left a comment

LGTM

@tensorrt-cicd (Collaborator)

PR_Github #25800 [ run ] completed with state SUCCESS. Commit: b1767a5
/LLM/main/L0_MergeRequest_PR pipeline #19570 (Partly Tested) completed with status: 'FAILURE'

@venkywonka (Collaborator Author)

/bot run --disable-multi-gpu-test

@venkywonka (Collaborator Author)

/bot run --disable-multi-gpu-test

@venkywonka (Collaborator Author)

/bot run --disable-multi-gpu-test

@tensorrt-cicd (Collaborator)

PR_Github #25896 [ run ] triggered by Bot. Commit: b6057f0

@tensorrt-cicd (Collaborator)

PR_Github #25896 [ run ] completed with state SUCCESS. Commit: b6057f0
/LLM/main/L0_MergeRequest_PR pipeline #19639 (Partly Tested) completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@venkywonka (Collaborator Author)

/bot skip --comment "pipeline main/job/L0_MergeRequest_PR/19639 passed for commit b6057f0"

@venkywonka (Collaborator Author)

/bot reuse-pipeline

@tensorrt-cicd (Collaborator)

PR_Github #25926 [ reuse-pipeline ] triggered by Bot. Commit: b6ff7b6

@tensorrt-cicd (Collaborator)

PR_Github #25926 [ reuse-pipeline ] completed with state SUCCESS. Commit: b6ff7b6
Reusing PR_Github #25896 (Partly Tested) for commit b6ff7b6
