[TRTC-1943][feat] Env vars override support in LLM API #9104
Conversation
📝 Walkthrough

This PR adds a

Changes
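For context, the feature under review is driven by an `env_overrides` mapping in the extra LLM API options YAML. A minimal hypothetical config might look like this (`TRTLLM_ENABLE_PDL` appears in the diagrams below; the exact file layout is an assumption, not taken from the PR diff):

```yaml
# Hypothetical extra-options file passed via --config / --extra_llm_api_options.
# `env_overrides` entries are applied to os.environ before engine setup;
# variables already set in the shell are not overwritten.
env_overrides:
  TRTLLM_ENABLE_PDL: 1
```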
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI
    participant LLM as BaseLLM.__init__
    participant Worker as worker_main
    participant Env as os.environ
    CLI->>LLM: Pass env_overrides in kwargs
    LLM->>LLM: Process env_overrides dict
    LLM->>Env: Update os.environ with overrides
    LLM->>LLM: Log old→new values
    Note over Worker: Spawned MPI Process
    Worker->>Worker: Check llm_args.env_overrides
    Worker->>Env: Apply overrides to process env
    Worker->>Worker: Continue initialization
```
```mermaid
sequenceDiagram
    participant Config as Config File
    participant Serve as trtllm-serve
    participant Env as os.environ
    participant PDL as get_env_enable_pdl()
    participant Flashinfer as flashinfer ops
    Config->>Serve: Load env_overrides (TRTLLM_ENABLE_PDL)
    Serve->>Env: Apply env overrides
    Flashinfer->>PDL: Query PDL state at runtime
    PDL->>PDL: Read env var dynamically
    PDL->>PDL: Log "PDL enabled" once per state change
    PDL->>Flashinfer: Return boolean
```
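The second diagram describes `get_env_enable_pdl()` reading the env var dynamically and logging once per state change. A minimal standalone sketch of that behavior follows; the `"1"` parsing, the default value, and the module-level state are assumptions for illustration, not the actual TensorRT-LLM implementation:

```python
import os

_last_logged_state = None  # tracks the last state we logged, so we log once per change


def get_env_enable_pdl() -> bool:
    """Read TRTLLM_ENABLE_PDL from the environment on every call (dynamic query)."""
    global _last_logged_state
    enabled = os.environ.get("TRTLLM_ENABLE_PDL", "0") == "1"
    if enabled != _last_logged_state:
        # Only log when the state actually flips.
        print(f"PDL {'enabled' if enabled else 'disabled'}")
        _last_logged_state = enabled
    return enabled


os.environ["TRTLLM_ENABLE_PDL"] = "1"
get_env_enable_pdl()  # logs "PDL enabled"
get_env_enable_pdl()  # state unchanged, no repeated log
```

Because the variable is read at call time rather than cached at import, overrides applied by `trtllm-serve` before the first flashinfer call take effect without any extra plumbing.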
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
tensorrt_llm/llmapi/llm_utils.py (1)
1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

Coding guidelines require the NVIDIA Apache-2.0 copyright header on all source files.

Apply this header:

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
```

As per coding guidelines.
tests/integration/defs/examples/serve/test_serve.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...
```

As per coding guidelines.
tensorrt_llm/llmapi/llm_args.py (2)

2996-3003: Ensure env_overrides is removed when loading YAML by path.

Bench/serve paths that pass a file path will reload YAML and reintroduce env_overrides into llm_args, causing StrictBaseModel validation errors. Apply env overrides and strip the key centrally.

Apply:

```diff
 def update_llm_args_with_extra_options(llm_args: Dict,
                                        extra_llm_api_options: str) -> Dict:
     if extra_llm_api_options is not None:
         with open(extra_llm_api_options, 'r') as f:
             llm_args_dict = yaml.safe_load(f)
+        # Apply env overrides and strip 'env_overrides' from the dict to avoid
+        # unknown-field validation failures downstream.
+        apply_env_overrides(llm_args_dict, extra_llm_api_options)
         llm_args = update_llm_args_with_extra_dict(llm_args, llm_args_dict,
                                                    extra_llm_api_options)
     return llm_args
```

This makes the feature work with --config/--extra_llm_api_options uniformly.
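The precedence and stripping semantics the suggestion above relies on can be sketched standalone. This mirrors the behavior described in the review (pop the key, set only unset variables, shell wins), with logging omitted; it is not the exact library code:

```python
import os
from typing import Dict, Optional


def apply_env_overrides(config_dict: Dict, config_path: Optional[str] = None) -> None:
    """Pop 'env_overrides' from the config dict and apply it to os.environ.

    Shell environment variables take precedence: keys already present in
    os.environ are left untouched. The key is always removed from the dict,
    so downstream strict validation never sees it.
    """
    env_overrides = config_dict.pop('env_overrides', None)
    if not env_overrides:
        return
    if not isinstance(env_overrides, dict):
        return  # the real implementation logs a warning here
    for key, value in env_overrides.items():
        if key not in os.environ:
            os.environ[key] = str(value)


# Shell value wins over the config value; new keys are applied.
os.environ['DEMO_SHELL_VAR'] = 'from_shell'
os.environ.pop('DEMO_NEW_VAR', None)
cfg = {'env_overrides': {'DEMO_SHELL_VAR': 'from_config', 'DEMO_NEW_VAR': '1'}}
apply_env_overrides(cfg)
print(os.environ['DEMO_SHELL_VAR'])  # from_shell
print(os.environ['DEMO_NEW_VAR'])    # 1
print('env_overrides' in cfg)        # False
```

Because the function mutates the dict in place, calling it once inside `update_llm_args_with_extra_options` is enough to keep every YAML-reloading path free of the unknown field.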
1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...
```

As per coding guidelines.
tensorrt_llm/bench/benchmark/low_latency.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...
```

As per coding guidelines.
tensorrt_llm/bench/benchmark/throughput.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...
```

As per coding guidelines.
tests/unittest/llmapi/test_llm_args.py (1)

1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...
```

As per coding guidelines.
tensorrt_llm/commands/serve.py (2)

392-413: Fix: get_llm_args does not accept moe_cluster_parallel_size (runtime TypeError).

The call passes an unknown kwarg; add the parameter to get_llm_args and wire it into llm_args.

Apply:

```diff
@@ def get_llm_args(
-    moe_expert_parallel_size: Optional[int] = None,
+    moe_expert_parallel_size: Optional[int] = None,
+    moe_cluster_parallel_size: Optional[int] = None,
@@
     llm_args = {
         "model": model,
@@
-        "moe_expert_parallel_size": moe_expert_parallel_size,
+        "moe_expert_parallel_size": moe_expert_parallel_size,
+        "moe_cluster_parallel_size": moe_cluster_parallel_size,
```

This aligns with BaseLlmArgs.moe_cluster_parallel_size.
1-1: Add NVIDIA Apache-2.0 header (2025) at file top.

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# ...
```

As per coding guidelines.
🧹 Nitpick comments (3)
tests/integration/defs/examples/serve/test_serve.py (2)

133-169: Good coverage for the --config alias. Consider binding to localhost in tests.

Binding to all interfaces is unnecessary in CI and triggers S104. Prefer localhost for a reduced surface and fewer flakes.

Apply:

```diff
-        "--host",
-        "0.0.0.0",
+        "--host",
+        "127.0.0.1",
```

Also adjust the health check to hit localhost:

```diff
-    url = f"http://0.0.0.0:{http_port}/health"
+    url = f"http://127.0.0.1:{http_port}/health"
```

Deduplicate common serve-launch logic between the two tests via a small helper to keep them in sync. Based on learnings.
11-19: Prefer localhost in the readiness probe.

Using 0.0.0.0 can be unreliable on some stacks; localhost is sufficient for integration tests.

```diff
-    url = f"http://0.0.0.0:{http_port}/health"
+    url = f"http://127.0.0.1:{http_port}/health"
```

tests/unittest/llmapi/test_llm_args.py (1)
783-949: Solid unit coverage for env overrides.

Cases cover precedence, types, and invalid inputs well.

Use pytest's monkeypatch for env var isolation:

```python
def test_apply_env_overrides_basic(monkeypatch):
    monkeypatch.delenv("TEST_ENV_VAR_BASIC", raising=False)
    ...
```

This avoids cross-test leakage.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- tensorrt_llm/bench/benchmark/low_latency.py (3 hunks)
- tensorrt_llm/bench/benchmark/throughput.py (4 hunks)
- tensorrt_llm/commands/serve.py (5 hunks)
- tensorrt_llm/llmapi/llm_args.py (1 hunks)
- tensorrt_llm/llmapi/llm_utils.py (2 hunks)
- tests/integration/defs/examples/serve/test_serve.py (1 hunks)
- tests/unittest/llmapi/test_llm_args.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Use only spaces, no tabs; indent with 4 spaces.
Files:
- tensorrt_llm/llmapi/llm_utils.py
- tensorrt_llm/bench/benchmark/low_latency.py
- tensorrt_llm/llmapi/llm_args.py
- tests/integration/defs/examples/serve/test_serve.py
- tests/unittest/llmapi/test_llm_args.py
- tensorrt_llm/bench/benchmark/throughput.py
- tensorrt_llm/commands/serve.py
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.
Files:
- tensorrt_llm/llmapi/llm_utils.py
- tensorrt_llm/bench/benchmark/low_latency.py
- tensorrt_llm/llmapi/llm_args.py
- tests/integration/defs/examples/serve/test_serve.py
- tests/unittest/llmapi/test_llm_args.py
- tensorrt_llm/bench/benchmark/throughput.py
- tensorrt_llm/commands/serve.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).
Files:
- tensorrt_llm/llmapi/llm_utils.py
- tensorrt_llm/bench/benchmark/low_latency.py
- tensorrt_llm/llmapi/llm_args.py
- tests/integration/defs/examples/serve/test_serve.py
- tests/unittest/llmapi/test_llm_args.py
- tensorrt_llm/bench/benchmark/throughput.py
- tensorrt_llm/commands/serve.py
🧠 Learnings (4)
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.
Applied to files:
- tensorrt_llm/bench/benchmark/low_latency.py
- tensorrt_llm/bench/benchmark/throughput.py
- tensorrt_llm/commands/serve.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.
Applied to files:
- tensorrt_llm/bench/benchmark/low_latency.py
- tensorrt_llm/bench/benchmark/throughput.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
- tensorrt_llm/bench/benchmark/low_latency.py
- tests/integration/defs/examples/serve/test_serve.py
- tensorrt_llm/bench/benchmark/throughput.py
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.
Applied to files:
tensorrt_llm/bench/benchmark/throughput.py
🧬 Code graph analysis (7)
tensorrt_llm/llmapi/llm_utils.py (1)
- tensorrt_llm/llmapi/llm_args.py (3): _ModelWrapper (1473-1509), _ParallelConfig (322-385), apply_env_overrides (2907-2940)

tensorrt_llm/bench/benchmark/low_latency.py (1)
- tensorrt_llm/llmapi/llm_args.py (1): apply_env_overrides (2907-2940)

tensorrt_llm/llmapi/llm_args.py (1)
- tensorrt_llm/logger.py (2): warning (132-133), debug (144-145)

tests/integration/defs/examples/serve/test_serve.py (2)
- tests/integration/defs/conftest.py (2): serve_test_root (2536-2540), llm_models_root (80-94)
- tests/integration/defs/trt_test_alternative.py (2): print_info (300-306), popen (199-218)

tests/unittest/llmapi/test_llm_args.py (1)
- tensorrt_llm/llmapi/llm_args.py (1): apply_env_overrides (2907-2940)

tensorrt_llm/bench/benchmark/throughput.py (1)
- tensorrt_llm/llmapi/llm_args.py (1): apply_env_overrides (2907-2940)

tensorrt_llm/commands/serve.py (1)
- tensorrt_llm/llmapi/llm_args.py (2): apply_env_overrides (2907-2940), update_llm_args_with_extra_dict (2943-2993)
🪛 Ruff (0.14.4)
tensorrt_llm/bench/benchmark/low_latency.py
208-210: Avoid specifying long messages outside the exception class
(TRY003)
tests/integration/defs/examples/serve/test_serve.py
154-154: Possible binding to all interfaces
(S104)
tensorrt_llm/bench/benchmark/throughput.py
311-313: Avoid specifying long messages outside the exception class
(TRY003)
tensorrt_llm/commands/serve.py
385-387: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (3)
tensorrt_llm/llmapi/llm_utils.py (1)

37-39: Exporting apply_env_overrides via llm_utils is correct.

The import plus __all__ exposure aligns public API usage across modules. No other concerns here.
tensorrt_llm/bench/benchmark/throughput.py (1)

307-324: The web searches found usage documentation but not the internal implementation, so the code needs direct inspection to verify this review comment.

Based on my research, I've gathered enough context about the TensorRT-LLM benchmarking infrastructure. To verify the review comment's claims, I'll examine:
- throughput.py lines 307-324, to see the current implementation
- the update_llm_args_with_extra_options implementation, to check whether it includes apply_env_overrides

The review comment mentions applying apply_env_overrides early to prevent unknown-field issues when YAML is reloaded downstream. Since web searches show this is a real codebase file in tensorrt_llm, I need to inspect the actual files to verify.

Mutual exclusivity validation and early environment overrides are correctly implemented. The review comment shows proper practices for --config and --extra_llm_api_options handling: validating mutual exclusivity, loading YAML early, applying environment overrides immediately, and storing the config file path. The suggestion to centralize apply_env_overrides within update_llm_args_with_extra_options is a sound architectural pattern already established in the codebase (the function is called consistently across eval.py, configuration.py, and the benchmarking modules). The throughput.py code follows this centralization pattern correctly.

tensorrt_llm/commands/serve.py (1)
414-421: Env overrides handling in serve.py lines 414-421 is correct.

Verification confirms the implementation properly strips env_overrides before merging. The apply_env_overrides function removes the 'env_overrides' key from the dictionary, ensuring it cannot reach the merged llm_args. The sequence (load YAML, apply env_overrides, then update llm_args) prevents any bypass of the central strip.
/bot run

PR_Github #24419 [ run ] triggered by Bot. Commit:

PR_Github #24419 [ run ] completed with state
@coderabbitai help

Chat

There are 3 ways to chat with CodeRabbit:
- CodeRabbit commands
- Other keywords and placeholders
- Status, support, documentation and community

@coderabbitai full review

✅ Actions performed

Full review triggered.
Actionable comments posted: 0
♻️ Duplicate comments (1)
tensorrt_llm/llmapi/llm_args.py (1)
2952-2972: Critical: Redact secrets in environment variable logs.

Logging full environment variable values can expose sensitive credentials. The past review comment correctly identifies this security risk.

Apply the redaction logic suggested in the past review:

```diff
 def apply_env_overrides(config_dict: Dict,
                         config_path: Optional[str] = None) -> None:
     """Apply environment variable overrides from config file.

     Shell environment variables take precedence over config file values.
     """
     env_overrides = config_dict.pop('env_overrides', None)
     if not env_overrides:
         return
     if not isinstance(env_overrides, dict):
         logger.warning(
             f"env_overrides must be dict, got {type(env_overrides).__name__}")
         return
     context = f" from {config_path}" if config_path else ""
+
+    def _is_sensitive(k: str) -> bool:
+        k_upper = k.upper()
+        SENSITIVE_MARKERS = ("SECRET", "TOKEN", "KEY", "PASS", "PASSWORD",
+                             "CREDENTIAL", "AUTH", "API")
+        return any(m in k_upper for m in SENSITIVE_MARKERS)
+
     for key, value in env_overrides.items():
+        str_value = str(value)
+        redacted = "<redacted>" if _is_sensitive(key) else str_value
         if key not in os.environ:
-            os.environ[key] = str(value)
-            logger.info(f"Set {key}={value}{context}")
+            os.environ[key] = str_value
+            logger.info(f"Set {key}={redacted}{context}")
```

This keeps behavior while preventing secrets leakage. As per coding guidelines.
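The redaction heuristic from the suggested diff can be exercised in isolation. A minimal sketch follows; the marker list is taken from the suggestion above, while `redact_for_log` is a hypothetical wrapper added for demonstration:

```python
def _is_sensitive(key: str) -> bool:
    """Flag variable names containing secret-like markers (heuristic from the fix)."""
    markers = ("SECRET", "TOKEN", "KEY", "PASS", "PASSWORD",
               "CREDENTIAL", "AUTH", "API")
    upper = key.upper()
    return any(m in upper for m in markers)


def redact_for_log(key: str, value: str) -> str:
    # Sensitive values are replaced before the log line is formatted,
    # so the real value is still written to os.environ but never logged.
    return "<redacted>" if _is_sensitive(key) else str(value)


print(redact_for_log("HF_TOKEN", "hf_abc123"))   # <redacted>
print(redact_for_log("TRTLLM_ENABLE_PDL", "1"))  # 1
```

Substring matching is deliberately aggressive: a few false positives (e.g., anything containing "KEY") are an acceptable price for never leaking a credential into logs.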
🧹 Nitpick comments (3)
tensorrt_llm/commands/click_utils.py (1)
65-70: Address alias case-insensitivity handling.

When ChoiceWithAlias is instantiated with case_sensitive=False, the alias lookup remains case-sensitive because we check membership before Click normalizes the value. Inputs like "TP" will fail even if "tp" is declared as an alias. Please normalize the alias keys (or the lookup value) the same way Click does before delegating to super().convert.

```diff
 def __init__(
     self, choices: Sequence[str], aliases: Mapping[str, str], case_sensitive: bool = True
 ) -> None:
     super().__init__(choices, case_sensitive)
-    self.aliases = aliases
+    if case_sensitive:
+        self.aliases = dict(aliases)
+    else:
+        self.aliases = {str(key).lower(): value for key, value in aliases.items()}

 def to_info_dict(self) -> Dict[str, Any]:
     info_dict = super().to_info_dict()
@@
 def convert(
     self, value: Any, param: Optional["click.Parameter"], ctx: Optional["click.Context"]
 ) -> Any:
-    if value in self.aliases:
-        value = self.aliases[value]
+    lookup = value
+    if isinstance(value, str) and not self.case_sensitive:
+        lookup = value.lower()
+    if lookup in self.aliases:
+        value = self.aliases[lookup]
     return super().convert(value, param, ctx)
```

tensorrt_llm/bench/benchmark/throughput.py (1)
267-270: LGTM! Identical config extraction pattern.

The config extraction logic is identical to low_latency.py (lines 166-169), ensuring consistent behavior across benchmark commands. Consider extracting this pattern into a shared utility if more commands need it.

tensorrt_llm/commands/serve.py (1)
448-453: Consider unifying serve_encoder with the new config flow.

While the manual YAML loading followed by apply_env_overrides is correct, serve_encoder uses a different pattern than serve (which delegates to update_llm_args_with_extra_options, handling both internally). This works but creates maintenance asymmetry.

Consider migrating serve_encoder to use the same decorator-based config_options and update_llm_args_with_extra_options approach in a follow-up refactor for consistency across all serve commands.
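The case-insensitivity fix proposed for ChoiceWithAlias in the click_utils nitpick can be sketched without Click. `AliasResolver` and the `"tensor_parallel_size"` mapping below are hypothetical names for illustration, but the pattern is the same: normalize alias keys once at construction, then normalize each lookup the same way:

```python
from typing import Mapping


class AliasResolver:
    """Standalone sketch of case-insensitive alias resolution."""

    def __init__(self, aliases: Mapping[str, str], case_sensitive: bool = True) -> None:
        self.case_sensitive = case_sensitive
        if case_sensitive:
            self.aliases = dict(aliases)
        else:
            # Lower the keys once, exactly how lookups will be lowered.
            self.aliases = {str(k).lower(): v for k, v in aliases.items()}

    def resolve(self, value: str) -> str:
        lookup = value if self.case_sensitive else value.lower()
        return self.aliases.get(lookup, value)


resolver = AliasResolver({"tp": "tensor_parallel_size"}, case_sensitive=False)
print(resolver.resolve("TP"))  # tensor_parallel_size
print(resolver.resolve("tp"))  # tensor_parallel_size
```

Normalizing at both ends is what the suggested diff does; checking membership against raw user input, as the original code did, silently reintroduces case sensitivity.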
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- tensorrt_llm/bench/benchmark/low_latency.py (3 hunks)
- tensorrt_llm/bench/benchmark/throughput.py (3 hunks)
- tensorrt_llm/bench/benchmark/utils/general.py (2 hunks)
- tensorrt_llm/commands/click_utils.py (1 hunks)
- tensorrt_llm/commands/eval.py (5 hunks)
- tensorrt_llm/commands/serve.py (12 hunks)
- tensorrt_llm/llmapi/disagg_utils.py (3 hunks)
- tensorrt_llm/llmapi/llm_args.py (2 hunks)
- tensorrt_llm/llmapi/llm_utils.py (2 hunks)
- tests/integration/defs/examples/serve/test_serve.py (2 hunks)
- tests/integration/test_lists/qa/llm_function_core.txt (1 hunks)
- tests/unittest/bench/test_env_overrides.py (1 hunks)
- tests/unittest/commands/__init__.py (1 hunks)
- tests/unittest/commands/test_click_utils.py (1 hunks)
- tests/unittest/llmapi/test_llm_args.py (1 hunks)
🧰 Additional context used
🧠 Learnings (14)
📚 Learning: 2025-08-21T02:39:12.009Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1475-1480
Timestamp: 2025-08-21T02:39:12.009Z
Learning: The min latency mode functionality in TensorRT-LLM MOE kernels (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu) is deprecated and no longer being maintained/updated, as confirmed by djns99. Bug reports and optimization suggestions for the computeStridesTmaWarpSpecializedLowLatencyKernel and related min latency code paths should be deprioritized.
Applied to files:
tensorrt_llm/bench/benchmark/low_latency.py
📚 Learning: 2025-08-14T23:23:27.449Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.
Applied to files:
tensorrt_llm/bench/benchmark/low_latency.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
- tensorrt_llm/bench/benchmark/low_latency.py
- tensorrt_llm/commands/serve.py
- tests/unittest/commands/test_click_utils.py
- tests/integration/defs/examples/serve/test_serve.py
- tensorrt_llm/bench/benchmark/throughput.py
- tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.
Applied to files:
- tensorrt_llm/bench/benchmark/low_latency.py
- tensorrt_llm/bench/benchmark/throughput.py
- tensorrt_llm/commands/eval.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.
Applied to files:
- tensorrt_llm/bench/benchmark/low_latency.py
- tensorrt_llm/commands/serve.py
- tensorrt_llm/llmapi/llm_args.py
- tensorrt_llm/commands/eval.py
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.
Applied to files:
tensorrt_llm/bench/benchmark/utils/general.py
📚 Learning: 2025-09-23T15:13:48.819Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/multimem.h:20-30
Timestamp: 2025-09-23T15:13:48.819Z
Learning: TRT-LLM targets modern CUDA toolkits that support FP8 datatypes, so cuda_fp8.h can be included unconditionally without version guards in TRT-LLM code.
Applied to files:
tests/unittest/commands/__init__.py
📚 Learning: 2025-10-22T06:53:47.017Z
Learnt from: xinhe-nv
Repo: NVIDIA/TensorRT-LLM PR: 8534
File: scripts/format_test_list.py:1-6
Timestamp: 2025-10-22T06:53:47.017Z
Learning: The file `scripts/format_test_list.py` in the TensorRT-LLM repository does not require the NVIDIA Apache-2.0 copyright header.
Applied to files:
tests/unittest/commands/__init__.py
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.
Applied to files:
tensorrt_llm/commands/serve.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Applied to files:
- tensorrt_llm/commands/serve.py
- tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-29T14:07:45.863Z
Learnt from: EmmaQiaoCh
Repo: NVIDIA/TensorRT-LLM PR: 7370
File: tests/unittest/trt/model_api/test_model_quantization.py:24-27
Timestamp: 2025-08-29T14:07:45.863Z
Learning: In TensorRT-LLM's CI infrastructure, pytest skip markers (pytest.mark.skip) are properly honored even when test files have __main__ blocks that call test functions directly. The testing system correctly skips tests without requiring modifications to the __main__ block execution pattern.
Applied to files:
tests/integration/defs/examples/serve/test_serve.py
📚 Learning: 2025-09-17T02:48:52.732Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7781
File: tests/integration/test_lists/waives.txt:313-313
Timestamp: 2025-09-17T02:48:52.732Z
Learning: In TensorRT-LLM, `tests/integration/test_lists/waives.txt` is specifically for waiving/skipping tests, while other test list files like those in `test-db/` and `qa/` directories are for different test execution contexts (pre-merge, post-merge, QA tests). The same test appearing in both waives.txt and execution list files is intentional - the test is part of test suites but will be skipped due to the waiver.
Applied to files:
tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
Applied to files:
tests/integration/test_lists/qa/llm_function_core.txt
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").
Applied to files:
tests/unittest/bench/test_env_overrides.py
🪛 Ruff (0.14.4)
tests/unittest/commands/test_click_utils.py
174-174: Unused method argument: alias_flag
(ARG002)
174-174: Unused method argument: default_value
(ARG002)
184-184: Unused method argument: canonical_flag
(ARG002)
184-184: Unused method argument: default_value
(ARG002)
194-194: Unused method argument: canonical_flag
(ARG002)
194-194: Unused method argument: alias_flag
(ARG002)
194-194: Unused method argument: test_value
(ARG002)
222-222: Unused function argument: gpus_per_node
(ARG001)
tests/integration/defs/examples/serve/test_serve.py
108-108: Possible binding to all interfaces
(S104)
tensorrt_llm/commands/click_utils.py
43-43: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (20)
tests/unittest/commands/__init__.py (1)
1-14: Remove license header from test package initialization file to match codebase pattern.

The file is inconsistent with other test package __init__.py files in the codebase, which are all empty. Check tests/unittest/llmapi/__init__.py, tests/unittest/others/__init__.py, and tests/unittest/scaffolding/__init__.py: they contain no content. Test package initialization files should remain empty following the established pattern.

Likely an incorrect or invalid review comment.
tensorrt_llm/llmapi/disagg_utils.py (2)
109-109: LGTM! Env overrides applied at the right time.

The call to apply_env_overrides after YAML loading and before dataclass construction ensures config-driven environment variables are set correctly while respecting shell precedence.

332-332: LGTM! Consistent env override handling.

The pattern matches the disagg config parsing, ensuring consistent behavior across both configuration paths.
tensorrt_llm/llmapi/llm_args.py (2)
2979-2980: LGTM! Centralized stripping prevents env_overrides leakage.

The pop('env_overrides', None) ensures the key is removed before merging into llm_args, addressing the concern raised in past reviews about double-processing when YAML files are reloaded.

3035-3036: LGTM! Env overrides applied before merge.

The call to apply_env_overrides ensures config-driven environment variables are set before merging options, maintaining proper precedence.

tests/unittest/bench/test_env_overrides.py (2)
30-45: LGTM! Robust test fixture with proper cleanup.

The context manager correctly handles both file cleanup and environment variable restoration, preventing test pollution.
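The save/restore pattern praised above can be sketched as a standalone context manager. `temp_config_with_env` and the `DEMO_*` variable are hypothetical, not the actual test code; the point is that both the temp file and every touched environment key are restored in `finally`:

```python
import os
import tempfile
from contextlib import contextmanager


@contextmanager
def temp_config_with_env(env_overrides):
    """Write a temporary env_overrides config file; undo everything on exit."""
    # Snapshot current values (None means "was unset") before the test runs.
    saved = {k: os.environ.get(k) for k in env_overrides}
    fd, path = tempfile.mkstemp(suffix=".yaml")
    with os.fdopen(fd, "w") as f:
        f.write("env_overrides:\n")
        for k, v in env_overrides.items():
            f.write(f"  {k}: {v}\n")
    try:
        yield path
    finally:
        os.remove(path)
        for k, old in saved.items():
            if old is None:
                os.environ.pop(k, None)
            else:
                os.environ[k] = old


with temp_config_with_env({"DEMO_FIXTURE_VAR": "1"}) as cfg_path:
    os.environ["DEMO_FIXTURE_VAR"] = "polluted"  # simulate a test touching the env
print(os.path.exists(cfg_path))          # False: temp file removed
print("DEMO_FIXTURE_VAR" in os.environ)  # False: environment restored
```

Restoring from a snapshot (rather than blindly deleting) matters: a variable that was already set in the shell goes back to its original value instead of disappearing.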
48-74: LGTM! Comprehensive test coverage.

The three test cases cover the key scenarios:
- End-to-end via update_llm_args_with_extra_options
- Direct stripping via update_llm_args_with_extra_dict
- Integration with other config options

All tests verify both environment variable setting and removal of the env_overrides key.

tests/integration/defs/examples/serve/test_serve.py (1)
97-114: LGTM! Parameterization enables backward compatibility testing.

The test now validates both --config (new) and --extra_llm_api_options (legacy) flags, ensuring backward compatibility while simplifying the model name extraction logic.
32-37: LGTM! Clean test helper. The `invoke_and_check` helper reduces boilerplate and ensures consistent test structure across all test cases.
40-122: LGTM! Thorough config_options testing. Excellent coverage of the config_options decorator:
- Basic flag usage (--config and --extra_llm_api_options)
- Mutual exclusivity validation
- Default behavior
- Metadata preservation
- Interaction with other options
158-202: LGTM! Parameterized tests validate alias consistency. The parameterized approach efficiently tests both `kv_cache_option` and `beam_width_option` decorators, ensuring canonical flags, aliases, and defaults work correctly across both decorators. Note: Static analysis warnings about unused parameters are false positives; these parameters are required by the test framework's parametrization.
tensorrt_llm/commands/eval.py (2)
28-29: LGTM! Decorator-based CLI surface reduces duplication. The migration to decorator-based options (`@log_level_option`, `@parallelism_options`, `@kv_cache_option`, `@trust_remote_code_option`, `@config_options`) unifies the CLI surface across eval, serve, and bench commands, reducing code duplication while maintaining functionality. Also applies to: 49-49, 71-74
85-85: LGTM! Config parameter aligns with unified approach. The `config` parameter replaces `extra_llm_api_options` consistently with the decorator-based flow. The call to `update_llm_args_with_extra_options` handles env_overrides internally. Also applies to: 109-110
tensorrt_llm/bench/benchmark/low_latency.py (2)
21-23: LGTM! Decorator migration consistent with eval.py. The decorator-based approach (`@config_options`, `@kv_cache_option`, `@parallelism_options`, `@beam_width_option`) unifies the CLI surface across benchmark commands, matching the pattern in eval.py and throughput.py. Also applies to: 40-43
166-169: LGTM! Config extraction preserves backward compatibility. Mapping `config` to `extra_llm_api_options` maintains compatibility with the existing configuration flow while enabling the new `--config` alias. The centralized `apply_env_overrides` in `llm_args.py` (line 3036) ensures env overrides are handled correctly.
tensorrt_llm/bench/benchmark/throughput.py (1)
17-19: LGTM! Consistent decorator usage across benchmark commands. The decorator application matches the pattern in `low_latency.py`, ensuring consistent CLI behavior across all benchmark commands (latency and throughput). Also applies to: 40-43
tensorrt_llm/commands/serve.py (4)
21-37: LGTM! Clean import consolidation. The new imports support the decorator-based CLI option approach and env override functionality. Moving `ChoiceWithAlias` to centralized utilities and importing the env override functions are appropriate refactoring steps.
231-266: Excellent refactor to decorator-based CLI options. The replacement of manual option definitions with centralized decorators (`log_level_option`, `parallelism_options`, `kv_cache_option`, `trust_remote_code_option`, `config_options`) reduces duplication and enables consistent CLI surfaces across commands. The new `config` parameter properly integrates the --config alias.
353-354: Clean integration of unified config handling. The delegation to `update_llm_args_with_extra_options` centralizes config file loading and env override application. The inline comment clearly documents that env_overrides are processed internally.
400-400: LGTM! Consistent decorator usage across commands. The application of `log_level_option` and `trust_remote_code_option` decorators to `mm_embedding_serve`, `disaggregated`, and `disaggregated_mpi_worker` commands successfully unifies CLI option definitions across all entry points. Also applies to: 417-417, 482-482, 549-549
Force-pushed from 4061c65 to 17c34a6
|
/bot run |
|
@coderabbitai full review |
✅ Actions performed: Full review triggered.
Actionable comments posted: 2
♻️ Duplicate comments (1)
tensorrt_llm/llmapi/llm_args.py (1)
2952-2986: Do not log secret values; redact in apply_env_overrides. Current logs print full environment values (old and new), risking credential leakage. Log keys only (and redact), while still applying the unredacted value.
Apply this diff:
```diff
 def apply_env_overrides(config_dict: Dict, config_path: Optional[str] = None) -> None:
@@
     logger.info(f"Processing environment variable overrides{config_context}")
+
+    def _is_sensitive(k: str) -> bool:
+        k_upper = k.upper()
+        SENSITIVE_MARKERS = ("SECRET", "TOKEN", "KEY", "PASS", "PASSWORD", "CREDENTIAL", "AUTH", "API", "AWS", "PRIVATE")
+        return any(m in k_upper for m in SENSITIVE_MARKERS)
@@
-    for key, value in env_overrides.items():
-        str_value = str(value)
-        if key in os.environ:
-            old_value = os.environ[key]
-            os.environ[key] = str_value
-            logger.info(f"Overriding {key}: '{old_value}' -> '{str_value}'")
-        else:
-            os.environ[key] = str_value
-            logger.info(f"Setting {key}='{str_value}'")
+    for key, value in env_overrides.items():
+        str_value = str(value)
+        redacted = "<redacted>" if _is_sensitive(key) else str_value
+        # Decide the action before mutating os.environ, then log without
+        # exposing values for sensitive keys.
+        action = "Overriding" if key in os.environ else "Setting"
+        os.environ[key] = str_value
+        logger.info(f"{action} {key}={redacted}")
```
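As a runnable illustration of the same idea, a minimal standalone sketch (the function names and marker list here are illustrative, not TensorRT-LLM's actual helpers, and the returned log lines stand in for `logger.info` calls):

```python
import os

# Illustrative marker list; real deployments may need a broader set.
SENSITIVE_MARKERS = ("SECRET", "TOKEN", "KEY", "PASS", "CREDENTIAL", "AUTH")


def is_sensitive(key: str) -> bool:
    """Heuristic: treat any env var whose name contains a marker as secret."""
    upper = key.upper()
    return any(marker in upper for marker in SENSITIVE_MARKERS)


def apply_overrides(env_overrides: dict) -> list:
    """Apply overrides to os.environ; return log lines with secrets redacted."""
    logs = []
    for key, value in env_overrides.items():
        str_value = str(value)
        # Decide the action before mutating the environment.
        action = "Overriding" if key in os.environ else "Setting"
        os.environ[key] = str_value
        shown = "<redacted>" if is_sensitive(key) else str_value
        logs.append(f"{action} {key}={shown}")
    return logs
```

Note that the unredacted value is still written to `os.environ`; only the log line is masked.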
🧹 Nitpick comments (4)
tensorrt_llm/bench/benchmark/low_latency.py (1)
280-281: Prefer params.pop(...) for consistency and cleanliness. Use pop (as in throughput.py) to avoid leaving extra_llm_api_options in params and ensure a single source of truth.
```diff
-    exec_settings["extra_llm_api_options"] = params.get("extra_llm_api_options")
+    exec_settings["extra_llm_api_options"] = params.pop("extra_llm_api_options", None)
```
tensorrt_llm/llmapi/llm_args.py (1)
3050-3067: Validate YAML top-level is a mapping to avoid TypeError later. If the YAML is empty or not a dict, apply_env_overrides will raise obscure errors. Add a type check with a clear message.
```diff
 def load_yaml_maybe_env_override(file_path: str) -> Dict:
@@
-    with open(file_path, 'r') as f:
-        config = yaml.safe_load(f)
+    with open(file_path, 'r') as f:
+        config = yaml.safe_load(f)
+    if not isinstance(config, dict):
+        raise ValueError(
+            f"YAML file {file_path} must contain a top-level mapping, got {type(config).__name__}"
+        )
     apply_env_overrides(config, file_path)
     return config
```
tensorrt_llm/commands/serve.py (2)
305-312: Consider adding a `--config` alias to `serve_encoder` for consistency. The `--config` alias is correctly implemented for the main `serve` command using Click's multiple option names feature. However, the `serve_encoder` function (line 471) uses `--extra_encoder_options` without a similar `--config` alias. For consistency, and to align with the PR objective of unified configuration across serve commands, consider adding a `--config` alias to the `--extra_encoder_options` parameter as well.
Apply this diff to add the alias:
```diff
 @click.option(
+    "--config",
     "--extra_encoder_options",
+    "extra_encoder_options",
     type=str,
     default=None,
     help=
-    "Path to a YAML file that overwrites the parameters specified by trtllm-serve."
+    "Path to a YAML file that overwrites the parameters specified by trtllm-serve. "
+    "Can be specified as either --config or --extra_encoder_options."
 )
```
501-502: Refactor to use `update_llm_args_with_extra_options` for consistency. The `serve_encoder` function uses a two-step pattern (load YAML with `load_yaml_maybe_env_override`, then merge with `update_llm_args_with_extra_dict`), while the `serve` function uses a one-step pattern with `update_llm_args_with_extra_options`. For consistency and maintainability, consider refactoring `serve_encoder` to use the same pattern as the `serve` function.
Apply this diff to align with the `serve` function pattern:
```diff
-    encoder_args_extra_dict = {}
-    if extra_encoder_options is not None:
-        encoder_args_extra_dict = load_yaml_maybe_env_override(
-            extra_encoder_options)
-    encoder_args = update_llm_args_with_extra_dict(llm_args,
-                                                   encoder_args_extra_dict)
+    encoder_args = update_llm_args_with_extra_options(llm_args,
+                                                      extra_encoder_options)
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- tensorrt_llm/bench/benchmark/low_latency.py (2 hunks)
- tensorrt_llm/bench/benchmark/throughput.py (2 hunks)
- tensorrt_llm/bench/benchmark/utils/general.py (1 hunks)
- tensorrt_llm/commands/eval.py (1 hunks)
- tensorrt_llm/commands/serve.py (4 hunks)
- tensorrt_llm/llmapi/llm_args.py (2 hunks)
- tensorrt_llm/llmapi/llm_utils.py (2 hunks)
- tests/integration/defs/examples/serve/test_serve.py (1 hunks)
- tests/unittest/llmapi/test_llm_args.py (2 hunks)
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
tests/unittest/llmapi/test_llm_args.py
tensorrt_llm/bench/benchmark/throughput.py
tensorrt_llm/commands/eval.py
tensorrt_llm/bench/benchmark/low_latency.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Applied to files:
tests/unittest/llmapi/test_llm_args.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.
Applied to files:
tensorrt_llm/bench/benchmark/utils/general.py
tensorrt_llm/llmapi/llm_args.py
tensorrt_llm/commands/serve.py
tensorrt_llm/bench/benchmark/low_latency.py
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.
Applied to files:
tensorrt_llm/bench/benchmark/utils/general.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.
Applied to files:
tensorrt_llm/bench/benchmark/utils/general.py
tensorrt_llm/bench/benchmark/low_latency.py
📚 Learning: 2025-08-14T23:23:27.449Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.
Applied to files:
tensorrt_llm/bench/benchmark/low_latency.py
🪛 Ruff (0.14.4)
tests/unittest/llmapi/test_llm_args.py
805-805: os may be undefined, or defined from star imports
(F405)
806-806: os may be undefined, or defined from star imports
(F405)
812-812: os may be undefined, or defined from star imports
(F405)
813-813: os may be undefined, or defined from star imports
(F405)
832-832: os may be undefined, or defined from star imports
(F405)
845-845: os may be undefined, or defined from star imports
(F405)
848-848: apply_env_overrides may be undefined, or defined from star imports
(F405)
850-850: os may be undefined, or defined from star imports
(F405)
865-865: apply_env_overrides may be undefined, or defined from star imports
(F405)
867-867: os may be undefined, or defined from star imports
(F405)
868-868: os may be undefined, or defined from star imports
(F405)
869-869: os may be undefined, or defined from star imports
(F405)
884-884: apply_env_overrides may be undefined, or defined from star imports
(F405)
886-886: os may be undefined, or defined from star imports
(F405)
887-887: os may be undefined, or defined from star imports
(F405)
888-888: os may be undefined, or defined from star imports
(F405)
893-893: apply_env_overrides may be undefined, or defined from star imports
(F405)
899-899: apply_env_overrides may be undefined, or defined from star imports
(F405)
907-907: Unused method argument: enable_tllm_logger_propagation
(ARG002)
915-915: os may be undefined, or defined from star imports
(F405)
923-923: apply_env_overrides may be undefined, or defined from star imports
(F405)
927-927: os may be undefined, or defined from star imports
(F405)
952-952: update_llm_args_with_extra_options may be undefined, or defined from star imports
(F405)
955-955: os may be undefined, or defined from star imports
(F405)
956-956: os may be undefined, or defined from star imports
(F405)
957-957: os may be undefined, or defined from star imports
(F405)
tensorrt_llm/llmapi/llm_utils.py
928-928: Undefined name apply_env_overrides in __all__
(F822)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (7)
tests/integration/defs/examples/serve/test_serve.py (1)
97-97: Docstring addition looks good. Accurately documents the --config alias usage for this test.
tensorrt_llm/bench/benchmark/low_latency.py (1)
49-56: CLI alias addition is correct. The --config alias maps to extra_llm_api_options cleanly; help text is clear.
tensorrt_llm/bench/benchmark/throughput.py (1)
64-71: Alias wiring LGTM. --config and --extra_llm_api_options map to the same destination with clear help text; later code pops it for use.
tensorrt_llm/bench/benchmark/utils/general.py (1)
86-93: Good centralization to env-aware loader. Switching to load_yaml_maybe_env_override ensures env_overrides are applied and stripped once.
tensorrt_llm/commands/eval.py (1)
95-101: Alias addition LGTM. --config cleanly aliases --extra_llm_api_options with clear help text; downstream flow remains unchanged.
tensorrt_llm/commands/serve.py (2)
30-31: LGTM! New imports support the env_overrides feature. The imported functions `load_yaml_maybe_env_override` and `update_llm_args_with_extra_options` are correctly used later in the file to implement the new configuration flow with environment variable override support.
399-400: No issues found; None handling is correct. The function `update_llm_args_with_extra_options` at tensorrt_llm/llmapi/llm_args.py:3041-3047 explicitly checks `if extra_llm_api_options is not None:` before processing. When None is passed, the function safely returns the original `llm_args` unchanged. The implementation is covered by unit tests and works correctly.
|
PR_Github #25655 [ run ] completed with state |
|
/bot run --disable-multi-gpu-test |
|
PR_Github #25761 [ run ] triggered by Bot. Commit: |
|
PR_Github #25761 [ run ] completed with state |
|
/bot run --disable-multi-gpu-test |
|
PR_Github #25774 [ run ] triggered by Bot. Commit: |
|
PR_Github #25774 [ run ] completed with state |
|
/bot run --disable-multi-gpu-test |
|
PR_Github #25783 [ run ] triggered by Bot. Commit: |
|
PR_Github #25783 [ run ] completed with state |
|
/bot run --disable-multi-gpu-test |
|
PR_Github #25800 [ run ] triggered by Bot. Commit: |
yizhang-nv
left a comment
LGTM
|
PR_Github #25800 [ run ] completed with state |
|
/bot run --disable-multi-gpu-test |
|
/bot run --disable-multi-gpu-test |
|
/bot run --disable-multi-gpu-test |
|
PR_Github #25896 [ run ] triggered by Bot. Commit: |
|
PR_Github #25896 [ run ] completed with state |
|
/bot skip --comment "pipeline main/job/L0_MergeRequest_PR/19639 passed for commit b6057f0" |
Signed-off-by: Venky <[email protected]>
|
/bot reuse-pipeline |
|
PR_Github #25926 [ reuse-pipeline ] triggered by Bot. Commit: |
|
PR_Github #25926 [ reuse-pipeline ] completed with state |
Description
Context for reviewers:
- `extra_llm_api_options` yaml file as well as the right env vars (like in this example in InferenceMax scripts)
- `extra_llm_api_options.yml` itself.
- `--config` alias to `--extra_llm_api_options` aligns the UX with vLLM and makes intuitive sense.
Checklist
- `--config` alias to `--extra_llm_api_options`. Both can now be used interchangeably across trtllm-serve, trtllm-eval, trtllm-bench.
- `env_overrides` to `LlmArgs`, and appropriately update `api_stability/references/llm.yml`. The override will happen at `BaseLLM.__init__` for the parent process, and a sanity update inside `worker_main` to make sure the spawned workers don't inherit the outdated env snapshot. The above will automatically give the ability to specify this through config/extra_llm_api_options yaml.
- `--config` aliasing
- `env_override` in LLM API with a real example propagating it into worker process
- `TRTLLM_ENABLE_PDL` was getting cached at import time. Change that to pull from env on-demand to appropriately respect the override.
Extended yaml example:
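The attached extended example did not survive extraction; a hedged reconstruction of what such a `--config` / `extra_llm_api_options` file could look like (the non-`env_overrides` keys are ordinary LLM API options shown only for context, and the specific values are illustrative):

```yaml
# Illustrative config file; pass via --config or --extra_llm_api_options.
kv_cache_config:
  free_gpu_memory_fraction: 0.9  # ordinary LLM API option, shown for context
env_overrides:
  TRTLLM_ENABLE_PDL: 1           # applied to os.environ before engine init
```

For example: `trtllm-serve <model> --config config.yml`.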
Known limitations / Future Work
Logging (`TLLM_LOG_LEVEL`) is a special case where it binds to a singleton class at import time, and one can override it by calling `logger.set_level()` after it has been set at import time.
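The import-time-caching pitfall (fixed for `TRTLLM_ENABLE_PDL` by reading the variable on demand) can be illustrated with a minimal sketch; the getter name mirrors the PR's `get_env_enable_pdl()`, but the body is a simplified stand-in with `print` in place of the real logger:

```python
import os

# Anti-pattern: evaluated once at import, so later env_overrides are ignored.
ENABLE_PDL_CACHED = os.environ.get("TRTLLM_ENABLE_PDL", "0") == "1"

_last_logged_state = None


def get_env_enable_pdl() -> bool:
    """Read TRTLLM_ENABLE_PDL on demand; log only when the state changes."""
    global _last_logged_state
    enabled = os.environ.get("TRTLLM_ENABLE_PDL", "0") == "1"
    if enabled != _last_logged_state:
        # Stand-in for logger.info; fires once per state change, not per call.
        print(f"PDL enabled: {enabled}")
        _last_logged_state = enabled
    return enabled
```

Because the getter re-reads `os.environ` every call, overrides applied later (e.g. by `env_overrides` in `BaseLLM.__init__`) are respected, while the module-level cached value would stay stale.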
Summary by CodeRabbit
New Features
`--config` CLI flag alias for easier configuration file specification across serve, eval, and benchmark commands.
Tests
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`
Provide a user-friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message.
See details below for each supported subcommand.
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]
Launch build/test pipelines. All previously running jobs will be killed.
- `--reuse-test (optional)pipeline-id` (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL): Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL): Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL): Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.
For guidance on mapping tests to stage names, see
`docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.
kill
`kill`
Kill all running builds associated with the pull request.
skip
`skip --comment COMMENT`
Skip testing for the latest commit on the pull request.
--comment "Reason for skipping build/test"is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.reuse-pipeline
reuse-pipelineReuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.