[#9602][feat] AutoDeploy: Support TRTLLM Sampler #9641
Conversation
📝 Walkthrough

This change introduces support for an alternative sampling strategy (TRTLLMSampler) in the AutoDeploy framework. A new configuration field allows users to select between TorchSampler and TRTLLMSampler. Helper functions and classes manage sampler instantiation, model configuration exposure, and dtype handling. A unit test validates the TRTLLMSampler pathway.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches
❌ Failed checks (2 warnings, 1 inconclusive)
✅ Passed checks (2 passed)
✨ Finishing touches
Actionable comments posted: 1
🧹 Nitpick comments (5)
tensorrt_llm/_torch/auto_deploy/llm_args.py (1)
133-136: Consider documenting the "auto" option in the description.

The `SamplerType` enum (from `tensorrt_llm/llmapi/llm_args.py` lines 2474-2478) includes an `auto` option in addition to `TRTLLMSampler` and `TorchSampler`. The field description only mentions the latter two. If `auto` is a valid selection for AutoDeploy, consider updating the description to include it. Otherwise, the field definition looks good.

```diff
 sampler_type: Union[str, SamplerType] = Field(
     default=SamplerType.TorchSampler,
-    description="The type of sampler to use. Options are TRTLLMSampler or TorchSampler. Defaults to TorchSampler.",
+    description="The type of sampler to use. Options are TRTLLMSampler, TorchSampler, or auto. Defaults to TorchSampler.",
 )
```

tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py (1)
44-48: Remove debug print statements.

The print statements on lines 44 and 47 appear to be debug artifacts. Consider removing them or replacing with proper test logging if debug output is needed.

```diff
-    print(f"Experiment config: {experiment_config}")
     cfg = ExperimentConfig(**experiment_config)
-    print("Running smoke test with TRTLLMSampler...")
     results = main(cfg)
```

tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py (3)
312-318: Logging inside forward path will be noisy; consider logging once or using debug level.

The `ad_logger.info()` call is inside `_compute_logits()`, which is called on every forward pass. This will produce repeated log messages. Consider either:

- Logging once during initialization
- Using `ad_logger.debug()` for per-iteration logging
- Using `ad_logger.info_once()` if available

Also, as noted in the TODO on line 313, integrating this cast into the AD graph would be more efficient.

```diff
 # Ensure logits are float32 as TRTLLMSampler expects float32
 # TODO(govind): Should this be put into the AD graph so it can be fused with other operations?
 if self.sampler_type == SamplerType.TRTLLMSampler and logits.dtype != torch.float32:
-    ad_logger.info(
-        f"Logits type {logits.dtype} is not supported by TRTLLMSampler. Casting to float32."
-    )
     logits = logits.to(torch.float32)
```

Alternatively, log once during `__init__` after checking if the sampler type is TRTLLMSampler:

```python
if self.sampler_type == SamplerType.TRTLLMSampler:
    ad_logger.info("TRTLLMSampler requires float32 logits; casting will be applied if needed.")
```
354-363: Add docstring to TRTLLMSamplerModelConfig class.

Per coding guidelines, Python interfaces should have docstrings that can be parsed by Sphinx. This class exposes model configuration for the TRTLLMSampler.

```diff
 class TRTLLMSamplerModelConfig:
+    """Configuration wrapper exposing model attributes required by TRTLLMSampler.
+
+    Args:
+        ad_config: The LlmArgs configuration containing model settings.
+    """
+
     def __init__(self, ad_config: LlmArgs):
         factory = ad_config.create_factory()
         self.config = SimpleNamespace()
         self.config.vocab_size = factory.vocab_size_padded
         self.config.num_hidden_layers = factory.num_hidden_layers
         self.config.hidden_size = factory.hidden_size
         self.config.num_attention_heads = factory.num_attention_heads
```
365-372: Potential redundant factory creation; add docstring.

The `get_torch_dtype` function may call `ad_config.create_factory()` even when `model_dtype` is not `"auto"`. While the factory creation is internally cached via prefetch, consider:

- Adding a docstring for clarity
- Verifying that factory creation overhead is acceptable

```diff
 def get_torch_dtype(ad_config: LlmArgs):
-    # if the model dtype is "auto", we infer it from the model config
+    """Get the torch dtype from the AutoDeploy configuration.
+
+    If the model dtype is "auto", it is inferred from the model config via the factory.
+
+    Args:
+        ad_config: The LlmArgs configuration.
+
+    Returns:
+        The torch.dtype for the model.
+    """
     model_dtype = ad_config.dtype
     if model_dtype == "auto":
         model_dtype = ad_config.create_factory().dtype
     if isinstance(model_dtype, str):
         model_dtype = str_dtype_to_torch(model_dtype)
     return model_dtype
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- tensorrt_llm/_torch/auto_deploy/llm_args.py (2 hunks)
- tensorrt_llm/_torch/auto_deploy/models/hf.py (1 hunks)
- tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py (8 hunks)
- tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use `from package.subpackage import foo` and then `foo.SomeClass()` instead of `from package.subpackage.foo import SomeClass`)
Python filenames should use snake_case (e.g., `some_file.py`)
Python class names should use PascalCase (e.g., `class SomeClass`)
Python function and method names should use snake_case (e.g., `def my_awesome_function():`)
Python local variable names should use snake_case, with prefix `k` for variable names that start with a number (e.g., `k_99th_percentile = ...`)
Python global variables should use upper snake_case with prefix `G` (e.g., `G_MY_GLOBAL = ...`)
Python constants should use upper snake_case (e.g., `MY_CONSTANT = ...`)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., `self.x = 5` followed by `"""<type>: Description of 'x'"""`)
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic
Files:
- tensorrt_llm/_torch/auto_deploy/models/hf.py
- tensorrt_llm/_torch/auto_deploy/llm_args.py
- tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
- tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
**/*.{cpp,h,cu,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top
Files:
- tensorrt_llm/_torch/auto_deploy/models/hf.py
- tensorrt_llm/_torch/auto_deploy/llm_args.py
- tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
- tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
🧠 Learnings (11)
📓 Common learnings
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6029
File: .github/pull_request_template.md:45-53
Timestamp: 2025-08-27T17:50:13.264Z
Learning: For PR templates in TensorRT-LLM, avoid suggesting changes that would increase developer overhead, such as converting plain bullets to mandatory checkboxes. The team prefers guidance-style bullets that don't require explicit interaction to reduce friction in the PR creation process.
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/pyexecutor/sampler.py:368-392
Timestamp: 2025-08-27T15:03:57.149Z
Learning: In TensorRT-LLM's sampler.py, int32 usage for softmax_indices and related tensor indexing is intentional and should not be changed to int64. The torch.IntTensor type hint is correct for the sample() function's softmax_indices parameter.
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
📚 Learning: 2025-08-09T02:04:49.623Z
Learnt from: Fridah-nv
Repo: NVIDIA/TensorRT-LLM PR: 6760
File: tensorrt_llm/_torch/auto_deploy/models/quant_config_reader.py:81-98
Timestamp: 2025-08-09T02:04:49.623Z
Learning: In TensorRT-LLM's auto_deploy module, torch.dtype values in configuration dictionaries must be stored as string representations (e.g., "float16" instead of torch.float16) because OmegaConf.merge does not support torch.dtype types. These string representations are converted to actual torch.dtype objects in downstream code.
Applied to files:
- tensorrt_llm/_torch/auto_deploy/models/hf.py
- tensorrt_llm/_torch/auto_deploy/llm_args.py
- tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
📚 Learning: 2025-08-27T15:03:57.149Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/pyexecutor/sampler.py:368-392
Timestamp: 2025-08-27T15:03:57.149Z
Learning: In TensorRT-LLM's sampler.py, int32 usage for softmax_indices and related tensor indexing is intentional and should not be changed to int64. The torch.IntTensor type hint is correct for the sample() function's softmax_indices parameter.
Applied to files:
tensorrt_llm/_torch/auto_deploy/llm_args.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.
Applied to files:
tensorrt_llm/_torch/auto_deploy/llm_args.py
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.
Applied to files:
tensorrt_llm/_torch/auto_deploy/llm_args.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
- tensorrt_llm/_torch/auto_deploy/llm_args.py
- tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.
Applied to files:
tensorrt_llm/_torch/auto_deploy/llm_args.py
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
Applied to files:
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Applied to files:
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").
Applied to files:
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
Repo: NVIDIA/TensorRT-LLM PR: 7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.
Applied to files:
tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
🧬 Code graph analysis (2)
tensorrt_llm/_torch/auto_deploy/llm_args.py (2)
tensorrt_llm/llmapi/llm_args.py (2)
- SamplerType (2475-2479)
- Field (63-90)

tensorrt_llm/builder.py (1)
- default (45-50)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py (3)
tests/unittest/_torch/auto_deploy/_utils_test/_model_test_utils.py (1)
- get_small_model_config (514-553)

examples/auto_deploy/build_and_run_ad.py (1)
- ExperimentConfig (126-239)

tensorrt_llm/llmapi/llm_args.py (1)
- SamplerType (2475-2479)
🪛 Ruff (0.14.7)
tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
411-411: Avoid specifying long messages outside the exception class
(TRY003)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (8)
tensorrt_llm/_torch/auto_deploy/llm_args.py (1)
11-11: LGTM on import addition.

The import of `SamplerType` follows the coding guideline to maintain namespace when importing.

tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_sampler.py (1)
50-53: LGTM on test assertions.

The assertions verify that the TRTLLMSampler path produces output, which is appropriate for a smoke test.
tensorrt_llm/_torch/auto_deploy/models/hf.py (2)
149-162: LGTM on new config accessor properties.

The `num_hidden_layers`, `hidden_size`, and `num_attention_heads` properties follow the established pattern used by `vocab_size_padded` and provide a consistent interface for accessing model configuration attributes.

144-148: Return type annotation may not match actual behavior from model_config.

The `dtype` property is typed as `Optional[torch.dtype]`, but `getattr(model_config, "dtype", None)` retrieves the raw configuration value. Based on learnings from this codebase, torch.dtype values in configuration dictionaries are stored as string representations (e.g., `"float16"`) due to OmegaConf.merge limitations, with conversion happening in downstream code. Either the return type annotation should reflect `Optional[Union[str, torch.dtype]]`, or verify that `model_config.dtype` is already converted to torch.dtype before retrieval.

tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py (4)
21-26: LGTM on import additions.

The new imports follow the coding guidelines and are necessary for the TRTLLMSampler integration.

147-154: LGTM on ADEngine constructor changes for sampler_type.

The `sampler_type` parameter is properly passed through `build_from_config` to `__init__` and stored on the instance for later use in `_compute_logits`.

Also applies to: 163-179

512-518: LGTM on instantiate_sampler usage.

The refactored sampler instantiation correctly passes all required parameters and uses the new `instantiate_sampler` helper function.

375-413: Consider handling `SamplerType.auto` explicitly if it exists in the enum.

The `SamplerType` enum may include an `auto` option, but `instantiate_sampler` will raise a `ValueError` for it. If `auto` should be a valid option (perhaps defaulting to `TorchSampler`), consider adding explicit handling. If `auto` is intentionally unsupported in AutoDeploy, the error message could be more descriptive.
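One way the explicit `auto` handling could look is sketched below. The enum values mirror `SamplerType`, but the resolution helper and its default-to-TorchSampler behavior are assumptions for illustration, not the actual AutoDeploy code:

```python
import enum


class SamplerType(str, enum.Enum):
    """Mirror of the SamplerType options discussed in this review."""
    TRTLLMSampler = "TRTLLMSampler"
    TorchSampler = "TorchSampler"
    auto = "auto"


def resolve_sampler_type(sampler_type: SamplerType) -> SamplerType:
    """Map 'auto' to a concrete sampler choice before instantiation."""
    if sampler_type == SamplerType.auto:
        # Assumption: 'auto' falls back to the TorchSampler default.
        return SamplerType.TorchSampler
    if sampler_type in (SamplerType.TorchSampler, SamplerType.TRTLLMSampler):
        return sampler_type
    raise ValueError(f"Unsupported sampler_type for AutoDeploy: {sampler_type!r}")
```

Resolving the enum up front keeps `instantiate_sampler` itself a simple two-way dispatch and gives `auto` a single, documented meaning.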
/bot run --disable-fail-fast
PR_Github #27016 [ run ] triggered by Bot. Commit:
/bot kill
/bot run --disable-fail-fast
PR_Github #27017 [ run ] triggered by Bot. Commit:
PR_Github #27016 [ run ] completed with state
PR_Github #27018 [ kill ] triggered by Bot. Commit:
PR_Github #27017 [ run ] completed with state
PR_Github #27018 [ kill ] completed with state
/bot run --disable-fail-fast
PR_Github #27023 [ run ] triggered by Bot. Commit:
Signed-off-by: Govind Ramnarayan <[email protected]>
/bot run --disable-fail-fast
PR_Github #27029 [ run ] triggered by Bot. Commit:
PR_Github #27023 [ run ] completed with state
PR_Github #27029 [ run ] completed with state
Summary by CodeRabbit
New Features
Tests
Description
Support for TRTLLMSampler in AutoDeploy. Fixes: #9602
The main issue is that we need to upcast logits to FP32 for the TRTLLMSampler to run without error. It's possible this is fairly inefficient; we need to test perf and validate.
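The guard-and-cast pattern described above is small and easy to isolate for perf testing. A torch-free sketch of the check (the `FakeTensor` class is a stand-in purely for illustration; the real code operates on `torch.Tensor` and calls `logits.to(torch.float32)`):

```python
from dataclasses import dataclass


@dataclass
class FakeTensor:
    """Stand-in for torch.Tensor that tracks only the dtype."""
    dtype: str

    def to(self, dtype: str) -> "FakeTensor":
        # Mimics torch's out-of-place cast: returns a new tensor.
        return FakeTensor(dtype=dtype)


def ensure_float32(logits: FakeTensor, sampler_is_trtllm: bool) -> FakeTensor:
    """Upcast logits to float32 only when the TRTLLMSampler is in use."""
    if sampler_is_trtllm and logits.dtype != "float32":
        logits = logits.to("float32")
    return logits
```

Because the cast is conditional on both the sampler type and the current dtype, TorchSampler runs and already-float32 logits pay no extra copy.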
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user friendly way for developers to interact with a Jenkins server.

Run `/bot [-h|--help]` to print this help message.

See details below for each supported subcommand.

run

`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.

- `--reuse-test (optional)pipeline-id` (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see `docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.

kill

`kill`

Kill all running builds associated with pull request.

skip

`skip --comment COMMENT`

Skip testing for latest commit on pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

`reuse-pipeline`

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.