Conversation

@kaiyux kaiyux commented Nov 26, 2025

Summary by CodeRabbit

Release Notes

  • New Features

    • Configurable profiling ranges for context and generation phases via YAML configuration.
    • Support for extra Slurm arguments in job submission.
  • Refactor

    • Streamlined profiling parameter handling to use configuration-driven approach.
    • Improved NSYS profiling workflow for enhanced visibility and control.
    • Reorganized worker startup argument parsing for better maintainability.
  • Chores

    • Enhanced worker environment variable handling for cleaner exports.


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

@kaiyux kaiyux changed the title [None] [chore] Clean up PDL logics [None] [chore] Clean up slurm script logics Nov 26, 2025
@kaiyux kaiyux force-pushed the user/kaiyu/clean_up_slurm_scripts branch from 4143eff to 15a3231 on November 27, 2025 08:24
@kaiyux kaiyux marked this pull request as ready for review November 27, 2025 08:28
@kaiyux kaiyux requested review from a team as code owners November 27, 2025 08:28

coderabbitai bot commented Nov 27, 2025

📝 Walkthrough

Walkthrough

The pull request refactors profiling configuration in disaggregated Slurm benchmark scripts by replacing the enable_pdl mechanism with profiling range variables (ctx_profile_range and gen_profile_range). Configuration parameters are moved from runtime YAML parsing to pre-defined values, Slurm extra arguments support is added, and environment variable handling is streamlined across shell and Python scripts.

Changes

Cohort / File(s) Summary
Configuration & Python Submission
examples/disaggregated/slurm/benchmark/config.yaml, examples/disaggregated/slurm/benchmark/submit.py
Added slurm.extra_args field and profiling configuration with ctx_profile_range and gen_profile_range. Extended worker environment variables to include TRTLLM_ENABLE_PDL=1 and ENROOT_ALLOW_DEV=yes. Updated submit.py to pass extra_args and profiling ranges to sbatch command.
SLURM & Worker Scripts
examples/disaggregated/slurm/benchmark/disaggr_torch.slurm, examples/disaggregated/slurm/benchmark/start_worker.sh
Introduced profiling range variable parsing with shifted argument indices. Removed enable_pdl YAML parsing logic. Replaced enable_pdl usage with profiling ranges in worker commands. Reworked NSYS profiling flow with conditional enable_nsys logic, environment variable exports, and unified launch command prefixing. Improved environment variable parsing in start_worker.sh to export variables directly.

Sequence Diagram

sequenceDiagram
    participant Config as config.yaml
    participant SLURM as disaggr_torch.slurm
    participant Worker as start_worker.sh
    
    Config->>SLURM: Provide ctx_profile_range,<br/>gen_profile_range
    SLURM->>SLURM: Parse profiling ranges<br/>(replaces enable_pdl)
    SLURM->>Worker: Pass profile_range arg
    
    alt enable_nsys is "true"
        Worker->>Worker: Set NSYS_MPI_STORE_TEAMS_PER_RANK
        Worker->>Worker: Set TLLM_PROFILE_START_STOP<br/>from profile_range
        Worker->>Worker: Initialize nsys_prefix<br/>with profile command
        Worker->>Worker: Execute with nsys_prefix<br/>prepended to launch
    else enable_nsys is not "true"
        Worker->>Worker: Execute launch command<br/>without nsys profiling
    end
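The alt branch in the diagram can be sketched in shell. This is a minimal sketch of the described flow, not the actual start_worker.sh: the function name and the nsys flags are illustrative, while enable_nsys, profile_range, nsys_prefix, and TLLM_PROFILE_START_STOP come from the walkthrough.

```shell
# Build the optional nsys launch prefix for a worker.
# Names mirror the walkthrough; nsys flags are illustrative only.
build_launch_prefix() {
    local enable_nsys="$1"
    local profile_range="$2"    # e.g. "100-200" (start/stop iterations)
    local output_file="$3"

    local nsys_prefix=""
    if [ "${enable_nsys}" = "true" ]; then
        export TLLM_PROFILE_START_STOP="${profile_range}"
        nsys_prefix="nsys profile -o ${output_file} -f true --trace=cuda,nvtx"
    fi
    # The worker launch then becomes: ${nsys_prefix} <launch command>,
    # so an empty prefix leaves the command unchanged.
    echo "${nsys_prefix}"
}
```

When enable_nsys is anything other than "true", the prefix stays empty and the launch command runs without profiling, which matches the else branch of the diagram.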

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Argument index shifts: Verify that all argument position changes in disaggr_torch.slurm (enable_accuracy_test, accuracy_model, accuracy_tasks, model_args_extra) are consistently propagated and no indexing mismatches occur.
  • NSYS profiling flow rework: The conditional logic in start_worker.sh for NSYS-based profiling with TLLM_PROFILE_START_STOP requires careful verification of the environment variable setup and command prefix application.
  • Environment variable parsing: Confirm that direct export of worker_env_var (removing intermediate echo statements) produces equivalent behavior and proper shell quoting.
  • Profiling range variable mapping: Ensure ctx_profile_range maps correctly to CTX workers and gen_profile_range to GEN workers throughout the pipeline.

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.
  • Description check ⚠️ Warning: The PR description is entirely empty except for the template structure, with no actual content in the Description and Test Coverage sections. Fill in the Description section with a clear explanation of what the slurm script cleanup entails and why these changes are needed, and provide specific test coverage details that validate the changes to the configuration and scripts.
✅ Passed checks (1 passed)
  • Title check ✅ Passed: The title references 'slurm scripts' and mentions 'enhancements and clean up', which directly align with the changeset covering configuration, profiling, and cleanup improvements across four slurm-related files.




kaiyux (Member, Author) commented:
There are more config files in the repo now and I'll update those as well.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
examples/disaggregated/slurm/benchmark/start_worker.sh (1)

46-46: Quote command substitutions to prevent word splitting.

While hostnames typically don't contain spaces, quoting $(hostname) follows shell scripting best practices and prevents potential issues if a hostname contains special characters.

Apply this diff:

-    echo $(hostname) > ${log_dir}/hostnames/${role}_${instance_id}.txt
+    echo "$(hostname)" > ${log_dir}/hostnames/${role}_${instance_id}.txt

And:

-        --host $(hostname) --port ${port} \
+        --host "$(hostname)" --port ${port} \

Also applies to: 65-65

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f1ed057 and 64fc022.

📒 Files selected for processing (4)
  • examples/disaggregated/slurm/benchmark/config.yaml (2 hunks)
  • examples/disaggregated/slurm/benchmark/disaggr_torch.slurm (4 hunks)
  • examples/disaggregated/slurm/benchmark/start_worker.sh (2 hunks)
  • examples/disaggregated/slurm/benchmark/submit.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use from package.subpackage import foo and then foo.SomeClass() instead of from package.subpackage.foo import SomeClass)
Python filenames should use snake_case (e.g., some_file.py)
Python class names should use PascalCase (e.g., class SomeClass)
Python function and method names should use snake_case (e.g., def my_awesome_function():)
Python local variable names should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile = ...)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL = ...)
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., self.x = 5 followed by """<type>: Description of 'x'""" )
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic

Files:

  • examples/disaggregated/slurm/benchmark/submit.py
**/*.{cpp,h,cu,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top

Files:

  • examples/disaggregated/slurm/benchmark/submit.py
🧠 Learnings (1)
📓 Common learnings
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6029
File: .github/pull_request_template.md:45-53
Timestamp: 2025-08-27T17:50:13.264Z
Learning: For PR templates in TensorRT-LLM, avoid suggesting changes that would increase developer overhead, such as converting plain bullets to mandatory checkboxes. The team prefers guidance-style bullets that don't require explicit interaction to reduce friction in the PR creation process.
🪛 Shellcheck (0.11.0)
examples/disaggregated/slurm/benchmark/start_worker.sh

[warning] 46-46: Quote this to prevent word splitting.

(SC2046)


[warning] 65-65: Quote this to prevent word splitting.

(SC2046)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (12)
examples/disaggregated/slurm/benchmark/config.yaml (3)

38-39: LGTM! Environment configuration simplification.

Moving TRTLLM_ENABLE_PDL=1 to worker_env_var eliminates the need for runtime YAML parsing in the Slurm script. The space-separated format is correctly parsed in start_worker.sh (lines 23-26).


44-45: LGTM! Profiling range configuration.

The separate profiling ranges for context and generation workers allow for fine-grained control. The range format aligns with the TLLM_PROFILE_START_STOP environment variable usage in start_worker.sh.


8-8: Verify handling of empty extra_args in the submission script.

The concern about passing empty strings to sbatch is theoretically valid—empty string arguments can be received as positional arguments in job scripts. However, without access to the actual implementation in submit.py (line 141), I cannot confirm whether this creates a practical issue.

Recommended verification: Inspect how extra_args is integrated into the sbatch command construction—specifically whether empty strings are filtered, stripped, or passed through to the command invocation.
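One way to guard against the empty-string case, sketched in shell for illustration (the real command construction lives in submit.py; build_sbatch_cmd and the flag values below are hypothetical):

```shell
# Hypothetical sketch: only append extra_args to the sbatch command
# when it is non-empty, so sbatch never receives an empty "" argument.
build_sbatch_cmd() {
    local extra_args="$1"     # e.g. "--qos=high --reservation=r1" or ""
    cmd=(sbatch)
    if [ -n "${extra_args}" ]; then
        # Intentional word splitting: each flag becomes its own argv entry.
        # shellcheck disable=SC2206
        cmd+=(${extra_args})
    fi
    cmd+=(disaggr_torch.slurm)
}
```

With an empty string, the resulting argv is just `sbatch disaggr_torch.slurm`; with flags set, each one is passed as a separate argument rather than a single quoted blob.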

examples/disaggregated/slurm/benchmark/submit.py (1)

182-183: LGTM! Profiling ranges forwarded correctly.

The profiling range values are correctly extracted from the config and passed as positional arguments to the Slurm script, which expects them at positions 30-31.

examples/disaggregated/slurm/benchmark/disaggr_torch.slurm (3)

43-56: LGTM! Argument parsing updated consistently.

The new profiling range variables are correctly inserted at positions 30-31, and all subsequent argument indices (accuracy configuration at 32-35, environment variables at 36-37) are properly shifted.


95-96: LGTM! Debug output for profiling ranges.

Adding echo statements for the new profiling range variables improves debuggability and is consistent with the existing argument printing pattern.


207-207: LGTM! Profiling ranges correctly routed to worker types.

Generation workers receive gen_profile_range and context workers receive ctx_profile_range, allowing for differentiated profiling behavior based on workload characteristics.

Also applies to: 222-222

examples/disaggregated/slurm/benchmark/start_worker.sh (5)

12-17: LGTM! Argument parsing updated for new profiling flow.

The removal of enable_pdl and addition of profile_range aligns with the refactoring to use explicit profiling ranges instead of deriving profiling configuration from YAML at runtime.


23-26: LGTM! Cleaner environment variable export.

The loop correctly exports each space-separated environment variable from the config. The export "${env_var}" syntax properly handles VAR=value pairs.
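A minimal sketch of the export pattern being praised, assuming the variable names from the config (worker_env_var holding space-separated VAR=value pairs). Note that this relies on word splitting, so it only works for values without embedded spaces:

```shell
# Space-separated VAR=value pairs, as in config.yaml's worker_env_var.
worker_env_var="TRTLLM_ENABLE_PDL=1 ENROOT_ALLOW_DEV=yes"

# Unquoted expansion splits on spaces; export handles each pair directly.
for env_var in ${worker_env_var}; do
    export "${env_var}"
done
```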


28-34: LGTM! Explicit handling of both numa_bind cases.

The added else clause makes the non-binding case explicit and provides helpful guidance for GB200 users.


50-61: LGTM! Consolidated NSYS profiling configuration.

The unified profiling setup using profile_range eliminates per-role customization and simplifies the profiling flow. The TLLM_PROFILE_START_STOP environment variable correctly receives the profiling range passed from the configuration.


63-66: LGTM! Launch command with conditional NSYS prefix.

The launch command correctly uses ${nsys_prefix} to conditionally enable profiling. When enable_nsys is not "true", the prefix is empty and the command runs normally.

@dc3671 dc3671 (Collaborator) left a comment

I think we can try to use bash's getopts to handle named arguments, rather than relying on positional arguments $1~$99. This can be implemented in the future.
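The suggestion could look roughly like this; option letters and variable names are illustrative, and getopts handles only single-letter options (long option names would need manual parsing or GNU getopt):

```shell
# Illustrative getopts-based replacement for positional worker arguments.
parse_worker_args() {
    role="" profile_range="" enable_nsys="false"
    local opt OPTIND=1
    while getopts "r:p:n" opt "$@"; do
        case "${opt}" in
            r) role="${OPTARG}" ;;           # worker role, e.g. CTX or GEN
            p) profile_range="${OPTARG}" ;;  # e.g. "100-200"
            n) enable_nsys="true" ;;         # flag: enable nsys profiling
            *) echo "usage: start_worker.sh -r role [-p range] [-n]" >&2
               return 1 ;;
        esac
    done
}
```

Named options make call sites self-documenting and avoid the index-shifting churn flagged in the review when a new argument is inserted mid-list.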

Signed-off-by: Kaiyu Xie <[email protected]>
@kaiyux kaiyux changed the title [None] [chore] Clean up slurm script logics [None] [chore] Enhancements and clean up to slurm scripts Nov 28, 2025
@kaiyux kaiyux force-pushed the user/kaiyu/clean_up_slurm_scripts branch from 67bd298 to 14d1702 on November 28, 2025 03:10
