
Conversation

@PerkzZheng PerkzZheng commented Nov 27, 2025

update nvfp4 kv cache trtllm-gen kernels && fix several bugs

@coderabbitai summary

Description

This PR updates the optimized nvfp4-kv attention kernels and adds headDim=256 nvfp4-kv attention kernels.
More performance numbers are being collected, and I will post them here once they are ready.

trtllm-gen unit tests

The unit tests show that nvfp4 kv cache attention kernels can achieve up to 1.58x speedups over fp8 kv cache kernels in high-throughput (bandwidth-limited) cases.

Qwen3-Coder-480B-A35B-Instruct + Attention DP + B200x8

| concurrency | ISL/OSL | FP8 kv throughput (tokens/s) | FP4 kv throughput (tokens/s) | FP4 kv with optimized trtllm-gen kernels (tokens/s) | Throughput speedup (optimized FP4 vs FP8) |
| --- | --- | --- | --- | --- | --- |
| 512 | 4k/4k | 11171.1729 | 11051.8975 | 11235.0554 | 0.57% |
| 512 | 8k/4k | 6982.7109 | 8530.4086 | 8789.6191 | 25.88% |
| 512 | 16k/4k | 4664.1927 | 4946.1642 | 4999.8237 | 7.20% |

Note that 16k/4k shows lower speedups because not all requests can be scheduled due to the kv cache limitation, which leaves only 4 requests in the last round of scheduled requests. Excluding this, it should show speedups similar to 8k/4k.

In general, performance gains mainly come from a high number of scheduled requests. Attention kernels only show perf benefits when they are bounded by memory bandwidth (large batch size and sequence length).
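As a rough, back-of-the-envelope illustration of the bandwidth saving (a sketch only: it assumes one fp8 scaling factor per 16 nvfp4 elements, as discussed in the review below, and the layer/head numbers are placeholders rather than the actual Qwen3-Coder config):

import math

def kv_bytes_per_token(num_layers, num_kv_heads, head_dim, kv_dtype):
    # Approximate KV-cache bytes per token for K and V (kv_factor = 2).
    elems = 2 * num_layers * num_kv_heads * head_dim
    if kv_dtype == "fp8":
        return elems  # 1 byte per element
    if kv_dtype == "nvfp4":
        # 4-bit data plus one fp8 scaling factor per 16 elements.
        return math.ceil(elems / 2) + math.ceil(elems / 16)
    return elems * 2  # fp16/bf16

# Placeholder config: 62 layers, 4 kv heads, head_dim 128.
print(kv_bytes_per_token(62, 4, 128, "fp8"))    # 63488 bytes/token
print(kv_bytes_per_token(62, 4, 128, "nvfp4"))  # 35712 bytes/token

So when decode is dominated by reading the KV cache, nvfp4 cuts the bytes moved per token by roughly 1.8x, which is where the speedups in the table above come from.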

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
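
For example, a typical invocation combining some of the options above (the stage name is the placeholder from the examples, not a recommendation for this PR):

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast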

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@PerkzZheng PerkzZheng requested a review from a team as a code owner November 27, 2025 05:53
@PerkzZheng (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #25964 [ run ] triggered by Bot. Commit: a972cf7

@tensorrt-cicd (Collaborator)

PR_Github #25964 [ run ] completed with state SUCCESS. Commit: a972cf7
/LLM/main/L0_MergeRequest_PR pipeline #19690 completed with status: 'FAILURE'

@PerkzZheng (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #26012 [ run ] triggered by Bot. Commit: 7440f2b

@tensorrt-cicd (Collaborator)

PR_Github #26012 [ run ] completed with state SUCCESS. Commit: 7440f2b
/LLM/main/L0_MergeRequest_PR pipeline #19736 completed with status: 'FAILURE'

@PerkzZheng PerkzZheng force-pushed the user/perkzz/trtllm-gen-nvfp4 branch from 7440f2b to c13c681 Compare November 27, 2025 12:37
@PerkzZheng (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #26036 [ run ] triggered by Bot. Commit: c13c681

@tensorrt-cicd (Collaborator)

PR_Github #26036 [ run ] completed with state SUCCESS. Commit: c13c681
/LLM/main/L0_MergeRequest_PR pipeline #19761 completed with status: 'FAILURE'

Signed-off-by: Perkz Zheng <[email protected]>

update nvfp4 kv cache trtllm-gen kernels && fix several bugs

Signed-off-by: Perkz Zheng <[email protected]>
@PerkzZheng (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #26054 [ run ] triggered by Bot. Commit: c13c681

@tensorrt-cicd (Collaborator)

PR_Github #26054 [ run ] completed with state SUCCESS. Commit: c13c681
/LLM/main/L0_MergeRequest_PR pipeline #19778 completed with status: 'FAILURE'

@PerkzZheng PerkzZheng force-pushed the user/perkzz/trtllm-gen-nvfp4 branch from c13c681 to 9e6bfa4 Compare November 28, 2025 01:46
@PerkzZheng (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #26082 [ run ] triggered by Bot. Commit: 9e6bfa4

@tensorrt-cicd (Collaborator)

PR_Github #26082 [ run ] completed with state FAILURE. Commit: 9e6bfa4

@PerkzZheng PerkzZheng requested a review from meenchen November 28, 2025 03:25
@PerkzZheng (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #26105 [ run ] triggered by Bot. Commit: 9e6bfa4

@eopXD eopXD left a comment


Looks good for the change under kvCacheTransferManager.cpp. May I ask for more specific detail on what you mean by "fix several bugs" in your merge request description?

@PerkzZheng (Collaborator, Author)

> Looks good for the change under kvCacheTransferManager.cpp. May I ask for more specific detail on what you mean by "fix several bugs" in your merge request description?

There are mainly two bugs: one in KernelParams.h, where the TMA descriptor is not set properly for headDim=256 kernels, and another in resource_manager.py, where kv_cache_size is not calculated correctly, which leads to lower throughput than expected.

Thanks!

@eopXD eopXD left a comment


Thank you for the explanation. There were so many files that the diff for resource_manager.py didn't show up and I had to search for it. I see the change now. Maybe also add what you described to the merge request description (or merge commit message) for clarity?

Added some comments for resource_manager.py.

mem_per_token = kv_factor * num_attention_layers * head_dim
# The data type bytes.
quant_config = model_config.quant_config
if quant_config is not None and quant_config.quant_mode.has_fp8_kv_cache(

I think the following gives better hierarchy, error handling, and extensibility.

if quant_config is not None:
    if quant_config.quant_mode.has_fp8_kv_cache():
        ...
    elif quant_config.quant_mode.has_fp4_kv_cache():
        ...
    else:
        raise ValueError("unhandled quant config")

On the other hand, "SFs (fp8) per 16 elements" took me some time to parse; it is expressing "an fp8-type scaling factor for every 16 elements".

    mem_per_token = math.ceil(mem_per_token / 2) + math.ceil(
        mem_per_token / 16)
else:
    mem_per_token *= 2
@eopXD eopXD Nov 28, 2025


For my own understanding, does this mean that we are implying an fp16-type here? There is no comment explaining it.
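
For reference, a minimal sketch of the per-token calculation being discussed, based only on the snippets quoted above (the variable names, the quant_mode interface, and the reading of the final else branch as the fp16/bf16 default are assumptions drawn from that context, not the actual implementation):

import math

def kv_cache_bytes_per_token(kv_factor, num_attention_layers, head_dim, quant_config):
    # Element count per token across all attention layers (K and V via kv_factor).
    mem_per_token = kv_factor * num_attention_layers * head_dim
    if quant_config is not None and quant_config.quant_mode.has_fp8_kv_cache():
        return mem_per_token  # fp8: 1 byte per element
    if quant_config is not None and quant_config.quant_mode.has_fp4_kv_cache():
        # nvfp4: 4-bit data plus one fp8 scaling factor (SF) per 16 elements.
        return math.ceil(mem_per_token / 2) + math.ceil(mem_per_token / 16)
    return mem_per_token * 2  # presumably fp16/bf16: 2 bytes per element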

@eopXD eopXD self-requested a review November 28, 2025 07:00
@tensorrt-cicd (Collaborator)

PR_Github #26105 [ run ] completed with state SUCCESS. Commit: 9e6bfa4
/LLM/main/L0_MergeRequest_PR pipeline #19823 completed with status: 'FAILURE'
