[None][feat] update trtllm-gen nvfp4 kernels with better performance #9510
base: main
Conversation
/bot run

PR_Github #25964 [ run ] triggered by Bot. Commit:

PR_Github #25964 [ run ] completed with state

/bot run

PR_Github #26012 [ run ] triggered by Bot. Commit:

PR_Github #26012 [ run ] completed with state
Force-pushed 7440f2b to c13c681
/bot run

PR_Github #26036 [ run ] triggered by Bot. Commit:

PR_Github #26036 [ run ] completed with state
update nvfp4 kv cache trtllm-gen kernels && fix several bugs

Signed-off-by: Perkz Zheng <[email protected]>
/bot run

PR_Github #26054 [ run ] triggered by Bot. Commit:

PR_Github #26054 [ run ] completed with state
Force-pushed c13c681 to 9e6bfa4
/bot run

PR_Github #26082 [ run ] triggered by Bot. Commit:

PR_Github #26082 [ run ] completed with state

/bot run

PR_Github #26105 [ run ] triggered by Bot. Commit:
eopXD
left a comment
Looks good for the change under kvCacheTransferManager.cpp. May I ask for more specific detail on what you mean by "fix several bugs" in your merge request description?
there are mainly two bugs, one in …

Thanks!
eopXD
left a comment
Thank you for the explanation. There were so many files that the diff didn't show up for resource_manager.py and I had to search for it. I see the change now. Maybe also update what you described in the merge request description (or merge commit message) for clarity?
Added some comments for resource_manager.py.
    mem_per_token = kv_factor * num_attention_layers * head_dim
    # The data type bytes.
    quant_config = model_config.quant_config
    if quant_config is not None and quant_config.quant_mode.has_fp8_kv_cache(
I think the following gives better hierarchy/error handling/extension:

    if quant_config is not None:
        if quant_config.quant_mode.has_fp8_kv_cache():
            ...
        elif quant_config.quant_mode.has_fp4_kv_cache():
            ...
        else:
            raise ValueError("unhandled quant config")

On the other hand, "SFs (fp8) per 16 elements" took me some time to parse; it is expressing "an fp8-type scaling factor for every 16 elements".
        mem_per_token = math.ceil(mem_per_token / 2) + math.ceil(
            mem_per_token / 16)
    else:
        mem_per_token *= 2
For my own understanding, does this mean that we are assuming an fp16-type KV cache here? There is no comment to explain it.
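To make the branches in the reviewed snippet concrete, here is a hedged, self-contained sketch of the per-token KV-cache sizing being discussed. The function name `kv_cache_mem_per_token` and the simplified element count (`kv_factor * num_attention_layers * head_dim`, with no head count) are assumptions for illustration, not the actual resource_manager.py code; the byte math mirrors the quoted lines (fp8: 1 byte/element, nvfp4: 4-bit data plus one fp8 scaling factor per 16 elements, default: 2 bytes, i.e. an fp16/bf16-type cache).

```python
import math


def kv_cache_mem_per_token(num_attention_layers: int, head_dim: int,
                           kv_factor: int = 2, kv_dtype: str = "fp16") -> int:
    """Sketch (not the real implementation) of per-token KV-cache bytes."""
    # Cached elements per token: K and V (kv_factor=2) across all layers.
    elems = kv_factor * num_attention_layers * head_dim
    if kv_dtype == "fp8":
        # 1 byte per element.
        return elems
    elif kv_dtype == "fp4":
        # nvfp4: 4 bits per element (elems / 2 bytes)
        # plus one fp8 scaling factor per 16 elements.
        return math.ceil(elems / 2) + math.ceil(elems / 16)
    else:
        # Default branch from the snippet: 2 bytes/element (fp16/bf16).
        return elems * 2


# Example: 32 layers, head_dim 128.
print(kv_cache_mem_per_token(32, 128, kv_dtype="fp8"))   # 8192 bytes
print(kv_cache_mem_per_token(32, 128, kv_dtype="fp4"))   # 4608 bytes
print(kv_cache_mem_per_token(32, 128))                   # 16384 bytes
```

This makes the reviewer's reading explicit: the `else` branch corresponds to a 2-byte (fp16-type) cache element.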
PR_Github #26105 [ run ] completed with state
update nvfp4 kv cache trtllm-gen kernels && fix several bugs
Description
This PR updates the nvfp4-kv attention kernels with optimized versions, and also adds headDim=256 nvfp4-kv attention kernels.
More performance numbers are being collected, and I will post them here once they are ready.
trtllm-gen unit tests
The unit tests show that nvfp4 kv cache attention kernels can achieve up to 1.58x speedups over fp8 kv cache kernels in high-throughput, bandwidth-limited cases.
Qwen3-Coder-480B-A35B-Instruct + Attention DP + B200x8
Note that 16k/4k shows lower speedups because not all requests can be scheduled due to the kv cache capacity limit, so the last round of scheduled requests contains only 4. Excluding this, it should show speedups similar to 8k/4k.
In general, the performance gains mainly come from a high number of scheduled requests. Attention kernels only benefit when they are bounded by memory bandwidth (large batch size and sequence length).
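As a back-of-envelope check (my own arithmetic, not a number from the PR), the bandwidth-limited speedup ceiling follows from the bytes of KV cache moved per element in each format: fp8 moves 1 byte, while nvfp4 moves 4 bits plus one fp8 scaling factor per 16 elements. The measured 1.58x sits plausibly below this theoretical bound.

```python
# Bytes of KV cache traffic per cached element.
bytes_fp8 = 1.0                # fp8: one byte per element
bytes_fp4 = 0.5 + 1.0 / 16     # nvfp4: 4-bit data + fp8 SF per 16 elements

# Theoretical bandwidth-limited speedup ceiling of nvfp4 over fp8.
ratio = bytes_fp8 / bytes_fp4
print(f"{bytes_fp4} bytes/element, ceiling ~{ratio:.2f}x")  # ~1.78x
```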
Test Coverage
PR Checklist
Please review the following before submitting your PR:
- PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
- PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
- Test cases are provided for new code paths (see test instructions).
- Any new dependencies have been scanned for license and vulnerabilities.
- CODEOWNERS updated if ownership changes.
- Documentation updated as needed.
- Update tava architecture diagram if there is a significant design change in the PR.
- The reviewers assigned automatically/manually are appropriate for the PR.
- Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] {run, kill, skip, reuse-pipeline} ... provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.

run

/bot run [--reuse-test (optional)pipeline-id] [--disable-fail-fast] [--skip-test] [--stage-list "A10-PyTorch-1, xxx"] [--gpu-type "A30, H100_PCIe"] [--test-backend "pytorch, cpp"] [--add-multi-gpu-test] [--only-multi-gpu-test] [--disable-multi-gpu-test] [--post-merge] [--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"] [--detailed-log] [--debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

- --reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
- --disable-fail-fast (OPTIONAL): Disable fail-fast on build/test/infra failures.
- --skip-test (OPTIONAL): Skip all test stages, but still run build, package, and sanity-check stages. Note: does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL): Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL): Force-run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- --post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline plus the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL): Enable flushing all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- --debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill

/bot kill: Kill all running builds associated with the pull request.

skip

/bot skip --comment COMMENT: Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: this is dangerous, since lack of user care and validation can break the top of tree.

reuse-pipeline

/bot reuse-pipeline: Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: this is dangerous, since lack of user care and validation can break the top of tree.
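For illustration, a typical invocation posted as a PR comment might look like the following (the stage name is an example taken from the help text above, not a recommendation for this PR):

```
/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"
```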