Conversation

@jenchen13 (Contributor) commented Nov 14, 2025

What does this PR do?

Type of change: Bug fix

Overview: Fix hf_quant_config with the correct KV cache type for FP8/NVFP4.

Usage

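The snippet below is a minimal sketch (not from this PR) of how the fixed export path is typically exercised, assuming the usual modelopt.torch quantize-then-export flow; the config names (mtq.FP8_DEFAULT_CFG, mtq.FP8_KV_CFG), the export_hf_checkpoint call, and the model name are illustrative assumptions that may differ across modelopt versions.

```python
# Sketch only: config names and the KV-cache merge step are assumptions for
# illustration; check your modelopt version for the exact names.
import copy

import modelopt.torch.quantization as mtq
from modelopt.torch.export import export_hf_checkpoint
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

def forward_loop(m):
    # Run a small calibration dataset through the model here.
    ...

# FP8 weights/activations, plus FP8 KV cache (FP8_KV_CFG name assumed).
quant_cfg = copy.deepcopy(mtq.FP8_DEFAULT_CFG)
quant_cfg["quant_cfg"].update(mtq.FP8_KV_CFG["quant_cfg"])

model = mtq.quantize(model, quant_cfg, forward_loop)

# With this fix, hf_quant_config.json in the export directory should report
# the KV-cache dtype ("FP8" here; "NVFP4" for NVFP4-quantized models).
export_hf_checkpoint(model, export_dir="llama3.1-8b-fp8")
```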

Testing

Will test export with FP8 KV cache enabled.
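A rough sketch of such a check (not from the PR), assuming the export writes an hf_quant_config.json with the usual quantization / kv_cache_quant_algo layout and reusing the hypothetical export directory from the Usage sketch:

```python
import json
from pathlib import Path

export_dir = Path("llama3.1-8b-fp8")  # hypothetical export dir from the Usage sketch
cfg = json.loads((export_dir / "hf_quant_config.json").read_text())

# After this fix, the KV-cache field should match the enabled KV-cache quantization.
print(cfg["quantization"].get("kv_cache_quant_algo"))  # expected "FP8" (or "NVFP4")
```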

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes/No
  • Did you write any new necessary tests?: Yes/No
  • Did you add or update any necessary documentation?: Yes/No
  • Did you update Changelog?: Yes/No

Additional Information

@jenchen13 jenchen13 requested a review from a team as a code owner November 14, 2025 03:24
@jenchen13 jenchen13 requested a review from meenchen November 14, 2025 03:25
@copy-pr-bot (bot) commented Nov 14, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@jenchen13 jenchen13 requested a review from ChenhanYu November 14, 2025 03:25
@codecov (bot) commented Nov 14, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 74.64%. Comparing base (422c58b) to head (3cdb810).
⚠️ Report is 8 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #557   +/-   ##
=======================================
  Coverage   74.64%   74.64%           
=======================================
  Files         183      183           
  Lines       18547    18547           
=======================================
  Hits        13844    13844           
  Misses       4703     4703           

☔ View full report in Codecov by Sentry.

Inline review comment on the exporter's KV-cache handling:

kv_cache_quantization = None
if get_kv_cache_dtype(self.model) == KV_CACHE_FP8:
    # Only FP8 KV Cache is supported in VLLM for now
    kv_cache_quantization = "FP8"
A reviewer (Contributor) commented:

Could you also add FP4 KV support? TRT-LLM actually supports FP4 KV cache now.

@jenchen13 (Contributor, Author) replied:

just added
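For reference, a rough sketch of what the extended branch could look like once NVFP4 KV cache is also recognized; KV_CACHE_NVFP4 and the "NVFP4" string mirror the FP8 case above and are assumptions, not the exact code pushed in this PR:

```python
# Sketch of the exporter fragment above; KV_CACHE_NVFP4 is assumed to exist
# alongside KV_CACHE_FP8 in the same module.
kv_cache_quantization = None
kv_cache_dtype = get_kv_cache_dtype(self.model)
if kv_cache_dtype == KV_CACHE_FP8:
    kv_cache_quantization = "FP8"
elif kv_cache_dtype == KV_CACHE_NVFP4:
    # TRT-LLM also supports NVFP4 KV cache now
    kv_cache_quantization = "NVFP4"
```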

@jenchen13 jenchen13 force-pushed the jennifchen/fix_kv_export branch from 478abad to c2b254c Compare December 2, 2025 05:47
@jenchen13 jenchen13 enabled auto-merge (squash) December 2, 2025 05:55
@jenchen13 jenchen13 changed the title from "Fix hf_quant_config with kv cache type" to "Fix hf_quant_config with kv cache type [OMNIML-2918]" Dec 2, 2025
@jenchen13 jenchen13 force-pushed the jennifchen/fix_kv_export branch from 18852fe to 3ecca22 Compare December 3, 2025 01:19
@jenchen13 jenchen13 force-pushed the jennifchen/fix_kv_export branch from 3ecca22 to 3cdb810 Compare December 3, 2025 01:24
@jenchen13 jenchen13 self-assigned this Dec 4, 2025
@jenchen13 jenchen13 added the bug Something isn't working label Dec 4, 2025
@kevalmorabia97 (Collaborator) commented:
/ok to test 3cdb810

@jenchen13 jenchen13 merged commit ba19328 into main Dec 4, 2025
27 checks passed
@jenchen13 jenchen13 deleted the jennifchen/fix_kv_export branch December 4, 2025 05:52
kevalmorabia97 pushed a commit that referenced this pull request Dec 7, 2025
Update hf_quant_config with correct kv cache type for FP8 and NVFP4

---------

Signed-off-by: jenchen13 <[email protected]>
Signed-off-by: Jennifer Chen <[email protected]>