Add truncate arg to yarn to match openai implementation of gpt-oss #28244
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited subset of checks runs automatically, and you can ask your reviewers to trigger select CI tests on top of those. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. If you have any questions, please reach out to us on Slack at https://slack.vllm.ai. 🚀
Code Review
This pull request introduces a truncate argument to the YaRN scaling implementation to align with OpenAI's GPT-OSS, propagating the change from the model configuration down to the rotary embedding calculation. My review identifies a couple of areas for improvement: an incorrect type hint that could affect static analysis, and a potential KeyError that could impact model loading with older configurations. The proposed suggestions aim to improve correctness and robustness.
    base: float = 10000,
    max_position_embeddings: int = 2048,
    truncate: bool = True,
) -> tuple[int, int]:
The function's return type annotation tuple[int, int] is now incorrect. When truncate is False, the function returns a tuple of floats because yarn_find_correction_dim returns a float and no truncation is applied. This can lead to issues with static type checkers. To ensure type consistency for both truncate=True and truncate=False scenarios, the return type should be tuple[float, float]. In Python's type system, int values are compatible where float types are expected, making tuple[float, float] the correct annotation for both return paths.
- ) -> tuple[int, int]:
+ ) -> tuple[float, float]:
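For illustration, here is a minimal sketch of what the updated helper could look like. The function names and the correction-dim formula follow the standard YaRN formulation and are assumptions, not a verbatim copy of vLLM's code:

```python
import math


def yarn_find_correction_dim(
    num_rotations: float,
    dim: int,
    base: float = 10000,
    max_position_embeddings: int = 2048,
) -> float:
    # Fractional dimension index at which `num_rotations` full rotations occur.
    return (dim * math.log(max_position_embeddings / (num_rotations * 2 * math.pi))) / (
        2 * math.log(base)
    )


def yarn_find_correction_range(
    low_rot: float,
    high_rot: float,
    dim: int,
    base: float = 10000,
    max_position_embeddings: int = 2048,
    truncate: bool = True,
) -> tuple[float, float]:
    low = yarn_find_correction_dim(low_rot, dim, base, max_position_embeddings)
    high = yarn_find_correction_dim(high_rot, dim, base, max_position_embeddings)
    if truncate:
        # Only this path yields integer bounds; with truncate=False the raw
        # float bounds are returned, hence the tuple[float, float] annotation.
        low, high = math.floor(low), math.ceil(high)
    return max(low, 0), min(high, dim - 1)
```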
    ],
    "beta_fast": config.rope_scaling["beta_fast"],
    "beta_slow": config.rope_scaling["beta_slow"],
    "truncate": config.rope_scaling["truncate"],
Directly accessing config.rope_scaling["truncate"] will raise a KeyError if the key is not present in the configuration, which could happen with older model configs. This would cause model loading to fail. To improve robustness and maintain backward compatibility, it's safer to use the .get() method with a default value. Since this change is a bug fix to align with the GPT-OSS implementation (which should not truncate), a default of False is appropriate. This ensures that older configurations without this key will adopt the correct behavior.
| "truncate": config.rope_scaling["truncate"], | |
| "truncate": config.rope_scaling.get("truncate", False), |
💡 Codex Review
Here are some automated review suggestions for this pull request.
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
    ],
    "beta_fast": config.rope_scaling["beta_fast"],
    "beta_slow": config.rope_scaling["beta_slow"],
    "truncate": config.rope_scaling["truncate"],
},
is_neox_style=True,
Guard against missing rope_scaling.truncate
The new YaRN path now unconditionally reads config.rope_scaling["truncate"] when constructing the rotary embedding. Older GPT‑OSS configs (including those in prior releases) do not carry this key because truncation used to be implicit. In that case, model initialization will raise KeyError before any generation runs, whereas before the change the model still worked (albeit with rounded correction bounds). Consider using config.rope_scaling.get("truncate", True) or another default so that existing configs continue to load while newer configs can opt out of truncation.
Can you fix the pre-commit and run the accuracy benchmark with the tutorial here? https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html#accuracy-evaluation-panels

@heheda12345 I fixed the pre-commit failure and am happy to fix any other issues that arise. Would it be possible to get help running the accuracy benchmark? I typically run vLLM through another framework, so running the standalone accuracy benchmark would require some setup on my end.

Thanks! You can follow the instructions in the above link.

@heheda12345 I am trying to get this working, but I'm having trouble running the vLLM server with our cluster. This might require some ramp-up on my end. If you are able to help out with running these evals, it would be greatly appreciated.

The GPQA eval looks good to me on H100 (20b, low reasoning effort 0.56, medium reasoning effort 0.66).

@heheda12345 Just FYI, I also found a small issue with BF16 + EP (fixed in my latest commit).

@heheda12345 Are there any action items required from me for merge?

Can you revert the EP bug fix and put it in another new PR?

Done. Here's the new PR: #28765

Can you fix the DCO?
@heheda12345 Done
Purpose
Refer to the issue for context: #27722. vLLM's implementation of YaRN does not match OpenAI's for GPT-OSS. This PR provides a fix.
Test Plan
I tested this change on GPT-OSS and validated that the YaRN correction range is as expected.
Test Result
The YaRN correction range is no longer rounded to an int after this fix.
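A rough sketch of how one might spot-check this, using the standard YaRN correction-dim formula with placeholder parameters (not GPT-OSS's actual rope config):

```python
import math

# Placeholder parameters; values are illustrative only.
def correction_dim(num_rotations: float, dim: int = 64, base: float = 10000.0,
                   max_pos: int = 4096) -> float:
    return (dim * math.log(max_pos / (num_rotations * 2 * math.pi))) / (2 * math.log(base))

low, high = correction_dim(32.0), correction_dim(1.0)

print(math.floor(low), math.ceil(high))  # previous behavior: integer bounds
print(low, high)                         # with truncate=False: fractional bounds
```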
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.