[Attention] FlashAttention ViT support, make default backend #28763
base: main
Conversation
Signed-off-by: Matthew Bonanni <[email protected]>
Code Review
This pull request updates FlashAttention to support head sizes required for Vision Transformers (40, 72, 80). This is achieved by updating the dependency to a fork of flash-attention, generalizing the head size check in the FlashAttention backend, and updating tests. The logic for selecting the ViT attention backend is also refactored for clarity. My review has identified two main points. First, a critical issue in cmake/external_projects/vllm_flash_attn.cmake where the dependency points to a personal fork, which must be reverted before merging. Second, a high-severity issue in tests/kernels/attention/test_flash_attn.py where a test case for soft_cap has been removed, potentially hiding a feature regression. The other changes look good.
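As a rough illustration of the generalization described above (the helper name and exact rule are assumptions, not the actual vLLM code), the hard-coded list of head sizes could be replaced by a broader predicate:

```python
# Hypothetical sketch only; the real check lives in vLLM's FlashAttention
# backend and may differ in both name and exact condition.
def is_head_size_supported(head_size: int) -> bool:
    """Accept any head size the FlashAttention kernels can handle,
    which now includes the ViT sizes 40, 72, and 80."""
    return head_size % 8 == 0 and head_size <= 256
```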
Do you know if FA2 is supported too? Do you mind testing this on Ampere? I think it should be OK.
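For context, FlashAttention 3 kernels target Hopper (SM90) while FlashAttention 2 covers Ampere-class GPUs (SM8x), so a quick capability check like the sketch below (illustrative only, not vLLM's actual dispatch logic) tells you which generation a given card is:

```python
import torch

# Illustrative check of the GPU generation; vLLM's own backend selection
# is more involved than this.
major, minor = torch.cuda.get_device_capability()
if major >= 9:
    print(f"SM{major}{minor}: Hopper or newer, FA3 kernels are an option")
elif major == 8:
    print(f"SM{major}{minor}: Ampere/Ada, FA2 kernels would apply")
else:
    print(f"SM{major}{minor}: older than Ampere, FlashAttention is likely unsupported")
```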
Purpose
This PR is paired with vllm-project/flash-attention#109 (merge that first after CI passes, then I'll update the git tag), which enables FlashAttention to support the head sizes required for vision transformers (40, 72, and 80). This PR also updates the backend selector to make FlashAttention the default over xFormers.
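A minimal sketch of the selector behavior described above, assuming a simple head-size rule and placeholder backend names (this is not vLLM's actual selector API):

```python
# Hypothetical selector sketch: prefer FlashAttention for ViT attention and
# fall back to xFormers only when the head size is not covered.
def select_vit_attn_backend(head_size: int) -> str:
    if head_size % 8 == 0 and head_size <= 256:  # now covers 40, 72, 80
        return "FLASH_ATTN"
    return "XFORMERS"

# e.g. select_vit_attn_backend(72) -> "FLASH_ATTN"
```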
Test Plan
pytest tests/kernels/attention/test_flash_attn.py (updated with new head sizes)
Test Result
Passes
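For illustration, the updated parametrization presumably extends the head-size list along these lines (the parameter name and list here are guesses, not a verbatim diff of test_flash_attn.py):

```python
import pytest

# Illustrative only; the real test exercises the full attention kernel
# rather than this trivial compatibility assertion.
@pytest.mark.parametrize("head_size", [40, 64, 72, 80, 128, 256])
def test_head_size_is_flash_attn_compatible(head_size):
    assert head_size % 8 == 0 and head_size <= 256
```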
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.