
Conversation

@MatthewBonanni (Contributor) commented Nov 15, 2025

Purpose

This PR is paired with vllm-project/flash-attention#109 (merge that first after CI passes, then I'll update the git tag), which enables FA to support the head sizes required for vision transformers (40, 72, and 80). This PR also updates the selector to make FlashAttention the default backend over xFormers.
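
Where these head sizes come from: a ViT's per-head dimension is just hidden_size // num_attention_heads, and several common vision encoders land on values that previously fell outside FA's supported set. A small illustrative check (the named encoders are examples, not an exhaustive list):

```python
# Illustrative only: per-head dimension of a transformer encoder.
def vit_head_size(hidden_size: int, num_heads: int) -> int:
    return hidden_size // num_heads

assert vit_head_size(1152, 16) == 72  # SigLIP-so400m-style vision encoder
assert vit_head_size(1280, 16) == 80  # Qwen2-VL-style vision encoder
```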

Test Plan

pytest tests/kernels/attention/test_flash_attn.py (updated with new head sizes)
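
Roughly what the added coverage exercises — a hedged sketch (not the actual test file) comparing the flash-attention kernel to a naive PyTorch reference at the new head sizes. The import path and tolerances here are assumptions; the authoritative parametrization lives in tests/kernels/attention/test_flash_attn.py.

```python
import pytest
import torch


@pytest.mark.parametrize("head_size", [40, 72, 80])
def test_vit_head_sizes(head_size):
    if not torch.cuda.is_available():
        pytest.skip("requires a CUDA device")
    # Import path is an assumption; upstream flash-attn exposes the same symbol.
    from vllm.vllm_flash_attn import flash_attn_func

    torch.manual_seed(0)
    # (batch, seqlen, num_heads, head_size); 197 ~ ViT patch tokens + [CLS].
    q = torch.randn(1, 197, 12, head_size, dtype=torch.float16, device="cuda")
    k, v = torch.randn_like(q), torch.randn_like(q)

    out = flash_attn_func(q, k, v)

    # Naive float32 reference attention.
    scale = head_size ** -0.5
    qt, kt, vt = (x.transpose(1, 2).float() for x in (q, k, v))
    attn = torch.softmax(qt @ kt.transpose(-2, -1) * scale, dim=-1)
    ref = (attn @ vt).transpose(1, 2).to(q.dtype)
    torch.testing.assert_close(out, ref, atol=2e-2, rtol=2e-2)
```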

Test Result

Passes


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: Matthew Bonanni <[email protected]>

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Signed-off-by: Matthew Bonanni <[email protected]>

@gemini-code-assist (bot) left a comment


Code Review

This pull request updates FlashAttention to support the head sizes required for vision transformers (40, 72, 80). This is achieved by updating the dependency to a fork of flash-attention, generalizing the head size check in the FlashAttention backend, and updating the tests. The logic for selecting the ViT attention backend is also refactored for clarity. My review identified two main points:

  • Critical: cmake/external_projects/vllm_flash_attn.cmake points the dependency at a personal fork, which must be reverted before merging.
  • High severity: tests/kernels/attention/test_flash_attn.py removes a test case for soft_cap, potentially hiding a feature regression.

The other changes look good.
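
For readers without the diff open, one plausible shape of the "generalized head size check" mentioned above is a divisibility-and-range rule in place of an explicit whitelist; this is an illustration under that assumption, not the actual predicate in the backend:

```python
# Illustrative sketch: FlashAttention kernels generally require the head
# dimension to be a multiple of 8 and at most 256, which admits the new ViT
# sizes (40, 72, 80) without enumerating them. The real check may differ.
def is_head_size_supported(head_size: int) -> bool:
    return head_size % 8 == 0 and 0 < head_size <= 256
```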

@mgoin added the ready label (ONLY add when PR is ready to merge/full CI is needed) Nov 15, 2025
@MatthewBonanni changed the title from "[DO NOT MERGE][Attention] FlashAttention ViT support" to "[DO NOT MERGE][Attention] FlashAttention ViT support, make default backend" Nov 17, 2025
Signed-off-by: Matthew Bonanni <[email protected]>
@MatthewBonanni changed the title from "[DO NOT MERGE][Attention] FlashAttention ViT support, make default backend" to "[Attention] FlashAttention ViT support, make default backend" Nov 17, 2025
@LucasWilkinson (Collaborator) commented

Do you know if FA2 is supported too? Do you mind testing this on Ampere? I think it should be OK.
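
For anyone reproducing that check on Ampere (which has no FA3), a hedged sketch of pinning the FlashAttention backend and FA2 through vLLM's environment overrides before loading a ViT-bearing model; the variable names are assumptions to verify against the vLLM version in use:

```python
import os

# Env var names are assumptions based on vLLM's documented overrides.
os.environ["VLLM_ATTENTION_BACKEND"] = "FLASH_ATTN"
os.environ["VLLM_FLASH_ATTN_VERSION"] = "2"  # force FA2; Ampere (SM8x) lacks FA3

from vllm import LLM

# Qwen2-VL's vision encoder uses head size 80, one of the newly supported sizes.
llm = LLM(model="Qwen/Qwen2-VL-2B-Instruct", max_model_len=4096)
print(llm.generate("Describe attention in one sentence.")[0].outputs[0].text)
```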


Labels

ci/build, nvidia, ready, v1
