
Conversation

@jeffdaily

Similar to Dao-AILab/flash-attention#1944.

See pytorch/pytorch#151845. PyTorch has removed caffe2, but hipify still contained workarounds for caffe2 vs. torch compatibility. As a result of the hipify v2 changes, some torch APIs are changing.
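For context, here is a minimal sketch (illustrative only, not taken from this PR's diff; the function and kernel names are made up) of the kind of extension code hipify operates on. Extension sources are written against the CUDA-named PyTorch APIs, and hipify rewrites those identifiers for ROCm builds; the v1 vs. v2 difference in how they are rewritten is what downstream projects have to account for.

```cpp
// Illustrative sketch only -- not this PR's diff.
#include <ATen/ATen.h>
#include <ATen/cuda/CUDAContext.h>  // at::cuda::getCurrentCUDAStream
#include <ATen/cuda/CUDAGuard.h>    // at::cuda::CUDAGuard

__global__ void noop_kernel() {}  // placeholder kernel for the example

void launch(const at::Tensor& q) {
  at::cuda::CUDAGuard device_guard(q.device());
  // hipify v1 rewrote the stream/guard calls below to the
  // "MasqueradingAsCUDA" wrappers that existed for caffe2 compatibility;
  // with caffe2 removed, hipify v2 maps them to the plain HIP equivalents,
  // so code or build tooling that hard-coded the old names needs updating.
  cudaStream_t stream = at::cuda::getCurrentCUDAStream();
  noop_kernel<<<1, 1, 0, stream>>>();
}
```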

@bottler (Contributor) commented Oct 29, 2025

There's an update of the flash-attention submodule here to the latest commit, which includes a similar change. It looks like nothing has changed in FA2/FA3 on NVIDIA, so it should be fine.

@jeffdaily (Author)

@bottler could you retrigger CI after the lint fix?

@jeffdaily (Author)

@bottler I had to fix a conflict; the flash-attention submodule was updated outside of this PR. Rebased and force-pushed, so please restart CI when you can. Thanks.
