Pull requests: PaddlePaddle/flash-attention
Forked from Dao-AILab/flash-attention.
#113: refactor: simplify build to a single .so without CMakeLists.txt (opened Mar 4, 2026 by baoqiwen)
#111: Support Global Sliding Window (num_vec == 4) on FM4 BWD (opened Mar 3, 2026 by umiswing)
#110: Zero-Copy FlashMaskV3 Computation-Communication Overlap (opened Mar 2, 2026 by Enigmatisms; 2 of 4 tasks)
#98: Add FlashMask v2 Torch files: flash_api.cpp, flashmask_interface.py, setup.py (opened Dec 23, 2025 by clouds1238)
#91: Removed redundant templates and related compile-time/runtime code (opened Nov 14, 2025 by Enigmatisms; 1 task)
#55: Scan from right to left and skip masked blocks for each row at kernel start (opened Sep 23, 2024 by GuoxiaWang)
#38: Fix compute error when unpadding input with a padding mask (opened Apr 15, 2024 by wwbitejotunn)