Implement the Block Diffusion Hybrid Attention Kernel from https://arxiv.org/abs/2503.09573. Disclaimer: I am an author. It's a simple, illustrative, and efficient kernel that can achieve a >5X speedup over SDPA due to the sparsity of the mask, which makes it a good example for highlighting FlexAttention's performance benefits.
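
For reference, here is a minimal sketch of what the `mask_mod` could look like, based on my reading of the hybrid mask in the paper: the input is the concatenation `[x_t; x_0]` of the noised and clean sequences, noised tokens attend bidirectionally within their own block and to clean tokens of strictly earlier blocks, and clean tokens attend block-causally. `SEQ_LEN` and `BLOCK_SIZE` are placeholder values, not the paper's settings.

```python
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention

# Hypothetical sizes for illustration: n tokens per half, so the full
# input [x_t; x_0] has length 2n. BLOCK_SIZE is the diffusion block size.
SEQ_LEN = 1024
BLOCK_SIZE = 128

def block_diff_mask(b, h, q_idx, kv_idx):
    # The first SEQ_LEN positions hold the noised tokens x_t and the
    # last SEQ_LEN positions hold the clean tokens x_0.
    q_noised = q_idx < SEQ_LEN
    kv_noised = kv_idx < SEQ_LEN
    # Block index within each half (positions i and i + SEQ_LEN share a block).
    q_block = (q_idx % SEQ_LEN) // BLOCK_SIZE
    kv_block = (kv_idx % SEQ_LEN) // BLOCK_SIZE
    # 1) x_t -> x_t: bidirectional attention within the same block.
    block_diagonal = q_noised & kv_noised & (q_block == kv_block)
    # 2) x_t -> x_0: attend to clean tokens of strictly earlier blocks.
    offset_block_causal = q_noised & ~kv_noised & (kv_block < q_block)
    # 3) x_0 -> x_0: block-causal attention over the clean sequence.
    block_causal = ~q_noised & ~kv_noised & (kv_block <= q_block)
    return block_diagonal | offset_block_causal | block_causal

block_mask = create_block_mask(
    block_diff_mask, B=None, H=None,
    Q_LEN=2 * SEQ_LEN, KV_LEN=2 * SEQ_LEN, device="cuda",
)

q = k = v = torch.randn(1, 8, 2 * SEQ_LEN, 64, device="cuda", dtype=torch.float16)
out = flex_attention(q, k, v, block_mask=block_mask)
```

The speedup comes from `create_block_mask` precomputing which tiles are fully masked so the kernel can skip them outright; since all three components above are block-sparse, most of the 2n x 2n score matrix is never materialized.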