Thank you for the examples in this repo. I'm wondering whether FlexAttention would bring any value over SDPA for masked language modeling. In masked language modeling, random input positions are masked and their contributions are ignored during the softmax. However, since the masking is random, there's no structure (like causality or sliding windows) to exploit. Does FlexAttention make sense in this case? For concreteness, a minimal sketch of the kind of mask I have in mind is below.
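Here is roughly what I mean, expressed as a FlexAttention `mask_mod` (the `masked_positions` tensor, the 15% masking rate, and the shapes are just made up for illustration; `flex_attention` and `create_block_mask` are the PyTorch APIs from `torch.nn.attention.flex_attention`):

```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

B, H, S, D = 2, 8, 1024, 64
device = "cuda"

# Hypothetical random mask: True = this input position was masked
# and should be ignored as a key/value during attention.
masked_positions = torch.rand(B, S, device=device) < 0.15

def mlm_mask_mod(b, h, q_idx, kv_idx):
    # Allow attention to kv_idx only if that position is not masked.
    return ~masked_positions[b, kv_idx]

# Mask varies per batch element but not per head, so H=None broadcasts over heads.
block_mask = create_block_mask(mlm_mask_mod, B=B, H=None, Q_LEN=S, KV_LEN=S, device=device)

q = torch.randn(B, H, S, D, device=device, dtype=torch.float16)
k = torch.randn(B, H, S, D, device=device, dtype=torch.float16)
v = torch.randn(B, H, S, D, device=device, dtype=torch.float16)

out = flex_attention(q, k, v, block_mask=block_mask)
```

Since the masked positions are sampled uniformly at random, most 128x128 blocks will contain at least one unmasked key, so I don't see where block sparsity would come from. Is there still a benefit, or would SDPA with an attention mask be just as good here?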