Pull requests: Dao-AILab/flash-attention
Add flash_attn_varlen_qkvpacked_func to hopper (flash_attn_3)
#1902 opened Sep 22, 2025 by foreverYoungGitHub
Fix the torch.compile failure of flash_attn_varlen_func
#1894 opened Sep 17, 2025 by zhenwendai
fix race condition bug in cute _flash_attn_fwd in multiple gpu env
#1793 opened Aug 1, 2025 by beiw-nv
Add torch.compile support to flash attention 3
#1769 opened Jul 22, 2025 by guilhermeleobas
Enable the deterministic mode option in the backward kernel
#1766 opened Jul 21, 2025 by GD06
Fix illegal memory access through off-by-one error in num_splits_dynamic_ptr init
#1747 opened Jul 10, 2025 by klondenberg-bioptimus