
support sparse gqa and fix flashattn #1039

Open
STwangyingrui wants to merge 2 commits into main from yr/sparse_gqa_and_fix_fa

Conversation

@STwangyingrui
Contributor

No description provided.


@gemini-code-assist (Bot) left a comment


Code Review

This pull request updates flash attention imports and reshaping logic, removes redundant device transfers for sequence lengths, and adds Grouped Query Attention (GQA) support in utility functions. Feedback was provided regarding the redundant recomputation and device transfer of cumulative sequence lengths in the transformer inference loop, suggesting they be moved outside the loop and explicitly cast to int32 for better performance and compatibility.
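
The PR ships no description, so the exact shape of the GQA change is not visible in this thread. A common utility-level pattern for GQA support is to repeat each KV head across its group of query heads before a kernel that expects matching head counts; the following is a minimal sketch of that pattern, where the function name, shapes, and example sizes are assumptions, not this PR's code:

import torch

def repeat_kv(kv: torch.Tensor, n_rep: int) -> torch.Tensor:
    # kv: (total_tokens, num_kv_heads, head_dim)
    # Expands to (total_tokens, num_kv_heads * n_rep, head_dim) so that
    # query head i attends to KV head i // n_rep.
    if n_rep == 1:
        return kv
    return kv.repeat_interleave(n_rep, dim=1)

# e.g. 32 query heads sharing 8 KV heads -> n_rep = 4
k = torch.randn(10, 8, 128)
assert repeat_kv(k, 4).shape == (10, 32, 128)

Note that recent flash_attn kernels handle GQA natively when the query head count is an integer multiple of the KV head count, so an explicit repeat like this is only needed on code paths without such support.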

Comment on lines +163 to +164
cu_seqlens_q = torch.nn.functional.pad(torch.cumsum(query_lens, dim=0), (1, 0)).to(AI_DEVICE)
cu_seqlens_k = torch.nn.functional.pad(torch.cumsum(key_values_lens, dim=0), (1, 0)).to(AI_DEVICE)


Severity: medium

The cu_seqlens_q and cu_seqlens_k tensors are recomputed and moved to the device for every layer in the transformer. Since query_lens and the updated key_values_lens are constant across all layers within a single inference step, these cumulative sequence lengths should ideally be computed once outside the layer loop (e.g., in the infer method) and passed down to self_attn to avoid redundant computations and expensive host-to-device transfers.

Additionally, you can combine the device move and type conversion into a single .to(device=AI_DEVICE, dtype=torch.int32) call to improve efficiency, as flash_attn requires int32 tensors.

Suggested change
- cu_seqlens_q = torch.nn.functional.pad(torch.cumsum(query_lens, dim=0), (1, 0)).to(AI_DEVICE)
- cu_seqlens_k = torch.nn.functional.pad(torch.cumsum(key_values_lens, dim=0), (1, 0)).to(AI_DEVICE)
+ cu_seqlens_q = torch.nn.functional.pad(torch.cumsum(query_lens, dim=0), (1, 0)).to(device=AI_DEVICE, dtype=torch.int32)
+ cu_seqlens_k = torch.nn.functional.pad(torch.cumsum(key_values_lens, dim=0), (1, 0)).to(device=AI_DEVICE, dtype=torch.int32)
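
A hedged sketch of the hoist the reviewer suggests: AI_DEVICE, query_lens, and key_values_lens come from the PR, while the helper name and the infer/layer signatures below are illustrative assumptions, not the repository's actual API.

import torch

def build_cu_seqlens(lens: torch.Tensor, device) -> torch.Tensor:
    # flash_attn expects int32 cumulative sequence lengths with a leading
    # zero; a single combined .to() call does the device move and the
    # dtype cast in one step.
    return torch.nn.functional.pad(torch.cumsum(lens, dim=0), (1, 0)).to(
        device=device, dtype=torch.int32
    )

def infer(layers, hidden, query_lens, key_values_lens, AI_DEVICE):
    # Computed once per inference step, since the lengths are constant
    # across layers, then reused by every layer's self_attn instead of
    # being rebuilt and re-transferred inside the loop.
    cu_seqlens_q = build_cu_seqlens(query_lens, AI_DEVICE)
    cu_seqlens_k = build_cu_seqlens(key_values_lens, AI_DEVICE)
    for layer in layers:
        hidden = layer(hidden, cu_seqlens_q=cu_seqlens_q, cu_seqlens_k=cu_seqlens_k)
    return hidden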

