[Docs][DeepseekV3.2] Update deepseekv3.2 docs for mha short seq prefill #12868

Merged
Fridge003 merged 4 commits into sgl-project:main from YAMY1234:doc/update-nsa-mha-pathway
Nov 8, 2025
Conversation

@YAMY1234 (Contributor) commented Nov 8, 2025

Motivation

Update the DeepSeek V3.2 documentation to describe the adaptive MHA short-sequence prefill mechanism, helping users understand the new attention pathway selection logic.

Modifications

Update docs/basic_usage/deepseek_v32.md

  • Add a new bullet point "Short-sequence MHA prefill (adaptive)" explaining (a hedged sketch of this selection logic follows the list):
    • Default threshold of 2048 tokens
    • H200 (SM90) uses FlashAttention varlen
    • B200 (SM100) uses TRT-LLM ragged MHA
    • The MHA path runs in MHA_ONE_SHOT mode for best performance
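
To make the pathway selection concrete, here is a minimal sketch of the dispatch described above. The names (select_prefill_path, PrefillPath, MHA_PREFILL_SEQ_LEN_THRESHOLD) are hypothetical illustrations, not sglang's actual implementation; only the 2048-token default and the SM90/SM100 kernel choices come from this PR's docs.

```python
from enum import Enum, auto

import torch


# Hypothetical names for illustration; the real dispatch lives inside
# sglang's NSA attention backend and may be structured differently.
MHA_PREFILL_SEQ_LEN_THRESHOLD = 2048  # default threshold per the docs


class PrefillPath(Enum):
    FA_VARLEN_MHA = auto()      # FlashAttention varlen (H200 / SM90)
    TRTLLM_RAGGED_MHA = auto()  # TRT-LLM ragged MHA (B200 / SM100)
    NSA_SPARSE = auto()         # sparse NSA prefill for longer sequences


def select_prefill_path(seq_len: int) -> PrefillPath:
    """Choose the prefill attention path for a single request.

    Sequences at or below the threshold run dense MHA in one shot
    (the MHA_ONE_SHOT mode mentioned in the docs); the kernel is
    then picked by GPU architecture.
    """
    if seq_len > MHA_PREFILL_SEQ_LEN_THRESHOLD:
        return PrefillPath.NSA_SPARSE
    major, _minor = torch.cuda.get_device_capability()
    if major >= 10:  # SM100 (B200)
        return PrefillPath.TRTLLM_RAGGED_MHA
    return PrefillPath.FA_VARLEN_MHA  # SM90 (H200)
```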

Accuracy Tests

Benchmarking and Profiling

Checklist

@github-actions bot added the documentation (Improvements or additions to documentation) and deepseek labels Nov 8, 2025
@gemini-code-assist (Contributor)
Summary of Changes

Hello @YAMY1234, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the DeepSeek V3.2 documentation by providing crucial information about the adaptive MHA short-sequence prefill mechanism. The update clarifies how the system intelligently handles shorter input sequences to optimize performance across different NVIDIA GPU architectures (H200 and B200), ensuring users have a better understanding of the underlying attention pathway selection logic.

Highlights

  • Documentation Update: The pull request updates the deepseek_v32.md documentation to include details about the adaptive Multi-Head Attention (MHA) short-sequence prefill mechanism.
  • Adaptive MHA Short-Sequence Prefill: A new section explains that for short prefill sequences (default threshold: 2048 tokens), the NSA backend automatically uses standard MHA. On H200 (SM90), this path utilizes FlashAttention varlen, while on B200 (SM100), it uses TRT-LLM ragged MHA, with MHA operating in MHA_ONE_SHOT mode for optimal performance.
  • Updated Default Configurations: The documentation for default configurations on H200 and B200 has been updated to explicitly mention how short-sequence prefill is handled within their respective prefill attention kernels (flashmla_sparse for H200 and flashmla_auto for B200); a hedged sketch of this mapping follows.
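
As a companion to the sketch above, the documented per-architecture defaults can be expressed as a simple lookup. The kernel identifiers mirror the docs; the table, function name, and fallback are assumptions for illustration, not sglang API.

```python
import torch

# Kernel names mirror the docs ("flashmla_sparse" on H200/SM90,
# "flashmla_auto" on B200/SM100); this table and its fallback are
# illustrative assumptions, not sglang's actual configuration code.
DEFAULT_NSA_PREFILL_KERNEL = {
    9: "flashmla_sparse",   # SM90: H200
    10: "flashmla_auto",    # SM100: B200
}


def default_nsa_prefill_kernel() -> str:
    major, _minor = torch.cuda.get_device_capability()
    return DEFAULT_NSA_PREFILL_KERNEL.get(major, "flashmla_sparse")
```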

@gemini-code-assist bot left a comment

Code Review

This pull request updates the documentation for DeepSeek V3.2 to include details about the adaptive MHA short-sequence prefill mechanism. The changes are clear and accurately reflect the new functionality. I've provided a couple of minor suggestions to improve clarity and consistency in the documentation.

YAMY1234 and others added 2 commits November 7, 2025 20:03
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@Fridge003 merged commit 190002c into sgl-project:main Nov 8, 2025
52 of 54 checks passed

Labels

deepseek, documentation (Improvements or additions to documentation), run-ci
