Feature/sm100 low latency nvfp4 kernels #1214
Conversation
Summary of Changes
Hello @azhurkevich, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly enhances FlashInfer's Mixture-of-Experts (MoE) capabilities by introducing a new, highly optimized backend for Blackwell GPUs based on NVIDIA's TensorRT-LLM Gen framework. It brings support for advanced FP4/FP8 quantization schemes, flexible routing algorithms, and specialized weight transformations, aiming to deliver speed-of-light low-latency kernels for large language models.
Highlights
- New TRT-LLM Gen MoE Backend: Introduces a new fused Mixture-of-Experts (MoE) kernel implementation specifically for Blackwell (SM100) GPUs, leveraging NVIDIA's TensorRT-LLM Gen framework for highly optimized low-latency operations.
- Advanced Quantization Support: Enables FP4 and FP8 block-scaled quantization for MoE layers, including support for different scale factor layouts (swizzled, linear) and quantization types (FP16_TO_FP4, FP8_TO_FP4, FP16_TO_MXFP8).
- Flexible Routing Methods: Integrates multiple routing algorithms, such as DeepSeekV3, Renormalize (TopK -> Softmax), and RenormalizeNaive (Softmax -> TopK -> Renormalize), providing diverse options for MoE expert selection (see the sketch after this list).
- Optimized Weight Transformations: Implements specialized preprocessing steps for weight matrices, including row reordering for gated activations and shuffling for transposed MMA outputs, to maximize performance on Blackwell architecture.
- Code Refactoring: Renames the existing `FusedMoeRunner` class to `FusedMoeCutlassRunner` and updates its references, improving code clarity and distinguishing it from the new TRT-LLM Gen backend.
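To make the difference between the two renormalization routing variants concrete, here is a minimal PyTorch sketch of the token-to-expert selection step (illustrative only, not the kernel code in this PR; the function names and the `routing_logits`/`top_k` arguments are placeholders):

```python
import torch

def renormalize(routing_logits: torch.Tensor, top_k: int):
    # Renormalize: TopK -> Softmax. Pick the top-k logits first,
    # then apply softmax only over the selected experts.
    vals, experts = torch.topk(routing_logits, top_k, dim=-1)
    weights = torch.softmax(vals, dim=-1)
    return weights, experts

def renormalize_naive(routing_logits: torch.Tensor, top_k: int):
    # RenormalizeNaive: Softmax -> TopK -> Renormalize. Softmax over all
    # experts, take the top-k probabilities, then rescale them to sum to 1.
    probs = torch.softmax(routing_logits, dim=-1)
    vals, experts = torch.topk(probs, top_k, dim=-1)
    weights = vals / vals.sum(dim=-1, keepdim=True)
    return weights, experts

logits = torch.randn(4, 8)  # [num_tokens, num_experts]
w1, e1 = renormalize(logits, top_k=2)
w2, e2 = renormalize_naive(logits, top_k=2)
```

This only illustrates the order of operations; in the PR these steps are implemented inside fused device kernels.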
Code Review
This pull request introduces enhancements for FP4 quantization on the SM100 architecture, including a new trtllmgen backend for fused MoE operations. The changes refactor existing quantization kernels for greater generality and add new functionality and APIs. The new tests are comprehensive and cover the new features well. I've provided a few comments to improve code clarity and robustness.
at::Tensor num_non_exiting_ctas
    = at::empty({1}, at::TensorOptions().device(routing_logits.device()).dtype(at::ScalarType::Int));
// FIXME: check shape
CHECK_CONTIGUOUS(self);
CHECK_INPUT_TYPE(globalScale, c10::ScalarType::Float);
TORCH_CHECK(sfVecSize == 16, "sfVecSize can only be 16");
if (sfUseUE8M0) {
  TORCH_CHECK(globalScale.has_value(), "globalScale is required for UE8M0");
The sfVecSize argument is checked to be 16 or 32, but it's not used to select the kernel implementation. The selection is based on sfUseUE8M0, which implicitly determines the vector size (16 for false, 32 for true). This can be confusing and error-prone. To make the code more robust and self-documenting, add a check to enforce this relationship. For example:
TORCH_CHECK((sfUseUE8M0 && sfVecSize == 32) || (!sfUseUE8M0 && sfVecSize == 16),
            "sfVecSize must be 32 for mxfp4 (sfUseUE8M0=true) and 16 for nvfp4 (sfUseUE8M0=false)");

This will ensure that the arguments are consistent with the kernel's expectations.
TORCH_CHECK(false, "Other layout not implemented yet.");
}
This TORCH_CHECK(false, ...) will always fail if an unsupported layout is provided. It's better to provide a more informative error message by checking the layout value directly. This makes debugging easier for users.
TORCH_CHECK(false, "Other layout not implemented yet.");
}
else
{
  TORCH_CHECK(false, "Unsupported QuantizationSFLayout. Only SWIZZLED and LINEAR are supported.");
}
Hi, may I know whether this will be merged today? Thanks!

That's the plan, yes.

Looking forward to it!

It's gonna be a long night :)
@azhurkevich @yzh119 any update?
While we work on merging: @zhyncs @fzyzcjy there are a couple more items left for future PRs. The main ones are SGL integration and an autotuner for these kernels; both apply to fp8 and fp4 (with benchmarking/perf tuning). Given your interest in this PR, do you want to work together on the SGL item? I'll take care of the autotuner.
yzh119 left a comment:
Thanks to all of you (@kaixih @azhurkevich @aleozlx @nekorobov and Dongfeng Yu) for your contributions! Especially @azhurkevich for the last-minute refactor and hotfix.
I am personally interested in nvfp4 MoE + masked layout (not contiguous layout), because that is used with DeepEP in decode and will be a prerequisite for communication-computation overlap. (The contiguous layout may be used for prefill.)
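For readers following along, here is a hypothetical sketch of what a masked-layout grouped GEMM reference could look like (the function, tensor names, and shapes below are assumptions for illustration, not an interface from this PR): each expert owns a fixed-capacity slab of tokens, and a per-expert count masks out the unused rows.

```python
import torch

def masked_grouped_gemm_ref(x, w, masked_m):
    # x:        [num_experts, max_tokens_per_expert, hidden_in]  (masked layout)
    # w:        [num_experts, hidden_in, hidden_out]             (per-expert weights)
    # masked_m: [num_experts] number of valid tokens for each expert
    # Rows beyond masked_m[e] are padding and their outputs stay zero.
    num_experts, max_m, _ = x.shape
    out = torch.zeros(num_experts, max_m, w.shape[-1], dtype=x.dtype, device=x.device)
    for e in range(num_experts):
        m = int(masked_m[e])
        out[e, :m] = x[e, :m] @ w[e]
    return out

x = torch.randn(8, 64, 256)            # 8 experts, up to 64 tokens each
w = torch.randn(8, 256, 512)
masked_m = torch.randint(0, 65, (8,))  # actual token count per expert (e.g. from DeepEP dispatch)
y = masked_grouped_gemm_ref(x, w, masked_m)
```

Since the per-expert token counts are only known after dispatch (e.g. from DeepEP), a kernel consuming this layout can skip the padded rows per expert instead of first compacting tokens into a contiguous buffer.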
@fzyzcjy more specifically, you mean a fused gather + grouped GEMM kernel, is my understanding correct?
@yzh119 (I replied in Slack)
@pavanimajety It would be great to have input/output satisfying sgl-project/sglang#7994, i.e. masked-layout grouped GEMM, and for it to be easy to modify to support computation-communication overlap.
📌 Description

The `fp4_swizzle_blockscale_sm100` function was removed in #1214, breaking the tests/test_groupwise_scaled_gemm_mxfp4.py unittest; this PR fixes the issue.

Reviewer Notes

cc @ttyio @azhurkevich

📌 Description
Enable Blackwell with speed-of-light low-latency kernels. Collaboration with @nekorobov. Supporting: @aleozlx, Kaixi Hou, Dongfeng Yu.
🔍 Related Issues
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- [ ] I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- [ ] I have installed the hooks with `pre-commit install`.
- [ ] I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

🧪 Tests

- [ ] Tests have been added or updated as needed.
- [ ] All tests are passing (`unittest`, etc.).

Reviewer Notes