
GLM5 optimize#382

Merged
iforgetmyname merged 1 commit into sgl-project:main from cen121212:2-24-main
Feb 26, 2026
Conversation

@cen121212
Contributor

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @cen121212, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces two new Triton kernels aimed at optimizing key operations within the GLM5 model architecture. The first kernel fuses the application of Rotary Positional Embeddings to query and key tensors, enhancing efficiency for attention mechanisms. The second kernel streamlines the process of splitting and normalizing fused QKV projections, which is crucial for handling different components of attention outputs. These additions are designed to improve the performance of GLM5 by leveraging GPU acceleration for these computationally intensive steps.

Highlights

  • Fused RoPE QK MQA Kernel: Introduced a new Triton kernel, fused_rope_qk_mqa_kernel, which efficiently applies Rotary Positional Embeddings (RoPE) to both query (Q) and key (K) tensors. This kernel supports both standard and NeoX-style RoPE, handles Multi-Query Attention (MQA) key head broadcasting, and includes a pass-through mechanism for dimensions beyond the RoPE application, optimizing the computation by fusing these operations.
  • Fused Split QK Norm Kernel: Added a new Triton kernel, fused_split_qk_norm_kernel, designed to split a fused QKV projection output into its constituent Q_LoRA, K_Nope, and K_PE components. It applies RMS normalization to the Q_LoRA and K_Nope parts, with optional bias support, and directly copies the K_PE component, streamlining the post-projection processing.
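The kernel source itself is not shown in this conversation, so as a reading aid, here is a minimal NumPy reference of the computation the first highlight describes: NeoX-style RoPE applied to Q and K, with a single MQA key head broadcast against the query heads and dimensions beyond `rot_dim` passed through untouched. The exact tensor layouts and argument names are assumptions, not the PR's actual signature.

```python
import numpy as np

def fused_rope_qk_mqa_ref(q, k, cos, sin, rot_dim):
    """NumPy reference (not the Triton kernel) for fused RoPE on Q and K.

    Assumed layouts:
      q:   (tokens, q_heads, head_dim)
      k:   (tokens, 1, head_dim)        # single MQA key head
      cos: (tokens, rot_dim // 2)
      sin: (tokens, rot_dim // 2)
    NeoX-style pairing: element i rotates with element i + rot_dim // 2.
    Dimensions past rot_dim are copied through unchanged.
    """
    half = rot_dim // 2

    def apply(x):
        x1 = x[..., :half]            # first half of the rotary slice
        x2 = x[..., half:rot_dim]     # second half (its rotation partner)
        c = cos[:, None, :]           # broadcast over the head axis
        s = sin[:, None, :]
        r1 = x1 * c - x2 * s          # standard 2D rotation per pair
        r2 = x2 * c + x1 * s
        return np.concatenate([r1, r2, x[..., rot_dim:]], axis=-1)

    return apply(q), apply(k)
```

In the fused Triton version, both the Q and the broadcast K rotation would be computed inside one kernel launch to avoid a second pass over global memory; the reference above only pins down the math.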


Changelog
  • python/sgl_kernel_npu/sgl_kernel_npu/norm/fused_rope_qk_mqa.py
    • Added a new Triton kernel for fused Rotary Positional Embedding application on query and key tensors, supporting MQA and dimension pass-through.
  • python/sgl_kernel_npu/sgl_kernel_npu/norm/fused_split_qk_norm.py
    • Added a new Triton kernel to split fused QKV projections, apply RMS normalization to Q_LoRA and K_Nope, and copy K_PE.
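As with the RoPE kernel, the split-and-norm kernel's code is not in the conversation. A NumPy sketch of the operation the changelog describes (split a fused projection into Q_LoRA, K_Nope, and K_PE, RMS-normalize the first two, copy the third) is below; the split order, dimension names, and weight handling are assumptions for illustration.

```python
import numpy as np

def rmsnorm(x, weight, eps=1e-6):
    """RMS normalization along the last axis, scaled by a learned weight."""
    var = np.mean(x * x, axis=-1, keepdims=True)
    return x / np.sqrt(var + eps) * weight

def fused_split_qk_norm_ref(qkv, q_lora_dim, k_nope_dim, q_weight, k_weight):
    """NumPy reference (not the Triton kernel) for split + RMSNorm.

    Assumed layout: qkv is (tokens, q_lora_dim + k_nope_dim + k_pe_dim),
    concatenated in the order [Q_LoRA | K_Nope | K_PE].
    Q_LoRA and K_Nope are RMS-normalized; K_PE is copied through.
    """
    q_lora = qkv[:, :q_lora_dim]
    k_nope = qkv[:, q_lora_dim:q_lora_dim + k_nope_dim]
    k_pe = qkv[:, q_lora_dim + k_nope_dim:]
    return rmsnorm(q_lora, q_weight), rmsnorm(k_nope, k_weight), k_pe
```

Fusing this into one kernel saves the intermediate slicing and the two separate normalization launches that a naive implementation would need after the QKV projection.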

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces two new Triton kernels that optimize GLM5 operations: fused RoPE application on QK tensors, and fused splitting plus normalization of QK projections. The implementations look solid and should provide performance benefits. My review focuses on improving the clarity and maintainability of the code by suggesting corrections and translations for several comments.

iforgetmyname
iforgetmyname previously approved these changes Feb 25, 2026
@iforgetmyname iforgetmyname merged commit 4dce839 into sgl-project:main Feb 26, 2026
6 checks passed
1329009851 added a commit to 1329009851/sgl-kernel-npu that referenced this pull request Feb 27, 2026
…-npu into sgl-cmake2

* 'sgl-cmake2' of https://github.com/1329009851/sgl-kernel-npu:
  Fix the bug that the layout kernel crashed when the num of experts is no less than 384 (sgl-project#383)
  adapt sglang (sgl-project#357)
  GLM5 optimize (sgl-project#382)
  Update layernorm_gated.py (sgl-project#378)
  support qwen3.5 (sgl-project#377)
zzx-study pushed a commit to zzx-study/sgl-kernel-npu that referenced this pull request Feb 28, 2026
