
[3/N][Sparse With Hicache]: Init sparse coordinator #16086

Merged
xiezhq-hermann merged 6 commits into sgl-project:main from hzh0425:sparse/sparse_coordinator_upstream on Jan 6, 2026

Conversation

Collaborator

hzh0425 commented Dec 29, 2025

Motivation

This PR introduces a flexible configuration framework for hierarchical sparse attention with retrievable KV cache compression algorithms (Quest, PQCache, SnapKV, etc.) in SGLang.
Upstream PR: #14619

Modifications

Coordinator Lifecycle Framework

  1. Designed for decode-phase retrievable algorithms
  2. Request lifecycle hooks: https://github.com/sgl-project/sglang/pull/16086/files#diff-cc98e3ecfeb5c89804f818dd8c4be3b3b15ef803d5555eb628439018d8e092cfR65
    • on_request_begin/end: State management
    • forward_begin/end: KVCache offloading synchronization
    • attention_begin/end: Sparse retrieval and representation updates
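The hook structure above can be sketched as a small Python base class. This is an illustrative reconstruction, not the actual SGLang interface: only the six hook names come from the PR; the class names, signatures, and the toy `TopKCoordinator` algorithm are assumptions.

```python
from abc import ABC, abstractmethod


class SparseCoordinatorBase(ABC):
    """Drives decode-phase retrievable sparse attention per request (sketch)."""

    def __init__(self):
        self._state = {}

    def on_request_begin(self, req_id):
        # Allocate per-request state (e.g. compressed page representations).
        self._state[req_id] = {}

    def on_request_end(self, req_id):
        # Release per-request state.
        self._state.pop(req_id, None)

    def forward_begin(self, req_id):
        # Synchronize pending KV-cache offloads before the forward pass.
        pass

    def forward_end(self, req_id):
        # Kick off offloading of newly written KV pages.
        pass

    @abstractmethod
    def attention_begin(self, req_id, query):
        """Retrieve the sparse set of KV pages relevant to this query."""

    @abstractmethod
    def attention_end(self, req_id, new_kv):
        """Update compressed representations after attention."""


class TopKCoordinator(SparseCoordinatorBase):
    # Toy algorithm standing in for Quest/PQCache/SnapKV: always pick
    # the first two pages, and remember the latest KV written.
    def attention_begin(self, req_id, query):
        return [0, 1]

    def attention_end(self, req_id, new_kv):
        self._state[req_id]["last_kv"] = new_kv
```

The split into begin/end pairs keeps retrieval (attention_begin) separate from representation maintenance (attention_end), so a new compression algorithm only needs to override those two methods.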

Attention BackendAdaptor: https://github.com/sgl-project/sglang/pull/16086/files#diff-e0e2432b5521042dbac9e254d142233b2d8366b964ae83ce96f748730db53276R25

  • A BackendAdaptor transforms logical sparse indices into backend-specific attention metadata
  • Currently supports FlashAttention (via page-table modification)
  • Extensible for future backends
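A minimal sketch of the adaptor idea, assuming a list-based page-table layout. The class names echo the PR, but the signatures and the page-table representation are illustrative, not the real SGLang implementation.

```python
from abc import ABC, abstractmethod
from typing import List


class BackendAdaptor(ABC):
    """Translates logical sparse page indices into backend metadata (sketch)."""

    @abstractmethod
    def build_metadata(self, sparse_page_indices: List[int],
                       page_table: List[int]) -> List[int]:
        """Return whatever the attention kernel consumes for this backend."""


class FlashAttentionAdaptor(BackendAdaptor):
    def build_metadata(self, sparse_page_indices, page_table):
        # FlashAttention reads a page table, so "page table modification"
        # here means keeping only the physical pages selected by the
        # sparse retrieval step, in retrieval order.
        return [page_table[i] for i in sparse_page_indices]
```

A future backend (e.g. one consuming block masks instead of page tables) would subclass BackendAdaptor and emit its own metadata format, leaving the coordinator unchanged.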

Accuracy Tests

#14741 (comment)

Benchmarking and Profiling

Checklist

@gemini-code-assist
Contributor

Summary of Changes


This pull request establishes a foundational framework for hierarchical sparse attention within SGLang, enabling the integration of various retrievable KV cache compression algorithms. The core SparseCoordinator orchestrates the entire sparse attention process, from request initiation to KV cache management and attention metadata adaptation. This design promotes modularity and extensibility, allowing for future support of different sparse algorithms and attention backends, starting with FlashAttention. The changes also introduce new configuration options for users to enable and customize sparse attention behavior.

Highlights

  • Sparse Coordinator Framework: Introduced a new SparseCoordinator to manage the lifecycle of hierarchical sparse attention, particularly for decode-phase retrievable algorithms.
  • Request Lifecycle Hooks: Implemented specific hooks (on_request_begin/end, forward_begin/end, attention_begin/end) within the coordinator for state management, KV cache offloading synchronization, and sparse retrieval/representation updates.
  • Attention Backend Adaptor: Designed an abstract BackendAdaptor to translate logical sparse indices into backend-specific attention metadata, with an initial implementation for FlashAttentionAdaptor.
  • Configurability: Added new command-line arguments (--enable-hierarchical-sparse-attention, --hierarchical-sparse-attention-extra-config) to ServerArgs for flexible configuration of sparse attention algorithms and backends.
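The two flag names above come from the PR; everything else in this sketch is an assumption. Since --hierarchical-sparse-attention-extra-config takes a JSON string with defaults applied for missing keys, the parsing might look like:

```python
import json


def parse_sparse_extra_config(raw):
    """Parse the extra-config JSON string, falling back to defaults.

    The keys "algorithm" and "backend" and their default values are
    hypothetical placeholders, not the actual SGLang schema.
    """
    defaults = {"algorithm": "quest", "backend": "flashattention"}
    if not raw:
        return defaults
    defaults.update(json.loads(raw))
    return defaults
```

Merging user-supplied JSON over a defaults dict keeps the flag optional while letting users override a single key, e.g. '{"algorithm": "snapkv"}'.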



@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a new framework for hierarchical sparse attention, including a coordinator, algorithms, and backend adaptors. The changes are well-structured, separating concerns into distinct modules for better maintainability and extensibility. The configuration parsing is robust, handling JSON input for extra settings and providing sensible defaults. The integration with existing ServerArgs is also clear, adding new command-line arguments for enabling and configuring sparse attention.

@hzh0425
Collaborator Author

hzh0425 commented Dec 29, 2025

@xiezhq-hermann could you take a look?

hzh0425 force-pushed the sparse/sparse_coordinator_upstream branch from 3e149cf to bea7153 on December 29, 2025 15:01
Co-authored-by: 晟海 <huangtingwei.htw@antgroup.com>
@wqlxx

wqlxx commented Dec 30, 2025

@hzh0425 Does this PR support speculative decoding?

@xiezhq-hermann
Collaborator

Is there a dependency between this one and the last one (#15807)?

@hzh0425
Collaborator Author

hzh0425 commented Jan 3, 2026

Is there a dependency between this one and the last one (#15807)?

It is independent and has no dependencies.

@xiezhq-hermann xiezhq-hermann added the ready-to-merge The PR is ready to merge after the CI is green. label Jan 3, 2026
@xiezhq-hermann xiezhq-hermann merged commit 2d02c15 into sgl-project:main Jan 6, 2026
26 of 37 checks passed
jamesjxliu pushed a commit to jamesjxliu/sglang that referenced this pull request Jan 6, 2026
Co-authored-by: 晟海 <huangtingwei.htw@antgroup.com>
Co-authored-by: Zhiqiang Xie <xiezhq@stanford.edu>

Labels

ready-to-merge (The PR is ready to merge after the CI is green.) · run-ci
