
[Fix] Fix trtllm-mla backend when chunked prefix cache is disabled #12361

Merged
Fridge003 merged 10 commits into main from baizhou/fix-trtllm on Nov 8, 2025

Conversation

@Fridge003
Collaborator

Motivation

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @Fridge003, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request fixes a functional issue in the trtllm-mla backend when the chunked prefix cache is disabled. It adds a fallback that switches to the FlashInfer MLA kernel under that condition, improving the stability and compatibility of the attention backend across caching configurations.

Highlights

  • Fallback Mechanism: Introduced a fallback_to_flashinfer_mla flag that conditionally switches to the FlashInfer MLA kernel when the trtllm-mla backend is used with its chunked prefix cache disabled.
  • Conditional Initialization: Modified init_forward_metadata to set the fallback_to_flashinfer_mla flag when the chunked prefix cache is disabled and the batch is an extend operation with a non-zero prefix length.
  • Forward Pass Adaptation: Updated the forward_extend method to check the fallback_to_flashinfer_mla flag and delegate the attention computation to the parent class's FlashInfer implementation via super() when the fallback is active (a sketch follows this list).
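
The highlights above translate into roughly the following shape. This is a minimal sketch based only on the PR summary: fallback_to_flashinfer_mla, init_forward_metadata, and forward_extend are named in this PR, while the class names, the disable_chunked_prefix_cache flag, and the extend_prefix_lens_cpu field are illustrative assumptions rather than the exact sglang code.

```python
# Minimal sketch of the fallback path. FlashInferMLAAttnBackend stands in for
# sglang's FlashInfer MLA backend class; the exact import is omitted. Only
# fallback_to_flashinfer_mla, init_forward_metadata, and forward_extend come
# from this PR; the other names are assumptions.

class TRTLLMMLABackend(FlashInferMLAAttnBackend):  # class names assumed
    def __init__(self, model_runner):
        super().__init__(model_runner)
        # Assumed server flag for "chunked prefix cache is disabled".
        self.disable_chunked_prefix_cache = (
            model_runner.server_args.disable_chunked_prefix_cache
        )
        self.fallback_to_flashinfer_mla = False

    def init_forward_metadata(self, forward_batch):
        # Fall back when the chunked prefix cache is disabled and this is an
        # extend batch that carries a non-zero prefix.
        self.fallback_to_flashinfer_mla = (
            self.disable_chunked_prefix_cache
            and forward_batch.forward_mode.is_extend()  # assumed helper
            and any(l > 0 for l in forward_batch.extend_prefix_lens_cpu)  # assumed field
        )
        super().init_forward_metadata(forward_batch)

    def forward_extend(self, q, k, v, layer, forward_batch, **kwargs):
        if self.fallback_to_flashinfer_mla:
            # Delegate to the parent FlashInfer MLA implementation.
            return super().forward_extend(q, k, v, layer, forward_batch, **kwargs)
        # Otherwise run the trtllm-mla kernels (details omitted here).
        return self._forward_extend_trtllm_mla(q, k, v, layer, forward_batch, **kwargs)
```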

@Fridge003
Collaborator Author

cc @ishandhanani @fzyzcjy

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a fix for the trtllm-mla backend, specifically for cases where chunked prefix caching is disabled. The changes correctly implement a fallback to the standard FlashInfer MLA kernel by introducing a fallback_to_flashinfer_mla flag. The logic appears sound and effectively addresses the issue. I have one suggestion to improve the clarity of a variable name, which would make the code more maintainable.

@wenscarl
Collaborator

Could you link the related issue here?
The failure is likely caused by trtllm_ragged_attention_deepseek being used for prefill with weight absorption; a "decode-like" kernel is needed to handle this case. Wouldn't trtllm_batch_decode_with_kv_cache_mla be more performant?

@Fridge003
Collaborator Author

Regarding trtllm_batch_decode_with_kv_cache_mla: will do it in the next PR.
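
A possible shape for that follow-up, sketched under the same assumptions as the earlier code block: keep the fallback flag, but route the absorbed-weight prefill through a decode-style kernel instead of the FlashInfer path. Only the function name trtllm_batch_decode_with_kv_cache_mla comes from this thread; the wrapper and its arguments are hypothetical.

```python
# Sketch only, reusing the assumed names from the earlier sketch. This method
# would live on the same backend class. run_trtllm_mla_decode_prefill is a
# hypothetical wrapper around flashinfer's trtllm_batch_decode_with_kv_cache_mla;
# its real arguments are not shown here.

def forward_extend(self, q, k, v, layer, forward_batch, **kwargs):
    if self.fallback_to_flashinfer_mla:
        # Possible follow-up: use a decode-like TRT-LLM MLA kernel for the
        # absorbed-weight prefill instead of falling back to FlashInfer.
        return self.run_trtllm_mla_decode_prefill(q, k, v, layer, forward_batch)
    return self._forward_extend_trtllm_mla(q, k, v, layer, forward_batch, **kwargs)
```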

@zhyncs deleted the baizhou/fix-trtllm branch on November 6, 2025 at 06:49
@Fridge003 restored the baizhou/fix-trtllm branch on November 8, 2025 at 03:49
@Fridge003 reopened this on Nov 8, 2025
@Fridge003 merged commit 5f02b91 into main on Nov 8, 2025
139 of 161 checks passed
@Fridge003 deleted the baizhou/fix-trtllm branch on November 8, 2025 at 23:10