[Bugfix] Fix RoBERTa position_ids accumulation on CUDA graph padding #37884
Merged
Isotr0py merged 2 commits into vllm-project:main · Mar 23, 2026
Conversation
`replace_roberta_positions()` did an in-place `+=` on the persistent positions buffer. CUDA graph padding slots aren't refreshed between requests, so the offset kept accumulating until the values overflowed `max_position_embeddings` (~4000 requests for BGE-M3). Move the `padding_idx + 1` offset into `RobertaEmbedding.forward` as a non-in-place add, which avoids mutating the shared buffer entirely. Also fix the same pattern in the transformers legacy mixin.

Fixes vllm-project#37648
Fixes vllm-project#37868
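As a minimal illustration of the accumulation (a sketch with hypothetical simplified names; a plain Python list stands in for the persistent GPU buffer, and the loop bodies stand in for the model runner and the buggy in-place add):

```python
PADDING_IDX = 1
OFFSET = PADDING_IDX + 1  # RoBERTa position offset
BUF_LEN = 8               # CUDA-graph padded length

positions = [0] * BUF_LEN  # persistent "GPU" buffer, allocated once

def run_request_buggy(num_scheduled_tokens: int) -> None:
    # The model runner refreshes only the first num_scheduled_tokens slots...
    for i in range(num_scheduled_tokens):
        positions[i] = i
    # ...but the buggy in-place add touches every slot,
    # including the stale CUDA-graph padding slots.
    for i in range(BUF_LEN):
        positions[i] += OFFSET

for _ in range(3):
    run_request_buggy(4)

# Scheduled slots are correct, but padding slots (index 4..7)
# have accumulated 3 * OFFSET across the three requests:
print(positions)  # [2, 3, 4, 5, 6, 6, 6, 6]
```

The padding slots grow by `OFFSET` on every request, which is exactly the unbounded drift that eventually exceeds `max_position_embeddings`.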
Contributor
Code Review
This pull request addresses a critical bug in RoBERTa-based models related to position_ids accumulation when using CUDA graphs. The root cause was an in-place modification of a persistent GPU buffer. The fix correctly replaces the in-place addition (+=) with a non-in-place operation in both vllm/model_executor/models/roberta.py and vllm/model_executor/models/transformers/legacy.py. Additionally, the changes in roberta.py refactor the code by removing a helper function and moving the position adjustment logic into the RobertaEmbedding.forward method, which is a more appropriate location. The changes are well-reasoned and appear to be a solid fix for the described issue.
Isotr0py approved these changes on Mar 23, 2026
RhizoNymph pushed a commit to RhizoNymph/vllm that referenced this pull request on Mar 26, 2026
HenryTangDev pushed a commit to HenryTangMain/vllm that referenced this pull request on Mar 27, 2026
khairulkabir1661 pushed a commit to khairulkabir1661/vllm that referenced this pull request on Mar 27, 2026
Monishver11 pushed a commit to Monishver11/vllm that referenced this pull request on Mar 27, 2026
…llm-project#37884) Signed-off-by: Monishver Chandrasekaran <monishverchandrasekaran@gmail.com>
nithinvc pushed a commit to nithinvc/vllm that referenced this pull request on Mar 27, 2026
…llm-project#37884) Signed-off-by: Nithin Chalapathi <nithin.ch10@gmail.com>
JiantaoXu pushed a commit to JiantaoXu/vllm that referenced this pull request on Mar 28, 2026
vrdn-23 pushed a commit to vrdn-23/vllm that referenced this pull request on Mar 30, 2026
…llm-project#37884) Signed-off-by: Vinay Damodaran <vrdn@hey.com>
Purpose
Fix a crash in all RoBERTa-based pooling/embedding models (BGE-M3, XLM-RoBERTa, stsb-roberta, bge-reranker-v2-m3) when CUDA graphs are enabled. After ~4000 sequential requests the server dies with an out-of-bounds position embedding index.
Root cause:
`replace_roberta_positions()` did an in-place `position_ids += padding_idx + 1` on the persistent GPU positions buffer. The model runner refreshes only the first `num_scheduled_tokens` entries via `copy_to_gpu` each request; the remaining CUDA-graph padding slots keep their stale values. Every request adds another `+(padding_idx + 1)` to those slots, and eventually the values exceed `max_position_embeddings`. For BAAI/bge-m3 (`padding_idx=1`, offset=2) with short inputs padded to 8 tokens, overflow happens after `(8194 - V_init) / 2 ≈ 4000` requests.

Fix: move the `padding_idx + 1` offset into `RobertaEmbedding.forward` as a non-in-place add (`position_ids + offset` instead of `position_ids += offset`). This computes the correct positions on each call without mutating the persistent buffer. Also fix the same in-place `+=` pattern in the transformers `LegacyMixin`.

Fixes #37648
Fixes #37868
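The shape of the fix can be sketched as follows (a simplified stand-in: NumPy arrays rather than the actual torch tensors, and `forward_positions` is a hypothetical name for the logic that lives in `RobertaEmbedding.forward`):

```python
import numpy as np

PADDING_IDX = 1

def forward_positions(position_ids):
    # Non-in-place add: allocates a fresh array each call, so the
    # persistent buffer (including CUDA-graph padding slots) is
    # never mutated, no matter how many requests are served.
    return position_ids + (PADDING_IDX + 1)

buf = np.zeros(8, dtype=np.int64)  # stands in for the persistent positions buffer
for _ in range(1000):
    out = forward_positions(buf)

print(int(buf.max()), int(out.max()))  # 0 2 -- buffer untouched, output offset once
```

An in-place `buf += PADDING_IDX + 1` inside the loop would instead leave `buf.max()` at 2000 after the same 1000 calls, which is the overflow path this PR removes.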
Related: #37873 (alternative fix that zeroes the padding region in `_preprocess`)

Test Plan
Reproduce the bug (before fix)
Existing tests
```
pytest tests/models/language/pooling/test_bge_m3.py -v -s
pytest tests/models/language/pooling/test_embedding.py -v -s -k "stsb-roberta"
pytest tests/models/language/pooling/test_scoring.py -v -s
```

Essential Elements of an Effective PR Description Checklist