
[Bug] Incorrect hidden_states in pd disaggregation + eagle #10059

@ZeldaHuang

Description


Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

When using PD disaggregation with EAGLE, we should transfer the hidden states stored in spec_info, but we currently transfer the hidden states from the target model's logits_output instead.

This causes a shape error when using EAGLE3 with PD disaggregation; #9976 fixed it.

File "/mnt/data/huangziming/sgl_dev/sglang/python/sglang/srt/managers/scheduler.py", line 2681, in run_scheduler_process
    scheduler.event_loop_normal_disagg_prefill()
  File "/mnt/data/huangziming/sgl_new/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/data/huangziming/sgl_dev/sglang/python/sglang/srt/disaggregation/prefill.py", line 294, in event_loop_normal_disagg_prefill
    self.process_batch_result_disagg_prefill(batch, result)
  File "/mnt/data/huangziming/sgl_dev/sglang/python/sglang/srt/disaggregation/prefill.py", line 442, in process_batch_result_disagg_prefill
    self.send_kv_chunk(req, last_chunk=True)
  File "/mnt/data/huangziming/sgl_dev/sglang/python/sglang/srt/disaggregation/prefill.py", line 619, in send_kv_chunk
    self.disagg_metadata_buffers.set_buf(req)
  File "/mnt/data/huangziming/sgl_dev/sglang/python/sglang/srt/disaggregation/utils.py", line 195, in set_buf
    self.output_hidden_states[req.metadata_buffer_index].copy_(
RuntimeError: The size of tensor a (4096) must match the size of tensor b (12288) at non-singleton dimension 0
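The mismatch can be reproduced in isolation. The sketch below uses illustrative sizes (4096 is Llama-3.1-8B's hidden size; 12288 matches the hidden states of three target layers concatenated, as EAGLE3's draft input uses) and a stand-in for the `set_buf` copy — it is not the actual sglang code path:

```python
import torch

# Illustrative sizes, assuming Llama-3.1-8B (hidden_size = 4096) and an
# EAGLE3 draft input built from 3 target layers' hidden states concatenated
# (3 * 4096 = 12288). The disaggregation metadata buffer is sized for a
# single hidden state, so the copy fails.
hidden_size = 4096
metadata_buffer = torch.zeros(hidden_size)            # per-request buffer (4096)
eagle3_hidden_states = torch.zeros(3 * hidden_size)   # concatenated states (12288)

try:
    # Mirrors the set_buf() copy: copy_ requires src to be broadcastable to dst
    metadata_buffer.copy_(eagle3_hidden_states)
except RuntimeError as e:
    print(e)  # "The size of tensor a (4096) must match the size of tensor b (12288) ..."
```

Transferring the hidden states from spec_info (which already has the shape the draft model expects) instead of the target model's logits_output avoids sizing the buffer for the wrong tensor.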

Reproduction

Prefill

python -m sglang.launch_server \
        --model-path /models/Llama-3.1-8B-Instruct \
        --disaggregation-mode prefill \
        --host $LOCAL_IP \
        --port $PORT \
        --trust-remote-code \
        --disaggregation-bootstrap-port 8998 \
        --mem-fraction-static 0.6 \
        --dtype float16 \
        --speculative-algorithm EAGLE3 \
        --speculative-draft-model-path /models/sglang-EAGLE3-Llama-3.1-Instruct-8B \
        --speculative-num-steps 1 \
        --decode-log-interval 1 \
        --speculative-eagle-topk 1 \
        --speculative-num-draft-tokens 2

Decode

python -m sglang.launch_server \
        --model-path /models/Llama-3.1-8B-Instruct \
        --disaggregation-mode decode \
        --host $LOCAL_IP \
        --port $PORT \
        --trust-remote-code \
        --mem-fraction-static 0.6 \
        --speculative-draft-model-path /models/sglang-EAGLE3-Llama-3.1-Instruct-8B \
        --dtype float16 \
        --speculative-algorithm EAGLE3 \
        --decode-log-interval 1 \
        --speculative-num-steps 1 \
        --speculative-eagle-topk 1 \
        --speculative-num-draft-tokens 2

Environment

Python: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H20-3e
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.6, V12.6.85
CUDA Driver Version: 570.133.20
PyTorch: 2.8.0+cu128
sglang: 0.5.2rc1
sgl_kernel: 0.3.8
flashinfer_python: 0.3.0
triton: 3.4.0
transformers: 4.56.0
torchao: 0.9.0
numpy: 2.3.2
aiohttp: 3.12.15
fastapi: 0.116.1
hf_transfer: 0.1.9
huggingface_hub: 0.34.4
interegular: 0.3.3
modelscope: 1.29.2
orjson: 3.11.3
outlines: 0.1.11
packaging: 25.0
psutil: 7.0.0
pydantic: 2.11.7
python-multipart: 0.0.20
pyzmq: 27.0.2
uvicorn: 0.35.0
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.23
openai: 1.99.1
tiktoken: 0.11.0
anthropic: 0.65.0
litellm: Module Not Found
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 PIX PHB SYS SYS 0-47 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 PXB PHB SYS SYS 0-47 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 PHB PIX SYS SYS 0-47 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 PHB PXB SYS SYS 0-47 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 SYS SYS PIX PHB 48-95 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 SYS SYS PXB PHB 48-95 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 SYS SYS PHB PIX 48-95 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X SYS SYS PHB PXB 48-95 1 N/A
NIC0 PIX PXB PHB PHB SYS SYS SYS SYS X PHB SYS SYS
NIC1 PHB PHB PIX PXB SYS SYS SYS SYS PHB X SYS SYS
NIC2 SYS SYS SYS SYS PIX PXB PHB PHB SYS SYS X PHB
NIC3 SYS SYS SYS SYS PHB PHB PIX PXB SYS SYS PHB X

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3

Hypervisor vendor: KVM
ulimit soft: 102400
