Checklist
- I searched related issues but found no solution.
- The bug persists in the latest version.
- Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- Please use English. Otherwise, it will be closed.
Describe the bug
Enabling DeepEP degrades accuracy for GLM-4.6-FP8 (0.951 → 0.901) compared to the non-DeepEP baseline, even though throughput improves.
With DeepEP:
- Accuracy: 0.901
- Invalid: 0.000
- Latency: 52.745 s
- Output throughput: 3133.510 token/s
Without DeepEP:
- Accuracy: 0.951
- Invalid: 0.000
- Latency: 70.253 s
- Output throughput: 1987.966 token/s
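The metrics above are in the format printed by sglang's GSM8K benchmark script; a minimal sketch of how such a run could look against the launched server, assuming benchmark/gsm8k/bench_sglang.py and its usual --num-questions/--parallel/--port flags (the report does not name the exact benchmark invocation):
python3 benchmark/gsm8k/bench_sglang.py \
  --num-questions 1319 \
  --parallel 128 \
  --port 50050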
Reproduction
Launched the servers with the following arguments (a YAML args list from a Kubernetes LeaderWorkerSet manifest; the $(LWS_*) variables are injected by LWS):
python3
- -m
- sglang.launch_server
- --model-path
- /public/models/GLM-4.6-FP8
- --mem-fraction-static
- "0.7"
- --tp
- "8"
# - --dp
# - "2"
# - --enable-dp-attention
- --dist-init-addr
- $(LWS_LEADER_ADDRESS):20000
- --nnodes
- $(LWS_GROUP_SIZE)
- --node-rank
- $(LWS_WORKER_INDEX)
- --trust-remote-code
- --enable-symm-mem
- --model-loader-extra-config
- '{"enable_multithread_load": true, "num_threads": 32}'
- "--moe-runner-backend=deep_gemm"
- "--moe-a2a-backend=deepep"
- "--deepep-mode=low_latency"
- "--chunked-prefill-size=6144"
# - "--moe-dense-tp-size=1"
# - "--enable-dp-lm-head"
# - "--enable-dp-attention"
# - "--dp=8"
- "--ep=8"
- --port
- "50050"
- --host
- "0.0.0.0"
The baseline ("without DeepEP") run used the same command with the DeepEP part commented out, as sketched below.
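"The deepep part" is assumed to mean at minimum the two flags below, in the manifest's own comment style (whether --moe-runner-backend=deep_gemm and --ep=8 were also dropped is not specified):
# - "--moe-a2a-backend=deepep"
# - "--deepep-mode=low_latency"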
Environment
root@glm-4-6-fp8-tp8-deepep-0:~# python3 -m sglang.check_env
Python: 3.12.12 (main, Oct 10 2025, 08:52:57) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3: NVIDIA GB200
GPU 0,1,2,3 Compute Capability: 10.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.9, V12.9.86
CUDA Driver Version: 570.172.08
PyTorch: 2.8.0+cu129
sglang: 0.5.5.post1
sgl_kernel: 0.3.17
flashinfer_python: 0.5.0
flashinfer_cubin: 0.5.0
flashinfer_jit_cache: Module Not Found
triton: 3.4.0
transformers: 4.57.1
torchao: 0.9.0
numpy: 1.26.4
aiohttp: 3.13.2
fastapi: 0.123.5
hf_transfer: 0.1.9
huggingface_hub: 0.36.0
interegular: 0.3.3
modelscope: 1.31.0
orjson: 3.11.4
outlines: 0.1.11
packaging: 25.0
psutil: 7.1.3
pydantic: 2.12.5
python-multipart: 0.0.20
pyzmq: 27.1.0
uvicorn: 0.38.0
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.25
openai: 2.6.1
tiktoken: 0.12.0
anthropic: 0.72.0
litellm: Module Not Found
decord2: 2.0.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 SYS SYS SYS SYS SYS SYS 0-71 0,3-9,11-17 2
GPU1 NV18 X NV18 NV18 SYS SYS SYS SYS SYS SYS 0-71 0,3-9,11-17 10
GPU2 NV18 NV18 X NV18 SYS SYS SYS SYS SYS SYS 72-143 1,19-25,27-33 18
GPU3 NV18 NV18 NV18 X SYS SYS SYS SYS SYS SYS 72-143 1,19-25,27-33 26
NIC0 SYS SYS SYS SYS X SYS SYS SYS SYS SYS
NIC1 SYS SYS SYS SYS SYS X SYS SYS SYS SYS
NIC2 SYS SYS SYS SYS SYS SYS X SYS SYS SYS
NIC3 SYS SYS SYS SYS SYS SYS SYS X SYS SYS
NIC4 SYS SYS SYS SYS SYS SYS SYS SYS X SYS
NIC5 SYS SYS SYS SYS SYS SYS SYS SYS SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
ulimit soft: 1048576