timethink pushed a commit to timethink/sglang that referenced this pull request on Mar 9, 2025.
chunyuan-w added a commit to chunyuan-w/sglang that referenced this pull request on Mar 11, 2025:

…TP size (sgl-project#7)
* support the case where num_attention_heads can't be divided evenly by tp_size
* refactor
* move cpu specific logic to cpu_utils.py
* only set padded weights to zero
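This recurring "…TP size" commit pads the head count up to the next multiple of the tensor-parallel world size and zeroes the weights of the padded heads. A minimal sketch of that idea in plain Python; the function name `pad_attention_heads` and the standalone form are illustrative assumptions, not sglang's actual API:

```python
# Hypothetical sketch of head padding for tensor parallelism; names are
# illustrative, not the actual sglang implementation.
def pad_attention_heads(num_attention_heads: int, tp_size: int) -> int:
    """Round the head count up to the next multiple of tp_size."""
    return -(-num_attention_heads // tp_size) * tp_size  # ceil division

num_heads, tp_size = 14, 4
padded = pad_attention_heads(num_heads, tp_size)  # 16
heads_per_rank = padded // tp_size                # 4 heads on each rank
# Per the commit message, weights for the (padded - num_heads) extra heads
# are set to zero so they contribute nothing to the attention output.
print(padded, heads_per_rank)
```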
chunyuan-w added three more commits to chunyuan-w/sglang that referenced this pull request on Mar 14, 2025, each carrying the same "…TP size (sgl-project#7)" message as above.
This was referenced Apr 16, 2025
yanbing-j pushed a commit to yanbing-j/sglang that referenced this pull request on May 12, 2025, with the same "…TP size (sgl-project#7)" message as above.
chunyuan-w added further commits to chunyuan-w/sglang that referenced this pull request on May 28, May 29, and Jun 3, 2025, each with the same "…TP size (sgl-project#7)" message as above.
zhuyijie88 pushed a commit to zhuyijie88/sglang that referenced this pull request on Jul 17, 2025:

npu rotary_embedding replace
pkking pushed a commit to pkking/sglang1 that referenced this pull request on Jul 23, 2025.
Xia-Weiwen pushed a commit to Xia-Weiwen/sglang that referenced this pull request on Sep 5, 2025:

* improve bfloat16 gemm performance for prefilling

  before:
  ```
  gemm_bf16(native): 4.772 ms, gemm_fp8(opt): 0.000 ms, gemm_int8(opt): 0.000 ms, gemm_bf16(opt): 15.328 ms
  ```

  after:
  ```
  gemm_bf16(native): 4.847 ms, gemm_fp8(opt): 0.000 ms, gemm_int8(opt): 0.000 ms, gemm_bf16(opt): 3.927 ms
  ```

* apply brgemm
* improve int8 gemm performance for prefilling
* apply brgemm to moe: part1
* apply brgemm to moe: part2

---------

Co-authored-by: mingfeima <mingfei.ma@intel.com>
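The before/after figures in this message read as per-kernel wall-clock timings. A rough sketch of how such numbers might be collected, assuming `torch.matmul` as a stand-in for the optimized `gemm_bf16` kernel; this harness is illustrative, not the benchmark script actually used:

```python
# Illustrative timing harness; torch.matmul stands in for the optimized
# bfloat16 gemm kernel being measured.
import time
import torch

def bench_ms(fn, warmup=5, iters=50):
    # Warm up, then report average wall-clock milliseconds per call.
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters * 1e3

M, K, N = 4096, 4096, 4096
a = torch.randn(M, K, dtype=torch.bfloat16)
b = torch.randn(K, N, dtype=torch.bfloat16)
print(f"gemm_bf16: {bench_ms(lambda: torch.matmul(a, b)):.3f} ms")
```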
Xia-Weiwen pushed a commit to Xia-Weiwen/sglang that referenced this pull request on Sep 9, 2025:

* Revert "port prefill optimization (sgl-project#7)"

  This reverts commit ea0d028.

* improve bfloat16 gemm performance for prefilling

  before:
  ```
  gemm_bf16(native): 4.772 ms, gemm_fp8(opt): 0.000 ms, gemm_int8(opt): 0.000 ms, gemm_bf16(opt): 15.328 ms
  ```

  after:
  ```
  gemm_bf16(native): 4.847 ms, gemm_fp8(opt): 0.000 ms, gemm_int8(opt): 0.000 ms, gemm_bf16(opt): 3.927 ms
  ```

* improve fp8 gemm performance with large M
* enable amx-int8 for gemm, fused moe, shared moe and qkv_proj kernels on PyTorch 2.7
* improve int8 gemm performance with large M
* improve bf16 and int8 moe performance with large nbatches
* update naming for nb0 and nb1 in fused gemm and silu_mul kernel
* improve fp8 moe performance with large nbatches
* remove hardcode numbers

---------

Co-authored-by: mingfeima <mingfei.ma@intel.com>
someoneexistsontheinternet pushed a commit to someoneexistsontheinternet/sglang that referenced this pull request on Oct 23, 2025:

add install instructions for B200
kalyank007 pushed a commit to kalyank007/sglang that referenced this pull request on Nov 7, 2025:

… setting profiler env. (sgl-project#7)

Co-authored-by: svc_repro_tool <svc_repro_tool@habana.ai>
Co-authored-by: Polisetty V R K Jyothendra Varma <polisetty.v.r.k.jyothendra.varma@intel.com>
nithinsubbiah pushed a commit to nithinsubbiah/sglang that referenced this pull request on Nov 21, 2025:

Signed-off-by: Stanley Winata <stanley.winata@amd.com>

[Wave] Add wave extend attention kernel
Signed-off-by: Harsh Menon <harsh@nod-labs.com>

[Wave] Adding logit_cap and layer scaling to API
Also add support for the wave backend to the model runner. And use Triton decode kernels for now.

[Wave] Run chunked prefill for perf comparison on Wave test
Need to rename the non chunked/regular prefill version because otherwise rpd will treat it as the same kernel
Signed-off-by: Stanley Winata <stanley.winata@amd.com>

[Wave] Cache the function that loads the wave kernel
Also maintain a global kernel hash to avoid recomputing the hash on every call.

[Wave] Don't specify block size and enable buffer ops

[Wave] Enable wave runtime and update scheduling API

[Wave] Update API to use wave_compile & WaveCompileOptions

[Wave] Update wave backend and extend attention to latest

[Wave] Add speculative decode kernel
Signed-off-by: nithinsubbiah <nithinsubbiah@gmail.com>

cache kernels using lru_cache

Update WaveBackend to use Wave Decode (sgl-project#6)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

Revert "Update WaveBackend to use Wave Decode (sgl-project#6)" (sgl-project#7)
This reverts commit eac4599.

Wave Backend decode (sgl-project#8)
* align shapes
  Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>
* fix
  Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>
---------
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

Wave backend fixes (sgl-project#10)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

More fixes to Wave decode (sgl-project#12)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

is_causal
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

Enable the grok in3 model (sgl-project#14)

Set unique cache dir for each worker (sgl-project#16)

update kernel (sgl-project#18)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

updated spec decode test as per wave
Signed-off-by: xintin <gaurav.verma@amd.com>

fix extend (sgl-project#23)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

Refactor paged decode intermediate arrays shapes (sgl-project#24)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

remove dyn symbols (sgl-project#26)
Signed-off-by: Ivan Butygin <ivan.butygin@gmail.com>

cleanup shapes (sgl-project#27)
Some fields were removed from `paged_decode_attention_shape`.

Remove `mha` param from Wave decode attention kernel (sgl-project#28)
Depends on iree-org/iree-turbine#1039
Signed-off-by: Paul Zhang <paul.zhang@amd.com>

nfc: fix problems reported by linting

update references from iree.turbine to wave_lang
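One line in this squashed message, "cache kernels using lru_cache", describes memoizing compiled kernels by their specialization parameters so compilation and hashing happen once per shape. A self-contained sketch of that pattern; the compile step below is a hypothetical stand-in for `wave_compile`, not the Wave backend's actual code:

```python
# Sketch of kernel caching via functools.lru_cache; the body of get_kernel
# is a hypothetical stand-in for the real wave_compile call.
from functools import lru_cache

@lru_cache(maxsize=None)
def get_kernel(num_heads: int, head_dim: int):
    # The expensive compile runs once per unique (num_heads, head_dim);
    # later calls with the same arguments return the cached object.
    print(f"compiling kernel: heads={num_heads}, dim={head_dim}")
    return ("compiled-kernel", num_heads, head_dim)

get_kernel(32, 128)  # compiles
get_kernel(32, 128)  # cache hit: no recompilation
```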
apinge pushed a commit to apinge/sglang that referenced this pull request on Nov 26, 2025:

[FIX] fix fuse share expert in EP
yhyang201 pushed a commit that referenced this pull request on Dec 13, 2025.
triple-mu pushed a commit to triple-mu/sglang that referenced this pull request on Jan 1, 2026:

# This is the 1st commit message:
rebase

# This is the commit message sgl-project#2:
remove duplicated code

# This is the commit message sgl-project#3:
add type hints

# This is the commit message sgl-project#4:
add clear cache for benchmark alignment

# This is the commit message sgl-project#5:
remove unuse arg

# This is the commit message sgl-project#6:
clear cache once

# This is the commit message sgl-project#7:
simplified VAE cache logic for qwenimage and wan

# This is the commit message sgl-project#8:
remove duplicated code
tpoisonooo pushed a commit to tpoisonooo/sglang that referenced this pull request on Feb 12, 2026:

…hunk
Support graph chunk