
Commit 4e1fde5

vincentzedb8zhong authored and committed
fix: fix rmsnorm -> layernorm in qwen3 omni (sgl-project#11791)
Co-authored-by: Brayden Zhong <b8zhong@users.noreply.github.com>
1 parent b0c6bae commit 4e1fde5

File tree

1 file changed: +1 −2 lines changed


python/sglang/srt/models/qwen3_omni_moe.py

Lines changed: 1 addition & 2 deletions
@@ -31,7 +31,6 @@
 )
 from sglang.srt.configs.qwen3_vl import Qwen3VLMoeConfig
 from sglang.srt.layers.attention.vision import VisionAttention
-from sglang.srt.layers.layernorm import RMSNorm
 from sglang.srt.layers.linear import ColumnParallelLinear, RowParallelLinear
 from sglang.srt.layers.moe.fused_moe_triton.layer import FusedMoE
 from sglang.srt.layers.quantization.base_config import QuantizationConfig
@@ -318,7 +317,7 @@ def __init__(
         super().__init__()
         self.hidden_size = context_dim * (spatial_merge_size**2)
         self.use_postshuffle_norm = use_postshuffle_norm
-        self.ln_q = RMSNorm(
+        self.ln_q = nn.LayerNorm(
             self.hidden_size if use_postshuffle_norm else context_dim, eps=1e-6
         )
         self.mlp = nn.ModuleList(
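Why the one-line swap matters: nn.LayerNorm centers the input (subtracts the per-feature mean) and by default carries both a learnable scale and a bias, while RMSNorm only rescales by the root-mean-square with no centering and no bias, so the two modules produce different outputs even with identical weights. A minimal sketch of the difference (the `rms_norm` helper here is an illustrative reference implementation, not code from this repository):

```python
import torch
import torch.nn as nn


def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Reference RMSNorm: rescale by 1/sqrt(mean(x^2) + eps); no mean
    # subtraction and no bias term, unlike LayerNorm.
    variance = x.pow(2).mean(dim=-1, keepdim=True)
    return x * torch.rsqrt(variance + eps) * weight


dim = 4
x = torch.tensor([[1.0, 2.0, 3.0, 4.0]])

# At init, nn.LayerNorm has weight=1 and bias=0, so any output
# difference below comes purely from the centering step.
ln = nn.LayerNorm(dim, eps=1e-6)
with torch.no_grad():
    ln_out = ln(x)
    rms_out = rms_norm(x, torch.ones(dim))

# LayerNorm output has (near-)zero mean; RMSNorm output keeps the
# input's offset, since it never subtracts the mean.
print(ln_out.mean().abs().item())   # close to 0
print(rms_out.mean().abs().item())  # clearly nonzero
```

Swapping one for the other silently changes the module's math, which is why loading a checkpoint trained with LayerNorm into an RMSNorm layer (as the pre-fix code did for `ln_q`) degrades outputs rather than raising an error.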

0 commit comments
