Description
Checklist
- 1. I have searched related issues but cannot get the expected help.
- 2. The bug has not been fixed in the latest version.
- 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
- 5. Please use English, otherwise it will be closed.
Describe the bug
Thanks for the great work!
While using the /v1/embeddings endpoint with the gme-qwen2-vl model, I encountered two issues:
1. Incorrect handling of image input
According to the docs, the image input is passed like this:
```python
payload = {
    "model": "gme-qwen2-vl",
    "input": [
        {"type": "text", "text": text_input},
        {"type": "image", "url": "image_path"},
    ],
}
```
However, this does not seem to be properly recognized: the server returns the same result for different values of image_path unless "url": "image_path" is replaced with "image": "image_path".
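For reference, a minimal client sketch contrasting the documented payload with the workaround. The endpoint URL, text, and image path here are placeholders from my setup, not values from the sglang docs:

```python
text_input = "a photo of a cat"          # placeholder text
image_path = "/path/to/example.jpg"      # placeholder image path

# Payload as documented -- the image appears to be ignored, so the
# returned embedding is identical for any image_path:
documented_payload = {
    "model": "gme-qwen2-vl",
    "input": [
        {"type": "text", "text": text_input},
        {"type": "image", "url": image_path},
    ],
}

# Workaround: use the "image" key instead of "url":
workaround_payload = {
    "model": "gme-qwen2-vl",
    "input": [
        {"type": "text", "text": text_input},
        {"type": "image", "image": image_path},
    ],
}

# To actually send the request (requires a running sglang server;
# the URL below is an assumption about a local deployment):
# import requests
# resp = requests.post("http://127.0.0.1:30000/v1/embeddings",
#                      json=workaround_payload, timeout=60)
# print(resp.json())
```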
2. No fused embedding returned for multimodal input
When sending both text and image in the input, the server currently returns separate embeddings for each (i.e., a text embedding and an image embedding), but not the single fused multimodal embedding described in the official Qwen GME documentation.
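A quick way to see the mismatch, assuming the response follows the usual OpenAI-style embeddings format (a "data" list with one entry per embedding); the response skeleton below is illustrative, not actual server output:

```python
# Count how many embedding vectors an OpenAI-style response contains.
def count_embeddings(resp: dict) -> int:
    return len(resp.get("data", []))

# Illustrative response for a mixed text+image input: two separate
# vectors come back where one fused multimodal embedding is expected.
resp = {
    "data": [
        {"object": "embedding", "index": 0, "embedding": [0.1, 0.2]},  # text
        {"object": "embedding", "index": 1, "embedding": [0.3, 0.4]},  # image
    ]
}
print(count_embeddings(resp))  # 2, whereas a fused embedding would give 1
```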
Reproduction
https://github.com/sgl-project/sglang/blob/main/examples/runtime/multimodal_embedding.py
qwen-gme embedding model
Environment
Python: 3.11.2 (main, May 2 2024, 11:59:08) [GCC 12.2.0]
CUDA available: True
GPU 0: NVIDIA A800-SXM4-40GB
GPU 0 Compute Capability: 8.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 535.161.08
PyTorch: 2.5.1
sglang: 0.4.5
sgl_kernel: 0.0.8
flashinfer: Module Not Found
triton: 3.1.0
transformers: 4.51.0
torchao: 0.10.0
numpy: 1.26.4
aiohttp: 3.11.11
fastapi: 0.115.5
hf_transfer: 0.1.9
huggingface_hub: 0.30.2
interegular: 0.3.3
modelscope: 1.25.0
orjson: 3.10.16
outlines: 0.1.11
packaging: 24.1
psutil: 6.1.1
pydantic: 2.10.2
multipart: Module Not Found
zmq: Module Not Found
uvicorn: 0.29.0
uvloop: 0.21.0
vllm: 0.7.0
xgrammar: 0.1.17
openai: 1.75.0
tiktoken: 0.7.0
anthropic: 0.49.0
litellm: 1.66.2
decord: 0.6.0
NVIDIA Topology:
GPU0 NIC0 NIC1 NIC2 NIC3 NIC4 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X SYS SYS SYS NODE PIX 60-119 1 N/A
NIC0 SYS X SYS SYS SYS SYS
NIC1 SYS SYS X NODE SYS SYS
NIC2 SYS SYS NODE X SYS SYS
NIC3 NODE SYS SYS SYS X NODE
NIC4 PIX SYS SYS SYS NODE X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
Hypervisor vendor: KVM
ulimit soft: 1024768