[Bug] Calling update_weights API with flush_cache=True occasionally results in flush_cache failure #14062

@ShawnY112358

Description

Checklist

  • I searched related issues but found no solution.
  • The bug persists in the latest version.
  • Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
  • If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
  • Please use English. Otherwise, it will be closed.

Describe the bug

In the unit tests, the update_weights test occasionally fails with the following error:

[2025-11-27 21:35:14] Scheduler hit an exception: Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/sglang/srt/managers/scheduler.py", line 2658, in run_scheduler_process
    scheduler.event_loop_overlap()
  File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 120, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sglang/srt/managers/scheduler.py", line 1001, in event_loop_overlap
    self.process_input_requests(recv_reqs)
  File "/usr/local/lib/python3.12/dist-packages/sglang/srt/managers/scheduler.py", line 1162, in process_input_requests
    output = self._request_dispatcher(recv_req)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sglang/utils.py", line 507, in __call__
    return fn(obj)
           ^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sglang/srt/managers/scheduler_update_weights_mixin.py", line 49, in update_weights_from_disk
    assert flush_cache_success, "Cache flush failed after updating weights"
           ^^^^^^^^^^^^^^^^^^^
AssertionError: Cache flush failed after updating weights

[2025-11-27 21:35:14] SIGQUIT received. signum=None, frame=None. It usually means one child process failed.
Killed

After reviewing the code logic in tokenizer_manager and the scheduler, I suspect this is a design-level bug rather than a transient failure.

The generate_request function in tokenizer_manager sends inference requests to the scheduler while holding model_update_lock.reader_lock, and calls wait_one_response to wait for the scheduler to return the generation results.

In the scheduler's event loop, process_batch_result_decode handles the decoding results. When a request finishes, its result is sent back to the tokenizer_manager through a series of nested function calls. Upon receiving the result, tokenizer_manager exits the reader_lock context.
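The locking pattern just described can be sketched with a toy reader-writer lock (hypothetical names, not the real sglang classes): the key property is that the writer is admitted the moment the last reader releases, i.e. the moment the last response is delivered, regardless of what the scheduler's internal state looks like at that instant.

```python
import asyncio

class RWLock:
    """Toy reader-writer lock for illustration only; not the real sglang lock."""
    def __init__(self):
        self.readers = 0
        self.cond = asyncio.Condition()

    async def read(self, coro):
        # generate_request pattern: hold the reader lock until the
        # scheduler's response arrives, then release it.
        async with self.cond:
            self.readers += 1
        try:
            return await coro
        finally:
            async with self.cond:
                self.readers -= 1
                self.cond.notify_all()

    async def write(self, coro):
        # update_weights pattern: wait until no reader is active. The writer
        # proceeds as soon as the last response is delivered, NOT when the
        # scheduler has pruned its internal state.
        async with self.cond:
            await self.cond.wait_for(lambda: self.readers == 0)
        return await coro

async def main():
    lock = RWLock()
    order = []

    async def generate():
        async def respond():
            await asyncio.sleep(0.01)   # scheduler produces the result
            order.append("reader released")
        await lock.read(respond())

    async def update_weights():
        await asyncio.sleep(0)          # let generate() grab the reader first
        async def send():
            order.append("writer acquired")
        await lock.write(send())

    await asyncio.gather(generate(), update_weights())
    return order

order = asyncio.run(main())
```

The writer is unblocked purely by the reader count reaching zero; nothing in this handoff waits for the scheduler's next loop iteration.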

Notably, although process_batch_result_decode sends the completed request’s result back to tokenizer_manager during the scheduler's event loop, it does not immediately remove the finished request from self.running_batch. Instead, self.running_batch is only updated in the next iteration via self.get_next_batch_to_run, which internally calls self.update_running_batch.
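The deferred cleanup described above can be illustrated with a self-contained sketch (hypothetical names, not the real sglang data structures): results are streamed back the moment a request finishes, but running_batch only drops finished requests on the next loop iteration.

```python
from dataclasses import dataclass, field

@dataclass
class Req:
    rid: int
    finished: bool = False

@dataclass
class Batch:
    reqs: list = field(default_factory=list)
    def is_empty(self):
        return not self.reqs

def process_batch_result(batch, sent):
    # Send back every finished result immediately -- but do NOT prune.
    for r in batch.reqs:
        if r.finished:
            sent.append(r.rid)

def update_running_batch(batch):
    # Pruning happens one iteration later, inside get_next_batch_to_run.
    batch.reqs = [r for r in batch.reqs if not r.finished]

sent = []
running = Batch([Req(1, finished=True)])

process_batch_result(running, sent)
assert sent == [1]                # tokenizer_manager already has the result
assert not running.is_empty()     # ...yet running_batch still looks busy

update_running_batch(running)
assert running.is_empty()         # consistent only after the next iteration
```

Between the two calls there is a window in which the tokenizer_manager side considers the request finished while the scheduler side does not.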

Here is the relevant part of the event loop:

@DynamicGradMode()
def event_loop_overlap(self):
    """A scheduler loop that overlaps the CPU processing and GPU computation."""
    self.result_queue: Deque[Tuple[ScheduleBatch, GenerationBatchResult]] = deque()
    disable_consecutive_prefill_overlap = (
        envs.SGLANG_DISABLE_CONSECUTIVE_PREFILL_OVERLAP.get()
    )

    def pop_and_process():
        # Process the results of the last batch
        tmp_batch, tmp_result = self.result_queue.popleft()
        self.process_batch_result(tmp_batch, tmp_result)

    while True:
        recv_reqs = self.recv_requests()
        self.process_input_requests(recv_reqs)

        if self._engine_paused:
            continue

        batch = self.get_next_batch_to_run()
        self.cur_batch = batch

        disable_overlap_for_batch = (
            disable_consecutive_prefill_overlap
            and batch
            and batch.forward_mode.is_extend()
            and self.last_batch
            and self.last_batch.forward_mode.is_extend()
        )

        if disable_overlap_for_batch:
            pop_and_process()

        batch_result = None
        if batch:
            batch_result = self.run_batch(batch)
            self.result_queue.append((batch.copy(), batch_result))

        if self.last_batch:
            if not disable_overlap_for_batch:
                pop_and_process()
        elif batch is None:
            # When the server is idle, do self-check and re-init some states
            self.self_check_during_idle()

        self.launch_batch_sample_if_needed(batch_result)
        self.last_batch = batch

        if envs.SGLANG_ENABLE_STRICT_MEM_CHECK_DURING_BUSY.get():
            self.self_check_during_busy()

Moreover, at the beginning of each loop iteration, new incoming requests are processed:

recv_reqs = self.recv_requests()
self.process_input_requests(recv_reqs)

This design can lead to a race condition:

By the time self.process_batch_result runs, all in-flight requests may have completed and had their results sent back to tokenizer_manager. All generate_request coroutines then release the reader_lock, at which point an update_weights request can acquire the writer_lock and be sent to the scheduler.

However, the scheduler might just have finished the previous loop iteration. In the new iteration, when executing:

recv_reqs = self.recv_requests()
self.process_input_requests(recv_reqs)

self.running_batch has not yet been updated, because the cleanup only happens later in get_next_batch_to_run.

If this update_weights request was sent with flush_cache=True, the check self.is_no_request() will return False due to the stale state in self.running_batch:

no_request = (
    self.running_batch.is_empty()
    and (self.last_batch is None or self.last_batch.is_empty())
    and (self.cur_batch is None or self.cur_batch.is_empty())
    and (not self.enable_overlap or len(self.result_queue) == 0)
    and (self.pp_size == 1 or all(x.is_empty() for x in self.running_mbs))
)

Thus, even though all requests have logically finished, the scheduler still believes there are active requests, which triggers the assertion failure shown above.
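A hypothetical reduction of the is_no_request check above (with the overlap and pipeline-parallel terms dropped) shows how a stale running_batch flips the answer even though every request has logically finished:

```python
class Batch:
    def __init__(self, reqs):
        self.reqs = reqs
    def is_empty(self):
        return not self.reqs

def is_no_request(running_batch, last_batch, cur_batch, result_queue):
    # Simplified version of the conjunction quoted above.
    return (
        running_batch.is_empty()
        and (last_batch is None or last_batch.is_empty())
        and (cur_batch is None or cur_batch.is_empty())
        and len(result_queue) == 0
    )

# The result was already sent back, but the finished request has not yet
# been pruned from running_batch, so the flush is (wrongly) refused.
stale = Batch(["finished-but-not-pruned"])
assert is_no_request(stale, None, None, []) is False

# One iteration later, after update_running_batch has pruned it, the same
# logical state passes the check and flush_cache would succeed.
assert is_no_request(Batch([]), None, None, []) is True
```

The same logical system state yields two different answers depending only on whether get_next_batch_to_run has run since the last request finished.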

Reproduction

python -m unittest test_update_weights_from_disk.TestUpdateWeightsFromDiskParameterized

A simple way to reliably reproduce this error is to add time.sleep(1) in the scheduler right after process_input_requests is executed. This gives the update_weights API more time to acquire the model_update_lock and send its request to the scheduler, ensuring that the next iteration of the event loop receives the update_weights request while self.running_batch is still stale.

To achieve this, modify the event_loop_overlap method in scheduler.py as follows:

if self.last_batch:
    if not disable_overlap_for_batch:
        pop_and_process()
elif batch is None:
    # When the server is idle, do self-check and re-init some states
    self.self_check_during_idle()

time.sleep(1)  # Add this line to delay state update and trigger the race condition

Environment

Python: 3.12.3 (main, Jan 17 2025, 18:03:48) [GCC 13.3.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA L20X
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.8, V12.8.61
CUDA Driver Version: 570.133.20
PyTorch: 2.9.1+cu128
sglang: 0.5.5.post3
sgl_kernel: 0.3.18.post1
flashinfer_python: 0.5.3
flashinfer_cubin: 0.5.3
flashinfer_jit_cache: Module Not Found
triton: 3.5.1
transformers: 4.57.1
torchao: 0.9.0
numpy: 1.26.4
aiohttp: 3.11.11
fastapi: 0.116.1
hf_transfer: 0.1.9
huggingface_hub: 0.34.4
interegular: 0.3.3
modelscope: 1.32.0
orjson: 3.11.2
outlines: 0.1.11
packaging: 25.0
psutil: 6.1.1
pydantic: 2.12.0
python-multipart: 0.0.20
pyzmq: 26.2.1
uvicorn: 0.35.0
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.27
openai: 2.6.1
tiktoken: 0.11.0
anthropic: 0.64.0
litellm: Module Not Found
decord2: 2.0.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 PIX PHB PHB PHB SYS SYS SYS SYS 0-31 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 PHB PIX PHB PHB SYS SYS SYS SYS 0-31 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 PHB PHB PIX PHB SYS SYS SYS SYS 0-31 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 PHB PHB PHB PIX SYS SYS SYS SYS 0-31 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 SYS SYS SYS SYS PIX PHB PHB PHB 32-63 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 SYS SYS SYS SYS PHB PIX PHB PHB 32-63 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 SYS SYS SYS SYS PHB PHB PIX PHB 32-63 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X SYS SYS SYS SYS PHB PHB PHB PIX 32-63 1 N/A
NIC0 PIX PHB PHB PHB SYS SYS SYS SYS X PHB PHB PHB SYS SYS SYS SYS
NIC1 PHB PIX PHB PHB SYS SYS SYS SYS PHB X PHB PHB SYS SYS SYS SYS
NIC2 PHB PHB PIX PHB SYS SYS SYS SYS PHB PHB X PHB SYS SYS SYS SYS
NIC3 PHB PHB PHB PIX SYS SYS SYS SYS PHB PHB PHB X SYS SYS SYS SYS
NIC4 SYS SYS SYS SYS PIX PHB PHB PHB SYS SYS SYS SYS X PHB PHB PHB
NIC5 SYS SYS SYS SYS PHB PIX PHB PHB SYS SYS SYS SYS PHB X PHB PHB
NIC6 SYS SYS SYS SYS PHB PHB PIX PHB SYS SYS SYS SYS PHB PHB X PHB
NIC7 SYS SYS SYS SYS PHB PHB PHB PIX SYS SYS SYS SYS PHB PHB PHB X

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
NIC6: mlx5_6
NIC7: mlx5_7

Hypervisor vendor: KVM
ulimit soft: 1048576
