### What happened?

**Bug Description**
The `BaseOpenAIChatCompletionClient.create_stream()` method crashes with `AttributeError: 'NoneType' object has no attribute 'model'` when the OpenAI stream yields `None` chunks. This is especially prevalent in compiled binaries and causes premature stream termination.

**Affected File:** `autogen_ext/models/openai/_openai_client.py`
**Affected Lines:** 919, 949
**Version:** autogen-ext 0.7.5
### Root Cause

```python
# Line 908: iterate over stream chunks
async for chunk in chunks:
    if first_chunk:
        ...
    # Line 919: accesses chunk.model without a None check
    maybe_model = chunk.model  # AttributeError if chunk is None!
    # Line 923: empty-chunk check comes AFTER (too late)
    if len(chunk.choices) == 0:
        ...
```

The code accesses `chunk.model` before checking whether `chunk` is `None`, even though the type annotation at line 893 explicitly allows `None`:

```python
chunk: ChatCompletionChunk | None = None
```

### Steps to Reproduce
```python
import asyncio
import os

from autogen_core.models import CreateResult, UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def test():
    client = OpenAIChatCompletionClient(
        model=os.getenv("OPENAI_MODEL"),
        base_url=os.getenv("OPENAI_BASE_URL"),
        api_key=os.getenv("OPENAI_API_KEY"),
    )
    try:
        while True:
            prompt = input("Enter a prompt: ")
            messages = [UserMessage(content=prompt, source="user")]
            stream = client.create_stream(
                messages=messages,
                tools=[],
                extra_create_args={},
            )
            async for chunk in stream:
                if isinstance(chunk, CreateResult):
                    print(chunk.content, end="", flush=True)
                if isinstance(chunk, str):
                    print(chunk, end="", flush=True)
            print("\n\n")
    except KeyboardInterrupt:
        print("\nExiting...")
    except Exception as e:
        print(f"Error: {e}")
    await client.close()


if __name__ == "__main__":
    asyncio.run(test())
```

**Trigger conditions:**
- Run in compiled binary (PyInstaller/Nuitka)
- Use long-running streams
- High latency or heavy API load
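Because the trigger conditions above are timing-dependent, the crash itself is easier to study with a deterministic stand-in. The sketch below mirrors the unguarded access pattern of the loop at lines 908–923 using a stubbed stream; `FakeChunk` and `fake_stream` are illustrative names, not part of autogen-ext:

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class FakeChunk:
    """Illustrative stand-in for openai's ChatCompletionChunk."""
    model: str = "gpt-4o"
    choices: list = field(default_factory=list)


async def fake_stream():
    """Simulates an OpenAI stream that interleaves a None keepalive chunk."""
    yield FakeChunk()
    yield None  # the problematic chunk
    yield FakeChunk()


async def consume():
    # Mirrors the unguarded attribute access in _openai_client.py
    async for chunk in fake_stream():
        maybe_model = chunk.model  # raises AttributeError when chunk is None
    return maybe_model


try:
    asyncio.run(consume())
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'model'
```

The same loop completes normally once the `None` chunk is filtered out, which is exactly the fix proposed below.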
### Expected Behavior

The stream should skip `None` chunks and complete with a final `CreateResult`.

### Actual Behavior

`AttributeError: 'NoneType' object has no attribute 'model'`

The stream terminates prematurely without yielding a final result.
### Proposed Fix

Add a `None` check before accessing any attributes:

```python
async for chunk in chunks:
    # Add this check immediately
    if chunk is None:
        continue
    if first_chunk:
        ...
    # Now safe to access
    maybe_model = chunk.model
```

The same fix is needed at line 949.
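With the guard in place, the same stubbed stream completes and every real chunk is still seen. This is a minimal sketch of the fixed loop shape, again using illustrative stand-ins (`FakeChunk`, `fake_stream`) rather than the actual autogen-ext internals:

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class FakeChunk:
    """Illustrative stand-in for openai's ChatCompletionChunk."""
    model: str = "gpt-4o"
    choices: list = field(default_factory=list)


async def fake_stream():
    yield FakeChunk()
    yield None  # keepalive/heartbeat chunk
    yield FakeChunk(model="gpt-4o-mini")


async def consume():
    seen = []
    async for chunk in fake_stream():
        if chunk is None:  # the proposed guard
            continue
        seen.append(chunk.model)  # now safe to access
    return seen


print(asyncio.run(consume()))  # ['gpt-4o', 'gpt-4o-mini']
```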
### Impact

**Severity: High**

- Breaks streaming in compiled binaries
- Causes unpredictable failures depending on network and API behavior
- Affects both `OpenAIChatCompletionClient` and `AzureOpenAIChatCompletionClient`
### Workaround

Monkeypatch the method to filter out `None` chunks:

```python
from autogen_ext.models.openai import BaseOpenAIChatCompletionClient

original = BaseOpenAIChatCompletionClient._create_stream_chunks


async def patched(self, *args, **kwargs):
    async for chunk in original(self, *args, **kwargs):
        if chunk is not None:
            yield chunk


BaseOpenAIChatCompletionClient._create_stream_chunks = patched
```
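The wrapper pattern used by the monkeypatch can be verified in isolation against any async generator. The sketch below uses a stubbed `raw_stream` in place of the real client method, purely to show that the filtering wrapper preserves ordering while dropping `None` items:

```python
import asyncio
from typing import AsyncIterator, Optional


async def raw_stream() -> AsyncIterator[Optional[str]]:
    """Stub stream interleaving real chunks with None keepalives."""
    for item in ["hel", None, "lo", None]:
        yield item


def filter_none(gen_fn):
    """Wrap an async generator function, dropping None items."""
    async def wrapped(*args, **kwargs):
        async for item in gen_fn(*args, **kwargs):
            if item is not None:
                yield item
    return wrapped


async def main():
    chunks = [c async for c in filter_none(raw_stream)()]
    return "".join(chunks)


print(asyncio.run(main()))  # hello
```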
### Environment

```
autogen-core==0.7.5
autogen-ext==0.7.5
openai==1.65.5
Python 3.12
```

**Deployment:** Compiled binary (Nuitka)
### Additional Notes

- Lines 1082-1108 (`_create_stream_chunks`) and 1110-1140 (`_create_stream_chunks_beta_client`) both need the same fix
- The type annotation at line 893 correctly indicates `chunk` can be `None`
- The OpenAI stream can yield `None` for keepalives, heartbeats, or under heavy load
- The issue is more frequent in compiled binaries due to different async timing

**Fix complexity:** Low. A single-line addition: `if chunk is None: continue`