provider: treat tool_calls finish without calls as stop #5139
base: dev
Conversation
ry2009
commented
Dec 5, 2025
- Treats finish_reason: "tool_calls" as stop when no tool-call deltas were seen (prevents hangs when providers return an empty tool_calls list); a sketch of the mapping follows this list.
- Keeps tool-calls when actual tool-call chunks were emitted.
- Adds a regression test for the mapping.
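A minimal sketch of the intended mapping, with hypothetical identifiers rather than the PR's actual diff:

```ts
// Hypothetical sketch; names are illustrative, not the PR's actual code.
type FinishReason = "stop" | "length" | "tool-calls" | "unknown"

function mapFinishReason(
  raw: string | null | undefined,
  sawToolCallDelta: boolean,
): FinishReason {
  switch (raw) {
    case "stop":
      return "stop"
    case "length":
      return "length"
    case "tool_calls":
      // Some providers report finish_reason "tool_calls" with an empty
      // tool_calls list and never stream a tool-call delta; treat that
      // as a plain stop so the session does not hang waiting for calls
      // that will never arrive.
      return sawToolCallDelta ? "tool-calls" : "stop"
    default:
      return "unknown"
  }
}
```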
|
I don't think this does what you think it does
|
Are you doing this solely for Codex models, specifically through the GitHub Copilot subscription? That seems unlikely to be your intention, given the description doesn't say as much, but this change would ONLY affect those models.
|
Yes, sorry. I was more interested in making sure this fix works at the base level, since this is the bigger abstraction. I was curious how to proceed if we want to add the same "tool_calls with no deltas maps to stop" logic to the upstream @ai-sdk/openai-compatible path we use elsewhere, or wrap the finishReason after any OpenAI-compatible response; a rough sketch of the wrapper option follows. Alternatively, we could PR the change upstream to ai-sdk and remove the divergence in our fork, so every OpenAI-compatible flow gets the fix.
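One way the wrapper option could look, as a hedged sketch with hypothetical stream-part shapes (the real @ai-sdk/openai-compatible types may differ):

```ts
// Hypothetical stream-part shapes for illustration only; the real
// @ai-sdk/openai-compatible types may differ.
type StreamPart =
  | { type: "text-delta"; textDelta: string }
  | { type: "tool-call-delta"; toolCallId: string; argsTextDelta: string }
  | { type: "finish"; finishReason: string }

// Wraps a provider stream and rewrites a "tool-calls" finish into
// "stop" when no tool-call deltas were ever emitted.
async function* normalizeFinish(
  stream: AsyncIterable<StreamPart>,
): AsyncIterable<StreamPart> {
  let sawToolCallDelta = false
  for await (const part of stream) {
    if (part.type === "tool-call-delta") sawToolCallDelta = true
    if (
      part.type === "finish" &&
      part.finishReason === "tool-calls" &&
      !sawToolCallDelta
    ) {
      yield { ...part, finishReason: "stop" }
    } else {
      yield part
    }
  }
}
```

Wrapping the stream this way would keep the normalization provider-agnostic instead of patching a single fork.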
|
What provider were you seeing this issue with?
|
I saw it with LM Studio's OpenAI-compatible local server (model: qwen2.5-coder-7b-instruct). It returned finish_reason:"tool_calls" but never streamed any tool_call deltas (tool_calls: []), so the mapper yielded unknown and the session waited forever. It isn't a Copilot-only problem; that's just where the fork is. The final chunk looked roughly like the sketch below.
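Reconstructed from the description above for illustration (not a captured log), the offending final chunk would look something like this:

```ts
// finish_reason claims "tool_calls", but the list is empty and no
// tool-call deltas were ever streamed; reconstructed for illustration.
const finalChunk = {
  choices: [
    {
      index: 0,
      delta: { tool_calls: [] }, // empty: no deltas ever arrived
      finish_reason: "tool_calls",
    },
  ],
}
```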
|
Yeah, this code wouldn't have done anything in that case then. We should fix this in processor.ts.
|
@ry2009 can you send me the session for this? I think this may be a bug in LM Studio and they should fix it on their end.
|
Let me know if this loads, or use the gist: https://gist.github.com/ry2009/4559c3f9b4350a1bf4853cc20acd1993