fix(core): prevent agent loop from stopping after tool calls with OpenAI-compatible providers #14973
Conversation
…nAI-compatible providers

Some OpenAI-compatible providers (Gemini, LiteLLM) return finish_reason "stop" instead of "tool_calls" when the response contains tool calls. This caused the agent loop to exit prematurely after executing a tool, instead of continuing to process the tool results.

The fix adds a check for tool parts in the last assistant message. If tool calls were made, the loop continues regardless of the provider's reported finish_reason.

Fixes anomalyco#14972
Related: anomalyco#14063

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Thanks for updating your PR! It now meets our contributing guidelines. 👍

Hey everyone, thanks for all the thumbs up! 👍 I've been keeping this branch updated with the latest. On our end, we need this fix since our team uses a LiteLLM proxy to route requests to Gemini, OpenAI, and other providers. Without it, the agent stops after every tool call, which makes it essentially unusable for agentic workflows. Great to see this is also helping folks running local LLMs — glad it's not just us! Hoping a maintainer gets a chance to review this at some point. The fix is minimal (7 lines) and all CI checks pass.

@Hona Would appreciate a look when you have a chance. This seems to affect a broad set of users of OpenAI-compatible providers, especially the many companies using proxy layers. Thanks!

@rekram1-node The tests failed on your comment-only commit; looks like unrelated flaky tests.

Hi everyone! Just updated the branch with the latest from dev — all CI checks are passing. 🟢
Issue for this PR
Closes #14972
Related: #14063
Type of change
What does this PR do?
Some OpenAI-compatible providers (Gemini, LiteLLM) return `finish_reason: "stop"` instead of `"tool_calls"` when their response contains tool calls. This differs from the OpenAI standard.

The agent loop exit condition in `prompt.ts` (line ~318) only checks the finish reason to decide whether to continue. When it sees `"stop"`, it breaks the loop — even though tools were just executed and need their results processed by the model.

The fix adds one extra check: before exiting the loop, look at the last assistant message parts for any `type === "tool"` entries. If tool parts exist, the model did call tools, so the loop must continue regardless of what `finish_reason` the provider reported.
How did you verify your code works?
Ran `bun run dev` with Gemini 3 Flash configured as an OpenAI-compatible provider. Before the fix, the agent stopped after every tool call. After the fix, it continues processing tool results normally.
Screenshots / recordings
N/A — not a UI change.
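For reference, the continuation check described in this PR can be sketched roughly like this. This is a hedged illustration, not the actual `prompt.ts` code: the `shouldContinue` function, the `Message`/`Part` shapes, and the exact part type name are assumptions.

```typescript
// Illustrative sketch of the fix: keep the agent loop running when the
// last assistant message contains tool parts, even if the provider
// reported finish_reason "stop". Types and names are assumptions.
type Part = { type: string };
type Message = { role: "user" | "assistant"; parts: Part[] };

function shouldContinue(finishReason: string, messages: Message[]): boolean {
  // Standard OpenAI behavior: "tool_calls" always means the loop continues.
  if (finishReason === "tool_calls") return true;
  // Workaround for Gemini/LiteLLM-style providers: inspect the last
  // assistant message for tool parts instead of trusting finishReason.
  const lastAssistant = messages.filter((m) => m.role === "assistant").at(-1);
  const hasToolParts =
    lastAssistant?.parts.some((p) => p.type === "tool") ?? false;
  // If tools were actually called, ignore the reported "stop" and continue.
  return hasToolParts;
}
```

Keying the decision off the message parts the agent actually produced, rather than the provider-reported finish reason alone, makes the loop robust to any provider that mislabels tool-call turns.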
Checklist