Agent stops after tool execution with OpenAI-compatible providers (Gemini, LiteLLM) #14972

@valenvivaldi

Description

When using OpenAI-compatible providers (e.g., Gemini 3 Flash via @ai-sdk/openai-compatible, LiteLLM), the agent loop stops after executing a tool call instead of continuing to process the tool result.

Root cause: these providers return finish_reason: "stop" even when the response contains tool calls. The OpenAI API specifies finish_reason: "tool_calls" in that case, but providers like Gemini and LiteLLM don't follow this convention.

In packages/opencode/src/session/prompt.ts, the loop exit condition only checks finish_reason:

if (
  lastAssistant?.finish &&
  !["tool-calls", "unknown"].includes(lastAssistant.finish) &&
  lastUser.id < lastAssistant.id
) {
  break // Agent stops here even though tools were called
}

Since the normalized finish reason is "stop" (not "tool-calls"), the loop exits prematurely even though a tool was just executed.
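One possible direction for a fix (a sketch only, not the actual patch; the types and the `hasToolCalls` flag below are hypothetical simplifications of opencode's internals) is to treat the presence of tool calls in the last assistant message as authoritative, regardless of what finish reason the provider reported:

```typescript
// Sketch of a provider-agnostic exit check. FinishReason,
// AssistantMessage, and hasToolCalls are assumed/simplified shapes,
// not opencode's real types.
type FinishReason = "stop" | "tool-calls" | "unknown" | "length" | "error"

interface AssistantMessage {
  finish?: FinishReason
  // true when the response body actually contained tool-call parts
  hasToolCalls: boolean
}

function shouldStop(lastAssistant: AssistantMessage | undefined): boolean {
  if (!lastAssistant?.finish) return false
  // Keep looping whenever the message contains tool calls,
  // even if the provider mislabeled the finish reason as "stop".
  if (lastAssistant.hasToolCalls) return false
  return !["tool-calls", "unknown"].includes(lastAssistant.finish)
}

// Gemini/LiteLLM case: finish "stop" but tool calls present -> keep looping
console.log(shouldStop({ finish: "stop", hasToolCalls: true })) // false
// Plain completion with no tools -> stop
console.log(shouldStop({ finish: "stop", hasToolCalls: false })) // true
```

This sidesteps the finish_reason inconsistency entirely by checking what the response actually contained.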

Steps to reproduce

  1. Configure an OpenAI-compatible provider with Gemini (e.g., gemini-3-flash-preview) or LiteLLM
  2. Send a prompt that triggers tool usage (e.g., "read file X")
  3. The agent executes the tool but then stops instead of continuing to process the tool result

Expected behavior

After executing a tool call, the agent should continue processing (make another LLM call with the tool results) regardless of the provider's finish_reason.
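An alternative sketch would normalize the finish reason at the point where the raw provider response is ingested, so the loop condition itself needs no change (the `normalizeFinish` helper and its shapes are hypothetical, not existing opencode code):

```typescript
// Hypothetical normalization pass over a raw OpenAI-compatible
// chat-completion choice.
interface RawChoice {
  finish_reason: string
  message: { tool_calls?: unknown[] }
}

// Map the provider's finish_reason to what the spec says it should be,
// overriding "stop" when the message actually carries tool calls.
function normalizeFinish(choice: RawChoice): string {
  const hasTools = (choice.message.tool_calls?.length ?? 0) > 0
  if (hasTools && choice.finish_reason === "stop") return "tool_calls"
  return choice.finish_reason
}

console.log(normalizeFinish({ finish_reason: "stop", message: { tool_calls: [{}] } })) // "tool_calls"
console.log(normalizeFinish({ finish_reason: "stop", message: {} })) // "stop"
```

Normalizing at the ingestion boundary keeps the workaround in one place instead of scattering provider quirks across the loop logic.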

Actual behavior

The agent stops after executing the tool. The user sees the tool output but the agent never processes it further.

OpenCode version

1.2.11

Operating System

macOS

Labels

core — Anything pertaining to core functionality of the application (opencode server stuff)
