Description
Summary
When using claude-opus-4-6 via the Anthropic provider, OpenCode's session prompt loop advances to step=1 even after the model returns finish=stop (a normal text completion, not a tool call). On step 1, the conversation's last message is an assistant turn, which Opus rejects with:
"This model does not support assistant message prefill. The conversation must end with a user message."
This creates a completed assistant message with an error and zero parts, which the Web UI renders as a blank/stuck session.
Root cause
The prompt loop in session.prompt does not gate continuation on the finish reason. After step 0 completes with finish=stop, the loop enters step 1, re-resolves all tools, and fires a new LLM stream call. Since the last message in the conversation is the assistant's own response from step 0, the Anthropic API rejects the call for models that do not support assistant message prefill (like claude-opus-4-6).
Models that do support prefill (e.g. big-pickle via the opencode provider) silently accept this extra call, which is why the bug only manifests with Opus.
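The shape of the invalid step-1 call can be sketched as follows. This is a minimal illustration, not OpenCode's actual internals — the `Msg` type and `endsWithAssistant` helper are hypothetical names:

```typescript
// Hypothetical sketch of the conversation state when step 1 fires.
// After step 0 completes with finish=stop, the history ends with the
// assistant's own turn, which prefill-rejecting models refuse.
type Msg = { role: "user" | "assistant"; content: string };

const history: Msg[] = [
  { role: "user", content: "hi" },
  { role: "assistant", content: "Hello! How can I help?" }, // step 0, finish=stop
];

// The step-1 request is invalid for models like claude-opus-4-6 exactly
// when the last message is an assistant turn.
function endsWithAssistant(messages: Msg[]): boolean {
  return messages.length > 0 && messages[messages.length - 1].role === "assistant";
}
```

Since `endsWithAssistant(history)` is true at step 1, the API rejects the call; prefill-capable models instead treat the trailing assistant turn as a prefix to continue from, which is why they accept it silently.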
Why the Web UI goes blank
The failed step 1 persists a new assistant message row in the database with:
error.name = "APIError"
error.data.message = "This model does not support assistant message prefill..."
parts.length = 0
The Web UI has no content to render for this zero-part error message, so the session appears blank or stuck. The successful step 0 response (with actual content) is effectively hidden behind this empty error shell.
This is related to issue #17895 ("Completed aborted assistant message with zero parts renders as blank/stuck in web UI"), but the root cause is different: here the empty message is not from an abort, but from an invalid continuation call that should never have been made.
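A defensive render fallback would at least surface the error instead of a blank session. This is a hypothetical sketch (the `AssistantMsg` shape is modeled on the fields listed above, not the Web UI's real types), and it does not replace the loop fix — it only addresses the blank-rendering symptom:

```typescript
// Hypothetical render guard: when an assistant message has zero parts,
// fall back to its stored error message rather than rendering nothing.
type AssistantMsg = {
  parts: string[];
  error?: { name: string; data: { message: string } };
};

function renderText(msg: AssistantMsg): string {
  if (msg.parts.length > 0) return msg.parts.join("");
  if (msg.error) return `Error: ${msg.error.data.message}`;
  return "(empty response)";
}
```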
Evidence from server logs
Full reproduction captured on 2026-03-17T16:50:54 with OpenCode v1.2.26, session ses_3036b446affer1VC9iSCDETcYo.
Step 0 starts normally
INFO 16:50:54 service=session.prompt step=0 sessionID=ses_3036b446affer1VC9iSCDETcYo loop
INFO 16:50:55 service=llm providerID=anthropic modelID=claude-opus-4-6 ... stream
Model streams response successfully (message.part.delta flood)
INFO 16:51:04 service=bus type=message.part.delta publishing
INFO 16:51:04 service=bus type=message.part.delta publishing
... (dozens of delta events over ~4 seconds) ...
INFO 16:51:08 service=bus type=message.part.delta publishing
Step 0 finishes, step 1 starts immediately
INFO 16:51:08 service=bus type=session.status publishing
INFO 16:51:08 service=session.prompt step=1 sessionID=ses_3036b446affer1VC9iSCDETcYo loop
INFO 16:51:08 service=session.prompt status=started resolveTools
Step 1 re-resolves all tools and fires another LLM call
INFO 16:51:08 service=tool.registry status=started bash
INFO 16:51:08 service=tool.registry status=started read
INFO 16:51:08 service=tool.registry status=started glob
... (all tools re-registered) ...
INFO 16:51:08 service=session.prompt status=completed duration=173 resolveTools
INFO 16:51:08 service=llm providerID=anthropic modelID=claude-opus-4-6 ... stream
API rejects the call (last message is assistant, Opus doesn't allow prefill)
ERROR 16:51:09 service=llm providerID=anthropic modelID=claude-opus-4-6 ...
error={"error":{"name":"AI_APICallError",
"responseBody":"{\"type\":\"error\",\"error\":{\"type\":\"invalid_request_error\",
\"message\":\"This model does not support assistant message prefill.
The conversation must end with a user message.\"}}"}} stream error
Error propagates, session goes idle with a broken message
ERROR 16:51:09 service=session.processor
error=This model does not support assistant message prefill. ...
stack="AI_APICallError: This model does not support assistant message prefill..."
INFO 16:51:09 service=bus type=session.error publishing
INFO 16:51:09 service=bus type=session.status publishing
INFO 16:51:09 service=bus type=session.idle publishing
INFO 16:51:09 service=bus type=message.updated publishing
Evidence from database
Querying the session's messages confirms the pattern repeats for every user prompt sent to Opus:
Msg 0 [user ] ok | model= provider=
Msg 1 [assistant] finish=stop | model=claude-opus-4-6 provider=anthropic <-- step 0, success
Msg 2 [assistant] ERROR | model=claude-opus-4-6 provider=anthropic <-- step 1, prefill error
Msg 3 [user ] ok | model= provider=
Msg 4 [assistant] finish=stop | model=claude-opus-4-6 provider=anthropic <-- step 0, success
Msg 5 [assistant] ERROR | model=claude-opus-4-6 provider=anthropic <-- step 1, prefill error
...
Msg 9 [user ] ok | model= provider=
Msg 10 [assistant] finish=stop | model=big-pickle provider=opencode <-- no error! supports prefill
Msg 11 [assistant] finish=stop | model=big-pickle provider=opencode <-- step 1 succeeds silently
Every Opus turn produces a successful response followed by a prefill error. big-pickle turns do not produce errors because that model accepts assistant-last conversations.
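The pattern in the table above can be checked mechanically. The sketch below is illustrative only — the `Row` shape and field names mirror the columns shown, not the actual database schema:

```typescript
// Hypothetical check mirroring the DB evidence: count assistant error
// messages that immediately follow a successful finish=stop assistant turn.
type Row = { role: "user" | "assistant"; finish?: "stop"; error?: boolean };

function prefillErrorPairs(rows: Row[]): number {
  let count = 0;
  for (let i = 1; i < rows.length; i++) {
    const prev = rows[i - 1];
    const cur = rows[i];
    if (prev.role === "assistant" && prev.finish === "stop" &&
        cur.role === "assistant" && cur.error === true) {
      count++;
    }
  }
  return count;
}

// The Msg 0-5 sequence from the table yields two such pairs.
const rows: Row[] = [
  { role: "user" },
  { role: "assistant", finish: "stop" },
  { role: "assistant", error: true },
  { role: "user" },
  { role: "assistant", finish: "stop" },
  { role: "assistant", error: true },
];
```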
Expected behavior
When step N completes with finish=stop (not tool-calls), the prompt loop should not continue to step N+1. The loop should only continue when the model signals it wants to call tools.
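In other words, continuation should be gated on the finish reason. A minimal sketch of the expected condition, assuming hypothetical names (`FinishReason`, `shouldContinue`) rather than OpenCode's real identifiers:

```typescript
// Hypothetical gating condition for the prompt loop: only enter step N+1
// when the model explicitly asked to call tools.
type FinishReason = "stop" | "tool-calls" | "length" | "error";

function shouldContinue(finish: FinishReason): boolean {
  return finish === "tool-calls";
}
```

With this gate, the finish=stop completion on step 0 would end the loop, and no assistant-last request would ever reach the Anthropic API.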
Affected versions
- OpenCode v1.2.26 (confirmed)
- Likely affects all versions that use the prompt loop with Anthropic models that don't support prefill
Workarounds
- Use the CLI instead of the Web UI (the CLI displays the successful step 0 response correctly despite the step 1 error)
- Use a model that supports assistant message prefill (e.g. big-pickle)
Environment
- Server: Linux (opencode serve)
- Model: claude-opus-4-6 via anthropic provider
- OpenCode version: 1.2.26
Note: I may not be available for follow-up, but the logs and DB evidence above should be self-contained for reproduction.
Plugins
No response
OpenCode version
1.2.26
Steps to reproduce
- Start OpenCode with opencode serve --print-logs --log-level INFO
- Open the Web UI in a browser
- Select claude-opus-4-6 (Anthropic provider) as the model
- Send any message (e.g. "hi")
- Observe: the server logs show a successful finish=stop on step 0, immediately followed by step 1, which fails with "This model does not support assistant message prefill"
- The Web UI shows a blank/stuck session instead of the assistant's response
Note: The response is generated successfully in step 0; you can verify this by switching to the CLI, which displays it correctly. The issue is that step 1 creates an error message that the Web UI can't render.
Screenshot and/or share link
https://opncd.ai/share/SCDETcYo
Operating System
Debian Linux (opencode serve)
Terminal
ssh