Name and Version
$ ./llama-server --version
ggml_cuda_init: found 2 CUDA devices (Total VRAM: 97020 MiB):
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, VRAM: 48510 MiB
Device 1: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes, VRAM: 48510 MiB
version: 8882 (ca7f7b7)
built with GNU 13.3.0 for Linux x86_64
Operating systems
Linux
Which llama.cpp modules do you know to be affected?
llama-server
Command line
llama-server --n-gpu-layers 99 --ctx-size 260000 --seed 3407 --model Qwen3.6-35B-A3B-UD-Q8_K_XL.gguf --split-mode layer --cache-type-k f16 --cache-type-v f16 --flash-attn on --metrics --host 0.0.0.0 --port 8080 --jinja --swa-full
Problem description & steps to reproduce
Problem Description
When Claude Code uses its built-in TaskUpdate tool with llama.cpp as the inference engine, tool call validation incorrectly fails for anyOf schema types.
The TaskUpdate tool is a built-in tool provided by Claude Code itself (not a custom tool), used for updating task status in the task management system.
Schema Definition
The TaskUpdate tool's status parameter is defined as:
{
  "name": "TaskUpdate",
  "parameters": {
    "type": "object",
    "properties": {
      "taskId": {
        "description": "The ID of the task to update",
        "type": "string"
      },
      "status": {
        "anyOf": [
          {
            "type": "string",
            "enum": ["pending", "in_progress", "completed"]
          },
          {
            "type": "string",
            "const": "deleted"
          }
        ]
      }
    },
    "required": ["taskId", "status"]
  }
}
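A minimal sketch (hypothetical Python, not Claude Code's or llama.cpp's actual validator) of what this anyOf accepts, and why a value that carries literal quote characters fails both branches:

```python
# Hypothetical sketch of the anyOf validation for the "status" field.
def status_is_valid(value: str) -> bool:
    # Branch 1: enum of active states
    branch_enum = value in ("pending", "in_progress", "completed")
    # Branch 2: const "deleted"
    branch_const = value == "deleted"
    # anyOf succeeds if at least one branch matches
    return branch_enum or branch_const

print(status_is_valid("in_progress"))    # True: matches the enum branch
print(status_is_valid('"in_progress"'))  # False: the embedded quotes make both branches fail
```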
Steps to Reproduce
- Use Claude Code CLI with llama.cpp as the inference engine
- Ask Claude to update a task status (e.g., "mark task 1 as in progress")
- Claude attempts to call the TaskUpdate tool with status: "in_progress"
- The tool call fails with a validation error
Note: The error appears in Claude Code's output when the tool call validation fails.
Error Message
{
  "code": "invalid_union",
  "errors": [
    [
      {
        "code": "invalid_value",
        "values": ["pending", "in_progress", "completed"],
        "path": [],
        "message": "Invalid option: expected one of \"pending\"|\"in_progress\"|\"completed\""
      }
    ],
    [
      {
        "code": "invalid_value",
        "values": ["deleted"],
        "path": [],
        "message": "Invalid input: expected \"deleted\""
      }
    ]
  ],
  "path": ["status"],
  "message": "Invalid input"
}
Root Cause Analysis
This issue was identified by examining the Claude Code source code. The validation error originates from the tool call processing logic in Claude Code.
The error message is confusing because:
- The user (Claude) is passing a valid enum value (in_progress) from the first anyOf branch
- The error shows that BOTH branches failed validation, including the second branch's expected "deleted"
- Based on the source code analysis, the actual root cause appears to be that the string value is passed with literal quote characters included (i.e., the value is "\"in_progress\"" rather than in_progress)
The source code reveals that when the model outputs a tool call with string parameters, the JSON string delimiters (quotes) may be incorrectly included as part of the value itself during parsing.
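The suspected failure mode can be illustrated with a short sketch (hypothetical example, not the actual llama.cpp parsing code): extracting the raw character span of a JSON string keeps the delimiters, whereas a proper JSON parse strips them.

```python
import json

# Hypothetical raw model output for the tool-call arguments.
raw_args = '{"taskId": "1", "status": "in_progress"}'

# Correct: parsing the JSON strips the string delimiters.
parsed = json.loads(raw_args)
print(parsed["status"])  # in_progress

# Suspected bug: slicing the raw text span keeps the quotes in the value.
start = raw_args.index('"in_progress"')
buggy_value = raw_args[start:start + len('"in_progress"')]
print(buggy_value)  # "in_progress"  <- quotes are part of the value
print(buggy_value == parsed["status"])  # False
```

A value like `"in_progress"` (with embedded quotes) matches neither the enum branch nor the const branch, which would produce exactly the double-failure error shown above.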
Expected Behavior
- If the value is valid: the tool call should succeed when a valid enum value from any anyOf branch is passed
- If the value is invalid: the error message should clearly indicate:
  - What value was actually received
  - Which specific branch failed and why
  - The difference between the received value and the expected values
Request
Please investigate:
- How tool call arguments are parsed and validated for anyOf schema types in llama.cpp
- Whether string values are correctly extracted from JSON (with delimiters removed)
- Whether the validation error message clearly shows the actual received value
This issue affects the usability of Claude Code's built-in tools when using llama.cpp as the inference engine.
First Bad Commit
No response
Relevant log output
Logs
No response