Bug type
Regression (worked before, now fails)
Beta release blocker
No
Summary
After updating to OpenClaw 2026.4.14, direct CLI inference on `openai-codex/gpt-5.4` fails even for a tiny prompt. The raw failure returned by the provider/runtime is HTML, but OpenClaw surfaces it as `LLM request failed: DNS lookup for the provider endpoint failed.` This does not appear to be a real local DNS issue.
Steps to reproduce
- Install/update to OpenClaw 2026.4.14 on macOS.
- Ensure the default/authenticated Codex model is `openai-codex/gpt-5.4`.
- Run:
  `openclaw infer model run --model openai-codex/gpt-5.4 --prompt "hi" --json`
- Observe the command fail and surface:
  `LLM request failed: DNS lookup for the provider endpoint failed.`
- Check the logs and observe that `rawError` / `rawErrorPreview` contains HTML.
Expected behavior
Either:
- the request succeeds, or
- OpenClaw surfaces the real upstream/runtime failure.
It should not present an HTML provider/runtime error as a DNS lookup failure.
Actual behavior
The direct CLI repro fails every time on:
- provider: `openai-codex`
- model: `gpt-5.4`

CLI output surfaces:
`LLM request failed: DNS lookup for the provider endpoint failed.`

Logs show:
- `rawError` / `rawErrorPreview` is HTML
- `providerRuntimeFailureKind` flips between `rate_limit` and `unknown`
- after repeated retries, it can also surface:
  `API rate limit reached. Please try again later.`
OpenClaw version
2026.4.14
Operating system
macOS 26.3.1
Install method
Homebrew / local CLI install
Model
openai-codex/gpt-5.4
Provider / routing chain
openclaw -> openai-codex OAuth -> gpt-5.4
Additional provider/model setup details
- Default model: `openai-codex/gpt-5.4`
- Allowed/configured models at time of testing:
  - `openai-codex/gpt-5.4`
  - `moonshot/kimi-k2.5`
- Codex OAuth was removed and re-added cleanly via:
  `openclaw models auth login --provider openai-codex`
- `models.providers.openai-codex` is absent from config
- `models.mode` is `merge`
- agent-level `models.json` was checked and the `openai-codex` block looked healthy
- the issue reproduces outside the UI and outside Telegram
Logs, screenshots, and evidence
Direct CLI repro:
`openclaw infer model run --model openai-codex/gpt-5.4 --prompt "hi" --json`
Example CLI JSON result:
{
  "ok": true,
  "capability": "model.run",
  "transport": "local",
  "provider": "openai-codex",
  "model": "gpt-5.4",
  "attempts": [],
  "outputs": [
    {
      "text": "LLM request failed: DNS lookup for the provider endpoint failed.",
      "mediaUrl": null
    }
  ]
}
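Note the contradiction in the result above: `ok` is `true` while the only output text is the surfaced error string. A self-contained sanity check (the sample JSON is inlined, so this runs without OpenClaw):

```shell
# Sample CLI result from above, inlined so this check is self-contained.
result='{"ok": true, "outputs": [{"text": "LLM request failed: DNS lookup for the provider endpoint failed."}]}'

# Flag the contradiction: a run reported as ok=true whose only output is an error string.
case "$result" in
  *'"ok": true'*'"text": "LLM request failed'*) verdict="mismatch: ok=true but output is an error" ;;
  *) verdict="consistent" ;;
esac
echo "$verdict"
```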
Relevant observed log pattern:
- embedded run failures on `provider=openai-codex`, `model=gpt-5.4`
- `rawError=<html>...`
- `rawErrorPreview` begins with HTML
- `providerRuntimeFailureKind: rate_limit`
- later mostly `providerRuntimeFailureKind: unknown`
Local checks:
- `chatgpt.com` resolves
- `api.openai.com` resolves
- local `curl` can reach both endpoints
Impact and severity
Severity: High
Impact:
- Codex GPT-5.4 is unusable in this setup
- the issue reproduces in the direct CLI, so it is not limited to the UI or Telegram
- repeated retries can trigger cooldown/rate-limit noise
- the current workaround is switching work to Kimi
Additional information
What was already checked:
- restarted gateway multiple times
- waited for cooldown
- tested tiny prompts only
- removed and re-added Codex OAuth
- confirmed gateway/browser/Telegram health
- confirmed issue reproduces outside UI and outside Telegram
- confirmed local DNS/network looks healthy
- checked config and did not find a stale `models.providers.openai-codex` override
Why this seems important:
This looks like either:
- a Codex runtime/upstream failure on the `openai-codex/gpt-5.4` path, or
- an OpenClaw bug in how HTML provider/runtime failures are classified and surfaced,

because the raw failure is HTML, not a DNS exception.
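A hypothetical sketch of the suspected misclassification (the function and bucket names are purely illustrative, not taken from OpenClaw's code): a classifier that keys off substrings in the raw error text can only route an HTML provider error page correctly if it checks for HTML explicitly, and real resolver errors should be matched narrowly.

```shell
# Illustrative only: a substring-based failure classifier. If the HTML
# branch were missing, or the DNS branch matched too loosely, an HTML
# error page from the provider could be surfaced as a DNS failure.
classify_failure() {
  raw="$1"
  case "$raw" in
    '<html'*|'<!DOCTYPE'*)   echo "provider_http_error" ;;  # what the logs suggest actually happened
    *ENOTFOUND*|*EAI_AGAIN*) echo "dns_failure" ;;          # real resolver errors only
    *"rate limit"*)          echo "rate_limit" ;;
    *)                       echo "unknown" ;;
  esac
}

classify_failure '<html><head><title>502</title></head>...</html>'
# With the HTML branch present, this prints provider_http_error rather
# than being mislabeled as a DNS failure.
```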