Bug Report — QA Bug Bash 4/16
Environment: Staging (hosted-staging / crab shack) — https://agent-stg.near.ai/
Description:
The orchestrator surfaces a raw error traceback to the user when the LLM provider returns a 502 Bad Gateway response. The full error includes internal file paths (orchestrator.py lines 907, 548), Python traceback, and raw JSON error payload. The agent status badge also shows "Loading..." instead of providing a graceful error state.
Error displayed to user:
Error: Orchestrator error: effect execution error: Orchestrator error after resume: Traceback (most recent call last): File "orchestrator.py", line 907, in File "orchestrator.py", line 548, in run_loop RuntimeError: LLM call failed: Provider nearai_chat request failed: HTTP 502 Bad Gateway
Steps to Reproduce:
- Open agent "proud-wolf-damug" on staging (or any agent during provider outage)
- Send a message while the LLM provider is experiencing issues
Actual Result:
Raw Python traceback with internal file paths and error JSON is displayed directly to the user. Agent status shows "Loading..." indefinitely.
Expected Result:
A user-friendly error message like "The AI model is temporarily unavailable. Please try again in a few moments." Internal error details should be logged server-side only, not shown to the user.
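As a rough sketch of the expected behavior (names here are illustrative, not the actual orchestrator API): the provider failure should be caught around the LLM call, the full exception logged server-side, and only a generic message returned to the user along with an explicit error state for the status badge.

```python
import logging

logger = logging.getLogger("orchestrator")

# Hypothetical error type for provider request failures (e.g. HTTP 502).
class ProviderError(RuntimeError):
    pass

USER_FACING_MESSAGE = (
    "The AI model is temporarily unavailable. "
    "Please try again in a few moments."
)

def run_llm_call(call):
    """Run an LLM provider call, hiding internal errors from the user."""
    try:
        return call()
    except ProviderError:
        # Full traceback goes to server-side logs only.
        logger.exception("LLM provider request failed")
        # The user sees a friendly message, and the agent status
        # moves to an explicit 'error' state instead of 'Loading...'.
        return {"status": "error", "message": USER_FACING_MESSAGE}
```

This is a sketch under assumed names; the real fix belongs wherever run_loop currently lets the RuntimeError propagate to the client.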
Severity: Medium — poor error handling exposes internal details, but the underlying failure is transient and recoverable by retrying
Frequency: Intermittent — depends on LLM provider availability
Related: #2276 (Orchestrator crashes with HTTP 413), #2408 (Context length overflow)