LLM-as-a-Judge Evaluator not running on live traces #12913
Several common reasons can cause LLM-as-a-Judge evaluators to not run on live traces even when active with 100% sampling:

1. Variable Mapping Issues
2. SDK Version Requirements — for trace-level evaluators with OTel-based SDKs (Python v3+ or JS/TS v4+), trace input/output is derived from the root observation by default; to use different values, set trace input/output explicitly via the SDK.
3. Attribute Propagation for Observation-Level Evaluators
4. Filter Configuration

Recommended Steps:
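Variable mapping issues (reason 1) are a common silent cause: if any mapped variable resolves to nothing on the incoming trace, the evaluator skips it without an error in the logs. A minimal self-contained sketch of that resolution logic — the function names and trace shape here are illustrative assumptions, not Langfuse's actual implementation:

```python
# Hypothetical sketch: an evaluator only runs when every mapped
# variable resolves to a non-empty value on the trace.

def resolve_variable(trace: dict, path: str):
    """Walk a dotted path like 'input.question' through a trace dict."""
    value = trace
    for key in path.split("."):
        if not isinstance(value, dict) or key not in value:
            return None
        value = value[key]
    return value

def evaluator_would_run(trace: dict, variable_mapping: dict) -> bool:
    """Skip the trace if any mapped variable is missing or empty."""
    return all(
        resolve_variable(trace, path) not in (None, "")
        for path in variable_mapping.values()
    )

# A trace whose output was never set: the evaluator silently skips it.
trace = {"input": {"question": "What is Langfuse?"}, "output": None}
mapping = {"query": "input.question", "generation": "output"}
print(evaluator_would_run(trace, mapping))  # False: 'output' is None
```

If this models your situation, checking each mapped variable against a recent trace in the UI is usually the fastest way to spot the broken mapping.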
Check out these potentially useful sources for more troubleshooting: the Troubleshooting and FAQ page, and "Why is my observation-level evaluator not executing?". Have another question? Just tag @inkeep.
Describe your question
Why are no live incoming traces being evaluated by LLM-as-a-Judge, even though the evaluator is active with a 100% sampling rate? The logs page in the UI shows nothing.
Langfuse Cloud or Self-Hosted?
Langfuse Cloud
If Self-Hosted
No response
If Langfuse Cloud
SDK and integration versions
No response
Pre-Submission Checklist