OpenClaw integration: production readiness + OpenAI-compatible server hardening #99
Merged
saschabuehrle merged 54 commits into main on Feb 9, 2026
Conversation
Copy tool_calls from metadata to message.tool_calls for standard OpenAI
format compatibility. Also set finish_reason to "tool_calls" when tools
are present. Add /health endpoint returning {"status": "ok"}.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
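The normalization described above can be sketched as a standalone function. This is a simplified illustration assuming a choice dict shaped like an OpenAI chat completion; the `metadata` field name and function name are assumptions, not cascadeflow's actual API:

```python
def normalize_openai_response(choice: dict) -> dict:
    """Copy tool_calls from metadata into message.tool_calls (OpenAI shape)."""
    message = choice["message"]
    tool_calls = choice.get("metadata", {}).get("tool_calls")
    if tool_calls and not message.get("tool_calls"):
        message["tool_calls"] = tool_calls
        # OpenAI clients key on finish_reason to decide whether to run tools
        choice["finish_reason"] = "tool_calls"
    return choice

# /health endpoint contract from the commit: a plain liveness probe
HEALTH_RESPONSE = {"status": "ok"}
```

Keeping `finish_reason` in sync with `tool_calls` matters because standard OpenAI clients branch on it rather than inspecting the message body.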
- 10 tool call scenarios (single, multiple, nested, error handling)
- 10 Q&A scenarios (factual, code, math, creative)
- 4x10 multi-turn conversations (project planning, debug, travel, language)
- 2 multi-agent tool calls (parallel, sequential)
- 2 agent loop tests
- 2 streaming/webhook tests
- Stats validation after each batch
- Response metadata validation (cascadeflow fields)

Generated by Codex task_e_6984e6bd31008333944422b002b2cc78
…assertion

- cascade_overhead_ms → cascade_overhead
- avg_cascade_overhead_ms → avg_cascade_overhead
- Increase tolerance for acceptance rate from 0.02 to 0.05
- Remove tool_calls >= tool_queries assertion (invalid for Q&A tests)
Problem: When tool results are provided (role='tool' in messages), the drafter correctly responds with a TEXT summary, but the cascade quality check rejected it with 'no_tool_calls_generated' because has_tools=true.

Solution:
- Add _has_tool_result_in_messages() helper function
- Modify _should_accept_tool_draft() to check for tool results in messages
- If tool results are provided and the draft has text content (no tool_calls), accept it

This fixes the agent loop flow:
1. User: 'Get weather in London'
2. Assistant: tool_call(weather, London)
3. Tool: 'Sunny, 22°C'
4. Assistant: 'The weather in London is sunny at 22°C' ← now accepted

Also fixes test field names and simplifies provider configs.
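The acceptance rule described above can be sketched as follows. This is a simplified standalone version; the real `_should_accept_tool_draft()` signature in cascadeflow may differ:

```python
def _has_tool_result_in_messages(messages: list[dict]) -> bool:
    # A role="tool" message means a tool has already run this turn.
    return any(m.get("role") == "tool" for m in messages)

def should_accept_tool_draft(draft: dict, messages: list[dict],
                             has_tools: bool) -> bool:
    # A draft with no tool_calls is fine when it is summarizing a tool
    # result; previously this case was rejected as 'no_tool_calls_generated'.
    if has_tools and not draft.get("tool_calls"):
        return _has_tool_result_in_messages(messages) and bool(draft.get("content"))
    return True
```

With this check, step 4 of the agent loop (the text summary after a tool result) passes the quality gate instead of being escalated.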
Two fixes for agent loop support:

1. normalize_messages() now preserves tool_calls, tool_call_id, name fields
   - Previously stripped all fields except role/content
   - OpenAI tool calling requires these fields
2. Anthropic provider: _convert_messages_to_anthropic()
   - Converts role="tool" → role="user" with tool_result content blocks
   - Converts assistant tool_calls → tool_use content blocks
   - Extracts system messages (Anthropic wants them separate)
   - Merges consecutive same-role messages (Anthropic requirement)

All 3 configs now pass 9/9 tests:
- openai-only: 100% acceptance, 752ms latency
- anthropic-only: 100% acceptance, 867ms latency
- mixed: 100% acceptance, 732ms latency
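The Anthropic conversion rules can be sketched as a standalone function. This is a simplified illustration of the transformations listed above; the real `_convert_messages_to_anthropic()` handles more edge cases:

```python
import json

def _blocks(content):
    # Normalize content to a list of Anthropic content blocks.
    return content if isinstance(content, list) else [{"type": "text", "text": content}]

def convert_messages_to_anthropic(messages):
    system_parts, out = [], []
    for m in messages:
        role = m["role"]
        if role == "system":
            system_parts.append(m["content"])  # Anthropic takes system separately
            continue
        if role == "tool":
            # role="tool" → role="user" with a tool_result content block
            new = {"role": "user", "content": [{"type": "tool_result",
                                                "tool_use_id": m.get("tool_call_id"),
                                                "content": m["content"]}]}
        elif role == "assistant" and m.get("tool_calls"):
            # assistant tool_calls → tool_use content blocks
            new = {"role": "assistant",
                   "content": [{"type": "tool_use", "id": tc["id"],
                                "name": tc["function"]["name"],
                                "input": json.loads(tc["function"]["arguments"])}
                               for tc in m["tool_calls"]]}
        else:
            new = {"role": role, "content": m["content"]}
        if out and out[-1]["role"] == new["role"]:
            # Merge consecutive same-role messages (Anthropic requirement)
            out[-1]["content"] = _blocks(out[-1]["content"]) + _blocks(new["content"])
        else:
            out.append(new)
    return "\n".join(system_parts), out
```

Run on the weather flow above, this yields strictly alternating user/assistant turns with the tool result wrapped as a user-side `tool_result` block.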
Three provider preset configurations for OpenClaw integration:

1. mixed-domains.yaml - Full 12-domain routing
   - medical/legal/finance → opus verifier (strict)
   - code/technical/math → gpt-4o verifier
   - creative/conversation → claude models
2. mixed-domains-v2.yaml - Tuned thresholds (0.70 for medical/legal)
3. mixed-domains-fixed.yaml - Experimental with tool routing

Tested: 62.5% acceptance, 0.88 quality mean, proper domain detection
…l bypass, forced escalation

- Added COMPLEXITY_SIGNALS in complexity.py for proof/math/implementation detection
- Removed trivial-mode auto-pass bypass in quality.py (now validates hedging/hallucination)
- Added forced escalation in cascade.py (expert always, hard when quality<0.85)
- Added comprehensive tests (25 tests, all passing)

Fixes: Hard/Expert queries now properly escalate instead of auto-accepting
…thresholds

- Add 7 exemplars each for CONVERSATION, FACTUAL, FINANCIAL domains
- Add DOMAIN_THRESHOLDS dict with domain-specific confidence levels
- Enable hybrid mode by default (rule-based + semantic combined)
- Add 16 comprehensive tests for new functionality

Fixes domain detection gaps where finance/conversation/factual queries were incorrectly routed to GENERAL domain.
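The per-domain threshold idea can be sketched as follows. The dict values and the `pick_domain` name are hypothetical; they illustrate the mechanism, not cascadeflow's actual DOMAIN_THRESHOLDS contents:

```python
# Hypothetical values: the real DOMAIN_THRESHOLDS in cascadeflow may differ.
DOMAIN_THRESHOLDS = {"conversation": 0.35, "factual": 0.45, "financial": 0.50}
DEFAULT_THRESHOLD = 0.55

def pick_domain(scores: dict[str, float]) -> str:
    # A domain wins only if its score clears its own confidence bar;
    # otherwise fall back to the general domain.
    best = max(scores, key=scores.get)
    if scores[best] >= DOMAIN_THRESHOLDS.get(best, DEFAULT_THRESHOLD):
        return best
    return "general"
```

Lower bars for chatty domains like conversation let short greetings route correctly, while stricter domains still require stronger evidence.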
- pip install cascadeflow[openclaw] auto-enables FastEmbed v0.7+
- Added conversation domain keywords for rule-based fallback
- Covers small talk, greetings, casual chat patterns
- Ensures good routing when FastEmbed unavailable
The _keyword_matches method used rf"\\b..." which, even in a raw string, produces a literal escaped backslash followed by "b" rather than the regex word boundary \b. This caused ALL single-word keyword matches to fail silently, dropping domain detection accuracy to ~16%.

Fixed by using r"\b" + re.escape(keyword) + r"\b", which correctly produces word boundary assertions.

Also added missing keywords for math (prove, irrational, eigenvalue, etc.), medical (symptoms, diabetes, hypertension, etc.), finance (mortgage, tax, inflation, etc.), factual (when was, who was, signed, etc.), and reasoning domains.

Detection accuracy is now 100% across 50 test queries.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
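The bug and fix can be reproduced with standalone functions (names hypothetical; cascadeflow's `_keyword_matches` is a method):

```python
import re

def keyword_matches_buggy(text: str, keyword: str) -> bool:
    # Bug: in a raw string, "\\b" is a literal backslash followed by "b",
    # which the regex engine treats as matching the characters "\b" --
    # not the word-boundary assertion. Single-word keywords never match.
    return re.search(rf"\\b{re.escape(keyword)}\\b", text) is not None

def keyword_matches_fixed(text: str, keyword: str) -> bool:
    # Fix: r"\b" is the word-boundary assertion the code intended.
    return re.search(r"\b" + re.escape(keyword) + r"\b", text) is not None

print(keyword_matches_buggy("prove that sqrt(2) is irrational", "prove"))  # False
print(keyword_matches_fixed("prove that sqrt(2) is irrational", "prove"))  # True
```

The failure is silent because the buggy pattern is still a valid regex; it just matches a string containing a literal backslash, which real queries never contain.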
Domain Detection Fixes:
- Enhanced CONVERSATION with greetings, small talk, casual chat patterns
- Enhanced FACTUAL with knowledge-seeking patterns (capital, invented, when was)
- Enhanced FINANCIAL with retirement, savings, pension, ETF, hedge fund
- Enhanced MEDICAL with anatomy, physiology, organ-specific terms
- Reduced GENERAL domain to true fallback (removed knowledge patterns)

Results (60+ queries tested):
- Overall: 90.2% accuracy (was 70.5%)
- Conversation: 100% (was 12%)
- Factual: 88% (was 50%)
- Medical: 100%
- Legal: 100%
- Financial: 90%

Remaining edge cases are acceptable domain overlaps:
- 'How many planets' → math (counting)
- 'Compare ETF vs mutual fund' → comparison (legitimate)
- 'Python vs JavaScript' → code (language keywords)
- test_detect_general_domain: Use truly generic query (not knowledge questions)
- test_rag_vs_general_separation: Expect FACTUAL for knowledge questions
- test_low_confidence_fallback: Use non-greeting query (greetings → CONVERSATION now)

These tests now reflect the improved domain detection behavior where:
- Knowledge questions (capital of, population of) → FACTUAL
- Greetings (hello, hi) → CONVERSATION
- Only truly ambiguous queries → GENERAL
- Add FASTEMBED_DOMAIN_MODELS registry (e5-large, bge-large, MiniLM)
- Add model_name param to SemanticDomainDetector (default: e5-large-v2)
- Implement adaptive hybrid blending:
  - Rule lock for high-confidence decisions
  - Semantic override when confident + separated
  - Adaptive weights based on agreement
- Add tuning constants (HYBRID_RULE_LOCK_*, HYBRID_SEMANTIC_*)
- Add tests for hybrid behavior with mocked embedder
- Create fastembed-investigation-report.md
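The blending strategy can be sketched as a decision function. The threshold values are illustrative; the real HYBRID_RULE_LOCK_* / HYBRID_SEMANTIC_* constants in cascadeflow may differ, and the adaptive-weighting branch is simplified here:

```python
# Illustrative thresholds (assumptions, not the actual tuning constants)
RULE_LOCK_CONF = 0.85            # rule-based result wins outright above this
SEMANTIC_OVERRIDE_CONF = 0.75    # semantic result must be at least this confident
SEMANTIC_OVERRIDE_MARGIN = 0.15  # ...and this far ahead of its runner-up

def blend_domain(rule_domain: str, rule_conf: float,
                 sem_domain: str, sem_conf: float, sem_margin: float) -> str:
    if rule_conf >= RULE_LOCK_CONF:
        return rule_domain  # rule lock: high-confidence rules are trusted
    if sem_conf >= SEMANTIC_OVERRIDE_CONF and sem_margin >= SEMANTIC_OVERRIDE_MARGIN:
        return sem_domain   # semantic override: confident and well separated
    # Otherwise weight adaptively; simplified here to "higher confidence wins"
    return rule_domain if rule_conf >= sem_conf else sem_domain
```

The margin check matters: a semantic score that barely beats its runner-up is noisy, so it should not override a rule-based decision.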
P0 Measurement Correctness:
- Add cascadeflow/pricing/pricebook.py with PriceBook and PricingResolver
- Add cascadeflow/schema/usage.py with canonical Usage schema
- Add tests/test_pricing_resolver.py

P0 Tool Loop Engine:
- Update cascadeflow/agent.py with tool loop closure support
- Add tests/test_agent_p0_tool_loop.py

Audit Reports:
- dx-plan-v2-audit-report.md (P0 status audit)
- fastembed-audit-report.md (plan vs implementation)

Tests:
- Add semantic domain/complexity detector tests
- TypeScript usage type tests

Co-authored-by: Codex <codex@openai.com>
All extras now require FastEmbed 0.7.0+, which supports:
- BAAI/bge-base-en-v1.5 (default for SemanticDomainDetector)
- BAAI/bge-large-en-v1.5
- sentence-transformers/all-MiniLM-L6-v2

Prevents cryptic model-not-found errors with older versions.
Allows agents to bypass plan review for evaluation-only tasks (no code modifications, just running tests/benchmarks).
- Default use_hybrid=False in CascadeAgent constructor
- Add use_hybrid parameter to preset functions and auto_agent
- OpenClaw integration passes use_hybrid=True when creating agent
- Standard CascadeFlow usage now uses pure semantic/rule-based detection
- Hybrid mode (ML + rules combined) reserved for OpenClaw integration only
- domain_config.py: Remove require_verifier=True for medical/legal domains
- domain_config.py: Lower thresholds from 0.95/0.90 to 0.70
- openai-only.yaml: Add medical/legal domains, pure OpenAI stack
- anthropic-only.yaml: Pure Anthropic stack (haiku→sonnet)
- mixed-anthropic-openai.yaml: gpt-5-mini drafter, sonnet verifier

All three configs tested successfully:
- Medical queries now use cascade (not direct routing)
- Draft acceptance working for all providers
The fallback savings calculation used a flat 2x multiplier, but the actual cost difference between gpt-5-mini ($0.00025) and gpt-5 ($0.015) is 60x. This fix corrects the savings percentage calculation when bigonly_cost metadata is not available, properly reflecting the significant cost savings from cascade acceptance.
The telemetry.record() was being called with the raw SpeculativeResult before _build_cascade_result() ran the CostCalculator to fix cost_saved. This caused stats to show tiny negative savings even when drafts were accepted, because the raw cascade module uses different cost logic. Now the record() call happens after the CostCalculator corrects the cost values, giving accurate savings stats (98% instead of -1.7%).
Changes:
- Standardize FastEmbed on bge-small-en-v1.5 everywhere (fix domain.py defaulting to bge-base, saving 70MB memory + 40ms/call)
- Add semantic fallback to alignment scorer v15: when rule-based score is in uncertain zone (0.35-0.55), blend 70% rule + 30% semantic
- Add adaptive threshold learning (quality/adaptive.py): per-domain rolling-window tracking auto-adjusts thresholds toward 55% acceptance
- Add semantic cache dedup (caching.py): optional cosine >= 0.95 matching for paraphrased queries, only adds latency on cache miss
- Accept list[dict] as query param in agent.run() for cleaner DX
- Expand PriceBook with 15+ models, prefix matching, runtime update(), update_batch(), and sync_from_litellm() for live pricing refresh
- Add 47 production readiness tests covering adaptive learning, semantic cache, alignment fallback, pricing, cost calculator, OpenClaw routing, and domain detection accuracy

Latency overhead: +8.5ms worst case (hybrid domain + alignment fallback + semantic cache miss).
Tests: 765 passed, 1 timeout, 33 skipped.

https://claude.ai/code/session_01GHZwaDDP2ajrPxL1t46KWx
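The semantic cache dedup can be sketched as a similarity lookup over cached embeddings. This is a minimal standalone version (names hypothetical) of the cosine >= 0.95 matching described above:

```python
from math import sqrt

SEMANTIC_MATCH_THRESHOLD = 0.95  # cosine cutoff from the change description

def _cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def semantic_lookup(query_vec, cache):
    """cache: list of (embedding, cached_response) pairs.

    Only consulted after an exact-key miss, so the extra cosine work
    adds latency solely on the cache-miss path, as the commit notes.
    """
    for vec, response in cache:
        if _cosine(query_vec, vec) >= SEMANTIC_MATCH_THRESHOLD:
            return response
    return None
```

A high cutoff like 0.95 keeps the dedup conservative: only near-paraphrases of a cached query reuse its answer, while genuinely different queries fall through to the cascade.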
…c stub, TS alignment
1. Streaming test coverage: 14% → 81% (base.py), 11% → 53% (tools.py), 19% → 92% (utils.py)
- 82 new tests covering ProgressiveJSONParser, ToolCallValidator, estimate_confidence,
StreamEvent/ToolStreamEvent data classes, StreamManager helpers (tokens, cost, confidence),
ToolStreamManager helpers (messages, tool calls, costs), and full async stream() integration
tests with mock providers for accepted/rejected/direct/error flows
2. _execute_tool_calls_parallel() stub → real implementation
- Accepts ToolExecutor instance (uses execute() with ToolCall parsing) or async callable
- When no executor registered, returns informative error instead of "not_implemented"
- Added tool_executor parameter to CascadeAgent.__init__
- Error handling with structured JSON error responses
3. TypeScript alignment scorer v15 parity with Python
- Added SEMANTIC_FALLBACK_LOW/HIGH constants (0.35/0.55)
- Added constructor with useSemanticFallback and getSemanticScore callback
- Semantic fallback logic: 70% rule + 30% semantic blend in uncertain zone
- Added semanticFallback/semanticScore to AlignmentAnalysis features
- 4 new vitest tests for semantic fallback behavior
- Backward compatible: default constructor has no semantic fallback
Full suite: 848 passed, 33 skipped, 0 failed
https://claude.ai/code/session_01GHZwaDDP2ajrPxL1t46KWx
Scope:
- OpenClaw integration (adapter/wrapper/pre-router) + OpenAI-compatible local server (/v1/chat/completions)
- Cost tracking / savings baseline fixes + telemetry robustness
- FastEmbed optional ML routing improvements (OpenClaw-only hybrid)
- Remove investor demo folders + add .gitignore guard

Notes:
- Server hardening is backward compatible for localhost: auth is optional unless --auth-token is provided.
- This PR is intended to land before/alongside the upcoming multi-turn conversation PR; please flag any merge-order constraints.