## Problem
Lightweight routines (`RoutineAction::Lightweight`) execute via `llm.complete()` — plain text completion with no tool definitions. Scheduled and reactive routines can generate text from workspace context but cannot call any tools.
`FullJob` mode is intended for multi-turn tool-using routines but falls back to the same text-only path:

```rust
// routine_engine.rs:321
tracing::warn!("FullJob mode not yet implemented; falling back to lightweight execution");
```
This means routines can produce reminders and digests, but they can't:

- Search memory (`memory_search`) before generating a follow-up
- Write findings back to memory (`memory_write`) after a scheduled check
- Call domain tools (HTTP, shell, WASM extensions) to verify conditions
- Do any multi-step reasoning with tool feedback
The scheduling infrastructure is solid (cron, event triggers, cooldowns, concurrency guards, DB persistence, notifications). The only missing piece is tool access at execution time.
## Proposed design
- **Full agentic loop, not a single LLM call**

  Switch `execute_lightweight()` from `llm.complete()` to `llm.complete_with_tools()` in a loop, matching the pattern in `dispatcher.rs:run_agentic_loop()` but simplified (no approval UI, no session tracking, no hooks).

  OpenClaw uses the same `runEmbeddedPiAgent()` path for cron as for interactive messages — the agent gets full multi-turn reasoning with tool calls. The rationale: a scheduled check that can call one tool and reason about the result is dramatically more useful than one that can only generate text.

  Alternative the maintainers might prefer: a single tool-call round (call tools once, feed results back, generate the final text) instead of a full loop. It is simpler, has a lower blast radius, and still covers the majority of useful routines. The loop could be capped at e.g. `max_tool_rounds: 3`.
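To make the capped-loop shape concrete, here is a minimal sketch assuming a simplified LLM interface. `RoutineLlm`, `LlmTurn`, `run_routine_turns`, and the fake model are all hypothetical names for illustration; IronClaw's real `complete_with_tools()` signature will differ.

```rust
// Hypothetical sketch of the capped tool loop; not IronClaw's real API.
enum LlmTurn {
    ToolCalls(Vec<String>), // tool names the model wants executed
    FinalText(String),      // model is done; loop ends
}

trait RoutineLlm {
    fn complete_with_tools(&mut self, transcript: &[String]) -> LlmTurn;
}

/// Run at most `max_tool_rounds` rounds of tool execution, then stop.
fn run_routine_turns<L: RoutineLlm>(llm: &mut L, max_tool_rounds: usize) -> (String, usize) {
    let mut transcript = vec!["<routine prompt>".to_string()];
    let mut rounds = 0;
    while rounds < max_tool_rounds {
        match llm.complete_with_tools(&transcript) {
            LlmTurn::FinalText(text) => return (text, rounds),
            LlmTurn::ToolCalls(calls) => {
                rounds += 1;
                for name in calls {
                    // Execute the tool, sanitize, and feed the result back.
                    transcript.push(format!("tool_result({name}): ..."));
                }
            }
        }
    }
    // Cap reached: finish with whatever results are already in context.
    ("(capped) summary from accumulated tool results".to_string(), rounds)
}

// Fake model for illustration: requests a tool on every turn.
struct AlwaysCallsTool;
impl RoutineLlm for AlwaysCallsTool {
    fn complete_with_tools(&mut self, _transcript: &[String]) -> LlmTurn {
        LlmTurn::ToolCalls(vec!["memory_search".to_string()])
    }
}
```

The cap guarantees a routine cannot loop indefinitely even against a model that keeps requesting tools.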
- **Same tool set as interactive, filtered by existing policy**

  OpenClaw gives cron runs the same tools as interactive runs. It relies on the standard tool policy system — not a separate restricted tool list — because the infrastructure for filtering already exists.

  In IronClaw, this means passing the existing `ToolRegistry` to `EngineContext` and building tool definitions via `tools.tool_definitions()`. Tools for which `requires_approval()` returns true should be skipped in routine execution, since there's no human to approve them. This mirrors OpenClaw's exec approval behavior: commands that aren't pre-allowed will effectively time out or fail in autonomous runs.

  ```rust
  // Filter out approval-gated tools for autonomous execution
  let tool_defs: Vec<_> = all_tool_defs
      .into_iter()
      .filter(|td| {
          if let Some(tool) = tools.get_sync(&td.name) {
              !tool.requires_approval()
          } else {
              true
          }
      })
      .collect();
  ```

  Alternative: an opt-in tool allowlist per routine (e.g., `allowed_tools: ["memory_search", "memory_write", "http"]` in `RoutineAction::Lightweight`). More conservative, but it adds schema complexity. OpenClaw doesn't do this — it trusts the global policy layer.
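If the allowlist alternative were chosen, the filter is a one-liner over tool definitions. This is a sketch; `ToolDef` is a stand-in struct, not IronClaw's actual definition type.

```rust
// Sketch of the opt-in allowlist alternative. `ToolDef` is a stand-in.
#[derive(Clone, Debug, PartialEq)]
struct ToolDef {
    name: String,
}

// Keep only tools the routine explicitly listed in `allowed_tools`.
fn filter_by_allowlist(defs: Vec<ToolDef>, allowed: &[&str]) -> Vec<ToolDef> {
    defs.into_iter()
        .filter(|d| allowed.contains(&d.name.as_str()))
        .collect()
}
```

A routine with `allowed_tools: ["memory_search", "memory_write", "http"]` would never see `shell` in its tool definitions, regardless of global policy.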
- **Opt-in via a field on `RoutineAction` (backwards-compatible)**

  Add a `use_tools` field to `RoutineAction::Lightweight`:

  ```rust
  Lightweight {
      prompt: String,
      context_paths: Vec<String>,
      max_tokens: u32,
      /// Enable tool access during execution (default: false).
      #[serde(default)]
      use_tools: bool,
  }
  ```

  Existing routines keep text-only behavior; new routines opt in. This is more conservative than OpenClaw (which gives tools unconditionally) but avoids surprising existing users.

  The `routine_create` tool schema gains a `use_tools` boolean parameter so the LLM can create tool-enabled routines conversationally.
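The backwards-compatibility guarantee hinges on `#[serde(default)]`: a routine row persisted before the field existed must deserialize as `use_tools: false`. The same contract, modeled without serde as a sketch:

```rust
// Models the #[serde(default)] behavior for the new field: a stored routine
// with no `use_tools` key behaves exactly like `use_tools: false`.
fn use_tools_from_stored(raw_field: Option<bool>) -> bool {
    raw_field.unwrap_or(false) // bool::default() is false
}
```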
- **Safety layer integration**

  Tool outputs should pass through `SafetyLayer` sanitization, same as in the interactive chat path (`dispatcher.rs:576`). Add `SafetyLayer` to `EngineContext` alongside `ToolRegistry`.
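As an illustration of what a sanitization step can look like (the real `SafetyLayer` API is not shown in this proposal, so `sanitize_tool_output` and its behavior are assumptions):

```rust
// Hypothetical sanitization step: strip control bytes and wrap the output
// so the model sees it as untrusted data rather than as instructions.
fn sanitize_tool_output(raw: &str) -> String {
    let cleaned: String = raw
        .chars()
        .filter(|c| !c.is_control() || *c == '\n')
        .collect();
    format!("<tool_output untrusted=\"true\">\n{cleaned}\n</tool_output>")
}
```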
- **Skill attenuation (if skills are active)**

  OpenClaw applies the same skill/tool policy regardless of run context. If IronClaw skills are relevant during routine execution, `attenuate_tools()` should apply the same trust-based tool ceiling as in `dispatcher.rs:156`. This prevents an installed (untrusted) skill from gaining shell access via a scheduled routine.

  Alternatively, routines could skip skill injection entirely for simplicity: skills are message-activated and routines have explicit prompts, so there's less need for dynamic skill selection.
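The trust-ceiling idea can be sketched as follows; `TrustLevel`, the per-tool minimums, and `attenuate_for_trust` are assumptions for illustration, not IronClaw's actual attenuation types.

```rust
// Trust-based tool ceiling, analogous in spirit to attenuate_tools().
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum TrustLevel {
    Untrusted, // installed third-party skill
    Trusted,   // first-party / user-authored
}

struct Tool {
    name: &'static str,
    min_trust: TrustLevel, // minimum trust required to expose this tool
}

// Expose only tools whose trust requirement the active skill meets.
fn attenuate_for_trust(tools: Vec<Tool>, skill_trust: TrustLevel) -> Vec<Tool> {
    tools.into_iter().filter(|t| t.min_trust <= skill_trust).collect()
}
```

With this shape, an untrusted skill running inside a scheduled routine keeps `memory_search` but never sees `shell`.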
## Implementation scope
- **routine_engine.rs** (~100–120 lines):
  - Add `tools: Arc<ToolRegistry>` and `safety: Arc<SafetyLayer>` to `RoutineEngine` and `EngineContext`
  - New function: a mini agentic loop in `execute_lightweight()` when `use_tools` is true — call `complete_with_tools()`, execute approved tools with sanitization, feed results back, repeat up to N iterations
  - Keep the `ROUTINE_OK` sentinel and notification behavior unchanged
- **agent_loop.rs** (~3 lines):
  - Pass `self.tools().clone()` and `self.safety().clone()` to `RoutineEngine::new()` at line 397
- **routine.rs** (~3 lines):
  - Add the `use_tools: bool` field to `RoutineAction::Lightweight`
- **builtin/routine.rs** (~5 lines):
  - Add the `use_tools` parameter to the `routine_create` tool schema
- **Tests** (~50 lines):
  - Tool-enabled execution path
  - Approval-gated tools are skipped
  - `use_tools: false` preserves existing behavior
## Code pointers
- `execute_lightweight()` — `src/agent/routine_engine.rs:422`
- `EngineContext` struct — `src/agent/routine_engine.rs:301`
- `RoutineEngine::new()` call site — `src/agent/agent_loop.rs:397`
- `RoutineAction::Lightweight` — `src/agent/routine.rs:158`
- Interactive agentic loop (reference) — `src/agent/dispatcher.rs:39`
- Tool attenuation — `src/skills/attenuation.rs`
- `routine_create` tool — `src/tools/builtin/routine.rs:27`