If you install Lobster via npm/pnpm, it installs a small shim executable named:

These shims forward to the Lobster pipeline command of the same name.

### Prerequisites

- `OPENCLAW_URL` points at a running OpenClaw gateway
- optionally `OPENCLAW_TOKEN` if auth is enabled
```bash
export OPENCLAW_URL=http://127.0.0.1:18789
# export OPENCLAW_TOKEN=...
```

### openclaw.invoke — Call any OpenClaw tool

The `openclaw.invoke` command calls any OpenClaw tool with typed arguments.

**Basic syntax:**
```bash
openclaw.invoke --tool <tool-name> --action <action> --args-json '<json-args>'
```

**Example: Send a message via OpenClaw**
```yaml
name: send-notification
steps:
  - id: notify
    command: >
      openclaw.invoke --tool message --action send --args-json '{"provider":"discord","channel":"alerts","message":"Build completed!"}'
```

**Example: List Discord channels**
```yaml
name: list-channels
steps:
  - id: list
    command: >
      openclaw.invoke --tool message --action channel-list --args-json '{"guildId":"123456789"}'
```

**Using --each to process pipeline items:**
```yaml
name: broadcast
steps:
  - id: users
    command: echo '[{"user":"alice"},{"user":"bob"}]'
  - id: notify-each
    command: >
      openclaw.invoke --tool message --action send --each --item-key to --args-json '{"provider":"discord","message":"Hello!"}'
    stdin: $users.stdout
```

### Calling the LLM task tool via openclaw.invoke

Use `openclaw.invoke` with `--tool llm-task` (or your configured LLM tool name) to call the LLM:
**Example: Simple LLM call in a workflow**
```yaml
name: daily-summary
args:
  topic:
    default: "project updates"
steps:
  - id: generate
    command: >
      openclaw.invoke --tool llm-task --action invoke --args-json '{"prompt":"Write a brief summary of today'"'"'s project updates"}'
```

**Example: LLM with structured output (JSON schema)**
```yaml
name: classify-tickets
steps:
  - id: classify
    command: >
      openclaw.invoke --tool llm-task --action invoke --args-json '{"prompt":"Classify this feedback as positive, negative, or neutral","output_schema":{"type":"object","required":["sentiment"],"properties":{"sentiment":{"type":"string","enum":["positive","negative","neutral"]}}}}'
```

**Example: LLM with input artifacts**
```yaml
name: summarize-article
args:
  article:
    default: ""
steps:
  - id: prepare
    env:
      ARTICLE: "$LOBSTER_ARG_ARTICLE"
    command: |
      jq -n --arg text "$ARTICLE" '{"kind":"text","text":$text}'
  - id: summarize
    command: >
      openclaw.invoke --tool llm-task --action invoke --args-json '{"prompt":"Summarize this article in 3 bullet points"}'
    stdin: $prepare.stdout
```
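The envelope built by the `prepare` step can be checked outside a workflow. A minimal standalone sketch of the same `jq` invocation (the sample `ARTICLE` text is invented for illustration):

```shell
# Build the {"kind":"text",...} envelope exactly as the `prepare` step does.
# Newlines and quotes inside $ARTICLE are JSON-escaped by jq automatically.
ARTICLE='First line.
A "quoted" second line.'
ENVELOPE=$(jq -cn --arg text "$ARTICLE" '{"kind":"text","text":$text}')
echo "$ENVELOPE"
```

Because `--arg` treats its value as an opaque string, no manual escaping is needed no matter what the article contains.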

**Example: Daily standup with Jira tickets (from issue #26)**
```yaml
name: daily-standup
args:
  team:
    default: "CLAW"
  project:
    default: "E-commerce"
  limit:
    default: "10"

steps:
  - id: list-tickets
    command: >
      jira issues search "" --status Todo 2>/dev/null |
      jq -s '[.[][] | {id: .identifier, title, status: .state.name, priority: .priority, assignee: (.assignee.name // "unassigned")}] | sort_by(.priority) | .[:20]'

  - id: summarize
    env:
      TEAM: "$LOBSTER_ARG_TEAM"
      PROJECT: "$LOBSTER_ARG_PROJECT"
      LIMIT: "$LOBSTER_ARG_LIMIT"
    command: |
      # Read ticket data from stdin, build args JSON safely with jq
      TICKETS=$(cat)
      ARGS=$(jq -n --argjson tickets "$TICKETS" --arg team "$TEAM" --arg project "$PROJECT" --arg limit "$LIMIT" '{"prompt":("Summarize the top " + $limit + " most urgent tickets for the daily standup. Team: " + $team + ", Project: " + $project),"context":$tickets}')
      openclaw.invoke --tool llm-task --action invoke --args-json "$ARGS"
    stdin: $list-tickets.stdout
```
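The `jq` reshaping in `list-tickets` can be tried with canned data. The two sample tickets below are invented; in the real workflow the input comes from `jira issues search`, which emits JSON arrays (hence the `-s` slurp plus `.[][]` to flatten them):

```shell
# Two hypothetical result batches, each a JSON array as jira would emit.
TICKETS='[{"identifier":"CLAW-7","title":"Fix login bug","state":{"name":"Todo"},"priority":2,"assignee":{"name":"alice"}}]
[{"identifier":"CLAW-3","title":"Ship checkout","state":{"name":"Todo"},"priority":1,"assignee":null}]'

# Same filter as the workflow step: flatten, reshape, sort, truncate.
SORTED=$(echo "$TICKETS" | jq -sc '[.[][] | {id: .identifier, title, status: .state.name, priority: .priority, assignee: (.assignee.name // "unassigned")}] | sort_by(.priority) | .[:20]')
echo "$SORTED"
```

Note how `.assignee.name // "unassigned"` handles the null assignee without erroring, thanks to jq's null propagation.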

### Passing data between steps (no temp files)

Use `stdin: $stepId.stdout` to pipe output from one step into the next:

```yaml
steps:
  - id: fetch
    command: curl -s https://api.example.com/data
  - id: transform
    command: jq '.items'
    stdin: $fetch.stdout  # Pipe previous step's output to stdin
  - id: analyze
    command: openclaw.invoke --tool llm-task --action invoke --args-json '{"prompt":"Analyze this data"}'
    stdin: $transform.stdout  # Chain multiple steps
```

Access JSON output with `$stepId.json`:
```yaml
steps:
  - id: parse
    command: echo '{"count": 42}'
  - id: report
    command: |
      COUNT=$(echo '$parse.json' | jq -r '.count')
      openclaw.invoke --tool llm-task --action invoke --args-json "{\"prompt\":\"The count is $COUNT. Write a brief status report.\"}"
```
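The `report` step's string interpolation stays valid JSON only because `.count` is numeric; for arbitrary text, use the `jq -n --arg` pattern instead. A standalone sketch of the extraction (the `PARSE_JSON` value stands in for `$parse.json`):

```shell
# Extract a field from a previous step's JSON output.
PARSE_JSON='{"count": 42}'
COUNT=$(echo "$PARSE_JSON" | jq -r '.count')

# Interpolating a number into a JSON template is safe; text would not be.
ARGS="{\"prompt\":\"The count is $COUNT. Write a brief status report.\"}"
echo "$ARGS" | jq -e . > /dev/null  # fails if interpolation broke the JSON
```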

### Accessing workflow arguments safely

For shell-safe argument handling, use environment variables:

```yaml
args:
  text:
    default: ""
  user:
    default: "unknown"
steps:
  - id: safe
    env:
      TEXT: "$LOBSTER_ARG_TEXT"
      USER: "$LOBSTER_ARG_USER"
    command: |
      # Use jq to safely JSON-escape arguments before passing to --args-json
      ARGS=$(jq -n --arg text "$TEXT" --arg user "$USER" '{"prompt":("Process this: " + $text + " for user " + $user)}')
      openclaw.invoke --tool llm-task --action invoke --args-json "$ARGS"
```
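To see why this matters, run the escaping step on hostile input outside a workflow (the sample values below are invented):

```shell
# Values with quotes and apostrophes that would break naive string
# interpolation into a JSON template.
TEXT='He said "run rm -rf" and left'
USER="o'brien"

# jq escapes both safely; the result is always one valid JSON document.
ARGS=$(jq -cn --arg text "$TEXT" --arg user "$USER" '{"prompt":("Process this: " + $text + " for user " + $user)}')
echo "$ARGS"
```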

## Cookbook: Common Patterns

### Pattern 1: Fetch → LLM → Notify

```yaml
name: fetch-analyze-notify
args:
  url:
    default: "https://api.example.com/news"
  channel:
    default: "general"
steps:
  - id: fetch
    command: curl -s "$LOBSTER_ARG_URL"

  - id: analyze
    command: >
      openclaw.invoke --tool llm-task --action invoke --args-json '{"prompt":"Summarize this content in 3 key points"}'
    stdin: $fetch.stdout

  - id: notify
    env:
      CHANNEL: "$LOBSTER_ARG_CHANNEL"
    command: |
      # Use jq to safely JSON-escape the LLM output
      MSG=$(cat)
      PAYLOAD=$(jq -n --arg msg "$MSG" --arg channel "$CHANNEL" '{"provider":"discord","channel":$channel,"message":$msg}')
      openclaw.invoke --tool message --action send --args-json "$PAYLOAD"
    stdin: $analyze.stdout
```

### Pattern 2: Approval workflow with LLM recommendation

```yaml
name: approve-with-llm
steps:
  - id: gather
    command: echo '{"amount": 5000, "requester": "alice", "reason": "New equipment"}'

  - id: recommend
    command: >
      openclaw.invoke --tool llm-task --action invoke --args-json '{"prompt":"Should this expense request be approved? Answer yes or no with a brief reason."}'
    stdin: $gather.stdout

  - id: approve
    command: >
      echo '{"requiresApproval":{"prompt":"Approve expense based on LLM recommendation?","items":[]}}'
    stdin: $recommend.stdout
    approval: required

  - id: finalize
    command: echo "Expense processed"
    condition: $approve.approved
```

### Pattern 3: Batch processing with --each

```yaml
name: batch-translate
args:
  texts:
    default: '["Hello", "Goodbye", "Thank you"]'
  target_lang:
    default: "Spanish"
steps:
  - id: prepare-items
    command: echo "$LOBSTER_ARG_TEXTS"

  - id: translate-all
    env:
      TARGET_LANG: "$LOBSTER_ARG_TARGET_LANG"
    command: |
      openclaw.invoke --tool llm-task --action invoke --each --item-key text --args-json "{\"prompt\":\"Translate to $TARGET_LANG:\"}"
    stdin: $prepare-items.stdout
```

### Pattern 4: Conditional steps based on LLM output

Lobster conditions only support `true`/`false` literals or `$<stepId>.approved|skipped`. For complex conditional logic, use the approval mechanism:

```yaml
name: smart-router
steps:
  - id: classify
    command: >
      openclaw.invoke --tool llm-task --action invoke --args-json '{"prompt":"Classify this message as urgent or normal. Reply with ONLY the word urgent or normal.","output_schema":{"type":"object","required":["priority"],"properties":{"priority":{"type":"string"}}}}'

  - id: route
    command: |
      PRIORITY=$(echo '$classify.json' | jq -r '.priority // "normal"')
      if [ "$PRIORITY" = "urgent" ]; then
        echo '{"requiresApproval":{"prompt":"Urgent item detected! Send to urgent channel?","items":[]}}'
      else
        echo "Added to normal queue"
      fi
    approval: required

  - id: handle-urgent
    command: openclaw.invoke --tool message --action send --args-json '{"provider":"discord","channel":"urgent","message":"Urgent item detected!"}'
    condition: $route.approved
```

**Note:** For more complex branching, consider using separate workflows or shell logic within a single step.
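The routing shell logic inside the `route` step can be exercised on its own. The `CLASSIFY_JSON` value below stands in for the `$classify.json` output a real run would substitute:

```shell
# Stand-in for the classifier's JSON output.
CLASSIFY_JSON='{"priority":"urgent"}'

# Same logic as the `route` step: default to "normal" if the field is missing.
PRIORITY=$(echo "$CLASSIFY_JSON" | jq -r '.priority // "normal"')
if [ "$PRIORITY" = "urgent" ]; then
  DECISION='{"requiresApproval":{"prompt":"Urgent item detected! Send to urgent channel?","items":[]}}'
else
  DECISION="Added to normal queue"
fi
echo "$DECISION"
```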

### Pattern 5: Retry with different models (manual approval on failure)

This pattern pauses for manual approval before using the fallback model. Because `approval: required` always pauses, and approval alone gates the `fallback` step, you will be prompted even when the primary call succeeds; approve only when you want the fallback to run.

```yaml
name: robust-llm-call
steps:
  - id: try-primary
    command: |
      openclaw.invoke --tool llm-task --action invoke --args-json '{"model":"claude-3-opus","prompt":"Complex analysis task"}' || echo '{"error": true}'

  - id: on-error
    # This step always pauses. Approving runs the fallback model;
    # rejecting stops the workflow without running it.
    command: |
      if echo '$try-primary.json' | jq -e '.error' > /dev/null 2>&1; then
        echo '{"requiresApproval":{"prompt":"Primary model failed. Approve to retry with the fallback model?","items":[]}}'
      else
        echo '{"requiresApproval":{"prompt":"Primary model succeeded. Reject to finish, or approve to run the fallback anyway?","items":[]}}'
      fi
    approval: required

  - id: fallback
    command: >
      openclaw.invoke --tool llm-task --action invoke --args-json '{"model":"claude-3-sonnet","prompt":"Complex analysis task"}'
    # Approval alone gates this step; Lobster conditions cannot also check
    # whether the primary call failed, so approving after a successful
    # primary call will still run the fallback.
    condition: $on-error.approved
```

**Automatic retry without approval** (uses shell fallback logic):

```yaml
name: auto-retry-llm-call
steps:
  - id: call-with-fallback
    command: |
      # Try primary model; if it fails, automatically use fallback
      openclaw.invoke --tool llm-task --action invoke --args-json '{"model":"claude-3-opus","prompt":"Complex analysis task"}' || \
      openclaw.invoke --tool llm-task --action invoke --args-json '{"model":"claude-3-sonnet","prompt":"Complex analysis task"}'
```
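The automatic fallback relies on standard shell short-circuiting: the command after `||` runs only when the one before it exits nonzero. A minimal stand-in, with `false` playing the failing primary call:

```shell
# Primary fails (false exits 1), so the fallback echo runs.
PRIMARY_FAILS=$(false || echo "fallback output")
echo "$PRIMARY_FAILS"

# Primary succeeds (echo exits 0), so the fallback never runs.
PRIMARY_SUCCEEDS=$(echo "primary output" || echo "fallback output")
echo "$PRIMARY_SUCCEEDS"
```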

## Args and shell-safety
