Demonstrates how an AI agent can dynamically extend its tool capabilities at runtime — the core use case for agentic dataflows.
Initial topology (1 tool):

```
Timer (1 Hz) --> Agent --> tool-request --> Echo Tool --> response --> Agent
```
After adding the calculator tool:

```
                                        ┌── Echo Tool ── response ──┐
Timer (1 Hz) --> Agent -- tool-request ─┤                           ├── Agent
                                        └── Calc Tool ── response ──┘
```
Both tools receive every request via the shared tool-request topic. Each tool inspects the `"tool"` field of the JSON request and handles only requests addressed to it.
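The filtering step can be sketched in plain Python. This is a hedged, in-process illustration of the logic, not the actual tool code: the helper name `handle_request` and the response shape are assumptions based on the request format above.

```python
import json

def handle_request(raw: str, my_tool: str):
    """Return a response dict if the request targets this tool, else None.

    `my_tool` is the name this tool answers to, e.g. "echo" or "calc".
    """
    request = json.loads(raw)
    if request.get("tool") != my_tool:
        return None  # another tool's request on the shared topic; ignore it
    return {"tool": my_tool, "result": request.get("message", "")}

# The echo tool answers its own requests and ignores calc requests.
print(handle_request('{"tool": "echo", "message": "hi"}', "echo"))
print(handle_request('{"tool": "calc", "expr": "1+2"}', "echo"))  # None
```

Because every subscriber sees every message, a tool that ignores foreign requests is all the coordination the shared topic needs.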
- **agent** (`agent.py`) — Sends JSON tool requests (`{"tool": "echo", "message": "..."}`) and logs responses. Cycles through echo and calc tasks.
- **echo-tool** (`echo_tool.py`) — Built in. Echoes back messages for `"tool": "echo"` requests.
- **calc-tool** (`calc_tool.py`) — Dynamically added. Evaluates math expressions for `"tool": "calc"` requests.
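A calc tool should not pass arbitrary strings to `eval`. One way to evaluate basic arithmetic safely is to walk a whitelisted AST; this is a sketch of that approach, not the actual `calc_tool.py` implementation:

```python
import ast
import operator

# Whitelisted arithmetic operators; anything outside this set raises.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate a basic arithmetic expression without exec/eval."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("2 * (3 + 4)"))  # 14
```

Anything that is not a number or a whitelisted operator (names, calls, attribute access) raises, so a malicious request cannot execute code.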
```
pip install dora-rs pyarrow
dora up
dora start examples/dynamic-agent-tools/dataflow.yml --detach --name agent-demo
```

The agent sends requests. The echo tool responds to echo requests. Calc requests go unanswered (no calc tool yet).
```
dora node add --from-yaml examples/dynamic-agent-tools/calc-tool-node.yml --dataflow agent-demo
```

After wiring the calc tool's response to the agent:
```
dora node connect --dataflow agent-demo calc-tool/response agent/tool-response
```

The agent now receives responses from both tools on the same tool-response input (fan-in). Multiple sources can map to the same target input; messages from all sources are interleaved in arrival order.
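From the agent's perspective, fan-in means a single handler serves both tools' responses. A minimal in-process sketch of that agent-side logic (the handler name and the `"tool"`/`"result"` response keys are assumptions, consistent with the request format above):

```python
import json

def on_tool_response(raw: str, log: list):
    """Agent-side handler: one callback serves both tools' responses (fan-in)."""
    response = json.loads(raw)
    log.append(f"[{response['tool']}] {response['result']}")

log = []
# Messages from echo-tool and calc-tool arrive interleaved on the same input.
on_tool_response('{"tool": "echo", "result": "hi"}', log)
on_tool_response('{"tool": "calc", "result": 7}', log)
print(log)  # ['[echo] hi', '[calc] 7']
```

Keeping the tool name inside the payload is what lets the agent tell interleaved responses apart without per-tool inputs.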
```
dora node remove agent-demo calc-tool
```

Back to echo-only: calc requests go unanswered again.
```
dora stop agent-demo
dora down
```

| Feature | How It's Used |
|---|---|
| Dynamic tool addition | Agent gains new capabilities at runtime |
| Fan-out to multiple tools | Same tool-request topic, multiple subscribers |
| Tool-specific filtering | Each tool processes only its own request type |
| Graceful removal | Removing a tool doesn't affect others |
| AI agent architecture | Matches the LLM -> Tools service pattern |
This example models the LLM function-calling pattern:
- Agent decides which tool to call (simulated by cycling through tasks)
- Agent sends a structured request (JSON with a `tool` field)
- Tool processes and responds
- Agent receives and logs the response
In production, the agent node would be an LLM inference node, and tools would be specialized nodes (web search, database query, code execution, etc.) added dynamically based on the conversation context.
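The whole loop can be condensed into a few lines of plain Python. This is an illustrative model, not the dataflow itself: the registry dict stands in for "which tool nodes currently subscribe", and the calc payload is simplified to two numbers rather than an expression string.

```python
import json

# In the dataflow each tool is its own node; this in-process registry stands
# in for "which nodes currently subscribe to tool-request".
TOOLS = {"echo": lambda req: req["message"]}

def agent_step(task: dict):
    """One agent cycle: serialize the request, dispatch, collect the response."""
    request = json.loads(json.dumps(task))      # requests travel as JSON
    handler = TOOLS.get(request["tool"])
    if handler is None:
        return None                             # no subscriber: request unanswered
    return handler(request)

print(agent_step({"tool": "echo", "message": "hello"}))   # 'hello'
print(agent_step({"tool": "calc", "a": 2, "b": 3}))       # None (calc not added)

# Equivalent of `dora node add`: the calc tool joins at runtime.
TOOLS["calc"] = lambda req: req["a"] + req["b"]
print(agent_step({"tool": "calc", "a": 2, "b": 3}))       # 5

del TOOLS["calc"]                               # equivalent of `dora node remove`
```

Adding or removing a registry entry never touches the echo handler, which mirrors why removing the calc node leaves the rest of the dataflow running.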