Give your coding agent superpowers over your Metaflow workflows. Instead of writing throwaway scripts to check run status or dig through logs, just ask -- your agent will figure out the rest.
Works with any Metaflow backend: local, S3, Azure, GCS, or Netflix internal.
| Tool | Description |
|---|---|
| `get_config` | What backend am I connected to? (also returns your default namespace) |
| `list_flows` | What flows exist in a namespace? |
| `search_runs` | Find recent runs of any flow |
| `get_run` | Step-by-step breakdown of a run |
| `get_task_logs` | Pull stdout/stderr from a task |
| `list_artifacts` | What did this step produce? |
| `get_artifact` | Grab an artifact's value |
| `get_latest_failure` | What broke and why? |
| `search_artifacts` | Which runs produced a named artifact? |
```
pip install metaflow-mcp-server
claude mcp add --scope user metaflow -- metaflow-mcp-server
```

That's it. Restart Claude Code and start asking questions about your flows.
To upgrade:

```
pip install --upgrade metaflow-mcp-server
```

Then restart Claude Code (or reconnect via `/mcp`) to pick up the new version.
If Metaflow lives in a specific venv, point to it:

```
claude mcp add --scope user metaflow -- /path/to/venv/bin/metaflow-mcp-server
```

For other MCP clients, the server speaks stdio: `metaflow-mcp-server`
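Over that stdio channel, MCP clients exchange JSON-RPC 2.0 messages. A minimal sketch of the `tools/call` request a client might write to the server's stdin (the tool name comes from the table above; the exact argument key is an assumption for illustration):

```python
import json

# JSON-RPC 2.0 request an MCP client would send over stdin.
# "tools/call" is the standard MCP method for invoking a tool;
# the "flow_name" argument key is assumed here for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_runs",
        "arguments": {"flow_name": "HelloFlow"},
    },
}

# Messages travel as JSON over stdin/stdout.
wire = json.dumps(request)
print(wire)
```

The server's reply comes back on stdout as a matching JSON-RPC response keyed by the same `id`.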
Wraps the Metaflow client API, so the server uses whatever backend your Metaflow installation is pointed at -- no separate config needed. It calls `namespace(None)` at startup so production runs (Argo, Step Functions, Maestro) are visible alongside your dev runs.
Starts once per session and communicates over stdin/stdout. No daemon, no port.
Apache-2.0
