Progressive disclosure MCP pattern: Load tools on-demand (150k→2k tokens, 98.7% reduction). Lazy server connections, auto-gen Python wrappers, efficient data retrieval, filesystem tool discovery. Scripts fetch data; LLM handles processing/summarization in follow-up turns.
- `uv run mcp-generate` - Gen Python wrappers from `mcp_config.json`
- `uv run mcp-discover` - Gen Pydantic types from actual API responses (see `discovery_config.json`)
- `uv run mcp-exec <script.py>` - Run script w/ MCP
- Example scripts: `workspace/example_progressive_disclosure.py`, `tests/integration/test_*.py`
- User scripts go in: `workspace/` (gitignored)
- `src/runtime/mcp_client.py` - `McpClientManager`: lazy loading, `initialize()` loads config only, `call_tool()` connects on-demand, tool format `"serverName__toolName"`, singleton via `get_mcp_client_manager()`
- `src/runtime/harness.py` - Exec harness: asyncio, MCP init, signal handlers, cleanup
- `src/runtime/generate_wrappers.py` - Auto-gen: connects all servers, introspects schemas, generates `servers/<server>/<tool>.py` + `__init__.py`
- `src/runtime/discover_schemas.py` - Schema discovery: calls safe read-only tools, generates `servers/<server>/discovered_types.py` from real responses
- `src/runtime/normalize_fields.py` - Field normalization: auto-converts inconsistent API field casing (e.g., ADO: `system.parent` → `System.Parent`)
`servers/` (gitignored, regen w/ `uv run mcp-generate`):

```
servers/<serverName>/<toolName>.py         # Pydantic models, async wrapper
servers/<serverName>/__init__.py           # Barrel exports
servers/<serverName>/discovered_types.py   # Optional: Pydantic types from actual API responses
```
`mcp_config.json` format:

```json
{"mcpServers": {"name": {"command": "command", "args": ["arg1"], "env": {}}}}
```

`discovery_config.json` format (optional, for schema discovery):

```json
{"servers": {"name": {"safeTools": {"tool_name": {"param1": "value"}}}}}
```

Add server: edit `mcp_config.json` → `uv run mcp-generate` → `from servers.name import tool_name` → auto-connect on first call

Optional schema discovery: copy `discovery_config.example.json` → edit w/ safe read-only tools + real params → `uv run mcp-discover` → `from servers.name.discovered_types import ToolNameResult`
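A loader for the `mcpServers` shape above can be small; this is a sketch (the project's actual loader lives in `src/runtime/`, and the defaults for `args`/`env` are assumptions based on the format shown):

```python
import json
from typing import Any, Dict

def load_mcp_servers(text: str) -> Dict[str, Any]:
    """Parse mcp_config.json content and normalize each server entry
    (sketch; the real loader may differ)."""
    servers = json.loads(text)["mcpServers"]
    for name, spec in servers.items():
        if "command" not in spec:
            raise ValueError(f"{name}: missing 'command'")
        spec.setdefault("args", [])  # assumed default
        spec.setdefault("env", {})   # assumed default
    return servers
```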
Script pattern (`workspace/` for user scripts, `tests/` for examples):

```python
from servers.name import tool_name
from servers.name.discovered_types import ToolNameResult  # optional

result = await tool_name(params)  # Pydantic model
# Use defensive coding: result.field or fallback
# Return data - LLM can process/summarize in follow-up interactions
# Not all processing needs to happen in-script
```

- Tool ID: `"serverName__toolName"` (double underscore)
- Progressive disclosure: list `servers/` → read needed tools → lazy connect → fetch data → LLM processes
- Processing flexibility: scripts can return raw data for the LLM to process, pre-process for efficiency, or reshape for chaining tool calls - choose based on use case
- Type gen: Pydantic models for all schemas, handles primitives, unions, nested objects, required/optional, docstrings
- Schema discovery: only use safe read-only tools (never mutations); discovered types are hints (fields marked `Optional`), so still use defensive coding
- Field normalization: auto-applied per server (e.g., ADO normalizes all fields to PascalCase for consistency)
- Python: asyncio for concurrency, Pydantic for validation, mypy for type safety
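The ADO normalization note above amounts to upper-casing the first letter of each dot-separated segment; a sketch (the real `normalize_fields.py` may handle more cases than this):

```python
def normalize_field(name: str) -> str:
    """Sketch: PascalCase each dot-separated segment so inconsistent
    API casing like 'system.parent' becomes 'System.Parent'."""
    return ".".join(seg[:1].upper() + seg[1:] for seg in name.split("."))
```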
- "MCP server not configured": check `mcp_config.json` keys
- "Connection closed": verify server command with `which <command>`
- Missing wrappers: `uv run mcp-generate`
- Import errors: ensure `src/` in `sys.path` (harness handles this)
- Type checking: `uv run mypy src/` for validation
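The first two checks above can be automated with a small pre-flight helper; this sketch (the `preflight` name is made up) uses `shutil.which` to mirror the `which <command>` check:

```python
import shutil
from typing import Any, Dict, List

def preflight(config: Dict[str, Any]) -> List[str]:
    """Sketch: flag misconfigured servers before running a script.
    Covers the "not configured" and command-resolution checks above."""
    problems: List[str] = []
    servers = config.get("mcpServers", {})
    if not servers:
        problems.append("no mcpServers configured")
    for name, spec in servers.items():
        cmd = spec.get("command", "")
        if not cmd or shutil.which(cmd) is None:
            problems.append(f"{name}: command not on PATH: {cmd!r}")
    return problems
```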