FastMCP server exposing a focused set of Google Meridian model-analysis tools for agents.
This project wraps Google Meridian models behind a small, read-only MCP surface so agents can discover available models, inspect model setup, and request structured analysis outputs without needing to understand Meridian's internal APIs directly.
It is designed for both local development and containerized deployment. The current tool surface covers model discovery, model overview metadata, training data extraction, channel summaries, contribution outputs, adstock decay outputs, and response curves.
Meridian currently targets Python 3.11 or 3.12.
```
python3.11 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
python -m pip install -e .[dev]
```

Create `.env` in the repository root:

```
cp .env.example .env
```

For local filesystem-backed development, a minimal `.env` looks like this:
```
MCP_TRANSPORT=streamable-http
MCP_HOST=127.0.0.1
MCP_PORT=8000
PERSISTENCE_BACKEND=local
LOCAL_MODELS_ROOT=./models
MODEL_CACHE_ROOT=/tmp/mmm-models
DISCOVERY_TTL_SECONDS=7200
RESULT_CACHE_ENABLED=true
```

`.env` belongs at the project root because the runtime loads it from there explicitly.
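As a rough sketch of how such a file might be consumed, assuming a simple key=value parser (the actual runtime's loader may use python-dotenv or similar and handle quoting differently):

```python
def load_env_file(path: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and comments.

    Hypothetical helper for illustration only; the real loader may
    support quoting, interpolation, and other dotenv features.
    """
    settings: dict[str, str] = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings
```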
Both flat and nested layouts are supported. Nested directories are usually clearer.

```
models/
├── geo-revenue/
│   └── model.binpb
└── experiment-a/
    └── model.pkl
```

The catalog will expose those examples as model IDs like `geo-revenue` and `experiment-a`.
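The mapping from files to model IDs could be sketched like this (a simplified illustration; the server's actual discovery logic, supported extensions, and caching behavior may differ):

```python
from pathlib import Path

# Assumed model-file extensions for this sketch, based on the examples above.
MODEL_EXTENSIONS = {".pkl", ".binpb"}

def discover_model_ids(models_root: str) -> list[str]:
    """Map model files under models_root to catalog IDs.

    Illustrative only: a flat file (models/foo.pkl) maps to "foo",
    and a nested file (models/foo/model.pkl) maps to "foo".
    """
    root = Path(models_root)
    ids: set[str] = set()
    for path in root.rglob("*"):
        if path.is_file() and path.suffix in MODEL_EXTENSIONS:
            ids.add(path.stem if path.parent == root else path.parent.name)
    return sorted(ids)
```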
Run the server directly:

```
python -m google_meridian_mcp_server.server
```

For interactive Inspector testing, the repository includes `fastmcp.json`, so from the project root you can run:

```
fastmcp dev inspector ./src/google_meridian_mcp_server/server.py --with-editable .
```

The external config value remains `streamable-http`. Internally, the current FastMCP runtime is started with its HTTP transport and binds to `MCP_HOST` and `PORT` or `MCP_PORT`.
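The binding behavior described above can be sketched as follows (a hypothetical helper, not the server's actual startup code; the precedence of `PORT` over `MCP_PORT` is assumed from the Cloud Run notes later in this document):

```python
def resolve_bind_address(env: dict[str, str]) -> tuple[str, int]:
    """Pick the HTTP bind address the way the text describes:
    MCP_HOST for the host, with an injected PORT (e.g. from Cloud Run)
    taking precedence over MCP_PORT, and sensible local defaults.
    Illustrative sketch only."""
    host = env.get("MCP_HOST", "127.0.0.1")
    port = int(env.get("PORT") or env.get("MCP_PORT") or 8000)
    return host, port
```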
The current MCP surface includes:
- `list_models`
- `get_model_overview`
- `get_training_data`
- `get_channel_summary`
- `get_contribution`
- `get_adstock_decay`
- `get_response_curves`
Every tool is annotated as read-only and uses typed parameters with documented validation metadata so the generated schema is stricter and easier for agents to call correctly.
Tool responses are canonical JSON payloads. For row-oriented analysis tools, the response includes `model_id`, `row_count`, `data`, any selector fields such as `output_type` or `datasets`, and a `result_metadata` block that lists the detected columns, dimensions, and measures for the returned rows.
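A row-oriented response might therefore look like the following (hypothetical values and column names, shown only to illustrate the field layout described above):

```python
# Illustrative payload shape; the actual columns depend on the tool and model.
example_response = {
    "model_id": "geo-revenue",      # which model produced the rows
    "output_type": "revenue",       # selector field echoed back (when applicable)
    "row_count": 2,
    "data": [
        {"channel": "search", "value": 1234.5},
        {"channel": "display", "value": 678.9},
    ],
    "result_metadata": {
        "columns": ["channel", "value"],
        "dimensions": ["channel"],
        "measures": ["value"],
    },
}
```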
get_model_overview returns the model's time range, geo scope, channel/input groups, flattened data schema, and the supported dataset/output-type values for the other analysis tools.
get_training_data accepts one or more dataset keys and returns a single merged result set for the requested selections.
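The merge behavior can be sketched as follows (simplified; the real dataset keys and any column alignment are handled by the server):

```python
def merge_dataset_rows(selected: dict[str, list[dict]]) -> list[dict]:
    """Combine rows from several requested datasets into one result set,
    tagging each row with the dataset it came from. Hypothetical helper
    illustrating the single-merged-result behavior described above."""
    merged: list[dict] = []
    for dataset, rows in selected.items():
        for row in rows:
            merged.append({"dataset": dataset, **row})
    return merged
```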
Grouped analysis tools return posterior-only rows. Prior rows are removed from tool results,
and the transport payloads do not include a distribution field.
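In terms of row filtering, that behavior amounts to something like the following (an illustrative sketch; field names other than `distribution` are hypothetical):

```python
def posterior_only(rows: list[dict]) -> list[dict]:
    """Drop prior rows and strip the distribution field from the rest,
    matching the posterior-only transport payloads described above."""
    result: list[dict] = []
    for row in rows:
        if row.get("distribution") == "prior":
            continue  # prior rows are removed from tool results
        result.append({k: v for k, v in row.items() if k != "distribution"})
    return result
```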
get_channel_summary exposes:
- `baseline_summary_metrics`
- `paid_summary_metrics`
- `roi`
- `cpik`
- `marginal_roi`
- `marginal_cpik`
get_adstock_decay exposes:
- `adstock_decay`
- `alpha_summary`
get_response_curves exposes:
- `response_curves`, which returns numeric curve rows including spend, spend multiplier, metric, and incremental outcome
- `response_curve_summary`, which returns numeric summarized rows keyed by channel, spend, and spend multiplier with `mean`, `ci_lo`, and `ci_hi`
Run tests:

```
pytest
```

Run Ruff:

```
ruff check src tests
ruff format src tests
```

When using the GCS backend, authenticate with Application Default Credentials locally:
```
gcloud auth application-default login
```

Then set these variables in `.env`:
```
PERSISTENCE_BACKEND=gcs
GCS_BUCKET=my-project.appspot.com
GCS_MODELS_PREFIX=models
```

Build locally:
```
docker build -t google-meridian-mcp-server .
```

Run locally in Docker:

```
docker run --rm -p 8080:8080 --env-file .env -e MCP_HOST=0.0.0.0 google-meridian-mcp-server
```

The container listens on 0.0.0.0 and respects the injected PORT environment variable.
Cloud Run is usually the best fit when this server is deployed with the GCS backend. That keeps model files outside the container image, avoids rebuilds when models change, and works well with Cloud Run's ephemeral filesystem.
Before deploying:

- Create or choose a GCS bucket and prefix that hold your Meridian models.
- Grant the Cloud Run service account access to read those objects (for example, `roles/storage.objectViewer`).
- Create an Artifact Registry repository for the image if you do not already have one.
Build and publish the container:

```
gcloud builds submit \
    --tag REGION-docker.pkg.dev/PROJECT_ID/REPOSITORY/google-meridian-mcp-server
```

Deploy to Cloud Run:
```
gcloud run deploy google-meridian-mcp-server \
    --image REGION-docker.pkg.dev/PROJECT_ID/REPOSITORY/google-meridian-mcp-server \
    --region REGION \
    --platform managed \
    --service-account CLOUD_RUN_SERVICE_ACCOUNT \
    --set-env-vars=MCP_TRANSPORT=streamable-http,MCP_HOST=0.0.0.0,PERSISTENCE_BACKEND=gcs,GCS_BUCKET=MY_BUCKET,GCS_MODELS_PREFIX=models,MODEL_CACHE_ROOT=/tmp/mmm-models,DISCOVERY_TTL_SECONDS=7200,RESULT_CACHE_ENABLED=true
```

Cloud Run injects the PORT environment variable automatically, and the server already uses that value when it starts its HTTP transport.
If you need a public endpoint, add `--allow-unauthenticated` to the deploy command. If the service should stay private, keep IAM restricted and put it behind your existing gateway or identity layer.
Using the local backend on Cloud Run is only practical when models are baked into the image at build time. For most deployments, `PERSISTENCE_BACKEND=gcs` is the safer and simpler default.