Read the full article here: MCP Servers over Streamable HTTP (Step-by-Step)
This repository provides a complete, production-ready example of building and deploying MCP (Model Context Protocol) servers using Python, mcp, FastAPI, and uvicorn. You'll learn how to:
- Build MCP servers with custom tools and functions
- Expose tools over HTTP using streamable transport
- Test MCP servers locally with the MCP Inspector
- Deploy MCP servers to production (e.g., Render)
- Connect MCP servers to AI assistants like Cursor
- Mount multiple MCP servers in a single FastAPI application
Project structure:

```
.
├── docs/                     # Documentation assets and diagrams
│   └── mcp-client-server.png # MCP architecture diagram
├── fast_api/                 # Multi-server FastAPI setup
│   ├── crewai_docs_server.py # CrewAI documentation MCP server
│   ├── echo_server.py        # Simple echo tool MCP server
│   ├── math_server.py        # Math operations MCP server
│   ├── server.py             # FastAPI app mounting all servers
│   └── tavily_server.py      # Tavily web search MCP server
├── services/                 # Shared services and clients
│   ├── __init__.py
│   ├── github_client.py      # GitHub API client for docs
│   └── search_engine.py      # Documentation search engine
├── utils/                    # Utility functions
│   ├── __init__.py
│   └── doc_parser.py         # MDX parsing utilities
├── .gitignore
├── .python-version           # Python 3.11.0
├── CLAUDE.md                 # Codebase documentation for AI assistants
├── pyproject.toml            # Project dependencies and metadata
├── README.md                 # This file
├── runtime.txt               # Python runtime specification for deployment
├── server.py                 # Standalone Tavily search server
└── uv.lock                   # Dependency lockfile for uv
```

Prerequisites:

- Python 3.11+ (3.12+ recommended)
- uv package manager (recommended)
- Tavily API key for web search functionality (get one at tavily.com)
- OpenAI API key for semantic search (get one at platform.openai.com)
- Install uv (if not already installed):
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

- Clone the repository and install dependencies:

```bash
git clone https://github.com/yourusername/CrewAIDocsMCP.git
cd CrewAIDocsMCP
uv sync
```

- Set up environment variables (a sketch for loading them in Python follows):

```bash
echo "TAVILY_API_KEY=your_tavily_api_key_here" > .env
echo "OPENAI_API_KEY=your_openai_api_key_here" >> .env
```
The simplest way to create an MCP server is using the FastMCP class:

```python
from mcp.server.fastmcp import FastMCP

# Create server instance
mcp = FastMCP("my-server", host="0.0.0.0", port=10000)

# Define tools using decorators
@mcp.tool()
async def my_tool(query: str) -> str:
    """Tool description shown to the AI"""
    return f"Processed: {query}"

# Run the server
mcp.run(transport="streamable-http")
```
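With `transport="streamable-http"`, the server listens on the configured host and port and serves its MCP endpoint under /mcp/ (e.g., http://localhost:10000/mcp/); that is the URL clients and the MCP Inspector connect to.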
Single MCP server (Tavily search):

```bash
uv run server.py
```

CrewAI Documentation server:

```bash
PYTHONPATH=. uv run python fast_api/crewai_docs_server.py
```

Multiple MCP servers via FastAPI:

```bash
PYTHONPATH=. uv run python fast_api/server.py
```

This mounts (see the sketch after this list):
- Echo server at http://localhost:8000/echo/mcp/
- Math server at http://localhost:8000/math/mcp/
- Tavily search at http://localhost:8000/tavily/mcp/
- CrewAI docs at http://localhost:8000/crewai/mcp/
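For orientation, here is a rough sketch of what this multi-server mounting can look like. It is not the repo's literal fast_api/server.py: the imports and the streamable_http_app() helper are assumptions (the method for obtaining a FastMCP server's ASGI app varies across mcp SDK versions, and the repo itself uses get_app(with_lifespan=False), shown later). Depending on the SDK version, you may also need to start each server's session manager from the parent app's lifespan.

```python
# Sketch of a multi-server FastAPI app (not the repo's literal code).
import os

import uvicorn
from fastapi import FastAPI

# Assumes each module exposes a FastMCP instance named `mcp`
from echo_server import mcp as echo_mcp
from math_server import mcp as math_mcp

app = FastAPI()

# Mounting under a prefix yields endpoints like /echo/mcp/ and /math/mcp/
app.mount("/echo", echo_mcp.streamable_http_app())
app.mount("/math", math_mcp.streamable_http_app())

if __name__ == "__main__":
    # Render (and similar hosts) inject PORT; default to 8000 locally
    uvicorn.run(app, host="0.0.0.0", port=int(os.getenv("PORT", 8000)))
```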
The MCP Inspector is the recommended tool for testing MCP servers during development.
- Install the MCP Inspector globally:
```bash
npm install -g @modelcontextprotocol/inspector
```

- Launch the inspector for a single server:

```bash
npx @modelcontextprotocol/inspector http://localhost:10000/mcp/
```

Note: append /mcp/ to your server URL.
- Testing multiple servers mounted on FastAPI:
When testing servers mounted on different paths, modify the URL accordingly:
```bash
# Test the echo server
npx @modelcontextprotocol/inspector http://localhost:8000/echo/mcp/

# Test the math server
npx @modelcontextprotocol/inspector http://localhost:8000/math/mcp/

# Test the CrewAI documentation server
npx @modelcontextprotocol/inspector http://localhost:8000/crewai/mcp/

# Test the Tavily search server
npx @modelcontextprotocol/inspector http://localhost:8000/tavily/mcp/
```

Alternatively, run the inspector through the MCP CLI:

```bash
# Add MCP CLI support to the project
uv add 'mcp[cli]'

# Run the inspector via uv
uv run mcp dev server.py
```

Then navigate to the URL shown (e.g., http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=...).
This project is configured for easy deployment to Render.
1. Create a new Web Service on Render
2. Connect your GitHub repository
3. Configure the service:
   - Build Command: `uv sync`
   - Start Command: `PYTHONPATH=. uv run python fast_api/server.py`
   - Environment: Python 3
   - Instance Type: Free or paid tier based on your needs
4. Add environment variables:
   - `TAVILY_API_KEY`: Your Tavily API key
   - `OPENAI_API_KEY`: Your OpenAI API key for embeddings
   - `PORT`: Set by Render automatically
   - Any other required secrets
5. Deploy: Render will automatically deploy your service
The FastAPI server automatically uses the PORT environment variable:
```python
port = int(os.getenv("PORT", 8000))
```

To deploy with Docker, create a Dockerfile:

```dockerfile
FROM python:3.11-slim

# Install uv
RUN pip install uv

WORKDIR /app
COPY . .

# Install dependencies
RUN uv sync

# Expose port
EXPOSE 8000

# Run the server
CMD ["sh", "-c", "PYTHONPATH=. uv run python fast_api/server.py"]
```

For Procfile-based platforms (e.g., Heroku), create a Procfile:

```
web: PYTHONPATH=. uv run python fast_api/server.py
```

For other platforms, use a similar configuration: `uv sync` as the build command and `PYTHONPATH=. uv run python fast_api/server.py` as the start command.
- Open Cursor Settings → MCP Servers
- Add your server configuration:
For local development:
```json
{
  "mcpServers": {
    "tavily-search": {
      "url": "http://localhost:10000/mcp/"
    }
  }
}
```

For deployed servers:
```json
{
  "mcpServers": {
    "tavily-search": {
      "url": "https://your-app.onrender.com/mcp/"
    }
  }
}
```

Multiple servers configuration:
```json
{
  "mcpServers": {
    "echo-server": {
      "url": "http://localhost:8000/echo/mcp/"
    },
    "math-server": {
      "url": "http://localhost:8000/math/mcp/"
    }
  }
}
```

Note: include the trailing / in the URL.
Tavily search server (server.py):
- Tool: `web_search` - Search the web using the Tavily API (a sketch follows below)
- Port: 10000 (standalone)
- Requires: `TAVILY_API_KEY` environment variable
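A minimal sketch of what such a tool can look like, assuming the tavily-python client package; the actual implementation in server.py may differ:

```python
# Illustrative only: a web_search tool built on tavily-python.
import os

from mcp.server.fastmcp import FastMCP
from tavily import TavilyClient

mcp = FastMCP("tavily-search", host="0.0.0.0", port=10000)
tavily = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

@mcp.tool()
async def web_search(query: str) -> str:
    """Search the web using the Tavily API."""
    response = tavily.search(query)
    # Format each hit as "title: url" on its own line
    return "\n".join(f"{r['title']}: {r['url']}" for r in response["results"])

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
```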
CrewAI documentation server:
- Tools:
  - `search_crewai_docs` - AI-powered semantic search using OpenAI embeddings
  - `get_search_suggestions` - Example queries for semantic search
  - `get_search_status` - Check indexing status and progress
  - `list_available_concepts` - Dynamically discovered concept list
  - `get_concept_docs` - Get documentation for specific concepts (auto-discovered)
  - `get_code_examples` - Extract code examples with semantic relevance
  - `get_doc_file` - Retrieve full documentation files
  - `refresh_search_index` - Force refresh of search index
- Port: 10001 (standalone)
- Features:
- AI-powered search: Semantic search using OpenAI's text-embedding-3-small model (see the sketch after this list)
- Natural language queries: Ask questions like "How do I create an agent?"
- No timeouts: Background embedding generation with status tracking
- Auto-discovery: Dynamic concept mapping using pathlib
- Persistent embeddings: Fast server restarts with cached vectors
- Smart chunking: Documents split into ~500 token chunks for granular search
- Once-per-day indexing: Automatic refresh every 24 hours
- Category filtering: Search within specific documentation categories
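To make the search pipeline concrete, here is a minimal sketch of the embed-and-rank idea described above, using the openai and numpy packages; the function names and data layout are illustrative, not the actual search_engine.py API:

```python
# Illustrative embed-and-rank search over pre-embedded doc chunks.
import numpy as np
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def embed(text: str) -> np.ndarray:
    """Embed a string with text-embedding-3-small."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

def search(query: str, chunks: list[str], chunk_vectors: np.ndarray, k: int = 5):
    """Rank ~500-token documentation chunks by cosine similarity to the query."""
    q = embed(query)
    # Cosine similarity between the query and every chunk vector
    sims = chunk_vectors @ q / (np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q))
    top = np.argsort(sims)[::-1][:k]
    return [(chunks[i], float(sims[i])) for i in top]
```

Caching the chunk vectors on disk is what makes restarts fast: only the query needs a fresh embedding call.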
Echo server:
- Tools:
  - `echo` - Echo back messages
  - `reverse_echo` - Echo messages in reverse
- Port: 9001 (standalone)
Math server:
- Tools:
  - `add` - Add two numbers
  - `multiply` - Multiply two numbers
  - `calculate` - Evaluate mathematical expressions
- Port: 9002 (standalone)
```bash
# Clone and setup
git clone <repository>
cd CrewAIDocsMCP

# Install dependencies
uv sync

# Run single server
uv run server.py

# Or run multi-server FastAPI app
PYTHONPATH=. uv run python fast_api/server.py
```

- Create a new MCP server file:
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-tools")

@mcp.tool()
async def my_custom_tool(param: str) -> dict:
    """Description of what this tool does"""
    # Tool implementation
    return {"result": "processed"}

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
```

- Test with the MCP Inspector:

```bash
npx @modelcontextprotocol/inspector http://localhost:10000/mcp/
```

- Add to the FastAPI app (optional):
```python
# In fast_api/server.py
from my_tools_server import mcp as my_tools_mcp

# Mount the server
app.mount("/my-tools", my_tools_mcp.get_app(with_lifespan=False))
```

Manage dependencies with uv:

```bash
# Add a new dependency
uv add package-name

# Add a development dependency
uv add --dev pytest

# Update all dependencies
uv sync --upgrade

# Lock dependencies
uv lock
```

Create a .env file in the project root:
```
TAVILY_API_KEY=your_tavily_api_key
OPENAI_API_KEY=your_openai_api_key
PORT=10000
HOST=0.0.0.0
```

Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.