A self-contained FastMCP server that exposes practical tools for:

- Web search and news search (DuckDuckGo via `ddgs`)
- Fetching and extracting content from URLs (lightweight parse + readability-style content)
- Small site crawls (BFS, optionally same-domain only)
- User-isolated RAG (LlamaIndex + ChromaDB + Ollama embeddings)
- Time / weather (Open-Meteo) and stock quotes (Stooq)

The server runs over HTTP transport and exposes a single MCP endpoint:

`http://127.0.0.1:8090/mcp`
## Requirements

- Python >= 3.11
- (Optional, recommended for RAG) Ollama running locally
- (Optional, for MySQL) the `asyncmy` (async) and `pymysql` (sync) drivers are included
## Installation

With `uv`:

```bash
uv venv
uv pip install -e .
```

Or with a standard virtual environment:

```bash
python -m venv .venv
source .venv/bin/activate
pip install -e .
```

## Running

```bash
mcp-tools-server
```

Or:

```bash
python server.py
```

By default it listens on:

`http://127.0.0.1:8090/mcp`
Configure your MCP client to use an HTTP MCP server with:

- Base URL: `http://127.0.0.1:8090/mcp`
- Transport: HTTP
This repo registers tools by name; your client will discover them from the server.
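Under the hood, a client speaking the MCP streamable HTTP transport opens the session by POSTing a JSON-RPC `initialize` request to that endpoint. A minimal sketch of the payload shape, assuming the 2025-03-26 protocol revision (the client name and version below are illustrative):

```python
import json

# Shape of the JSON-RPC "initialize" request an MCP client POSTs to
# http://127.0.0.1:8090/mcp. clientInfo values here are illustrative.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # assumed protocol revision
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

body = json.dumps(initialize_request)
```

After the handshake, the client lists and calls tools through the same endpoint; most MCP clients handle this for you once the base URL and transport are configured.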
## Tools

Below are the tools registered by the server (see `server.py`).

### Web search and crawling

- `web_search_tool(query: str, region: str, max_results: int = 5)`
- `news_search_tool(query: str, region: str, max_results: int = 5)`
- `fetch_url_tool(url: str, max_length: int = 10_000)`
- `fetch_url_readable_tool(url: str, max_length: int = 20_000)`
- `crawl_site_tool(url: str, max_pages: int = 5, same_domain_only: bool = True, readable: bool = True)`
Notes:

- `region` is required (strict mode). Example values: `us-en`, `uk-en`, `wt-wt`.
- All web/news tools return a unified `sources` list for citation-friendly downstream use.
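As an illustration of what "citation-friendly" enables downstream, the snippet below renders a numbered reference list. The field names (`title`, `url`, `snippet`) are assumptions for the sake of the example, not the server's actual schema:

```python
# Hypothetical `sources` entries; the real field names may differ.
sources = [
    {"title": "Example Domain", "url": "https://example.com", "snippet": "Illustrative snippet."},
    {"title": "Another Page", "url": "https://example.org", "snippet": "Also illustrative."},
]

def format_citations(sources: list[dict]) -> str:
    """Render a numbered, citation-friendly list from a unified sources list."""
    return "\n".join(
        f"[{i}] {s['title']} - {s['url']}" for i, s in enumerate(sources, start=1)
    )
```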
### RAG and documents

- `docs_add_document(path: str, user_id: str, chat_id: str)`
- `docs_add_directory(path: str, user_id: str, chat_id: str, recursive: bool = True)`
- `docs_list(user_id: str, chat_id: str)`
- `docs_remove(doc_id: str, user_id: str, chat_id: str)`
- `docs_search(query: str, user_id: str, chat_id: str, file_names: list[str], k: int = 5)`
- `rag_answer_tool(question: str, user_id: str, chat_id: str, file_names: list[str], k: int = 5)`
- `rag_status_tool(user_id: str, chat_id: str)`
- `rag_rebuild(user_id: str, chat_id: str)`
- `rag_add_url(url: str, user_id: str, chat_id: str)`
- `rag_add_crawl(url: str, user_id: str, chat_id: str, max_pages: int = 5, same_domain_only: bool = True)`
- `rag_export_registry(user_id: str, chat_id: str)`
- `rag_import_registry(user_id: str, registry: list[dict[str, str]])`
Important RAG notes:

- Strict mode: `user_id`, `chat_id`, and `file_names` must be passed explicitly for document search/answer.
- Isolation: indices are per `(user_id, chat_id)`; cross-user access is blocked.
- Local persistence: the vector store persists under `./chroma_db/` by default.
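The isolation rule above amounts to a simple ownership check on the `(user_id, chat_id)` pair. A minimal sketch (the naming is hypothetical, not the server's actual implementation):

```python
def index_key(user_id: str, chat_id: str) -> tuple[str, str]:
    """Each RAG index is addressed by the (user_id, chat_id) pair."""
    return (user_id, chat_id)

def can_access(requester_user_id: str, key: tuple[str, str]) -> bool:
    """Cross-user access is blocked: a requester only sees indices they own."""
    owner_user_id, _chat_id = key
    return requester_user_id == owner_user_id
```

For example, `can_access("alice", index_key("alice", "chat-1"))` holds, while a request from `"bob"` against the same key is rejected.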
### Time, weather, and stocks

- `time_local(tz: str)`
- `time_local_by_place_tool(place: str)`
- `weather_now_tool(place: str)`
- `weather_daily_tool(place: str, days: int = 7)`
- `stock_quote_tool(symbol: str)`
- `stock_quotes_tool(symbols: list[str])`
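Stooq serves quotes as plain CSV. A sketch of parsing one such response, assuming the common `Symbol,Date,Time,Open,High,Low,Close,Volume` column layout (the sample values below are made up for illustration):

```python
import csv
import io

# Sample response in Stooq's quote CSV layout; figures are illustrative only.
sample = (
    "Symbol,Date,Time,Open,High,Low,Close,Volume\n"
    "AAPL.US,2024-01-02,22:00:00,187.15,188.44,183.89,185.64,82488700\n"
)

def parse_quote(text: str) -> dict:
    """Extract symbol and closing price from a one-row quote CSV."""
    row = next(csv.DictReader(io.StringIO(text)))
    return {"symbol": row["Symbol"], "close": float(row["Close"])}
```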
## Configuration

This repo reads configuration from environment variables and supports `.env` loading.

Used by `settings.py`:

- `MCP_DATABASE_URL` (fallback: `DATABASE_URL`; default: `sqlite+aiosqlite:///./app.db`)
- `MCP_DOCUMENTS_DIR` (fallback: `DOCUMENTS_DIR`; default: `./uploads`)
- `MCP_UPLOAD_DIR` (fallback: `UPLOAD_DIR`; default: `./uploads`)
Used by `core/common/config.py` (`Settings`). Defaults shown:

- `MCP_DDG_REGION_DEFAULT` (default: `us-en`)
- `MCP_WEB_TIMEOUT_S` (default: `30`)
- `MCP_WEB_MAX_RESULTS` (default: `10`)
- `MCP_CHUNK_SIZE` (default: `600`)
- `MCP_CHUNK_OVERLAP` (default: `120`)
- `MCP_TOP_K` (default: `10`)
- `MCP_RETRIEVE_MULTIPLIER` (default: `10`)
- `MCP_MAX_CONTEXT_CHARS` (default: `12000`)
- `MCP_OLLAMA_BASE_URL` (default: `http://localhost:11434`)
- `MCP_OLLAMA_EMBED_MODEL` (default: `bge-m3`)
- `MCP_OLLAMA_LLM_MODEL` (default: `gpt-oss:20b`)
- `MCP_OLLAMA_REQUEST_TIMEOUT` (default: `180.0`)
- `MCP_LLM_OPTIONS_JSON` (default JSON: `{"temperature":0.2,"num_predict":1024,"repeat_penalty":1.05,"num_ctx":8192}`)
- `MCP_RESPECT_ROBOTS` (default: `true`)
- `MCP_CRAWL_USER_AGENT` (default: `mcp-app/1.0`)
- `MCP_HTTP_TIMEOUT_S` (default: `20`)
- `MCP_HTTP_MAX_HTML` (default: `2000000`)
- `MCP_CACHE_MAX_ENTRIES` (default: `256`)
- `MCP_RATE_BUCKET_CAPACITY` (default: `5`)
- `MCP_RATE_REFILL_PER_SEC` (default: `1.0`)
- `MCP_CRAWL_MAX_PAGES_DEFAULT` (default: `5`)
- `MCP_READABILITY_MIN_LEN` (default: `400`)
Example `.env`:

```env
# storage
MCP_DATABASE_URL=sqlite+aiosqlite:///./app.db
MCP_UPLOAD_DIR=./uploads
MCP_DOCUMENTS_DIR=./uploads

# web/search
MCP_DDG_REGION_DEFAULT=us-en
MCP_WEB_MAX_RESULTS=10

# crawling
MCP_RESPECT_ROBOTS=true
MCP_CRAWL_USER_AGENT=mcp-tools-server/1.0

# RAG
MCP_OLLAMA_BASE_URL=http://localhost:11434
MCP_OLLAMA_EMBED_MODEL=bge-m3
MCP_MAX_CONTEXT_CHARS=12000
```

Notes:

- Files you ingest are read from the filesystem path you pass.
- Vector index data persists under `./chroma_db/` (relative to the repo).
- If you are running this in production or multi-user environments, review storage paths, access controls, and network exposure.
## Schemas

Schema definitions are kept under `schemas/` and split by intent:

- `schemas/inputs/`: request/parameter shapes (TypedDicts)
- `schemas/outputs/`: response/result shapes (TypedDicts)
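A sketch of that split using hypothetical shapes (the real TypedDicts live under `schemas/`; these names and fields are illustrative only):

```python
from typing import TypedDict

# Hypothetical request shape, in the style of schemas/inputs/.
class WebSearchInput(TypedDict):
    query: str
    region: str
    max_results: int

# Hypothetical result shape, in the style of schemas/outputs/.
class SourceItem(TypedDict):
    title: str
    url: str
```

Keeping request and response shapes in separate packages makes it easy to type-check tool handlers at their boundaries.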
## Development checks

```bash
pre-commit run --all-files
python -c "import server"
python -m compileall -q .
```

## Troubleshooting

If RAG tools error on model calls, confirm Ollama is running and reachable:

```bash
curl -sSf http://localhost:11434/api/tags >/dev/null
```

Then ensure your embedding model exists (e.g., `bge-m3`) or set `MCP_OLLAMA_EMBED_MODEL`.

The server binds to `127.0.0.1:8090` by default. If that port is occupied, stop the conflicting process or adjust the port in `server.py`.
## License

See LICENSE.