Official Macrocosmos Model Context Protocol (MCP) server that enables interaction with X (Twitter) and Reddit, powered by Data Universe (SN13) on Bittensor. This server allows MCP clients like Claude Desktop, Cursor, Windsurf, OpenAI Agents and others to fetch real-time social media data.
## Setup

- Get your API key from Macrocosmos. There is a free tier with $5 of credits to start.
- Install `uv` (a Python package manager) with `curl -LsSf https://astral.sh/uv/install.sh | sh`, or see the `uv` repo for additional install methods.
- Go to Claude > Settings > Developer > Edit Config > `claude_desktop_config.json` and add the following:
```json
{
  "mcpServers": {
    "macrocosmos": {
      "command": "uvx",
      "args": ["macrocosmos-mcp"],
      "env": {
        "MC_API": "<insert-your-api-key-here>"
      }
    }
  }
}
```

### query_on_demand_data

Fetch real-time data from X (Twitter) and Reddit. Best for quick queries of up to 1,000 results.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `source` | string | REQUIRED. Platform: `'X'` or `'REDDIT'` (case-sensitive) |
| `usernames` | list | Up to 5 usernames. For X, the `@` is optional. Not available for Reddit |
| `keywords` | list | Up to 5 keywords. For Reddit, the first item is the subreddit (e.g., `'r/MachineLearning'`) |
| `start_date` | string | ISO format (e.g., `'2024-01-01T00:00:00Z'`). Defaults to 24h ago |
| `end_date` | string | ISO format. Defaults to now |
| `limit` | int | Max results, 1-1000. Default: 10 |
| `keyword_mode` | string | `'any'` (default) or `'all'` |
Example prompts:
- "What has @elonmusk been posting about today?"
- "Get me the latest posts from r/bittensor about dTAO"
- "Fetch 50 tweets about #AI from the last week"
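The constraints in the parameter table above can be checked client-side before calling the tool. This is an illustrative sketch only: the `build_query` helper is hypothetical and not part of macrocosmos-mcp; it simply encodes the documented limits.

```python
# Hypothetical helper: validates and assembles an argument payload for
# query_on_demand_data, per the parameter table above.

VALID_SOURCES = {"X", "REDDIT"}  # case-sensitive, per the table


def build_query(source, keywords=None, usernames=None, limit=10, keyword_mode="any"):
    """Build an argument payload for query_on_demand_data, enforcing documented limits."""
    if source not in VALID_SOURCES:
        raise ValueError("source must be 'X' or 'REDDIT' (case-sensitive)")
    if source == "REDDIT" and usernames:
        raise ValueError("usernames are not available for Reddit")
    if keywords and len(keywords) > 5:
        raise ValueError("at most 5 keywords")
    if usernames and len(usernames) > 5:
        raise ValueError("at most 5 usernames")
    if not 1 <= limit <= 1000:
        raise ValueError("limit must be between 1 and 1000")
    if keyword_mode not in ("any", "all"):
        raise ValueError("keyword_mode must be 'any' or 'all'")
    payload = {"source": source, "limit": limit, "keyword_mode": keyword_mode}
    if keywords:
        payload["keywords"] = keywords
    if usernames:
        payload["usernames"] = usernames
    return payload


# "Fetch 50 tweets about #AI" becomes:
build_query("X", keywords=["#AI"], limit=50)
```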
### create_gravity_task

Create a Gravity task for collecting large datasets over 7 days. Use this when you need more than 1,000 results.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `tasks` | list | REQUIRED. List of task objects (see below) |
| `name` | string | Optional name for the task |
| `email` | string | Email for notification when complete |
Task object structure:
```json
{
  "platform": "x",       // 'x' or 'reddit'
  "topic": "#Bittensor", // For X: MUST start with '#' or '$'
  "keyword": "dTAO"      // Optional: filter within topic
}
```

**Important:** For X (Twitter), topics MUST start with `#` or `$` (e.g., `#ai`, `$BTC`). Plain keywords are rejected.
Example prompts:
- "Create a gravity task to collect #Bittensor tweets for the next 7 days"
- "Start collecting data from r/MachineLearning about neural networks"
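The topic rule for X can be enforced before submitting a task. A minimal sketch, assuming only the task-object structure documented above; the `validate_task` helper is hypothetical, not part of the server:

```python
def validate_task(task: dict) -> dict:
    """Sanity-check a Gravity task object before passing it to create_gravity_task."""
    if task.get("platform") not in ("x", "reddit"):
        raise ValueError("platform must be 'x' or 'reddit'")
    # Plain keywords are rejected for X: topics must be hashtags or cashtags.
    if task["platform"] == "x" and not task["topic"].startswith(("#", "$")):
        raise ValueError("X topics must start with '#' or '$' (e.g. '#ai', '$BTC')")
    return task


validate_task({"platform": "x", "topic": "#Bittensor", "keyword": "dTAO"})  # accepted
```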
### get_gravity_task_status

Monitor your Gravity task and see how much data has been collected.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `gravity_task_id` | string | REQUIRED. The task ID from `create_gravity_task` |
| `include_crawlers` | bool | Include detailed stats. Default: `True` |
Returns: task status, crawler IDs, `records_collected`, and `bytes_collected`
Example prompts:
- "Check the status of my Bittensor data collection task"
- "How many records have been collected so far?"
### build_dataset

Build a dataset from collected data before the 7-day collection completes.

**Warning:** This will STOP the crawler and de-register it from the network.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `crawler_id` | string | REQUIRED. Get from `get_gravity_task_status` |
| `max_rows` | int | Max rows to include. Default: 10000 |
| `email` | string | Email for notification when ready |
Example prompts:
- "Build a dataset from my Bittensor crawler with 5000 rows"
- "I have enough data, build the dataset now"
### get_dataset_status

Check dataset build progress and get download links when ready.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `dataset_id` | string | REQUIRED. The dataset ID from `build_dataset` |
Returns: build status (10 steps) and, when complete, download URLs for the Parquet files
Example prompts:
- "Is my dataset ready to download?"
- "Get the download link for my Bittensor dataset"
### Cancel a Gravity task

Cancel a running Gravity task.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `gravity_task_id` | string | REQUIRED. The task ID to cancel |
### Cancel or purge a dataset

Cancel a dataset build or purge a completed dataset.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| `dataset_id` | string | REQUIRED. The dataset ID to cancel or purge |
## Example workflows

**Quick query**

User: "What's the sentiment about $TAO on Twitter today?"
→ Uses query_on_demand_data to fetch recent tweets
→ Returns up to 1000 results instantly
**Bulk collection**

User: "I need to collect a week's worth of #AI tweets for analysis"
1. create_gravity_task → Returns gravity_task_id
2. get_gravity_task_status → Monitor progress, get crawler_ids
3. build_dataset → When ready, build the dataset
4. get_dataset_status → Get download URL for Parquet file
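The four-step pipeline above can be sketched in client code. Everything in this sketch is hypothetical: `call_tool` stands in for whatever invocation mechanism your MCP client exposes, and the response field names (`gravity_task_id`, `crawler_ids`, `records_collected`, `dataset_id`) are assumed from the tool descriptions above, not a documented response schema.

```python
import time


def collect_and_download(call_tool, email, min_records=10_000):
    """Hypothetical create -> monitor -> build -> download pipeline for #AI tweets."""
    # 1. Start a 7-day Gravity collection.
    task = call_tool("create_gravity_task", {
        "tasks": [{"platform": "x", "topic": "#AI"}],
        "name": "weekly-ai-tweets",
        "email": email,
    })

    # 2. Poll until enough records have been collected.
    while True:
        status = call_tool("get_gravity_task_status", {
            "gravity_task_id": task["gravity_task_id"],
            "include_crawlers": True,
        })
        if status["records_collected"] >= min_records:
            break
        time.sleep(600)  # re-check every 10 minutes

    # 3. Build a dataset early (note: this stops the crawler).
    dataset = call_tool("build_dataset", {
        "crawler_id": status["crawler_ids"][0],
        "max_rows": min_records,
        "email": email,
    })

    # 4. Fetch build status and, when complete, Parquet download URLs.
    return call_tool("get_dataset_status", {"dataset_id": dataset["dataset_id"]})
```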
## More example prompts

- "What has the president of the U.S. been saying over the past week on X?"
- "Fetch me information about what people are posting on r/politics today."
- "Please analyze posts from @elonmusk for the last week."
- "Get me 100 tweets about #Bittensor and analyze the sentiment"
- "Create a gravity task to collect data about #AI from Twitter and r/MachineLearning from Reddit"
- "Start a 7-day collection of $BTC tweets with keyword 'ETF'"
- "Check how many records my gravity task has collected"
- "Build a dataset with 10,000 rows from my crawler"
MIT License. Made with love by the Macrocosmos team.