🎖️ 🐍 ☁️ Access real-time X/Reddit/YouTube data directly in your LLM applications with search phrases, users, and date filtering.
Official Macrocosmos Model Context Protocol (MCP) server that enables interaction with X (Twitter) and Reddit, powered by Data Universe (SN13) on Bittensor. This server allows MCP clients like Claude Desktop, Cursor, Windsurf, OpenAI Agents and others to fetch real-time social media data.
Prerequisites: uv (Python package manager). Install with:

```sh
curl -LsSf https://astral.sh/uv/install.sh | sh
```

or see the uv repo for additional install methods.

Add the following to your `claude_desktop_config.json` (or the equivalent config for your MCP client) and restart the client:

```json
{
  "mcpServers": {
    "macrocosmos": {
      "command": "uvx",
      "args": ["macrocosmos-mcp"],
      "env": {
        "MC_API": "<insert-your-api-key-here>"
      }
    }
  }
}
```
### query_on_demand_data - Real-time Social Media Queries

Fetch real-time data from X (Twitter) and Reddit. Best for quick queries up to 1000 results.

Parameters:

| Parameter | Type | Description |
|---|---|---|
| `source` | string | REQUIRED. Platform: `'X'` or `'REDDIT'` (case-sensitive) |
| `usernames` | list | Up to 5 usernames. For X: `@` is optional. Not available for Reddit |
| `keywords` | list | Up to 5 keywords. For Reddit: first item is the subreddit (e.g., `'r/MachineLearning'`) |
| `start_date` | string | ISO format (e.g., `'2024-01-01T00:00:00Z'`). Defaults to 24h ago |
| `end_date` | string | ISO format. Defaults to now |
| `limit` | int | Max results, 1-1000. Default: 10 |
| `keyword_mode` | string | `'any'` (default) or `'all'` |
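A minimal client-side sketch of assembling arguments for this tool. The helper below is hypothetical (not part of the server); it simply enforces the documented constraints before you hand the dict to your MCP client:

```python
def build_query_args(source, keywords=None, usernames=None,
                     start_date=None, end_date=None,
                     limit=10, keyword_mode="any"):
    """Assemble an arguments dict for query_on_demand_data,
    enforcing the documented constraints client-side."""
    if source not in ("X", "REDDIT"):  # case-sensitive
        raise ValueError("source must be 'X' or 'REDDIT'")
    if usernames and source == "REDDIT":
        raise ValueError("usernames are not available for Reddit")
    if usernames and len(usernames) > 5:
        raise ValueError("up to 5 usernames")
    if keywords and len(keywords) > 5:
        raise ValueError("up to 5 keywords")
    if not 1 <= limit <= 1000:
        raise ValueError("limit must be between 1 and 1000")
    if keyword_mode not in ("any", "all"):
        raise ValueError("keyword_mode must be 'any' or 'all'")

    args = {"source": source, "limit": limit, "keyword_mode": keyword_mode}
    if keywords:
        args["keywords"] = keywords
    if usernames:
        # '@' is optional for X; strip it for consistency
        args["usernames"] = [u.lstrip("@") for u in usernames]
    if start_date:
        args["start_date"] = start_date  # ISO, e.g. '2024-01-01T00:00:00Z'
    if end_date:
        args["end_date"] = end_date
    return args
```

Omitting `start_date`/`end_date` leaves the server defaults (last 24 hours) in effect.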
### create_gravity_task - Large-Scale Data Collection

Create a Gravity task for collecting large datasets over 7 days. Use this when you need more than 1000 results.

Parameters:

| Parameter | Type | Description |
|---|---|---|
| `tasks` | list | REQUIRED. List of task objects (see below) |
| `name` | string | Optional name for the task |
| `email` | string | Email for notification when complete |

Task object structure:

```jsonc
{
  "platform": "x",        // 'x' or 'reddit'
  "topic": "#Bittensor",  // For X: MUST start with '#' or '$'
  "keyword": "dTAO"       // Optional: filter within topic
}
```

Important: For X (Twitter), topics MUST start with `#` or `$` (e.g., `#ai`, `$BTC`). Plain keywords are rejected.
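A hedged sketch of checking task objects before submission, so the hashtag/cashtag rule is caught client-side rather than as a rejected request (the validator itself is an assumption, not a server API):

```python
def validate_task(task):
    """Check a Gravity task object against the documented rules."""
    platform = task.get("platform")
    if platform not in ("x", "reddit"):
        raise ValueError("platform must be 'x' or 'reddit'")
    topic = task.get("topic", "")
    # For X, topics must be a hashtag or cashtag; plain keywords are rejected
    if platform == "x" and not topic.startswith(("#", "$")):
        raise ValueError(f"X topic {topic!r} must start with '#' or '$'")
    return task
```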
### get_gravity_task_status - Check Collection Progress

Monitor your Gravity task and see how much data has been collected.

Parameters:

| Parameter | Type | Description |
|---|---|---|
| `gravity_task_id` | string | REQUIRED. The task ID from `create_gravity_task` |
| `include_crawlers` | bool | Include detailed stats. Default: `True` |

Returns: task status, crawler IDs, `records_collected`, `bytes_collected`
### build_dataset - Build & Download Dataset

Build a dataset from collected data before the 7-day completion.

Warning: this will STOP the crawler and de-register it from the network.

Parameters:

| Parameter | Type | Description |
|---|---|---|
| `crawler_id` | string | REQUIRED. Get from `get_gravity_task_status` |
| `max_rows` | int | Max rows to include. Default: 10000 |
| `email` | string | Email for notification when ready |
### get_dataset_status - Check Build Progress & Download

Check dataset build progress and get download links when ready.

Parameters:

| Parameter | Type | Description |
|---|---|---|
| `dataset_id` | string | REQUIRED. The dataset ID from `build_dataset` |

Returns: build status (10 steps) and, when complete, download URLs for Parquet files
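Since the build runs through 10 steps, a client will typically poll `get_dataset_status` until download URLs appear. A minimal sketch, where `fetch_status` stands in for however your MCP client invokes the tool, and the returned dict shape is an assumption for illustration:

```python
import time

def wait_for_dataset(dataset_id, fetch_status, poll_seconds=30, max_polls=120):
    """Poll until the dataset build completes, then return download URLs.

    `fetch_status(dataset_id)` is a placeholder for your client's
    get_dataset_status call; assumed here to return a dict like
    {"status": ..., "download_urls": [...]} once the build finishes.
    """
    for _ in range(max_polls):
        status = fetch_status(dataset_id)
        if status.get("download_urls"):
            return status["download_urls"]  # links to the Parquet files
        time.sleep(poll_seconds)
    raise TimeoutError(f"dataset {dataset_id} not ready after polling")
```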
### cancel_gravity_task - Stop Data Collection

Cancel a running Gravity task.

Parameters:

| Parameter | Type | Description |
|---|---|---|
| `gravity_task_id` | string | REQUIRED. The task ID to cancel |

### cancel_dataset - Cancel Build or Purge Dataset

Cancel a dataset build or purge a completed dataset.

Parameters:

| Parameter | Type | Description |
|---|---|---|
| `dataset_id` | string | REQUIRED. The dataset ID to cancel/purge |
Example prompts:

Quick query:

```
User: "What's the sentiment about $TAO on Twitter today?"
→ Uses query_on_demand_data to fetch recent tweets
→ Returns up to 1000 results instantly
```

Large-scale collection:

```
User: "I need to collect a week's worth of #AI tweets for analysis"
1. create_gravity_task → Returns gravity_task_id
2. get_gravity_task_status → Monitor progress, get crawler_ids
3. build_dataset → When ready, build the dataset
4. get_dataset_status → Get download URL for Parquet file
```
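The four-step collection workflow above can be sketched in code. Here `call_tool(name, args)` is a placeholder for however your MCP client invokes server tools, and the return-value keys are assumptions for illustration, not the server's documented response schema:

```python
def collect_week_of_tweets(call_tool, email=None):
    """Sketch of the four-step Gravity workflow for #AI tweets on X.

    `call_tool(name, args)` stands in for your MCP client's tool
    invocation; returned dict keys are assumed for illustration.
    """
    # 1. Start a 7-day collection for the #AI topic on X
    task = call_tool("create_gravity_task", {
        "tasks": [{"platform": "x", "topic": "#AI"}],
        "name": "weekly-ai-tweets",
    })
    # 2. Check progress and find the crawler ID
    status = call_tool("get_gravity_task_status", {
        "gravity_task_id": task["gravity_task_id"],
        "include_crawlers": True,
    })
    crawler_id = status["crawler_ids"][0]
    # 3. Build the dataset (note: this stops and de-registers the crawler)
    dataset = call_tool("build_dataset", {
        "crawler_id": crawler_id,
        "max_rows": 10000,
        **({"email": email} if email else {}),
    })
    # 4. Poll this until it reports download URLs for the Parquet files
    return call_tool("get_dataset_status", {"dataset_id": dataset["dataset_id"]})
```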
MIT License. Made with love by the Macrocosmos team.