A Model Context Protocol server providing comprehensive read-only access to YouTube data, including video search, transcripts, and channel forensics. It features 16 specialized tools designed for content analysis and metadata retrieval in LLM applications.
Python 3.13+ · MCP · License: MIT · Deploy to Render
YtMCP is a production-grade Model Context Protocol (MCP) server providing comprehensive, read-only access to YouTube data through 16 specialized tools. Designed for both local development (STDIO) and cloud deployment (HTTPS on Render), it combines multiple battle-tested libraries to deliver robust YouTube intelligence for LLM applications.
- search_videos - Basic keyword search with customizable limits
- search_filtered - Advanced search with filters (upload date, duration, sort)
- get_trending_videos - Fetch current trending videos
- find_channels - Search for channels by name or topic
- find_playlists - Discover playlists by keyword
- get_transcript - Extract time-synced transcripts/subtitles (CRITICAL for content analysis)
- get_video_metadata - Comprehensive video data (views, tags, description, likes, duration)
- get_video_chapters - Extract video chapters/key moments
- get_thumbnail - High-resolution thumbnail URLs (all qualities)
- get_comments - Fetch top comments (rate-limited for safety)
- get_channel_videos - List channel videos with sorting (newest, oldest, popular)
- get_channel_shorts - List YouTube Shorts from a channel
- get_channel_streams - List live streams (past and present)
- get_playlist_items - Flatten playlist contents
- get_channel_about - Channel description and statistics
- get_audio_stream_url - Get direct audio stream URLs

Prerequisites:
Installation:
# Clone the repository
git clone https://github.com/utkarshchaudhary009/ytmcp.git
cd ytmcp
# Install with UV (recommended)
uv sync
# OR install with pip
pip install -e .
Run the server:
# Using UV
uv run ytmcp
# OR using pip
ytmcp
The server will start in STDIO mode, ready to accept MCP client connections.
One-Click Deploy:
Manual Deployment:
Fork this repository
Create a new Web Service on Render:
Configure the service:
Name: ytmcp
Environment: Python 3
Build Command: pip install -e .
Start Command: ytmcp --transport streamable-http --host 0.0.0.0 --port $PORT
Set environment variables (optional):
FASTMCP_LOG_LEVEL=INFO
Deploy - Render will automatically deploy your MCP server with HTTPS
Your server will be available at: https://ytmcp-<random>.onrender.com
ytmcp/
├── src/
│ └── ytmcp/
│ ├── __init__.py
│ ├── server.py # Main FastMCP server with health check
│ ├── middleware/
│ │ ├── __init__.py
│ │ └── rate_limiter.py # Global rate limiting (0.75s delay)
│ └── tools/
│ ├── __init__.py
│ ├── search.py # Category A: Search tools
│ ├── video.py # Category B: Video intelligence
│ ├── channel.py # Category C: Channel forensics
│ └── utils.py # Category D: Utilities
├── examples/ # MCP client configurations
├── research/ # Library research & feasibility docs
├── render.yaml # Render deployment config
├── Procfile # Process definition
├── runtime.txt # Python version specification
├── pyproject.toml # Project metadata & dependencies
└── README.md
Powered by:
- scrapetube → Fast channel/playlist listing
- youtube-search-python → Search & filtering
- yt-dlp → Comprehensive metadata extraction
- youtube-transcript-api → Transcript fetching

A /health endpoint is provided for load balancers.

Gemini (.gemini/mcp_config.json):
{
"mcpServers": {
"ytmcp-local": {
"command": "uv",
"args": ["run", "--directory", "/path/to/ytmcp", "ytmcp"],
"description": "YouTube MCP (Local)"
},
"ytmcp-prod": {
"url": "https://your-ytmcp.onrender.com/mcp",
"description": "YouTube MCP (Production)"
}
}
}
Claude Desktop (~/Library/Application Support/Claude/claude_desktop_config.json):
{
"mcpServers": {
"ytmcp": {
"command": "uv",
"args": ["--directory", "/path/to/ytmcp", "run", "ytmcp"]
}
}
}
Cursor (.cursor/mcp.json):
{
"mcpServers": {
"ytmcp": {
"command": "uv",
"args": ["--directory", "/path/to/ytmcp", "run", "ytmcp"]
}
}
}
Continue (~/.continue/config.json):
{
"mcpServers": {
"ytmcp": {
"command": "uv",
"args": ["run", "--directory", "/path/to/ytmcp", "ytmcp"]
}
}
}
| Variable | Default | Description |
|---|---|---|
| FASTMCP_LOG_LEVEL | INFO | Logging level (DEBUG, INFO, WARNING, ERROR) |
| FASTMCP_HOST | 127.0.0.1 | Host to bind (HTTP transports) |
| FASTMCP_PORT | 8000 | Port to bind (HTTP transports) |
| PORT | - | Render auto-assigns this (production) |
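The precedence between these variables can be illustrated with a small sketch. The variable names come from the table above; the `load_settings` helper itself is illustrative, not part of the server:

```python
import os

def load_settings(env=os.environ) -> dict:
    """Resolve server settings from the environment, with the table's
    defaults. PORT (set by Render) takes precedence over FASTMCP_PORT."""
    return {
        "log_level": env.get("FASTMCP_LOG_LEVEL", "INFO"),
        "host": env.get("FASTMCP_HOST", "127.0.0.1"),
        "port": int(env.get("PORT", env.get("FASTMCP_PORT", "8000"))),
    }
```

Locally the defaults apply; on Render, the auto-assigned PORT wins.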
search_videos_tool(
query="python tutorial",
limit=10
)
Returns: Markdown-formatted list with titles, channels, views, URLs
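The Markdown shape of that output can be sketched roughly as below; the field names (`title`, `url`, `channel`, `views`) are illustrative, not the server's exact schema:

```python
def format_results(videos: list[dict]) -> str:
    """Render video records as a Markdown bullet list, similar in spirit
    to what search_videos_tool returns."""
    lines = [
        f"- [{v['title']}]({v['url']}) by {v['channel']} ({v['views']} views)"
        for v in videos
    ]
    return "\n".join(lines)
```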
get_transcript_tool(
video_id="dQw4w9WgXcQ", # Or full URL
languages="en,de" # Fallback languages
)
Returns: Time-synced transcript with [MM:SS] timestamps
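The [MM:SS] timestamps come from the per-segment offsets in seconds that transcript libraries return. A minimal formatting sketch (the helper name is hypothetical):

```python
def fmt_timestamp(seconds: float) -> str:
    """Format a transcript segment offset as the [MM:SS] marker shown
    in the tool's output."""
    minutes, secs = divmod(int(seconds), 60)
    return f"[{minutes:02d}:{secs:02d}]"
```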
get_channel_videos_tool(
channel_id="@fireship", # Supports @handle, ID, or URL
sort_by="popular",
limit=20
)
Returns: Sorted video list with metadata
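Accepting @handle, raw ID, or full URL implies some normalization before the listing libraries are called. A sketch of that idea, under the assumption (not confirmed by the source) that the tools reduce everything to a handle or channel ID:

```python
def normalize_channel(identifier: str) -> str:
    """Reduce an @handle, channel ID, or channel URL to one identifier.
    Hypothetical helper; the real tool's parsing may differ."""
    identifier = identifier.strip().rstrip("/")
    if "youtube.com/" in identifier:
        identifier = identifier.split("youtube.com/", 1)[1]
        for prefix in ("channel/", "c/", "user/"):
            if identifier.startswith(prefix):
                return identifier[len(prefix):]
    return identifier  # already an @handle or raw channel ID
```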
get_video_metadata_tool(
video_id="https://youtube.com/watch?v=dQw4w9WgXcQ"
)
Returns: Comprehensive metadata (views, likes, description, tags, etc.)
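Likewise, tools that take a video_id accept either a bare ID or a full URL. A sketch of the extraction, assuming the standard watch and youtu.be URL shapes (the real parsing lives in the tools):

```python
from urllib.parse import urlparse, parse_qs

def extract_video_id(value: str) -> str:
    """Accept a bare 11-character ID, a watch URL, or a youtu.be link
    and return the bare video ID. Hypothetical helper."""
    if "youtu" not in value:
        return value  # already a bare ID
    parsed = urlparse(value)
    if parsed.hostname and parsed.hostname.endswith("youtu.be"):
        return parsed.path.lstrip("/")  # youtu.be/<id>
    return parse_qs(parsed.query).get("v", [value])[0]  # watch?v=<id>
```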
# Install with dev dependencies
uv sync --dev
# Run tests
uv run pytest
# Type checking
uv run mypy src/
# Linting
uv run ruff check src/
# STDIO (for MCP clients)
uv run ytmcp
# SSE (for web clients)
uv run ytmcp --transport sse --port 8000
# StreamableHTTP (for production)
uv run ytmcp --transport streamable-http --host 0.0.0.0 --port 8080
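The flags shown above could be modeled with a parser like the following. This is a hypothetical mirror of the CLI for illustration; the actual ytmcp entry point may parse arguments differently:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of an argument parser matching the flags used above."""
    parser = argparse.ArgumentParser(prog="ytmcp")
    parser.add_argument("--transport",
                        choices=["stdio", "sse", "streamable-http"],
                        default="stdio")
    parser.add_argument("--host", default="127.0.0.1")
    parser.add_argument("--port", type=int, default=8000)
    return parser
```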
Each tool follows this pattern:
from ..middleware.rate_limiter import rate_limiter
@rate_limiter # Automatic rate limiting
async def tool_name(param: str) -> str:
"""Tool description."""
# 1. Extract/validate IDs
# 2. Define library options
# 3. Fetch data in thread pool
# 4. Format as Markdown
# 5. Return LLM-optimized output
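The `rate_limiter` decorator used above could look roughly like this. It is a minimal sketch of a global delay-based limiter; the project's real implementation lives in src/ytmcp/middleware/rate_limiter.py and its internals here are assumptions:

```python
import asyncio
import functools
import time

class RateLimiter:
    """Enforce a minimum delay between any two decorated calls, globally."""

    def __init__(self, delay_seconds: float = 0.75):
        self.delay = delay_seconds
        self._lock = asyncio.Lock()
        self._last = 0.0

    def __call__(self, func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            async with self._lock:  # serialize all tool calls
                wait = self.delay - (time.monotonic() - self._last)
                if wait > 0:
                    await asyncio.sleep(wait)  # pad out to the minimum gap
                self._last = time.monotonic()
            return await func(*args, **kwargs)
        return wrapper

rate_limiter = RateLimiter(delay_seconds=0.75)
```

Because the lock and timestamp are shared, concurrent tool calls queue up behind one another rather than hammering YouTube in parallel.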
Advantages:
Steps:
render.yaml configuration (included)

Health Check: https://your-app.onrender.com/health
MCP Endpoint: https://your-app.onrender.com/mcp
# Login to Heroku
heroku login
# Create app
heroku create ytmcp
# Deploy
git push heroku main
# Set environment
heroku config:set FASTMCP_LOG_LEVEL=INFO
FROM python:3.13-slim
WORKDIR /app
COPY . .
RUN pip install -e .
EXPOSE 8080
CMD ["ytmcp", "--transport", "streamable-http", "--host", "0.0.0.0", "--port", "8080"]
docker build -t ytmcp .
docker run -p 8080:8080 ytmcp
⚠️ YouTube Terms of Service: This server uses scraping libraries that bypass official YouTube API quotas. Review YouTube's ToS before deploying for commercial purposes.
Check Python version:
python --version # Should be 3.13+
Reinstall dependencies:
uv sync --reinstall
Adjust in src/ytmcp/middleware/rate_limiter.py:
rate_limiter = RateLimiter(delay_seconds=0.5) # Faster (risky)
Check build logs:
runtime.txt specifies python-3.13

Common fix:
buildCommand: pip install --upgrade pip && pip install -e .
Local (STDIO):
uv run ytmcp

Production (HTTPS):
curl https://your-app.onrender.com/health
MCP endpoint: https://your-app.onrender.com/mcp

Contributions welcome! Please:
- Check /research for library capabilities

Development Workflow:
# Fork and clone
git clone https://github.com/utkarshchaudhary009/ytmcp.git
# Create feature branch
git checkout -b feature/new-tool
# Make changes and test
uv run ytmcp
# Submit PR
MIT License - See LICENSE file
Built with these excellent libraries:
See /research for design decisions.

Built with ❤️ for the LLM ecosystem