A Model Context Protocol (MCP) server that provides comprehensive video tools: transcript retrieval, video downloading, and automatic subtitle generation using AI speech-to-text. Works with YouTube, Bilibili, Vimeo, and any platform supported by yt-dlp.
| Tool | Description |
|---|---|
| `get-transcript` | Retrieve existing transcripts from video platforms |
| `list-transcript-languages` | List available transcript languages for a video |
| `download-video` | Download videos to local storage |
| `list-downloads` | List downloaded video files |
| `generate-subtitles` | Generate subtitles using AI speech-to-text |
**yt-dlp** (required):

```bash
# Using Homebrew (macOS)
brew install yt-dlp

# Using pip
pip install yt-dlp
```

**ffmpeg** (required for subtitle generation):

```bash
# Using Homebrew (macOS)
brew install ffmpeg

# Using apt (Ubuntu/Debian)
sudo apt install ffmpeg
```

**Local Whisper** (optional, for local subtitle generation):

```bash
pip install openai-whisper
```
From source:

```bash
git clone <repository-url>
cd video-toolkit-mcp
npm install
npm run build
```

Or globally via npm:

```bash
npm install -g video-toolkit-mcp
```
Add the MCP server to your configuration file:
**Claude Desktop** (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):

```json
{
  "mcpServers": {
    "video-toolkit-mcp": {
      "command": "node",
      "args": ["/path/to/video-toolkit-mcp/dist/index.js"],
      "env": {
        "VIDEO_TOOLKIT_STORAGE_DIR": "/path/to/downloads",
        "OPENAI_API_KEY": "your-openai-api-key"
      }
    }
  }
}
```
**Cursor** (`~/.cursor/mcp.json`):

```json
{
  "mcpServers": {
    "video-toolkit-mcp": {
      "command": "node",
      "args": ["/path/to/video-toolkit-mcp/dist/index.js"],
      "env": {
        "VIDEO_TOOLKIT_STORAGE_DIR": "/path/to/downloads",
        "OPENAI_API_KEY": "your-openai-api-key"
      }
    }
  }
}
```
| Variable | Description | Default |
|---|---|---|
| `VIDEO_TOOLKIT_STORAGE_DIR` | Default directory for downloaded videos | `~/.video-toolkit/downloads` |
| `OPENAI_API_KEY` | OpenAI API key for Whisper-based subtitle generation | None |
| `VIDEO_TOOLKIT_WHISPER_ENGINE` | Preferred Whisper engine: `openai`, `local`, or `auto` | `auto` |
| `WHISPER_BINARY_PATH` | Path to local whisper binary | `whisper` |
| `WHISPER_MODEL_PATH` | Path to whisper model (for local whisper) | Auto-download |
| `YT_DLP_PATH` | Path to yt-dlp binary | `yt-dlp` |
| `FFMPEG_PATH` | Path to ffmpeg binary | `ffmpeg` |
| `DEBUG` | Enable debug logging | `0` |
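The precedence above (environment variable if set, documented default otherwise) can be sketched as a small resolver. This is a hypothetical illustration under the assumption that `config.ts` reads these variables directly; the real implementation may differ:

```typescript
// Hypothetical sketch of env-var resolution using the defaults from the table above.
interface ToolkitConfig {
  storageDir: string;
  whisperEngine: "openai" | "local" | "auto";
  ytDlpPath: string;
  ffmpegPath: string;
  debug: boolean;
}

function loadConfig(env: Record<string, string | undefined>): ToolkitConfig {
  const engine = env.VIDEO_TOOLKIT_WHISPER_ENGINE;
  return {
    // "~" expansion is assumed to happen elsewhere
    storageDir: env.VIDEO_TOOLKIT_STORAGE_DIR ?? "~/.video-toolkit/downloads",
    // anything other than an explicit "openai"/"local" falls back to "auto"
    whisperEngine: engine === "openai" || engine === "local" ? engine : "auto",
    ytDlpPath: env.YT_DLP_PATH ?? "yt-dlp",
    ffmpegPath: env.FFMPEG_PATH ?? "ffmpeg",
    debug: env.DEBUG === "1",
  };
}

console.log(loadConfig({}).whisperEngine); // "auto"
```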
Retrieve existing transcripts from video platforms.
Parameters:
- `url` (required): Video URL
- `lang` (optional): Language code (e.g., `en`, `es`, `zh`)
- `include_timestamps` (optional): Include timestamps (default: `true`)

Example:
Get the transcript from https://www.youtube.com/watch?v=VIDEO_ID
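For reference, an MCP client translates such a prompt into a `tools/call` request. The JSON-RPC message for this tool would look roughly like the following (argument names taken from the parameter list above; the exact framing is handled by the client, so treat this as an illustration):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get-transcript",
    "arguments": {
      "url": "https://www.youtube.com/watch?v=VIDEO_ID",
      "lang": "en",
      "include_timestamps": true
    }
  }
}
```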
List available transcript languages for a video.
Parameters:
- `url` (required): Video URL

Example:
What transcript languages are available for https://www.youtube.com/watch?v=VIDEO_ID?
Download a video to local storage.
Parameters:
- `url` (required): Video URL to download
- `output_dir` (optional): Custom output directory
- `filename` (optional): Custom filename
- `format` (optional): Video format: `mp4`, `webm`, or `mkv` (default: `mp4`)
- `quality` (optional): Quality: `best`, `1080p`, `720p`, `480p`, `360p`, or `audio` (default: `best`)

Example:
Download this video: https://www.youtube.com/watch?v=VIDEO_ID
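Internally, a `quality` option like `720p` has to be translated into a yt-dlp format selector (yt-dlp's `-f` syntax). A hypothetical mapping, not necessarily what `video-downloader.ts` actually does:

```typescript
// Hypothetical mapping from the quality option above to yt-dlp CLI arguments.
function formatSelector(quality: string): string[] {
  switch (quality) {
    case "audio":
      return ["-f", "bestaudio"];
    case "best":
      return ["-f", "bestvideo+bestaudio/best"];
    default: {
      // e.g. "720p" -> cap the video height at 720 pixels
      const h = parseInt(quality, 10);
      return ["-f", `bestvideo[height<=${h}]+bestaudio/best[height<=${h}]`];
    }
  }
}

console.log(formatSelector("720p").join(" "));
// -f bestvideo[height<=720]+bestaudio/best[height<=720]
```

These arguments would then be passed to the `yt-dlp` binary alongside `-o` for the output path.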
List all downloaded video files.
Parameters:
- `directory` (optional): Directory to list (default: the storage directory)

Example:
List my downloaded videos
Generate subtitles for a local video file using AI speech-to-text.
Parameters:
- `video_path` (required): Absolute path to the video file
- `engine` (optional): `openai` or `local` (default: auto-detect)
- `language` (optional): Language code for transcription
- `output_format` (optional): `srt` or `vtt` (default: `srt`)

Example:
Generate subtitles for /path/to/video.mp4
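To illustrate the `srt` output format, here is a minimal, hypothetical formatter for Whisper-style timed segments (the `Segment` shape and function names are assumptions for illustration, not the server's actual `subtitle-generator.ts`):

```typescript
// Hypothetical helper: format timed transcript segments as SubRip (.srt).
interface Segment { start: number; end: number; text: string }

// SRT timestamps look like HH:MM:SS,mmm (comma before milliseconds).
function toSrtTimestamp(seconds: number): string {
  const ms = Math.round(seconds * 1000);
  const h = Math.floor(ms / 3600000);
  const m = Math.floor((ms % 3600000) / 60000);
  const s = Math.floor((ms % 60000) / 1000);
  const rem = ms % 1000;
  const pad = (n: number, w = 2) => String(n).padStart(w, "0");
  return `${pad(h)}:${pad(m)}:${pad(s)},${pad(rem, 3)}`;
}

function toSrt(segments: Segment[]): string {
  return segments
    .map((seg, i) =>
      `${i + 1}\n${toSrtTimestamp(seg.start)} --> ${toSrtTimestamp(seg.end)}\n${seg.text}\n`)
    .join("\n");
}

console.log(toSrt([{ start: 0, end: 2.5, text: "Hello" }]));
// 1
// 00:00:00,000 --> 00:00:02,500
// Hello
```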
Engine requirements:

- OpenAI Whisper: set the `OPENAI_API_KEY` environment variable
- Local Whisper: `pip install openai-whisper`

The tool auto-detects which engine to use:

- If `OPENAI_API_KEY` is set, it uses OpenAI Whisper
- Otherwise, it falls back to local Whisper if installed

Example workflow:

1. Download this video: https://www.youtube.com/watch?v=VIDEO_ID
2. Generate subtitles for the downloaded file
Get the transcript from https://www.youtube.com/watch?v=VIDEO_ID and summarize the key points
1. Download the video: https://vimeo.com/123456789
2. Generate English subtitles for it
Any platform supported by yt-dlp, including:

- YouTube
- Bilibili
- Vimeo

Full list: https://github.com/yt-dlp/yt-dlp/blob/master/supportedsites.md
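Platform detection (the job of `src/url-detector.ts`) presumably boils down to hostname matching, with everything unrecognized delegated to yt-dlp's own extractor list. A hedged sketch, with hypothetical function and label names:

```typescript
// Hypothetical hostname-based platform detection.
function detectPlatform(url: string): string {
  // Strip scheme and path, then the leading "www."
  const host = url.replace(/^https?:\/\//, "").split("/")[0].replace(/^www\./, "");
  if (host === "youtu.be" || host.endsWith("youtube.com")) return "youtube";
  if (host.endsWith("bilibili.com")) return "bilibili";
  if (host.endsWith("vimeo.com")) return "vimeo";
  return "generic"; // fall through to yt-dlp's extractors
}

console.log(detectPlatform("https://www.youtube.com/watch?v=abc")); // "youtube"
```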
```
video-toolkit-mcp/
├── src/
│   ├── index.ts              # Main MCP server entry point
│   ├── transcript-fetcher.ts # Transcript fetching using yt-dlp
│   ├── video-downloader.ts   # Video download functionality
│   ├── subtitle-generator.ts # AI-powered subtitle generation
│   ├── config.ts             # Configuration management
│   ├── url-detector.ts       # Platform detection from URLs
│   ├── parser.ts             # Transcript parsing (SRT, VTT, JSON)
│   └── errors.ts             # Custom error classes
├── test/
│   └── transcript.test.ts    # Unit tests
├── dist/                     # Compiled JavaScript (after build)
└── package.json
```
```bash
# Build
npm run build

# Test
npm test

# Development mode
npm run dev
```
If `yt-dlp` is not found:

```bash
brew install yt-dlp
# or
pip install yt-dlp
```

If `ffmpeg` is not found:

```bash
brew install ffmpeg
```

For subtitle generation, either:

- set the `OPENAI_API_KEY` environment variable, or
- install local Whisper: `pip install openai-whisper`

License: MIT
Add this to `claude_desktop_config.json` and restart Claude Desktop:

```json
{
  "mcpServers": {
    "video-toolkit-mcp": {
      "command": "npx",
      "args": ["-y", "video-toolkit-mcp"]
    }
  }
}
```