Video editing MCP server with 26 tools for trimming, merging, text overlays, audio sync, filters, color grading, audio normalization, picture-in-picture, split-screen, batch processing, format conversion, subtitles, watermarks, and more. 380 tests, CI on Python 3.11+3.12, progress callbacks, works with Claude Code, Cursor, and any MCP client.
Video editing and creation for AI agents.
Two modes: edit existing video with FFmpeg, or create new video from code with Remotion.
Install • Quick Start • Tools • Full Reference • Agent Discovery • Contributing • Changelog
An open-source video editing server built on the Model Context Protocol (MCP). It gives AI agents, developers, and video creators the ability to programmatically edit and create video files.
Three interfaces:
| Interface | Best For | Example |
|---|---|---|
| MCP Server | AI agents (Claude Code, Cursor) | "Trim this video and add a title" |
| Python Client | Scripts, automation, pipelines | editor.trim("v.mp4", start="0:30", duration="15") |
| CLI | Shell scripts, quick ops, humans | mcp-video trim video.mp4 -s 0:30 -d 15 |
Prerequisites: FFmpeg must be installed. For Remotion features, you also need Node.js 18+.
```bash
# macOS
brew install ffmpeg

# Ubuntu/Debian
sudo apt install ffmpeg
```
Install:
```bash
pip install mcp-video

# or run without installing:
uvx mcp-video
```
Verify your setup:
```bash
mcp-video doctor
mcp-video doctor --json
```
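In CI or an agent bootstrap step, the JSON output can gate the rest of the run. A minimal sketch that shells out to the CLI; only the `doctor` command and `--json` flag come from the docs, and the top-level `"ok"` field is an assumption about the payload schema (inspect your own `doctor --json` output to confirm):

```python
import json
import subprocess

def parse_doctor_report(stdout: str) -> bool:
    """True if the doctor payload looks healthy.

    Assumes a top-level boolean "ok" field -- the real schema may
    differ; check your actual `mcp-video doctor --json` output.
    """
    try:
        report = json.loads(stdout)
    except json.JSONDecodeError:
        return False
    return bool(report.get("ok"))

def environment_ready() -> bool:
    # Run the doctor and treat any non-zero exit as unhealthy.
    proc = subprocess.run(
        ["mcp-video", "doctor", "--json"],
        capture_output=True, text=True,
    )
    return proc.returncode == 0 and parse_doctor_report(proc.stdout)
```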
Claude Code:
```bash
claude mcp add mcp-video -- uvx mcp-video
```
Claude Desktop:
```json
{
  "mcpServers": {
    "mcp-video": {
      "command": "uvx",
      "args": ["mcp-video"]
    }
  }
}
```
Cursor:
```json
{
  "mcpServers": {
    "mcp-video": {
      "command": "uvx",
      "args": ["mcp-video"]
    }
  }
}
```
Then just ask your agent: "Trim this video from 0:30 to 1:00, add a title card, and resize for TikTok."
```python
from mcp_video import Client

editor = Client()

info = editor.info("interview.mp4")  # probe metadata first
clip = editor.trim("interview.mp4", start="00:02:15", duration="00:00:30")
video = editor.merge(clips=["intro.mp4", clip.output_path, "outro.mp4"])
video = editor.add_text(video.output_path, text="EPISODE 42", position="top-center", size=48)
result = editor.resize(video.output_path, aspect_ratio="9:16")  # vertical for TikTok/Shorts
```
For autonomous agents, prefer inspection, pipeline chaining, and a release checkpoint:
```python
from mcp_video import Client

client = Client()
print(client.inspect("create_from_images"))  # Real params, aliases, return type

result = client.pipeline(
    [
        {"op": "create_from_images", "images": frames, "fps": 30},
        {"op": "effect_glow", "intensity": 0.2},  # safe capped default
        {"op": "add_audio", "audio_path": "soundtrack.wav", "mix": True},
        {"op": "export", "quality": "high"},
    ],
    output_path="final.mp4",
)

checkpoint = client.release_checkpoint(result.output_path)
print(checkpoint["thumbnail"], checkpoint["storyboard"])
```
Agent contract:
Agent contract:
- Every editing call returns an EditResult with .output_path.
- Client.inspect(name) exposes parameters, aliases, category, and return type.
- Failures raise MCPVideoError with actionable guidance.
- Verify output with assert_quality() or release_checkpoint(), plus human visual/audio inspection.

```bash
mcp-video info video.mp4
mcp-video trim video.mp4 -s 00:02:15 -d 30
mcp-video convert video.mp4 -f webm -q high
mcp-video template tiktok video.mp4 --caption "Check this out!"
```
82 unique MCP tools across 12 categories, plus a search_tools meta-tool for fast discovery. All return structured JSON. See the full tool reference for complete details.
| Category | Count | Highlights |
|---|---|---|
| Core Video | 26 | trim, merge, text, audio, resize, convert, filters, stabilize, chroma key, subtitles, watermark, batch |
| AI-Powered | 10 | transcribe (Whisper), scene detect, stem separation (Demucs), upscale, color grade |
| Remotion | 8 | create project, scaffold, render, studio preview, pipeline |
| Audio Synthesis | 7 | generate waveforms, presets, sequences, effects, spatial audio — pure NumPy |
| Visual Effects | 5 | vignette, chromatic aberration, scanlines, noise, glow |
| Transitions | 3 | glitch, pixelate, morph |
| Layout & Motion | 6 | grid, pip, animated text, counters, progress bars, auto-chapters |
| Analysis | 9 | scene detect, thumbnail, preview, storyboard, quality compare, metadata, waveform, release checkpoint |
| Image Analysis | 3 | color extraction, palette generation, product analysis |
| Meta | 1 | search_tools — keyword search across all tools |
Tool discovery:
```python
from mcp_video import Client

editor = Client()
results = editor.search_tools("subtitle")  # Find subtitle-related tools
```
Create videos programmatically with Remotion — a React framework for video.
1. Create project -> remotion_create_project
2. Scaffold -> remotion_scaffold_template
3. Preview live -> remotion_studio
4. Render -> remotion_render
5. Post-process -> remotion_to_mcpvideo
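The five steps above can be driven from the Python client. A minimal sketch, assuming the client exposes methods mirroring the tool names listed; every parameter here is a guess, so check real signatures with client.inspect() first:

```python
def render_and_postprocess(client, project: str, template: str) -> str:
    """Hypothetical Remotion flow: create -> scaffold -> render -> post-process.

    Method names mirror the MCP tool names above; the parameters are
    assumptions -- verify real signatures with client.inspect(tool_name).
    """
    client.remotion_create_project(project)               # 1. create project
    client.remotion_scaffold_template(project, template)  # 2. scaffold
    # 3. (optional) client.remotion_studio(project) for a live preview
    rendered = client.remotion_render(project)            # 4. render to video
    # 5. hand the rendered file back to the FFmpeg tool chain
    return client.remotion_to_mcpvideo(rendered.output_path)
```

With a real Client this would return a path the FFmpeg tools can edit further; here it only sketches the shape of the chain.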
See Remotion docs and the Python client reference.
```python
from mcp_video import Client

editor = Client()
```
See the full Python client reference for all methods and return types.
```bash
mcp-video [command] [options]
```
See the full CLI reference for all commands and options.
For complex multi-track edits, describe everything in a single JSON object:
```python
editor.edit({
    "width": 1080,
    "height": 1920,
    "tracks": [
        {
            "type": "video",
            "clips": [
                {"source": "intro.mp4", "start": 0, "duration": 5},
                {"source": "main.mp4", "start": 5, "trim_start": 10, "duration": 30},
                {"source": "outro.mp4", "start": 35, "duration": 10},
            ],
            "transitions": [
                {"after_clip": 0, "type": "fade", "duration": 1.0},
            ],
        },
        {
            "type": "audio",
            "clips": [
                {"source": "music.mp3", "start": 0, "volume": 0.7, "fade_in": 2},
            ],
        },
    ],
    "export": {"format": "mp4", "quality": "high"},
})
```
Pre-built templates for common social media formats:
```python
from mcp_video.templates import tiktok_template, youtube_shorts_template

timeline = tiktok_template(video_path="clip.mp4", caption="Check this out!", music_path="bgm.mp3")
result = editor.edit(timeline)
```
Supports: TikTok, YouTube Shorts, Instagram Reels/Posts, YouTube Videos.
Structured, actionable errors with auto-fix suggestions:
```json
{
  "success": false,
  "error": {
    "type": "encoding_error",
    "code": "unsupported_codec",
    "message": "Codec error: vp9 — Auto-convert input from vp9 to H.264/AAC before editing",
    "suggested_action": {
      "auto_fix": true,
      "description": "Auto-convert input from vp9 to H.264/AAC before editing"
    }
  }
}
```
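The suggested_action block is what makes these errors machine-actionable: an agent loop can branch on auto_fix instead of parsing the message. A minimal sketch of consuming this payload; only the JSON shape above comes from the docs, and the actual re-encode/retry step is left out:

```python
def plan_recovery(response: dict) -> str:
    """Decide what to do with an mcp-video error payload.

    Relies only on the documented shape: success, error.type,
    error.code, and error.suggested_action.auto_fix.
    """
    if response.get("success"):
        return "done"
    err = response.get("error", {})
    action = err.get("suggested_action", {})
    if action.get("auto_fix"):
        # e.g. re-encode to H.264/AAC, then retry the original edit
        return "auto-fix: " + action.get("description", "")
    return "escalate: " + err.get("message", "unknown error")

# Payload copied from the example above
payload = {
    "success": False,
    "error": {
        "type": "encoding_error",
        "code": "unsupported_codec",
        "message": "Codec error: vp9 — Auto-convert input from vp9 to H.264/AAC before editing",
        "suggested_action": {
            "auto_fix": True,
            "description": "Auto-convert input from vp9 to H.264/AAC before editing",
        },
    },
}
print(plan_recovery(payload))  # -> auto-fix: Auto-convert input from vp9 to H.264/AAC before editing
```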
ICM-style staged pipelines for common productions — with CONTEXT.md stage contracts, references/ factory config, and runnable workflow.py scripts.
```bash
cd workflows/01-social-media-clip
python workflow.py /path/to/video.mp4
```
| Workflow | Stages | Description |
|---|---|---|
| 01-social-media-clip | 5 | Landscape → TikTok / Short / Reel |
| 02-podcast-clip | 6 | Highlight with chapters + burned captions |
| 03-explainer-video | 7 | Branded explainer from scratch |
See workflows/CONTEXT.md for the routing table.
```
mcp_video/
  client/                # Python Client API (mixins per domain)
  client/meta.py         # Client discovery mixin (search_tools)
  server.py              # MCP server (82 tools + 4 resources + search_tools meta-tool)
  server_tools_*.py      # Tool registration by category
  engine.py              # Core FFmpeg engine
  engine_*.py            # Specialized engines (thumbnail, edit, probe, etc.)
  models.py              # Pydantic models
  errors.py              # Error hierarchy + FFmpeg stderr parser
  ffmpeg_helpers.py      # Shared FFmpeg utilities
  audio_engine.py        # Procedural audio synthesis
  effects_engine.py      # Visual effects + motion graphics
  transitions_engine.py  # Clip transitions
  ai_engine.py           # AI features (Whisper, Demucs, Real-ESRGAN)
  remotion_engine.py     # Remotion CLI wrapper
  image_engine.py        # Image color analysis
  quality_guardrails.py  # Automated quality checks
workflows/               # ICM staged pipelines
  CONTEXT.md             # Layer 1 routing table
  01-social-media-clip/  # Stage contract + runnable script
  02-podcast-clip/       # Stage contract + runnable script
  03-explainer-video/    # Stage contract + runnable script
```
| Video | Audio (extraction) | Subtitles |
|---|---|---|
| MP4, WebM, MOV, GIF | MP3, AAC, WAV, OGG, FLAC | SRT, WebVTT |
```bash
git clone https://github.com/pastorsimon1798/mcp-video.git
cd mcp-video
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
```
Tests are excluded from the PyPI package. To run locally:
```bash
pip install -e ".[dev]"
pytest tests/ -v -m "not slow and not remotion"
```
See docs/TESTING.md for full test categories and CI details.
Apache 2.0 — see LICENSE.
Built on FFmpeg, Remotion, and the Model Context Protocol.
See docs/LEGAL_REVIEW.md for dependency licensing notes.