MCP server for local audio transcription using Whisper
A lightweight MCP (Model Context Protocol) server for local audio transcription using whisper.cpp. There are several Whisper MCP implementations out there. This one is minimal and pairs with apple-voice-memo-mcp for a complete voice memo workflow.
brew install whisper-cpp
brew install ffmpeg
npm install -g whisper-mcp
Or run directly:
npx whisper-mcp
Add to your Claude Desktop config file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
{
"mcpServers": {
"whisper-mcp": {
"command": "npx",
"args": ["-y", "whisper-mcp"]
}
}
}
After editing, restart Claude Desktop.
For Claude Code, add to your project's .mcp.json file:
{
"mcpServers": {
"whisper-mcp": {
"command": "npx",
"args": ["-y", "whisper-mcp"]
}
}
}
Or for user-wide configuration, add to ~/.claude/settings.json:
{
"mcpServers": {
"whisper-mcp": {
"command": "npx",
"args": ["-y", "whisper-mcp"]
}
}
}
Tip: Use /mcp in Claude Code to verify the server is connected.
If running from source instead of npm:
{
"mcpServers": {
"whisper-mcp": {
"command": "node",
"args": ["/path/to/whisper-mcp/dist/index.js"]
}
}
}
For a complete voice memo workflow, use alongside apple-voice-memo-mcp:
{
"mcpServers": {
"apple-voice-memo-mcp": {
"command": "npx",
"args": ["-y", "apple-voice-memo-mcp"]
},
"whisper-mcp": {
"command": "npx",
"args": ["-y", "whisper-mcp"]
}
}
}
transcribe_audio
Transcribe an audio file using Whisper.
Parameters:
file_path (required): Absolute path to the audio file
model (optional): Model to use (tiny.en, base.en, small.en, medium.en, large). Default: base.en
language (optional): Language code. Default: en
output_format (optional): text, timestamps, or json. Default: text
Example:
{
"file_path": "/path/to/audio.m4a",
"model": "medium.en",
"output_format": "timestamps"
}
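To make the parameter mapping concrete, here is a rough sketch of how a server like this might turn those tool arguments into a whisper.cpp invocation. The binary name and flags (`whisper-cli`, `-m`, `-f`, `-l`, `-oj`, `-nt`) follow whisper.cpp's command-line tool, but treat the exact mapping as an assumption rather than this server's actual implementation:

```python
import os

def build_whisper_command(file_path, model="base.en", language="en",
                          output_format="text"):
    # Model files live under ~/.whisper/ as ggml-<model>.bin
    # (naming assumed from the list_whisper_models example output).
    model_path = os.path.expanduser(f"~/.whisper/ggml-{model}.bin")
    cmd = ["whisper-cli", "-m", model_path, "-f", file_path, "-l", language]
    if output_format == "json":
        cmd.append("-oj")   # ask whisper.cpp for JSON output
    elif output_format == "text":
        cmd.append("-nt")   # plain text: suppress timestamps
    # "timestamps" falls through: whisper.cpp prints them by default
    return cmd

print(build_whisper_command("/path/to/audio.m4a", model="medium.en",
                            output_format="timestamps"))
```

The default format stays cheap (plain text), while `timestamps` simply keeps whisper.cpp's default segment output.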
list_whisper_models
List available Whisper models and their download status.
Returns:
{
"models": [
{
"name": "base.en",
"size": "142 MB",
"downloaded": true,
"path": "/Users/you/.whisper/ggml-base.en.bin"
}
]
}
download_whisper_model
Download a Whisper model for local use.
Parameters:
model (required): Model to download (tiny.en, base.en, small.en, medium.en, large)
| Model | Size | Speed | Quality |
|---|---|---|---|
| tiny.en | 75 MB | Fastest | Basic |
| base.en | 142 MB | Fast | Good |
| small.en | 466 MB | Medium | Better |
| medium.en | 1.5 GB | Slow | Great |
| large | 2.9 GB | Slowest | Best |
Models are stored in ~/.whisper/.
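One way to read the table: quality and size rise together, so a simple rule is to pick the highest-quality model that fits your disk budget. A hypothetical helper using the (approximate) sizes from the table:

```python
# Models ordered worst-to-best quality, sizes in MB from the table above.
SIZES_MB = [("tiny.en", 75), ("base.en", 142), ("small.en", 466),
            ("medium.en", 1500), ("large", 2900)]

def best_model_within(budget_mb):
    """Return the best-quality model whose download fits the budget."""
    fitting = [name for name, mb in SIZES_MB if mb <= budget_mb]
    return fitting[-1] if fitting else None

print(best_model_within(500))  # → small.en
```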
1. list_voice_memos
2. get_audio with memo ID
3. transcribe_audio with the file path

# Clone and install
git clone https://github.com/jwulff/whisper-mcp.git
cd whisper-mcp
npm install
# Build
npm run build
# Test with MCP inspector
npm run inspector
MIT