Enables AI agents to generate MIDI clips from natural language descriptions and export them for import into digital audio workstations. Wraps Scribbletune to provide music composition tools for creating riffs, chords, and arpeggios with scale-aware progressions, rhythmic patterns, and genre-specific parameters.
Work in progress. The server is functional but actively evolving. Feedback, ideas, bug reports, and contributions are warmly welcome — please open an issue or a PR.
An MCP (Model Context Protocol) server that brings MIDI generation into AI agent workflows. It wraps Scribbletune — a wonderful open-source library by Walmik Deshpande — and exposes it as a set of tools and resources that an LLM can call to compose and export MIDI clips.
Modern AI assistants can talk about music — genres, moods, harmony, rhythm — but they can't produce it in a form you can load into your DAW. This project bridges that gap.
The intended workflow is an AI agent acting as a creative co-producer:
- A `.mid` file straight into Ableton, Logic, FL Studio, or any DAW — no coding required on your end. The agent handles the translation from musical intent to MIDI.

Two tools do the heavy lifting:

- `generate-clip` tool — generate a MIDI clip from a musical description (riff, chord progression, or arpeggio)
- `get-progression` tool — resolve scale degrees to chord names before generating a clip

```
Your AI Agent (Claude, GPT, etc.)
        │
        │ MCP (Streamable HTTP, stateless)
        ▼
scribbletune-mcp-server :3000
        │
        │ POST /clips
        ▼
scribbletune-midi-store :3001
        │
        ▼
Redis / Valkey
        ↓
User downloads .mid → imports into DAW
```
```sh
git clone https://github.com/claboran/scribbletune-mcp-ws.git
cd scribbletune-mcp-ws
npm install
npm run valkey:up   # start Valkey in Docker
npm run dev         # start both apps in parallel
```
Once running:

- MCP endpoint: `http://localhost:3000/mcp`
- MIDI store: `http://localhost:3001`
- Swagger UI: `http://localhost:3001/api`

To run apps individually:
```sh
npm run mcp:dev     # MCP server only
npm run store:dev   # MIDI store only
```
With the dev server running, open a second terminal:
```sh
npm run mcp:inspect
```
This launches the @modelcontextprotocol/inspector UI in your browser. Follow these steps to connect:
1. Select the **Streamable HTTP** transport
2. Enter the server URL `http://localhost:3000/mcp`
3. Connect and call `generate-clip` or `get-progression` interactively
```sh
npm run mcp:build
```
The real value of the MCP server is the back-and-forth between you and the agent. Here's what a real session looks like, including the notes override for precise melodic control.
You: I want a Deep Tech top bassline, 2 bars, 1/16 notes, D#2, a bit of movement in the second bar going up a perfect fifth.
Agent:
Perfect fifth from D#2 is A#2 (+7 semitones). I'll keep bar 1 sparse and anchored, then climb in bar 2. Using phrygian feel — the minor 2nd (E2/F2) adds that hypnotic tension typical of deep tech.
*(calls `generate-clip`)*
```json
{
  "command": "riff",
  "notes": "D#2 D#2 F2 D#2 F2 D#2 D#2 F2 G2 G#2 A#2 A#2 A#2 A#2 G#2 A#2",
  "pattern": "x--x-x--x---x--xx-x-xx-x-xx-x-xx",
  "subdiv": "16n",
  "bpm": 131,
  "amp": 90
}
```
Download your clip: `http://localhost:3001/clips/8c7d8d4e-…`

A rendered example of this exact clip is available at `midi/clip.mid`.
You: Nice. The second bar feels a bit crowded — space it out and make the climb slower.
Agent:
I'll thin out bar 2 — fewer hits, longer gaps between the ascending notes, so the climb breathes more.
*(calls `generate-clip` again with revised notes and pattern)*
You: Can you add a sub bass underneath? Same key, just D#1, whole notes.
Agent:
That's a separate clip — one whole note per bar, sitting underneath.
*(calls `generate-clip` with `root=D#1`, `mode=phrygian`, `pattern=x_`, `subdiv=1m`)*

Download your sub: `http://localhost:3001/clips/…`
Import both .mid files into separate DAW tracks, layer them — done.
Note on the `notes` override: whenever you describe a melody with specific pitches ("go up a fifth", "land on the major 7th"), the agent uses the `notes` parameter directly rather than deriving notes from a scale. This gives it precise per-step melodic control that scale-based generation can't provide.
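The interval arithmetic the agent does here ("perfect fifth from D#2 is A#2") is plain semitone math over MIDI note numbers. A minimal sketch — the helper names are illustrative, not part of this project:

```typescript
// Map a note name like "D#2" to its MIDI number and back, then
// transpose by an interval in semitones (perfect fifth = +7).
const SEMITONES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];

function noteToMidi(note: string): number {
  const m = note.match(/^([A-G]#?)(-?\d+)$/);
  if (!m) throw new Error(`bad note: ${note}`);
  // MIDI convention: C-1 = 0, so C4 = 60
  return (parseInt(m[2], 10) + 1) * 12 + SEMITONES.indexOf(m[1]);
}

function midiToNote(midi: number): string {
  return `${SEMITONES[midi % 12]}${Math.floor(midi / 12) - 1}`;
}

function transpose(note: string, semitones: number): string {
  return midiToNote(noteToMidi(note) + semitones);
}

console.log(transpose('D#2', 7)); // the perfect fifth from the session above
```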
### generate-clip

Generates a MIDI clip and returns a download key and URL.
| Parameter | Type | Description |
|---|---|---|
| `command` | `riff` \| `chord` \| `arp` | Type of clip: melodic line, chord block, or arpeggio |
| `root` | string | Root note with octave, e.g. `"G#3"` |
| `mode` | string | Scale name, e.g. `"minor"`, `"dorian"`, `"phrygian"` |
| `pattern` | string | Rhythm pattern: `x` = hit, `-` = rest, `_` = sustain, `R` = random |
| `subdiv` | string | Step duration: `"16n"`, `"8n"`, `"4n"`, `"1m"`, … |
| `progression` | string | Roman numeral scale degrees, e.g. `"I IV V ii"` or `"i VI III VII"` — required for `chord` / `arp`. Use `get-progression` to discover degrees. |
| `bpm` | number | Tempo in BPM |
| `sizzle` | string | Velocity envelope: `sin`, `cos`, `rampUp`, `rampDown` |
| `amp` | number | Max velocity 0–127 |
| `accent` | string | Accent pattern, e.g. `"x--x"` |
| `arpCount` | number | Notes per chord for the `arp` command (default 4) |
| `arpOrder` | string | Arp note order, e.g. `"0123"` ascending, `"3210"` descending |
Returns `{ key, downloadUrl, ttlSeconds, meta }`.
Pattern vs chord count: each `x` in the pattern plays the next chord in sequence. With 4 chords and `pattern: "x---"` only the first chord sounds. Use `"xxxx"` (or a pattern with as many `x` steps as chords) to hit every chord.
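A quick way to sanity-check a pattern against a progression before calling the tool — a sketch, not part of the server (`countHits` is a hypothetical helper):

```typescript
// Count how many chords a pattern will actually trigger: each 'x' plays
// the next chord in the progression; '-' rests and '_' sustains.
function countHits(pattern: string): number {
  return [...pattern].filter((step) => step === 'x').length;
}

const progression = ['I', 'IV', 'V', 'ii'];

// "x---" fires only one chord out of four...
console.log(countHits('x---')); // 1
// ...while "xxxx" hits every chord in the progression.
console.log(countHits('xxxx') === progression.length); // true
```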
### get-progression

Resolves scale degrees to chord names for human reference. Call this before `generate-clip` when working with `chord` or `arp` commands — then pass the returned degrees (not the chord names) as the `progression` parameter. Not needed for basslines or melodic riffs.
| Parameter | Type | Description |
|---|---|---|
| `root` | string | Root note with octave |
| `mode` | enum | Scale/mode name |
| `degrees` | string | Space-separated degrees, e.g. `"I IV V ii"` — random if omitted |
| `count` | number | Number of chords when generating randomly (2–8) |
Returns `{ degrees, chordNames, hint }`.
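To illustrate what the resolution does, here is a hand-rolled sketch for the major scale in C only — this is not Scribbletune's implementation, and real output covers every mode and key:

```typescript
// Resolve major-scale Roman numerals to chord names in the key of C.
// Uppercase numerals become major triads ("M"), lowercase become minor
// ("m") — a simplified model of what get-progression returns.
const MAJOR_SCALE = ['C', 'D', 'E', 'F', 'G', 'A', 'B'];

const DEGREE_INDEX: Record<string, number> = {
  i: 0, ii: 1, iii: 2, iv: 3, v: 4, vi: 5, vii: 6,
};

function resolveDegrees(degrees: string): string[] {
  return degrees.split(' ').map((deg) => {
    const root = MAJOR_SCALE[DEGREE_INDEX[deg.toLowerCase()]];
    return deg === deg.toUpperCase() ? `${root}M` : `${root}m`;
  });
}

console.log(resolveDegrees('I IV V ii')); // [ 'CM', 'FM', 'GM', 'Dm' ]
```

The chord names are for you to read; the degrees string itself is what flows into `generate-clip`.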
The agent reads these before making tool calls to avoid hallucinating invalid parameter values.
| URI | Description |
|---|---|
| `scribbletune://docs/overview` | What Scribbletune is, the clip model, riff/chord/arp commands |
| `scribbletune://docs/clip` | Full `clip()` parameter reference |
| `scribbletune://docs/notes-and-patterns` | Note naming, octaves, pattern syntax |
| `scribbletune://docs/scales` | All 80+ valid scale names |
| `scribbletune://docs/chords` | Chord types and notation |
| `scribbletune://docs/progression` | Progression API and scale degree tables by mode |
| `scribbletune://docs/genre-scale-guide` | Genre → scale, BPM, and pattern recommendations |
The scribbletune-midi-store exposes a full OpenAPI 3 spec via its Swagger UI (served at /api). The scribbletune-mcp-server consumes this spec through a generated TypeScript client rather than a hand-written HTTP wrapper.
The MIDI store serves its spec as JSON and YAML:
- `GET http://localhost:3001/api-json`
- `GET http://localhost:3001/api-yaml`

The spec is committed to `apps/scribbletune-mcp-server/open-api/scribbletune-open-api.yml`.
- The OpenAPI Generator CLI produces a `typescript-fetch` client into `apps/scribbletune-mcp-server/src/midi-store-client/`
- The `MidiStoreClient` in the MCP server imports the generated `ClipsApi` class instead of making raw axios / form-data calls
The generator expects its jar at `open-api-generator-cli/openapi-generator-cli.jar` — download it from the OpenAPI Generator releases:

```sh
mkdir -p open-api-generator-cli
curl -L https://repo1.maven.org/maven2/org/openapitools/openapi-generator-cli/7.19.0/openapi-generator-cli-7.19.0.jar \
  -o open-api-generator-cli/openapi-generator-cli.jar
```
With the MIDI store running (npm run store:dev), export the current spec, then regenerate:
```sh
# 1. Export the live spec
curl http://localhost:3001/api-yaml -o apps/scribbletune-mcp-server/open-api/scribbletune-open-api.yml

# 2. Regenerate the TypeScript fetch client
npm run store:generate:client
```
The generated files land in apps/scribbletune-mcp-server/src/midi-store-client/ and are committed alongside the spec. Re-run these two steps whenever the MIDI store API changes.
```
apps/scribbletune-mcp-server/
├── open-api/
│   └── scribbletune-open-api.yml    # committed spec snapshot
└── src/
    └── midi-store-client/           # generated — do not edit by hand
        ├── apis/
        │   └── ClipsApi.ts
        ├── models/
        │   └── SaveClipResponseDto.ts
        └── ...
```
This is an Nx monorepo.
```
apps/
  scribbletune-mcp-server/    # MCP server — tools, resources, Scribbletune integration
  scribbletune-midi-store/    # MIDI store REST service — Valkey-backed, Swagger UI at /api
docker-compose.yml            # Valkey 8 for local development
```
The test suite is split by concern: unit-style tests for the MIDI generation logic, and container-backed integration tests for the storage layer.
```sh
npm run mcp:test     # ScribbletunService — MIDI generation (Jest, no Docker needed)
npm run store:test   # ClipsService + ClipsController — Valkey integration (requires Docker)
```
### Unit tests (`mcp:test`)

Tests live in `apps/scribbletune-mcp-server/src/scribbletune/scribbletune.service.spec.ts`. `ScribbletunService` is instantiated directly — no NestJS bootstrap needed — and the generated MIDI buffers are parsed with `midi-file` to make structural assertions on the binary output.
Musical scenarios covered:
| Scenario | What is verified |
|---|---|
| README Deep Tech bassline (`notes` override, `16n`, `sizzle: sin`) | Pitches match the note string, correct BPM, velocities within `[1, amp]` range, hit count equals `x` steps in pattern |
| G minor riff (scale-based riff) | All pitch classes belong to G natural minor, BPM encoding, hit count |
| C major chord progression `I IV V I` | Multi-voice noteOn events (> chord count), pitch class C present, BPM |
| Descending arpeggio `i v` in A minor | Buffer parseable, noteOn events produced, BPM |
| Sub bass D#1 whole note | MIDI note number 27, sub-bass register confirmed (< 30) |
| BPM → µs/beat encoding at 60 / 120 / 130 / 174 BPM | `microsecondsPerBeat` within ±1 µs of 60 000 000 / bpm |
| Error paths | Throws when `root` + `mode` missing, when `progression` absent for `chord` or `arp` |
Why these tests matter: Writing the tests surfaced three real bugs in the original documentation and tool schema:
`getChordsByProgression` only accepts Roman numeral degrees (`"I IV V ii"`), not raw chord names (`"CM FM GM"`); the pattern's `x`-count determines how many chords are hit, not the length of the progression array; and the `get-progression` tool returns chord names for human reference — the degrees are what flows into `generate-clip`. All three are now corrected in the schema, MCP resources, and this README.
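The BPM encoding checked in the table above is simple arithmetic — MIDI stores tempo as microseconds per quarter note. A sketch of the conversion (not the project's code):

```typescript
// MIDI setTempo meta events carry microseconds per quarter note:
// 60 000 000 µs per minute divided by beats per minute.
function bpmToMicrosecondsPerBeat(bpm: number): number {
  return Math.round(60_000_000 / bpm);
}

console.log(bpmToMicrosecondsPerBeat(120)); // 500000
console.log(bpmToMicrosecondsPerBeat(131)); // 458015
```

The tests assert the encoded value is within ±1 µs of the exact quotient, which tolerates the rounding shown here.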
### Integration tests (`store:test`)

Tests live in `apps/scribbletune-midi-store/src/clips/clips.integration.spec.ts`. A real Valkey instance is started via Testcontainers (`valkey/valkey:8-alpine`) before the suite runs and is torn down afterwards. Docker must be available on the host.
The suite is split into two describe blocks that share the same container:
**ClipsService** (6 tests) — storage logic: `ioredis` is injected directly into the service, bypassing the NestJS module system for speed.
| Test | What is verified |
|---|---|
| `save` | Returns a v4 UUID id and a `clips:<id>` key |
| `fetchById` after `save` | Retrieved bytes are byte-for-byte identical to the original |
| `fetchById` unknown id | Throws `NotFoundException` |
| `deleteById` then `fetchById` | Throws `NotFoundException` after deletion |
| Custom TTL | Key TTL in Valkey is within 2 s of the requested value |
| Two independent clips | Both are retrievable without cross-contamination |
**ClipsController HTTP** (5 tests) — full HTTP stack: a NestJS testing module with the real `ClipsModule`, `REDIS_CLIENT` overridden with the shared test connection, and `supertest` for HTTP assertions.
| Test | What is verified |
|---|---|
| `POST /clips` | 201 response, key matches `clips:*`, `downloadUrl` contains `/clips/`, `ttlSeconds` > 0 |
| `GET /clips/:id` | 200, `Content-Type: audio/midi`, body bytes match the uploaded buffer |
| `GET /clips/:unknown` | 404 |
| `DELETE /clips/:id` → `GET` | 204 on delete, then 404 on subsequent fetch |
| `Content-Disposition` | `attachment; filename="clip.mid"` present on download |
Each test runs against a clean store — `redis.flushdb()` is called in `afterEach`.
- `scribbletune-midi-store` — Valkey-backed MIDI storage with HTTP download endpoint and Swagger UI
- `scribbletune-mcp-server` (replace handcrafted `MidiStoreClient`)
- `amp` values (< 20) for ghost notes and subtle velocity layers, with an explicit `ghostNotes: true` flag to distinguish musical intent from accidental misconfiguration

This project would not exist without the work of others.
Scribbletune by Walmik Deshpande is the MIDI generation engine at the heart of this server. Scribbletune is a beautifully designed library that makes programmatic music composition expressive and accessible. Please visit the project, star the repo, and consider contributing.
@rekog/mcp-nest by the rekog-labs team provides first-class NestJS integration for the Model Context Protocol and made building this server a pleasure.
Model Context Protocol by Anthropic is the open standard that makes tool-augmented AI agents possible.
This project is in active early development and there is plenty of room to grow — in tooling, musical concepts, documentation quality, and agent integration examples.
MIT — © 2026 the scribbletune-mcp-server contributors
Add this to `claude_desktop_config.json` and restart Claude Desktop:
```json
{
  "mcpServers": {
    "scribbletune-mcp-server": {
      "command": "npx",
      "args": []
    }
  }
}
```