MCP server for sharing source-backed engineering memory across AI coding clients like Cursor and VS Code.
Local-first, repo-scoped memory for engineering agents.
repo-memory-mcp is an early MCP prototype for sharing source-backed engineering memory across AI coding clients like Cursor, VS Code, Claude Code, Gemini CLI, Codex CLI, and other MCP-capable tools.
Engineering memory should follow the repo across AI clients, not be trapped inside one application.
This is not a generic personal-memory bot. It is a project memory layer for software work: decisions, gotchas, commands, artifacts, task checkpoints, and stale warnings tied back to repo evidence.
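As a sketch, a source-backed memory record of this kind might look like the following. The field and status names are illustrative assumptions, not the project's actual schema:

```typescript
// Illustrative sketch only: field names and status values are assumptions,
// not repo-memory-mcp's real storage schema.
type MemoryKind = "decision" | "gotcha" | "command" | "artifact" | "checkpoint";

interface MemoryRecord {
  id: string;
  kind: MemoryKind;
  summary: string;
  // Evidence ties the memory back to the repo so it can be revalidated later.
  evidence: {
    filePath: string; // path relative to the repo root
    commit: string;   // commit the memory was recorded against
  };
  status: "proposed" | "verified" | "needs-revalidation" | "stale" | "historical";
}

const example: MemoryRecord = {
  id: "mem-001",
  kind: "gotcha",
  summary: "npm test requires npm run build first; dist/ is not checked in.",
  evidence: { filePath: "package.json", commit: "abc1234" },
  status: "proposed",
};
```

The key property is that every memory points at concrete repo evidence rather than free-floating chat history.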
AI coding tools are useful, but each one tends to start from a blank slate. Repo docs get stale, prior debugging context disappears, and decisions are scattered across chats, commits, and terminal output.
Repo memory is designed to answer questions like:

- What broke the last time someone ran `git pull`?

Clone, install, build, and test:
```shell
git clone https://github.com/pinchworth-ops/repo-memory-mcp.git
cd repo-memory-mcp
npm install
npm run build
npm test
npm run demo
```
Run the MCP server directly:
```shell
node /absolute/path/to/repo-memory-mcp/dist/server.js
```
Run the CLI directly:
```shell
node /absolute/path/to/repo-memory-mcp/dist/cli.js --help
```
Optional local link for nicer CLI usage:
```shell
npm link
repo-memory --help
```
See Installation for tarball/npm install and MCP client config.
Initialize a test repo:
```shell
cd /path/to/test-repo
repo-memory init --update-gitignore
repo-memory context --task "understand this repo"
```
Recommended MCP server config shape:
```json
{
  "mcpServers": {
    "repo-memory": {
      "command": "node",
      "args": ["/absolute/path/to/repo-memory-mcp/dist/server.js"],
      "env": {
        "REPO_MEMORY_ALLOWED_ROOT": "/absolute/path/to/test-repo"
      }
    }
  }
}
```
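The `REPO_MEMORY_ALLOWED_ROOT` variable scopes the server to a single repo. A minimal sketch of the kind of path guard such a variable implies, using Node's `path` module (an illustration of the idea, not the project's actual implementation):

```typescript
import * as path from "node:path";

// Sketch: returns true only if candidate resolves to a path inside
// allowedRoot. This illustrates the intent of REPO_MEMORY_ALLOWED_ROOT;
// the server's real check may differ.
function isInsideAllowedRoot(allowedRoot: string, candidate: string): boolean {
  const root = path.resolve(allowedRoot);
  const target = path.resolve(root, candidate);
  const rel = path.relative(root, target);
  // Escapes the root if the relative path climbs upward or jumps drives.
  return !rel.startsWith("..") && !path.isAbsolute(rel);
}
```

For example, `isInsideAllowedRoot("/abs/test-repo", "src/index.ts")` is true, while `isInsideAllowedRoot("/abs/test-repo", "../etc/passwd")` is false.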
See MCP and CLI reference for per-client setup.
Typical agent loop:
load_project_context
→ search_project_memory / get_memory if needed
→ store_artifact for important raw evidence
→ checkpoint_task after meaningful progress on multi-step work
→ run tests / validation
→ finish_task with summary and optional proposedMemories
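The loop above can be sketched with stubbed tool calls. Only the tool names come from this project; the stub arguments and shapes are placeholder assumptions:

```typescript
// Placeholder stubs standing in for MCP tool calls; a real agent would
// invoke these through its MCP client. Only the tool names are real.
const calls: string[] = [];
const tool = (name: string) => (..._args: unknown[]) => { calls.push(name); };

const load_project_context = tool("load_project_context");
const search_project_memory = tool("search_project_memory");
const store_artifact = tool("store_artifact");
const checkpoint_task = tool("checkpoint_task");
const finish_task = tool("finish_task");

// Typical multi-step task:
load_project_context();
search_project_memory("flaky test");           // only if needed
store_artifact("test-output.log");             // important raw evidence
checkpoint_task("reproduced the failure");     // after meaningful progress
// ... run tests / validation ...
finish_task("fixed flaky test", { proposedMemories: [] });
```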
Read the full workflow in Agent workflow.
| Doc | What it covers |
|---|---|
| Installation | Local clone, npm/tarball install, MCP config, smoke test |
| Design notes | Design principles, features, roadmap, MVP snapshot |
| MCP and CLI reference | CLI commands, MCP tools, client configuration, cross-client test plan |
| Agent workflow | Context loading, checkpoints, finish_task, proposed memories, statuses |
| Setup and troubleshooting | Repo initialization, hooks, env vars, path mismatch, SQLite busy |
| Dogfood and evaluation | Alternating Claude/Cursor suite, checkpoint recovery dogfood, checklists |
```shell
npm install
npm run build
npm test
npm run demo
```
Additional checks:
```shell
npm run stress
scripts/run-memory-dogfood-suite.sh /path/to/memory-test
scripts/run-checkpoint-dogfood.sh /path/to/memory-checkpoint-test
```
The smoke test creates a fake git repo and verifies storage/search, artifact paging, command capture, context packs, git revalidation, listing/deletion, audit trail, deduplication, and lifecycle operations.
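Git revalidation can be sketched as a content comparison: a memory is flagged when its evidence file no longer matches what it was recorded against. This is illustrative only; the real check may compare commits rather than content hashes:

```typescript
import { createHash } from "node:crypto";

// Sketch of a staleness check: hash the evidence file's content when the
// memory is stored, and flag the memory when the content changes.
const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

function needsRevalidation(recordedHash: string, currentContent: string): boolean {
  return sha256(currentContent) !== recordedHash;
}
```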
Start the local dashboard to review memories, audit history, and useful project stats:
```shell
repo-memory dashboard
```
The dashboard includes a Needs attention review queue for proposed, needs-revalidation, and stale memories. It supports the same audited review actions as the CLI: accept/verify, reject, mark stale/historical, or delete.
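As an illustration of those review actions, each one moves a memory to a new status. The mapping below is a sketch; status names beyond proposed, needs-revalidation, and stale are assumptions:

```typescript
type ReviewAction = "accept" | "reject" | "mark-stale" | "mark-historical" | "delete";

// Sketch of how a review action might map to a memory's next status.
// The exact status vocabulary is an assumption, not the project's schema.
function nextStatus(action: ReviewAction): string {
  switch (action) {
    case "accept": return "verified";
    case "reject": return "rejected";
    case "mark-stale": return "stale";
    case "mark-historical": return "historical";
    case "delete": return "deleted";
  }
}
```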
Set `REPO_MEMORY_ALLOWED_ROOT` during early dogfooding.

Prototype. Useful enough to test, not production-ready.
The current implementation is intentionally boring:

- `better-sqlite3` for local, repo-scoped storage

The deterministic storage/retrieval/staleness layer comes first. LLM-assisted extraction or summarization can come later.
MIT. See LICENSE.
Run in your terminal:

```shell
claude mcp add repo-memory-mcp -- npx
```