A self-hosted MCP server that provides AI assistants with a shared, persistent SQLite-backed memory for storing and retrieving project context, decisions, and discoveries. It enables cross-session continuity and team-wide knowledge sharing to keep AI coding tools aligned and informed.
One shared brain for every AI on your team — persistent across sessions, searchable, always in sync.
A lightweight, self-hosted MCP server that gives your AI coding assistants long-term memory. Built for teams where multiple people use AI-powered editors (Claude Code, Cursor, Windsurf) and need their AIs to remember past decisions, share discoveries, and stay aligned — without re-explaining everything every session.
Mono Memory gives your team's AI assistants a shared, persistent memory backed by a single SQLite file. Any AI can save and retrieve observations, project context, and decisions — across sessions, across team members.
Why "Mono"? — Like a monorepo manages all code in one place, Mono Memory manages all your team's AI knowledge in one server.
Session 1 (Alice — morning)
├─ AI discovers a tricky bug in auth logic
├─ → memory_save: "JWT refresh token race condition fix — added mutex lock"
└─ Session ends. AI forgets everything.
Session 2 (Bob — afternoon)
├─ AI starts working on auth-related feature
├─ → memory_search: "auth"
├─ ← Gets Alice's bug fix context instantly
└─ Avoids the same pitfall, builds on her solution.
Session 3 (Alice — next day)
├─ → memory_timeline: project="my-app", since="2025-03-01"
└─ ← Sees everything the team's AIs learned this week.
Every observation is stored in a shared SQLite database. Any team member's AI can save and query it through 6 MCP tools.
There are two roles: Host (runs the server) and Client (connects via plugin).
The host is the person (or machine) that runs the Mono Memory server for the team.
git clone https://github.com/potato-castle/mono-memory-mcp.git
cd mono-memory-mcp
uv run python server.py
The server starts on http://0.0.0.0:8765/mcp (streamable-http). Share this URL with your team — replace 0.0.0.0 with your machine's IP address (e.g. http://192.168.0.10:8765/mcp).
Custom configuration:
# Change port
MONO_MEMORY_PORT=9000 python server.py
# Change database directory
MONO_MEMORY_DB_DIR=/path/to/data python server.py
# Run in background
nohup python server.py > /tmp/mono-memory.log 2>&1 &
Clients do not need to clone the repo. Just run three commands in Claude Code:
1. Register the marketplace:
/plugin marketplace add potato-castle/mono-memory-mcp
2. Install the plugin:
/plugin install mono-memory-mcp@mono-memory-mcp
When prompted for scope, select "Install for you, in this repo only (local scope)". This keeps the plugin active only in the current project.
3. Run the setup skill:
/mono-memory-mcp:setup
This will prompt you for the server URL (e.g. http://192.168.0.10:8765/mcp). The project name is automatically detected from your current directory name.
The setup will:
- Create .mcp.json in your project root (MCP server connection)
- Update CLAUDE.md (so your AI automatically saves discoveries)

Restart Claude Code to activate.
memory_save — Save an observation

Store a discovery, decision, debugging insight, or any knowledge.

| Parameter | Required | Description |
|---|---|---|
| author | Yes | Author name (e.g. "alice") |
| project | Yes | Project name (e.g. "my-app") |
| content | Yes | The content to save |
| tags | No | Comma-separated tags (e.g. "bug,fix,api") |
memory_get — Retrieve by ID

| Parameter | Required | Description |
|---|---|---|
| id | Yes | UUID of the observation |
memory_search — Keyword search

Searches both observations and project contexts.

| Parameter | Required | Description |
|---|---|---|
| query | Yes | Search keywords (space = AND) |
| author | No | Filter by author |
| project | No | Filter by project |
| limit | No | Max results (default 20) |
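The "space = AND" semantics can be sketched as a SQL keyword filter. This is illustrative only; the observations table and content column are assumptions, not the server's actual schema:

```python
def build_search_sql(query: str):
    # Each whitespace-separated keyword must match -> AND semantics.
    keywords = query.split()
    where = " AND ".join("content LIKE ?" for _ in keywords)
    params = [f"%{kw}%" for kw in keywords]
    return f"SELECT * FROM observations WHERE {where}", params

sql, params = build_search_sql("redis pool")
# sql:    SELECT * FROM observations WHERE content LIKE ? AND content LIKE ?
# params: ['%redis%', '%pool%']
```

So a query like "redis pool" only returns rows containing both keywords, which is why narrowing a search is as simple as adding more words.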
memory_timeline — Chronological view

| Parameter | Required | Description |
|---|---|---|
| project | No | Filter by project |
| author | No | Filter by author |
| since | No | Start date (ISO 8601, e.g. "2025-01-01") |
| until | No | End date (ISO 8601, e.g. "2025-01-31") |
| limit | No | Max results (default 50) |
memory_init — Initialize/update project context

Store project information by section. The same project+section combination overwrites (upsert).

| Parameter | Required | Description |
|---|---|---|
| project | Yes | Project name |
| section | Yes | Section name (e.g. "overview", "architecture", "api") |
| content | Yes | Section content |
| author | No | Who updated it |
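The upsert behavior can be illustrated with SQLite's ON CONFLICT clause. This is a sketch of the idea, assuming a hypothetical project_context table; the server's real schema may differ:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE project_context ("
    "project TEXT, section TEXT, content TEXT, "
    "UNIQUE(project, section))"
)

def memory_init(project: str, section: str, content: str) -> None:
    # Same project+section overwrites the previous content (upsert).
    conn.execute(
        "INSERT INTO project_context (project, section, content) "
        "VALUES (?, ?, ?) "
        "ON CONFLICT(project, section) DO UPDATE SET content = excluded.content",
        (project, section, content),
    )

memory_init("payments", "architecture", "draft")
memory_init("payments", "architecture", "final")  # overwrites, no duplicate row
print(conn.execute("SELECT content FROM project_context").fetchall())  # → [('final',)]
```

The practical consequence: calling memory_init twice with the same project and section never produces duplicate sections, so it is safe to re-run setup or refresh a section at any time.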
memory_context — Retrieve project context

| Parameter | Required | Description |
|---|---|---|
| project | Yes | Project name |
| section | No | Section name (omit to list all sections) |
User: "Save that the login timeout was caused by Redis connection pool exhaustion."
Tool: memory_save
project: "auth-service"
content: "Login timeout root cause: Redis connection pool exhaustion under load. Fix: increased pool size from 10 to 50 and added retry logic in auth/session.py"
tags: "bug,fix,redis,performance"
Response: {"status": "saved", "id": "a1b2c3d4-...", "author": "alice", "created_at": "2025-06-15T10:30:00+09:00"}
User: "What do we know about Redis in auth-service?"
Tool: memory_search
query: "redis"
project: "auth-service"
Response: {"count": 2, "results": [
{"author": "alice", "content": "Login timeout root cause: Redis connection pool...", "source": "observation"},
{"author": "bob", "content": "Migrated Redis from 6.x to 7.x for ACL support...", "source": "observation"}
]}
User: "Set up the architecture overview for the payments project."
Tool: memory_init
project: "payments"
section: "architecture"
content: "Microservice arch. Gateway (Express) -> Payment Service (FastAPI) -> Stripe API. PostgreSQL for transactions, Redis for idempotency keys."
author: "carol"
Response: {"status": "updated", "project": "payments", "section": "architecture", "updated_at": "2025-06-15T14:00:00+09:00"}
/api-docs — Generate API Documentation

Generates a Swagger-style HTML API documentation page from memories stored in the mono-memory server.
/api-docs
The skill automatically gathers API-related memories and generates an api-docs.html page from them.

Prerequisite: Save some API observations first so the skill has data to work with:
memory_save(project: "my-app", content: "GET /api/users - returns paginated user list with {page} and {limit} query params", tags: "api,endpoint")
memory_save(project: "my-app", content: "POST /api/users - creates user. Request: {name, email, role}. Response: {id, name, email, created_at}", tags: "api,endpoint")
memory_init(project: "my-app", section: "api", content: "REST API base URL: /api/v1. Auth: Bearer token required.")
| Variable | Default | Description |
|---|---|---|
| MONO_MEMORY_HOST | 0.0.0.0 | Server bind address |
| MONO_MEMORY_PORT | 8765 | Server port |
| MONO_MEMORY_DB_DIR | ./data | Directory for the SQLite database |
| DEFAULT_AUTHOR | (empty) | Default author name for memory_save |
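These variables are presumably read at startup with standard-library defaults, along the lines of this sketch (not the server's actual code):

```python
import os

# Fall back to the documented defaults when a variable is unset.
HOST = os.environ.get("MONO_MEMORY_HOST", "0.0.0.0")
PORT = int(os.environ.get("MONO_MEMORY_PORT", "8765"))
DB_DIR = os.environ.get("MONO_MEMORY_DB_DIR", "./data")
DEFAULT_AUTHOR = os.environ.get("DEFAULT_AUTHOR", "")

print(f"binding {HOST}:{PORT}, database in {DB_DIR}")
```

Because the values are plain environment variables, they work equally well inline (`MONO_MEMORY_PORT=9000 python server.py`), exported in a shell profile, or set in a service manager's unit file.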
cd mono-memory-mcp
uv run python test_server.py
The test script spawns the server with an isolated temporary database and verifies all 6 tools via streamable-http.
Scripts are provided in the scripts/ directory:
./scripts/start.sh # Start the server
./scripts/stop.sh # Stop the server
./scripts/restart.sh # Restart the server
./scripts/logs.sh # Tail server logs in real-time
By default: ./data/memory.db (SQLite, WAL mode)
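Because the store is a single SQLite file, you can inspect it directly with the standard sqlite3 module. The snippet below assumes nothing about the schema; it just lists whatever tables the server has created (and falls back to an empty in-memory database if the file does not exist yet):

```python
import sqlite3
from pathlib import Path

db_path = Path("data") / "memory.db"  # default location
conn = (
    sqlite3.connect(db_path)
    if db_path.exists()
    else sqlite3.connect(":memory:")  # fallback so the snippet runs anywhere
)

# List every table the server has created.
tables = [
    row[0]
    for row in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
]
print(tables)
```

This also means ordinary backup tooling applies: copying the database file (while the server is stopped, or via `sqlite3 data/memory.db ".backup backup.db"`) is enough to snapshot the whole team memory.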
The /mono-memory-mcp:setup skill automatically appends auto-recording rules to your project's CLAUDE.md. This tells your AI assistant to:
For manual setup, see [CLAUDE_MD_TEMPLATE.md](CLAUDE_MD_TEMPLATE.md).
Mono Memory MCP is a fully self-hosted, local server.
Your memory data is entirely under your control.
Add this to claude_desktop_config.json and restart Claude Desktop. Claude Desktop speaks stdio, so a typical setup bridges to the streamable-http server with the mcp-remote package (replace the URL with your host's address):
{
  "mcpServers": {
    "mono-memory-mcp": {
      "command": "npx",
      "args": ["mcp-remote", "http://192.168.0.10:8765/mcp"]
    }
  }
}