Enables semantic search and retrieval from your local markdown brain (Remember.md) via MCP tools, running entirely offline with local embeddings.
Local MCP server for the Remember.md second brain. Run via npx, point any MCP client at it, query your markdown brain semantically.
Status: v0.1.0 — first functional release. One tool: `search_brain`. Active development continues.
Exposes your local markdown brain (a folder of .md files organised PARA-style by the Remember.md plugin) as a set of MCP tools any MCP client can call — Claude Code, OpenClaw, Cursor, Codex CLI, Claude.ai web, ChatGPT custom GPTs, anything that speaks the Model Context Protocol.
Tools shipped in v0.1.0:

- `search_brain(query, top_k)` — hybrid retrieval: BM25 + vector + RRF fusion + 1-hop wikilink expansion. Lexical-first: BM25 results land immediately on first run; vector embeddings build in the background and layer in once ready. A fusion sketch follows below.
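For intuition, here is what reciprocal rank fusion looks like in a few lines of TypeScript. This is an illustrative sketch of the standard RRF technique, not the package's actual implementation; the function name and the constant `k = 60` are hypothetical.

```ts
// Reciprocal rank fusion: merge several ranked lists (e.g. BM25 results
// and vector results) by summing 1 / (k + rank) per document per list.
type Ranked = { path: string; rank: number }; // rank is 1-based

function rrfFuse(lists: Ranked[][], k = 60): { path: string; score: number }[] {
  const scores = new Map<string, number>();
  for (const list of lists) {
    for (const { path, rank } of list) {
      // Documents that rank well in several lists accumulate the highest score.
      scores.set(path, (scores.get(path) ?? 0) + 1 / (k + rank));
    }
  }
  return [...scores.entries()]
    .map(([path, score]) => ({ path, score }))
    .sort((a, b) => b.score - a.score);
}
```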
Tools planned for v0.2+:

- `get_file(path)` — read a brain file
- `list_recent(period, kind?)` — recent journal / notes / decisions
- `query_persona()` — current Persona.md content
- `dashboard_snapshot()` — counts + top beliefs + active projects
- `propose_belief(claim, evidence)` — write candidate to Inbox/

Under the hood:

- `node:sqlite` (Node 22.5+ stdlib) + the sqlite-vec extension for vector search + FTS5 for BM25 — no server, no native compilation, no toolchain. A sketch follows this list.
- Xenova/bge-micro-v2 (384d, ~17 MB) runs locally — no cloud calls.
- `.remember/index.db` is rebuildable.
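The FTS5 tier needs nothing beyond the Node standard library. A minimal sketch of a BM25 query with `node:sqlite`, assuming Node 22.5+ (older 22.x releases may need the `--experimental-sqlite` flag); the table schema and sample row are made up for the demo and are not the real index schema:

```ts
import { DatabaseSync } from "node:sqlite";

// In-memory demo database with an FTS5 virtual table.
const db = new DatabaseSync(":memory:");
db.exec("CREATE VIRTUAL TABLE notes USING fts5(path, body)");
db.prepare("INSERT INTO notes (path, body) VALUES (?, ?)").run(
  "Projects/mcp-server.md",
  "local MCP server exposing hybrid retrieval over the brain",
);

// bm25() is FTS5's built-in ranking function; lower scores rank better.
const rows = db
  .prepare(
    "SELECT path, bm25(notes) AS score FROM notes WHERE notes MATCH ? ORDER BY score LIMIT 5",
  )
  .all("retrieval");
console.log(rows);
```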
You don't install it; point your MCP client at it via npx. In Claude Code, the Remember.md plugin configures the MCP layer to launch this server automatically; just run /remember:init.
For any other MCP client, add this to your MCP config:
```json
{
  "mcpServers": {
    "remember": {
      "command": "npx",
      "args": ["-y", "@remember-md/mcp"],
      "env": {
        "REMEMBER_BRAIN_PATH": "/absolute/path/to/your/brain"
      }
    }
  }
}
```
First run downloads the package (~15–30s) and the embedding model (~17 MB, one-time). After that, queries are sub-second.
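To smoke-test the server outside a chat client, you can drive it from a short script with the MCP TypeScript SDK. A minimal sketch, assuming `@modelcontextprotocol/sdk` is installed and the brain path is replaced with yours:

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server the same way an MCP client would: via npx over stdio.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@remember-md/mcp"],
  env: { REMEMBER_BRAIN_PATH: "/absolute/path/to/your/brain" },
});

const client = new Client({ name: "brain-smoke-test", version: "0.1.0" });
await client.connect(transport);

// Call the one tool shipped in v0.1.0.
const result = await client.callTool({
  name: "search_brain",
  arguments: { query: "decisions about the search index", top_k: 5 },
});
console.log(result.content);
await client.close();
```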
| Env var | Default | Purpose |
|---|---|---|
| `REMEMBER_BRAIN_PATH` | `~/remember` | Brain root directory (folder of markdown files) |
| `REMEMBER_INDEX_DIR` | `${brain}/.remember` | Where the SQLite index lives |
| `REMEMBER_EMBEDDING_MODEL` | `Xenova/bge-micro-v2` | Hugging Face model id |
| `REMEMBER_TIER` | `auto` | `auto` / `vec` / `fts5` / `ripgrep` (force a fallback tier) |
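For example, to skip vector search and force the pure-FTS5 tier, extend the `env` block of the config above; the variable name and value come from the table, and the rest of the entry stays unchanged:

```json
"env": {
  "REMEMBER_BRAIN_PATH": "/absolute/path/to/your/brain",
  "REMEMBER_TIER": "fts5"
}
```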
Local-only. No cloud calls. No telemetry. The brain folder + index never leave your machine. Embedding model runs in-process via ONNX Runtime.
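For a feel of what "in-process" means, here is a minimal sketch of local embedding with transformers.js, the library behind `Xenova/*` model ids. It assumes the `@xenova/transformers` package and mirrors the default `REMEMBER_EMBEDDING_MODEL`, but it is not necessarily how this package embeds internally:

```ts
import { pipeline } from "@xenova/transformers";

// Downloads the ONNX model on first call (~17 MB), then runs in-process.
const embed = await pipeline("feature-extraction", "Xenova/bge-micro-v2");

// Mean-pool and normalize to get one 384-d vector per input string.
const out = await embed("what did I decide about the search index?", {
  pooling: "mean",
  normalize: true,
});
console.log(out.dims); // [1, 384]
```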
MIT — see LICENSE.
Add this to claude_desktop_config.json and restart Claude Desktop.
```json
{
  "mcpServers": {
    "remember-md-mcp": {
      "command": "npx",
      "args": ["-y", "@remember-md/mcp"],
      "env": {
        "REMEMBER_BRAIN_PATH": "/absolute/path/to/your/brain"
      }
    }
  }
}
```
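On macOS the file is at ~/Library/Application Support/Claude/claude_desktop_config.json; on Windows, %APPDATA%\Claude\claude_desktop_config.json. If you omit the env block, the server falls back to the ~/remember default from the table above.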