Persistent, searchable memory for Cursor AI — You control what your AI remembers.
The Problem • Quick Demo • Installation • Commands • How It Works • Troubleshooting
You just spent an hour with Cursor AI figuring out the right architecture. You made decisions, weighed trade-offs, landed on a solution.
Then you open a new chat.
❌ "What database did we choose last week?"
→ "I don't have access to previous conversations."
❌ "Continue the migration plan from yesterday."
→ "Could you provide context about the migration?"
❌ "Why did we pick EFS over EBS again?"
→ "I don't have information about previous decisions."
Every new chat, your AI has amnesia. Every decision you made, every context you built — gone.
The workarounds make it worse:
| Workaround | Why it fails |
|---|---|
| 📝 Save to a .md file | Now you have 1,000 files. Which one was it again? |
| 📎 Attach files to every chat | Token costs pile up. Most of it isn't even relevant. |
| 🔁 Retype context manually | You become the memory for a tool that's supposed to help you think. |
▶ Can't see the video? Watch on GitHub
You decide what gets saved. Type /memo when something matters — AI creates a structured memo, tags it, and stores it locally. Next time you need it, it's there.
| Feature | Description |
|---|---|
| 🎛️ You control what's saved | Nothing gets saved without you triggering /memo. No background processes, no noise. |
| ⚡ Context-aware auto-search | AI detects when your question refers to past context and automatically searches your memories — no command needed. /recall is available as a manual fallback. |
| 🔍 Finds what you mean, not what you type | Hybrid FTS5 keyword + vector semantic search. Searches by meaning, not just exact words. |
| 🌍 Cross-language | Save in any language, search in any language. Multilingual E5 — 100+ languages, fully cross-lingual. |
| 📝 Structured summaries | AI generates organized memos with Decisions → Key Details → Context → Next Steps — not raw text dumps. |
| 📄 Handles long content | Long discussions are automatically split into overlapping chunks — every section is searchable, nothing gets lost. |
| 📁 Global + per-repo scope | Global memories visible everywhere. Repo memories isolated per project — one repo never sees another's context. |
| 🧠 Choose your model | Small (~50MB), Medium (~115MB), or Large (~270MB) — pick the size that fits your machine. |
| 🔒 Fully private, runs offline | No cloud. No API keys. No telemetry. Everything stays on your machine. |
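The overlapping-chunk idea can be sketched as follows — a minimal illustration, assuming fixed-size windows; `chunkText` and its defaults are hypothetical, not the package's actual API:

```typescript
// Split text into fixed-size windows that overlap, so a sentence near
// a chunk boundary appears in two chunks and stays searchable.
// (Illustrative sketch; sizes and names are assumptions.)
function chunkText(text: string, size = 200, overlap = 50): string[] {
  const chunks: string[] = [];
  const step = size - overlap; // advance by size minus overlap
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

Each chunk is then embedded and indexed separately, so a match anywhere in a long memo can surface it.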
Native modules need a C++ build toolchain (build-essential on Ubuntu, VS Build Tools on Windows).

```bash
# 1. Install globally
npm install -g cursor-memory

# 2. Setup — downloads model, configures Cursor automatically
cursor-memory setup

# 3. Restart Cursor — done 🎉
```
The CLI handles everything:
| Model | Size | RAM | Best for |
|---|---|---|---|
| Small | ~50MB | ~200MB | Lightweight, fast |
| Medium | ~115MB | ~500MB | Good balance |
| Large ⭐ | ~270MB | ~1GB | Best accuracy (recommended) |
All models support 100+ languages and run fully offline after download.
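For context, E5-family models do asymmetric search: queries and stored passages are prefixed differently (`query:` vs `passage:`) before embedding, and results are ranked by cosine similarity. A minimal sketch of those two pieces (the embedding step itself is omitted; helper names are illustrative):

```typescript
// E5 models expect role prefixes on their inputs.
function e5Input(text: string, role: "query" | "passage"): string {
  return `${role}: ${text}`;
}

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```

Because the embedding space is shared across languages, a query in one language scores high against a passage saved in another.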
Three commands. That's it.
| Command | What it does |
|---|---|
| `/memo` or `/memo [text]` | 💾 With text → saves directly. Without → AI summarizes the conversation into a structured memo |
| `/recall [query]` | 🔍 Searches your memories by keyword + semantic meaning |
| `/forget [query]` | 🗑️ Searches → previews matches → confirms before deleting |
AI detects when your question refers to past context and automatically searches your memories — no command needed.
`/recall` is available as a manual fallback.
```bash
cursor-memory setup    # First-time setup or switch model
cursor-memory status   # Check MCP, rules, model, database health
cursor-memory reset    # Clear all data and start fresh
cursor-memory -v       # Show version
cursor-memory --help   # Show all commands
```
| Component | Technology | Why |
|---|---|---|
| 🔌 MCP Server | `@modelcontextprotocol/sdk` | Standard protocol for AI tool integration |
| 🗄️ Database | `better-sqlite3` | Zero-config, fast, embedded, WAL mode |
| 🔎 Vector search | `sqlite-vec` | Native C extension, cosine KNN, no external DB |
| 📝 Full-text search | SQLite FTS5 | BM25 ranking, auto-sync via triggers |
| 🧠 Embeddings | `@huggingface/transformers` | Local ONNX inference, no API keys |
| 🌍 Model | Multilingual E5 (Q8) | 100+ languages, asymmetric search, quantized |
| ⌨️ CLI | `commander` | Interactive setup, model management |
| 💻 Language | TypeScript (ESM) | Type safety, modern module system |
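One common way to merge a BM25 keyword ranking with a vector KNN ranking is reciprocal rank fusion. This sketch assumes two ranked lists of memo ids; it illustrates the general technique, not necessarily cursor-memory's exact scoring:

```typescript
// Reciprocal rank fusion: each list contributes 1 / (k + rank) to a
// document's score, so ids that rank well in BOTH lists float to the top.
// (Illustrative; k = 60 is the conventional default from the RRF literature.)
function rrfMerge(keywordIds: number[], vectorIds: number[], k = 60): number[] {
  const scores = new Map<number, number>();
  for (const list of [keywordIds, vectorIds]) {
    list.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

Rank-based fusion sidesteps the problem that BM25 scores and cosine similarities live on incompatible scales.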
Restart Cursor completely (quit and reopen — not just reload window).
```bash
cursor-memory status   # check system health
```
If auto-config failed, manually add MCP server in Cursor:
Cursor → Settings → MCP → Add server, or edit your MCP config file:
```json
{
  "mcpServers": {
    "cursor-memory": {
      "command": "npx",
      "args": ["-y", "cursor-memory"]
    }
  }
}
```
Native modules are compiled for a specific Node version. If you switch versions:
```bash
npm install -g cursor-memory
cursor-memory setup
```
Run cursor-memory setup again to reinstall rules.
If that doesn't work, manually add the rule via Cursor → Settings → Rules → create a new User rule and paste the following:
```
## cursor-memory MCP

### Auto-recall
BEFORE answering, ask yourself: "Does the user expect me to know something from a previous chat?"
If YES → call search_memory from cursor-memory MCP immediately. Do NOT answer first.
If UNSURE → answer normally, do NOT search.

Signs of past context (any language):
- References to previous decisions ("what did we choose", "as we discussed")
- Continuation requests ("continue the plan", "pick up where we left off")
- "We/our" referring to past work, not general questions
- Temporal cues: "last time", "before", "already", "remember", "yesterday"

### Auto-save awareness
After a substantive conversation, assess whether it produced knowledge worth preserving:

SUGGEST SAVING when:
- A decision was reached (chose X over Y, with reasoning)
- A plan, strategy, or approach was agreed upon
- A problem was analyzed and a solution was identified
- A comparison or evaluation was completed with a conclusion
- Important context, constraints, or requirements were established
- Knowledge was shared that would be useful to recall in future sessions

Do NOT suggest when:
- Quick Q&A with a generic/textbook answer
- Still exploring — no conclusion or decision yet
- User already said /memo in this conversation

How to suggest: at the END of your response, briefly ask:
"This seems worth remembering. Want me to /memo this?"
Do NOT auto-save without user confirmation.

### Commands
/memo → save to memory. With content: save directly. Without content: summarize conversation then save.
/recall → search via search_memory
/forget → delete via delete_memory
```
```bash
git clone https://github.com/tranhuucanh/cursor-memory.git
cd cursor-memory

npm install
npm run build   # build once
npm run dev     # watch mode

node dist/cli.js setup
node dist/index.js
```
```bash
git checkout -b feature/your-feature
git commit -m 'feat: your feature'
git push origin feature/your-feature
```

MIT — see LICENSE.
Built with ❤️ for developers who are tired of repeating themselves to AI
Add this to claude_desktop_config.json and restart Claude Desktop.
```json
{
  "mcpServers": {
    "cursor-memory": {
      "command": "npx",
      "args": ["-y", "cursor-memory"]
    }
  }
}
```