An MCP server that provides persistent memory for AI assistants by automatically recording and injecting project-specific rules, decisions, and configurations through session hooks. It utilizes Gemini to analyze conversations, manage token-efficient context, and perform autonomous background tasks like documentation updates.
Teach your AI coding agent to learn from its mistakes.
wasurenagusa (forget-me-not) — a Japanese flower whose name means "don't forget me."
AI coding agents are powerful but amnesiac. Every session starts from scratch — your project conventions, past decisions, and hard-learned lessons vanish the moment a session ends.
Existing solutions either require manual effort or simply store raw memories that grow until they overwhelm the context window.
wasurenagusa is an MCP server that doesn't just remember — it learns.
- Stores a `positiveRule` alongside each principle: "don't do X" becomes "do Y instead." Research shows LLMs follow affirmative instructions significantly better than prohibitions (the Pink Elephant problem).
- Fully automated via Claude Code hooks — zero configuration after setup.
From the author's daily use across 8 production projects (with cross-project memory sharing between them):
- 1,581 "dont" entries → 5-9 principles per project (LLM consolidation)
- each with a `positiveRule` → affirmative-only injection (Pink Elephant fix)
- 29 config entries → 4-5 thematic summaries (LLM consolidation)
- 21,800 chars raw data → 6,200 chars injected (71% reduction)
Most memory tools store what happened. wasurenagusa teaches your AI why things went wrong — and ensures it never repeats the same mistake.
It's not a memory bank. It's a learning system.
| | wasurenagusa | claude-mem | mcp-memory-service | CLAUDE.md |
|---|---|---|---|---|
| Auto-detect mistakes | Yes (retry + sentiment) | No | No | No |
| Auto-consolidate (LLM) | Yes (dont→principles, config→themes) | No | Yes (decay-based) | No |
| Vector semantic search | Yes (local inference, offline) | Yes (ChromaDB) | Yes (SQLite-vec / ChromaDB) | No |
| Memory tiers (short/mid/long) | Yes (cosine distance thresholds) | No | No | No |
| Auto-promotion (intensity) | Yes (access count → intensity 5) | No | No | No |
| Zero-effort via hooks | Yes | Yes | Partial | No |
| Human-readable storage | No (SQLite — auto-migrated from v1 Markdown) | No (SQLite) | No (SQLite-vec) | Yes |
| Multi-LLM support | Gemini / OpenAI / Anthropic (embedding is local — no API key needed) | Claude only | Local (MiniLM-L6-v2) | N/A |
| Token-efficient retrieval | Yes (index → detail, 70-90% savings) | Yes (3-layer) | N/A | No |
| Cross-project memory | Yes (top 5 active projects) | No | No | No |
| License | MIT | AGPL-3.0 | Apache-2.0 | N/A |
Session Start (Hook) — injection mode
→ Checks if consolidation is stale
→ Spawns background LLM worker if needed (non-blocking)
→ Spawns background embedding backfill worker (non-blocking)
→ Injects consolidated config + principles (layer 1) + recent 30-day entries (layer 2) + owner profile
→ Vector search injects semantically related short-term memories (layer 3)
→ Cross-project vector search injects related memories from other active projects (layer 4)
→ Only customized settings injected (defaults stripped)
Session Start (Hook) — agent mode
→ Injects dont summary + config index + owner profile (minimal footprint)
→ No vector search at startup (deferred to on-demand recall)
User Prompt (Hook) — agent mode
→ Injects 1-line reminder: "search memory if relevant"
→ Main agent spawns memory-recall sub-agent as needed
→ Sub-agent runs memory_search → returns summary only (no raw data in main context)
→ Survives compaction (re-injected on every user message)
During Session
→ memory_save auto-generates embedding via local inference (no API call)
→ memory_save enriches tags with LLM-assigned weights (0.0-1.0) (when API key available)
→ Theme shift triggers background re-tagging of related past entries
→ memory_search merges keyword + vector semantic + tag-weighted results
→ Vector hits increment access counts → auto-promote to intensity 5 at threshold
Session End (Hook)
→ LLM analyzes the conversation
→ Detects mistakes, frustration, retry patterns
→ Auto-saves lessons learned (with embedding)
→ Deduplicates against existing entries before saving
→ Updates active projects tracker (top 5 recent projects)
Background (async workers)
→ Consolidates "dont" entries → behavioral principles
→ Consolidates "config" entries → thematic summaries
→ Backfills embeddings for entries created before vector layer (20/run)
→ Results used in next session start
💡 Recommended: Paste this README into Claude Code and ask it to set up wasurenagusa for you. It'll handle everything below automatically.
```bash
npm install -g wasurenagusa-mcp
```
Or from source:
```bash
git clone https://github.com/tsutushi0628/wasurenagusa-mcp.git
cd wasurenagusa-mcp
npm install && npm run build
npm link
```
`npm run build` automatically runs `chmod +x` on CLI entry points. No manual permission setup needed.
Create `~/.wasurenagusa/.env`:

```bash
# Set at least one API key
GEMINI_API_KEY=your-key-here
# OPENAI_API_KEY=your-key-here
# ANTHROPIC_API_KEY=your-key-here
```
| Variable | Required | Description |
|---|---|---|
| `GEMINI_API_KEY` | One of three | Google Gemini API key |
| `OPENAI_API_KEY` | One of three | OpenAI API key |
| `ANTHROPIC_API_KEY` | One of three | Anthropic API key |
| `LLM_PROVIDER` | No | `gemini` (default), `openai`, or `anthropic` |
| `LLM_MODEL` | No | Override the default model for your provider |
| `MEMORY_DIR` | No | Memory directory (default: `.wasurenagusa`) |
| `MAX_ENTRIES_PER_CATEGORY` | No | Entry limit per category before auto-archiving (default: 100) |
| `LOG_RETENTION_DAYS` | No | Log retention period in days (default: 30) |
| `SLACK_WEBHOOK_URL` | No | Slack notifications for autonomous tasks |
```bash
claude mcp add wasurenagusa -- wasurenagusa-mcp
```
⚠️ Required — Without this step, memory is never injected at session start. This is the most commonly missed setup step.
Add to `~/.claude/settings.json` (or `settings.local.json` if you prefer to keep hooks separate):
```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "wasurenagusa-context",
            "timeout": 5
          }
        ]
      }
    ],
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "wasurenagusa-context",
            "timeout": 5
          }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "wasurenagusa-analyze",
            "timeout": 30
          }
        ]
      }
    ],
    "PreCompact": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "wasurenagusa-context",
            "timeout": 15
          }
        ]
      }
    ]
  }
}
```
Launch Claude Code. That's it.
The `.wasurenagusa/` directory is created automatically. Add `.wasurenagusa/` to your `.gitignore` — it contains project-specific memory data.
| Category | What it stores | File |
|---|---|---|
| config | API URLs, ports, auth locations | memory.db |
| dont | Mistakes, anti-patterns, user frustrations | memory.db |
| decision | Architecture decisions, tech choices | memory.db |
| log | Implementation records, resolved errors | memory.db |
| snippet | Frequently used commands & queries | memory.db |
| Tool | Description |
|---|---|
| `memory_get_context` | Get config + consolidated principles (auto-called at session start) |
| `memory_search` | Lightweight index search (ID, title, tags only). Use `project: "active"` for cross-project search |
| `memory_get_detail` | Get full detail by ID(s) |
| `memory_save` | Save a memory entry explicitly |
| `memory_stash` | Temporarily stash memories to save context window space |
| `memory_restore` | Restore previously stashed memories back into active context |
| `memory_delete` | Delete entries by ID |
| `task_submit` | Submit an autonomous task for 24/7 execution |
| `task_status` | Check task execution status |
| `task_action_list` | List and manage pending human actions |
| `project_init` | Initialize project quality standards |
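To see how the index → detail split saves tokens, here is a hypothetical client session; `mcp.callTool` and the result shapes are placeholders, while the tool names and the `project: "active"` parameter come from the table above:

```typescript
// Placeholder MCP client handle — any MCP-capable client would provide its own.
declare const mcp: {
  callTool(name: string, args: Record<string, unknown>): Promise<any>;
};

// 1. Lightweight index search: returns only IDs, titles, and tags (cheap in tokens).
const index = await mcp.callTool("memory_search", {
  query: "authentication setup",
  project: "active", // optional: also search the top-5 active projects
});

// 2. Fetch full text only for the entries that actually look relevant.
const detail = await mcp.callTool("memory_get_detail", {
  ids: [index[0].id],
});
```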
| Command | Purpose | Invoked by |
|---|---|---|
| `wasurenagusa-context` | Output config + dont + vector memories to stdout | SessionStart / UserPromptSubmit / PreCompact Hook |
| `wasurenagusa-analyze` | LLM-analyze conversation and auto-save | Stop Hook |
| `wasurenagusa-backfill` | Generate embeddings for entries without vectors | Background (auto-spawned) |
| `wasurenagusa-rebuild` | Repair corrupted memory data (dedup, re-sort logs) | Manual |
| `wasurenagusa-spec-update` | Auto-update spec documents | cron / systemd timer |
| `wasurenagusa-consolidate-all` | Run consolidation across all active projects | Manual / Scheduler |
| `wasurenagusa-scheduler` | Install/uninstall/status nightly consolidation scheduler | Manual |
wasurenagusa supports two output modes for the SessionStart Hook, configurable per project via `.wasurenagusa/config.json`.
| Mode | Description | Best for |
|---|---|---|
| injection (default) | Injects full memory text at session start | Environments without sub-agents (Cursor, Windsurf, etc.) |
| agent | Injects minimal index at session start + memory-recall reminder on each user message. Details retrieved on-demand via sub-agents | Claude Code + Agent Teams |
Add `outputMode` to your project's `.wasurenagusa/config.json`:

```json
{
  "outputMode": "agent"
}
```
If the file doesn't exist or `outputMode` is not set, the default is `"injection"` (full backward compatibility).
When using "agent" mode with Claude Code Agent Teams, add these rules to your project's CLAUDE.md:
- Read/write memories via sub-agents (memory_search / memory_get_detail / memory_save)
- Do not bring raw memory data into the main context
- When system-reminder suggests memory recall, spawn a sub-agent to run memory_search and return summary only
wasurenagusa introduces a biologically-inspired memory system powered by local embeddings. Every memory is converted to a 384-dimensional vector, enabling meaning-based retrieval that goes far beyond keyword matching.
Three-tier architecture with cosine distance thresholds:
| Tier | Threshold | Use case |
|---|---|---|
| Short-term | ≤ 0.2 | Highly relevant — auto-injected at session start |
| Medium-term | ≤ 0.45 | Contextually related — surfaced during memory_search |
| Long-term | ≤ 0.7 | Loosely related — discoverable but not proactively shown |
Automatic promotion: Every time a memory is retrieved via vector search, its access count increments. After 5 retrievals, the memory auto-promotes to intensity: 5 — ensuring frequently-needed knowledge gets maximum weight in consolidation. Long-dormant memories can be "woken up" by relevance and eventually earn top intensity through repeated access.
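A minimal sketch of the tiering and promotion logic described above (the function names and row shape are illustrative, not the actual implementation):

```typescript
type Tier = "short" | "medium" | "long" | null;

// Thresholds from the table above (cosine distance; lower means more similar).
function classifyTier(cosineDistance: number): Tier {
  if (cosineDistance <= 0.2) return "short";
  if (cosineDistance <= 0.45) return "medium";
  if (cosineDistance <= 0.7) return "long";
  return null; // beyond 0.7: not considered related
}

const PROMOTION_THRESHOLD = 5;

// Called for each memory returned as a vector search hit.
function onVectorHit(row: { accessCount: number; intensity: number }): void {
  row.accessCount += 1;
  if (row.accessCount >= PROMOTION_THRESHOLD) {
    row.intensity = 5; // maximum weight in the next consolidation run
  }
}
```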
How it works:
```text
memory_save
  → Text → local inference (Hugging Face Transformers) → embedding → SQLite (sqlite-vec)

memory_search "authentication setup"
  → Full-text search (FTS5, Japanese support) ─┐
  → Embed query → vector similarity search    ─┤→ merge, deduplicate → results
                                               └→ increment access count
                                                  → auto-promote if threshold met

SessionStart Hook
  → Embed project name → short-tier search → inject related memories
```
No external API required — embeddings are generated locally via @huggingface/transformers. Data is stored in SQLite with sqlite-vec for vector indexing. Works completely offline.
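A sketch of that pipeline using the libraries named above; the embedding model, the `better-sqlite3` driver, and the `vec_memories` table name are assumptions for illustration:

```typescript
import { pipeline } from "@huggingface/transformers";
import Database from "better-sqlite3";
import * as sqliteVec from "sqlite-vec";

// Generate a 384-dim embedding locally. The exact model wasurenagusa ships is
// not named in this README; all-MiniLM-L6-v2 is a common 384-dim choice.
const extractor = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");
const output = await extractor("authentication setup", {
  pooling: "mean",
  normalize: true,
});
const embedding = new Float32Array(output.data);

// Nearest-neighbor query via sqlite-vec (table name is illustrative).
const db = new Database("memory.db");
sqliteVec.load(db);
const hits = db
  .prepare(
    `SELECT rowid, distance FROM vec_memories
     WHERE embedding MATCH ? ORDER BY distance LIMIT 10`,
  )
  .all(new Uint8Array(embedding.buffer));
```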
Automatic migration from v1 — existing Markdown-based memory files are automatically migrated to SQLite on first run. No manual steps required.
Smart Tag Retrieval improves search precision through three mechanisms — without ever deleting or forgetting data: LLM-assigned tag weights (0.0-1.0) at save time, background re-tagging of related past entries when the conversation theme shifts, and tag-weight-aware result merging in `memory_search`.
All memories are preserved at full fidelity. Smart Tag Retrieval only optimizes retrieval priority, never discards data.
wasurenagusa automatically tracks your top 5 most recently used projects and searches across their memories for relevant context.
How it works:
- Active projects are tracked in `~/.wasurenagusa/scheduler/active-projects.json`
- `memory_search` with `project: "active"` searches across all active projects (keyword + vector)

Example: You're working on project-a and previously discussed authentication in project-b. When you start a session in project-a and the topic is related, wasurenagusa automatically surfaces the relevant auth memories from project-b.
No configuration needed — works automatically after two or more projects have been used.
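For illustration, the tracker might be maintained roughly like this (only the file path and the top-5 behavior are documented; the field names and logic are invented):

```typescript
// Hypothetical shape of ~/.wasurenagusa/scheduler/active-projects.json
interface ActiveProject {
  path: string;     // absolute project directory
  lastUsed: string; // ISO timestamp of the last session
}

// Keep only the five most recently used projects.
function updateTracker(
  projects: ActiveProject[],
  current: ActiveProject,
): ActiveProject[] {
  const rest = projects.filter((p) => p.path !== current.path);
  return [current, ...rest]
    .sort((a, b) => b.lastUsed.localeCompare(a.lastUsed))
    .slice(0, 5);
}
```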
When memory entries accumulate, the LLM automatically compresses them into compact summaries:
- `dont` entries are consolidated into behavioral principles, each scored by `sourceCount × maxIntensity`. Each principle includes both the original rule (❌→💡→✅ format) and a `positiveRule` (affirmative-only phrasing). The `positiveRule` is injected by default — research on the Pink Elephant problem shows LLMs struggle with negation in instructions.
- `config` entries are consolidated into thematic summaries.

Consolidation runs as a detached background process during session start, and optionally as a nightly scheduled job (2:00 AM). Results are cached as JSON and used from the next session onward. Staleness is detected by comparing file modification times and entry counts.
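A sketch of that staleness check (function and cache-field names are illustrative):

```typescript
import { statSync } from "node:fs";

// Consolidation is stale when the raw memory store was modified after the
// cached summary was generated, or when the entry count has changed.
function isConsolidationStale(
  memoryDbPath: string,
  cache: { generatedAt: number; entryCount: number },
  currentEntryCount: number,
): boolean {
  const dbModifiedMs = statSync(memoryDbPath).mtimeMs;
  return dbModifiedMs > cache.generatedAt || currentEntryCount !== cache.entryCount;
}
```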
Raw entries are always preserved. The consolidated version is injected at session start; original entries remain searchable via memory_search.
Every consolidated principle stores two forms:
| Field | Format | Purpose |
|---|---|---|
| `rule` | ❌ Bad pattern → 💡 Why it's bad → ✅ Correct behavior | Full context for `memory_get_detail` |
| `positiveRule` | Affirmative-only action statement ("do X", "use Y") | Injected into LLM context |
Why? LLM attention mechanisms activate concepts mentioned in negations — "don't use `innerHTML`" still activates "innerHTML." Affirmative instructions ("use `textContent`") activate only the desired behavior. The raw user feedback (`dont.md`) is preserved unchanged; conversion happens only at the consolidation layer.
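For example, a consolidated principle record could look like this (field names come from the table above; the content and counts are invented for illustration):

```typescript
const principle = {
  rule: "❌ Using innerHTML for user content → 💡 XSS risk → ✅ Use textContent",
  positiveRule: "Use textContent when rendering user-provided strings.",
  sourceCount: 12,  // how many raw "dont" entries were merged into this principle
  maxIntensity: 4,  // highest intensity among the merged entries
};
```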
Every dont entry carries an intensity score (1-5) representing the severity of the lesson:
| Intensity | Meaning | Example |
|---|---|---|
| 5 | Rage / resignation — user nearly gave up | "I told you 10 times, STOP doing this" |
| 4 | Strong frustration — explicit anger | "No! Don't do that!" |
| 3 | Clear correction — firm but calm | "That's wrong, do it this way" |
| 2 | Mild note — gentle guidance | "Next time, prefer X over Y" |
| 1 | Suggestion — informational | "FYI, we usually do it like this" |
Auto-detection: The LLM analyzes user messages for emotional signals (exclamation marks, strong language, repeated corrections) and assigns intensity automatically. Conversation metadata (turns since last positive feedback, message length ratio) provides additional boost signals.
Manual override: Pass `intensity: N` to `memory_save` to set or adjust the score.
Scoring formula: During consolidation, each principle gets `score = sourceCount × maxIntensity`. Principles are sorted by score descending — frequently repeated, high-anger lessons appear first with stronger wording.
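A sketch of that ranking (the `Principle` shape is illustrative):

```typescript
interface Principle {
  positiveRule: string;
  sourceCount: number;
  maxIntensity: number;
}

// score = sourceCount × maxIntensity, sorted descending.
function rankPrinciples(principles: Principle[]): Principle[] {
  return [...principles].sort(
    (a, b) => b.sourceCount * b.maxIntensity - a.sourceCount * a.maxIntensity,
  );
}
// A lesson repeated 12 times at maximum intensity (12 × 5 = 60) outranks
// a one-off mild note (1 × 2 = 2).
```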
Each memory category has an entry limit (default: 100). When exceeded, oldest entries are automatically moved to archive files (`*-archive.md`). Logs have separate 30-day rotation. Your data is never deleted — just moved out of the active search path.
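A sketch of that archiving rule, assuming entries carry a creation timestamp (names are illustrative):

```typescript
const MAX_ENTRIES_PER_CATEGORY = Number(process.env.MAX_ENTRIES_PER_CATEGORY ?? 100);

// Split a category's entries into those that stay active and those that
// overflow into the archive. Nothing is deleted, only relocated.
function splitForArchive<T extends { createdAt: string }>(entries: T[]) {
  const sorted = [...entries].sort((a, b) => a.createdAt.localeCompare(b.createdAt));
  const overflow = Math.max(0, sorted.length - MAX_ENTRIES_PER_CATEGORY);
  return {
    archive: sorted.slice(0, overflow), // oldest entries leave the active path
    active: sorted.slice(overflow),     // newest stay searchable
  };
}
```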
Detects user frustration through text patterns, message length changes, and absence of positive signals. Records what went wrong, why, and what to do instead.
Submit tasks via task_submit and wasurenagusa runs them using Claude CLI as a subprocess. The LLM evaluates completion conditions and retries if needed. Useful for spec updates, refactoring, and test generation.
On first run, an owner-profile.md template is generated. Fill it in to teach the AI your decision-making preferences for autonomous task execution.
Only sections you've actually customized are injected — default selections and empty fields are automatically stripped, keeping injection minimal.
Instead of only consolidating at session start, you can schedule nightly consolidation across all active projects — like "sleeping on it overnight."
```bash
# Install (macOS: launchd, Linux: crontab)
wasurenagusa-scheduler install

# Check status
wasurenagusa-scheduler status

# Remove
wasurenagusa-scheduler uninstall
```
Runs daily at 2:00 AM, consolidating dont and config entries for all recently active projects. This ensures your AI starts every morning with freshly organized principles, even if you never close your sessions.
LLM prompts live in `prompts/` as plain text. Iterate without rebuilding.

```bash
npm run build        # Compile TypeScript
npm test             # Run tests
npm run test:watch   # Watch mode
```
MIT
Add this to `claude_desktop_config.json` and restart Claude Desktop:
```json
{
  "mcpServers": {
    "wasurenagusa-mcp": {
      "command": "npx",
      "args": ["-y", "wasurenagusa-mcp"]
    }
  }
}
```