A local MCP server that gives AI assistants a long-term memory by capturing sessions verbatim and surfacing relevant context automatically.
Every claim ships with the harness that proves it. Run the benchmarks yourself.
Independent Autistic Intelligence — a local memory layer for Claude (and other MCP-compatible assistants).
IAI stands for Independent Autistic Intelligence.
A local server that speaks the MCP protocol and gives Claude, and any other MCP-compatible assistant, a long-term memory. It captures every turn of every session verbatim, organizes those captures over time into a personal map of who you are, and serves a small slice of relevant memory back at the start of each new conversation. You never have to say "remember this" or "what did we say last time?".
I built this for myself. It worked. I've been running it daily for months, and now I'm sharing it. The benchmarks were mostly for my own curiosity. I wanted to know if it actually works or if I'd just gotten used to it.
You do not call iai-mcp directly during a session. Once it's connected:
Capture is automatic. Every turn, yours and the assistant's, is recorded verbatim with timestamps and session metadata. You don't say "remember this."
Recall is automatic. When a new session starts, the daemon assembles a small relevant slice of your history and injects it into the conversation prefix. You don't say "what did we say."
Consolidation runs idle. Between sessions, the daemon merges duplicates, strengthens recall pathways for things retrieved often, and prunes weak edges. The system gets quietly better at remembering you over time.
After a few weeks of regular use the difference becomes noticeable. The assistant stops asking the same orientation questions, references things you mentioned in passing, and adapts to your style without being told.
The daemon is a Python process that runs in the background. Your MCP client connects to it via a Unix socket. No network exposure.
Memory is stored in three tiers:
Episodic is verbatim, timestamped fragments of what was said. Write-once, never overwritten or rewritten.
Semantic is summaries induced from clusters of related episodes during idle-time consolidation.
Procedural is a small set of stable parameters about you, learned over time: preferences, style cues, recurring patterns. Eleven sealed knobs that shift based on what works.
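The three tiers can be pictured as record shapes like the following. This is an illustrative sketch only; the field names and types are assumptions, not iai-mcp's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)          # episodic records are write-once
class Episode:
    text: str                    # verbatim fragment of what was said
    timestamp: datetime
    session_id: str

@dataclass
class SemanticSummary:
    summary: str                            # induced from a cluster of episodes
    source_episodes: list[str] = field(default_factory=list)

@dataclass
class ProceduralKnob:
    name: str                    # e.g. a style preference
    value: float                 # shifts slowly based on what works

ep = Episode("prefers terse answers", datetime.now(timezone.utc), "s1")
```

The `frozen=True` on `Episode` mirrors the write-once guarantee: once captured, an episodic record is never overwritten or rewritten.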
A background pass runs periodically (sleep cycles): it clusters episodes, builds semantic summaries, decays old unreinforced connections, and reinforces frequently co-retrieved paths. Things you haven't revisited fade naturally. There's an optional "insight of the day" step that makes one Anthropic API call, but it's off by default.
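The decay-and-reinforce step can be sketched as follows. The half-life and boost values are invented for illustration; iai-mcp's actual tuning is not documented here.

```python
# Sketch of one consolidation pass: unreinforced edge weights decay
# exponentially, frequently co-retrieved edges get boosted.
HALF_LIFE_DAYS = 30.0   # hypothetical half-life
BOOST = 0.2             # hypothetical reinforcement increment

def decay(weight: float, days_idle: float) -> float:
    return weight * 0.5 ** (days_idle / HALF_LIFE_DAYS)

def reinforce(weight: float) -> float:
    return min(1.0, weight + BOOST)

edges = {("ep1", "ep2"): 0.8, ("ep1", "ep3"): 0.1}
co_retrieved = {("ep1", "ep2")}

for key, w in edges.items():
    w = decay(w, days_idle=30.0)      # one half-life has elapsed
    if key in co_retrieved:
        w = reinforce(w)
    edges[key] = w

# The weak, unretrieved edge fades toward zero; the co-retrieved
# edge loses some weight to decay but gains it back through use.
```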
Recall combines three signals: semantic similarity, graph-link strength, and recency. All ranked together.
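A combined ranking over those three signals might look like this. The weights and recency half-life below are made up for the sketch; they are not iai-mcp's values.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical signal weights and recency half-life.
W_SIM, W_LINK, W_RECENCY = 0.6, 0.25, 0.15
HALF_LIFE = timedelta(days=14)

def recency(ts: datetime, now: datetime) -> float:
    # 1.0 for a brand-new memory, halving every HALF_LIFE.
    return 0.5 ** ((now - ts) / HALF_LIFE)

def score(similarity: float, link_strength: float,
          ts: datetime, now: datetime) -> float:
    return (W_SIM * similarity
            + W_LINK * link_strength
            + W_RECENCY * recency(ts, now))

now = datetime.now(timezone.utc)
fresh = score(0.7, 0.5, now, now)                        # recent memory
stale = score(0.7, 0.5, now - timedelta(days=140), now)  # 10 half-lives old
assert fresh > stale  # same content, but recency breaks the tie
```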
All records are encrypted at rest with AES-256-GCM. The key lives in ~/.iai-mcp/.key (mode 0600). Back it up. Lose the key, lose the memories.
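The 0600 requirement is the kind of thing a health check can verify mechanically. A minimal sketch of such a permission check (the function name is illustrative, not part of iai-mcp):

```python
import os
import stat
import tempfile

def key_mode_ok(path: str) -> bool:
    # Owner read/write only, nothing for group or other.
    return stat.S_IMODE(os.stat(path).st_mode) == 0o600

# Demonstrate against a throwaway file rather than a real key.
with tempfile.NamedTemporaryFile(delete=False) as f:
    key_path = f.name
os.chmod(key_path, 0o600)
assert key_mode_ok(key_path)
os.chmod(key_path, 0o644)       # group/other readable: too loose
assert not key_mode_ok(key_path)
os.unlink(key_path)
```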
Everything lives at ~/.iai-mcp/. Embeddings are computed locally with bge-small-en-v1.5. The only data that leaves the machine is your normal conversation with whatever LLM API your client uses.
Claude Code <--MCP-stdio--> TypeScript wrapper <--UNIX socket--> Python daemon <--> LanceDB
Windows not supported. WSL2 untested.
git clone https://github.com/CodeAbra/iai-mcp.git
cd iai-mcp
bash scripts/install.sh
The installer creates a Python venv, installs dependencies (LanceDB, sentence-transformers, torch-hd, NetworkX, igraph), builds the TypeScript MCP wrapper, pre-downloads the default embedding model (~130 MB), symlinks the CLI to ~/.local/bin/iai-mcp, and on macOS registers the daemon with launchd.
Make sure ~/.local/bin is on your PATH:
export PATH="$HOME/.local/bin:$PATH" # add to ~/.zshrc or ~/.bashrc
iai-mcp --version
On Linux, install the systemd unit manually:
mkdir -p ~/.config/systemd/user
cp deploy/systemd/iai-mcp-daemon.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable iai-mcp-daemon
systemctl --user start iai-mcp-daemon
This is what makes capture ambient. Without it you'd have to save memories by hand.
mkdir -p ~/.claude/hooks
cp deploy/hooks/iai-mcp-session-capture.sh ~/.claude/hooks/
chmod +x ~/.claude/hooks/iai-mcp-session-capture.sh
Register in ~/.claude/settings.json:
{
  "hooks": {
    "Stop": [
      {
        "matcher": "*",
        "hooks": [
          {
            "type": "command",
            "command": "$HOME/.claude/hooks/iai-mcp-session-capture.sh"
          }
        ]
      }
    ]
  }
}
claude mcp add iai-mcp -- node "$(pwd)/mcp-wrapper/dist/index.js"
Or edit ~/.claude.json directly:
{
  "mcpServers": {
    "iai-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/iai-mcp/mcp-wrapper/dist/index.js"]
    }
  }
}
Use the absolute path. ~ and $HOME won't expand here.
For Claude Desktop (untested), edit ~/Library/Application Support/Claude/claude_desktop_config.json on macOS or the equivalent path on Linux.
iai-mcp doctor
iai-mcp daemon status
Restart Claude Code. Start a session, do some work, exit. Then:
tail ~/.iai-mcp/logs/capture-$(date -u +%Y-%m-%d).log
You should see an rc=0 line. That's your first memory.
iai-mcp doctor runs 14 checks against the daemon, the store, and the runtime state. Output is one line per check: PASS, WARN, or FAIL.
iai-mcp doctor
What it checks:
| # | Check | What it means |
|---|---|---|
| a | Daemon alive | Is the daemon process running? |
| b | Socket fresh | Can the UNIX socket accept a connection? |
| c | Lock healthy | Is the process lock held correctly? |
| d | No orphan core | No leftover stdio core process without a daemon |
| e | State file valid | .daemon-state.json parses and has expected fields |
| f | LanceDB readable | Can the records table be opened and queried? |
| g | No duplicate binders | Only one process is bound to the socket |
| h | Crypto file state | Encryption key exists, correct permissions (0600) |
| i | Lance versions count | LanceDB version manifests aren't piling up |
| j | Lifecycle current state | Current FSM state is valid |
| k | Lifecycle history 24h | Recent lifecycle transitions look sane |
| l | Sleep cycle status | Last sleep cycle completed or is running normally |
| m | Heartbeat scanner | Wrapper heartbeat files are fresh |
| n | HID idle source | Idle detection source is available |
14/14 PASS is healthy. 13/14 with check (b) failing during a sleep cycle is also normal (the socket is busy during consolidation). Multiple FAILs or a FAIL on (a) or (f) means something is actually wrong.
If you're an AI model helping a user install or debug iai-mcp, read this before diagnosing anything.
The daemon has four FSM states: WAKE, TRANSITIONING, SLEEP, DREAMING. Separately, the wrapper-layer lifecycle has four states: WAKE, DROWSY, SLEEP, HIBERNATION. These are two different state machines in two different files. Don't confuse them.
Things that look broken but are not:
- last_tick_at hasn't changed in hours. This is a sleep-cycle tick timestamp, not a process heartbeat. The daemon can be alive and healthy with a stale last_tick_at. Check iai-mcp daemon status instead.
- fsm_state: DREAMING for hours. Normal. The daemon is running a sleep cycle. Wake transitions are event-driven (a new session arriving wakes it), not time-driven.
- status: deferred in the capture log. This is the only path the Stop hook takes. It writes captures to ~/.iai-mcp/.deferred-captures/ and the daemon picks them up on its next tick. This is by design, not a failure.
- iai-mcp doctor shows 13/14 PASS with check (b) "socket unreachable." Expected when the daemon FSM is in SLEEP or DREAMING. The socket is intentionally unresponsive during consolidation passes. This is healthy.

Actual problems look like: iai-mcp daemon status returning ok: False, doctor showing multiple FAIL results, or the daemon process not existing at all (pgrep -f iai_mcp.daemon returns nothing).
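The deferred-capture handoff is simple enough to sketch: the Stop hook drops a file into the queue directory, and the daemon consumes the queue on its next tick. The file naming and JSON fields below are assumptions for illustration, not iai-mcp's actual on-disk format.

```python
import json
import os
import tempfile
import time

def defer_capture(deferred_dir: str, session_id: str, transcript: str) -> str:
    # What the Stop hook conceptually does: write and forget.
    os.makedirs(deferred_dir, exist_ok=True)
    path = os.path.join(deferred_dir, f"{session_id}-{int(time.time())}.json")
    with open(path, "w") as f:
        json.dump({"session_id": session_id, "transcript": transcript}, f)
    return path

def pick_up(deferred_dir: str) -> list[dict]:
    # What the daemon conceptually does on its next tick.
    captures = []
    for name in sorted(os.listdir(deferred_dir)):
        path = os.path.join(deferred_dir, name)
        with open(path) as f:
            captures.append(json.load(f))
        os.unlink(path)        # consumed: remove from the queue
    return captures

tmp = tempfile.mkdtemp()
defer_capture(tmp, "s1", "hello")
assert pick_up(tmp) == [{"session_id": "s1", "transcript": "hello"}]
assert os.listdir(tmp) == []   # queue drained after pickup
```

The point of the design is decoupling: the hook never needs the daemon awake, which is why status: deferred in the log is success, not failure.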
When in doubt, run iai-mcp doctor and read what it says. The output is self-explanatory.
I made these because I wanted honest numbers. Every harness ships in bench/. Run them on your machine, get your own results.
| Metric | Target | Measured |
|---|---|---|
| Verbatim recall (byte-exact) | >=99% | >=99% at N=10k |
| Recall p95 latency | <100 ms | <100 ms at N=10k |
| RAM at steady state | <=300 MB | ~150-300 MB |
| Session-start tokens (warm cache) | <=3,000 | <=3,000 |
| Session-start tokens (cold) | <=8,000 | <=8,000 |
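"Byte-exact" in the first row means exactly what it says: the retrieved text must be byte-for-byte identical to what was stored, which a hash comparison makes unambiguous. A minimal sketch of the check (the real measurement is what bench.verbatim runs against the store):

```python
import hashlib

def byte_exact(stored: str, retrieved: str) -> bool:
    # Hash comparison: any difference in bytes fails the check.
    return (hashlib.sha256(stored.encode()).digest()
            == hashlib.sha256(retrieved.encode()).digest())

assert byte_exact("fix the flaky test on Tuesday",
                  "fix the flaky test on Tuesday")
assert not byte_exact("fix the flaky test",
                      "fix the flaky  test")   # two spaces: not verbatim
```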
python -m bench.verbatim # verbatim fidelity
python -m bench.neural_map # recall latency
python -m bench.memory_footprint # RAM usage
python -m bench.tokens # session-start cost
python -m bench.total_session_cost # full 10-turn cost
python -m bench.trajectory # 30-session corpus
python -m bench.contradiction_longitudinal # falsifiability
python -m bench.longmemeval_blind # LongMemEval-S blind run
The LongMemEval-S run is blind on purpose. No dataset-specific tuning, no hyperparameter sweep. The numbers are what they are.
| Variable | Default | What it does |
|---|---|---|
| IAI_MCP_STORE | ~/.iai-mcp/ | Data directory |
| IAI_MCP_EMBED_MODEL | bge-small-en-v1.5 | Embedding model. bge-m3 for multilingual at ~3x size. |
Switching embedders requires re-embedding the store: iai-mcp migrate reembed.
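Reading those two variables with their documented defaults looks like this; the resolution logic is a sketch, not iai-mcp's actual config loader.

```python
import os
from pathlib import Path

# Fall back to the documented defaults when the variables are unset.
store = Path(os.environ.get("IAI_MCP_STORE", "~/.iai-mcp/")).expanduser()
model = os.environ.get("IAI_MCP_EMBED_MODEL", "bge-small-en-v1.5")

print(store, model)
```

Note that changing IAI_MCP_EMBED_MODEL alone is not enough: existing vectors were produced by the old model, hence the iai-mcp migrate reembed step.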
This is experimental. I built it for myself, it works on my machine, and I'm sharing it because it might be useful to you. No SLA, no support guarantee. Breaking changes are possible between versions. Pin a commit hash if you depend on stability.
Limitations worth knowing about:
- The default embedder is English-only; the bge-m3 model removes this constraint at a cost of ~3x storage and slower indexing.
- There are no automatic backups. Periodically copy the store yourself: cp -a ~/.iai-mcp/ somewhere safe.
- Introspection happens through the CLI (iai-mcp doctor, iai-mcp daemon status, iai-mcp topology).

Claude Code is the primary host, validated in daily use.
Claude Desktop should work (uses claude_desktop_config.json instead of ~/.claude.json) but hasn't been tested end to end.
Other MCP-over-stdio hosts speak the same protocol and should work in principle. Not tested.
If you get it running on something else, open an issue or PR.
By Areg Aramovich Noya, in collaboration with the team at lcgc.dev.
I built this because I needed it. It works for me. If it works for you, take it.
Issues and PRs welcome. If your change touches retrieval, capture, or consolidation, include bench re-runs.
Add this to claude_desktop_config.json and restart Claude Desktop:

{
  "mcpServers": {
    "iai-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/iai-mcp/mcp-wrapper/dist/index.js"]
    }
  }
}