A personal memory system that gives AI assistants long-term memory through semantic search and vector storage. It runs entirely on your machine, stores everything locally, and integrates with Claude Code, Cursor, Gemini CLI, and Codex via lifecycle hooks, letting them store, retrieve, and manage personal context and project preferences using flexible LLM backends.
Privacy first. Nothing leaves your machine by default. Memories are stored as local files. The dashboard (http://localhost:8080/ui) polls your own REST API. Plugin hooks call localhost only.

```shell
git clone https://github.com/your-org/memovault
cd memovault
pip install -e .  # or: uv sync
cp .env.example .env
```
Or install from PyPI:

```shell
pip install memovault
```
1. Install Ollama and pull models
```shell
# macOS
brew install ollama
# Linux
curl -fsSL https://ollama.com/install.sh | sh

ollama pull llama3.1          # main LLM
ollama pull nomic-embed-text  # embeddings
```
2. Configure .env
```shell
MEMOVAULT_LLM_BACKEND=ollama
MEMOVAULT_OLLAMA_MODEL=llama3.1:latest
MEMOVAULT_OLLAMA_API_BASE=http://localhost:11434
MEMOVAULT_EMBEDDER_BACKEND=ollama
MEMOVAULT_EMBEDDER_OLLAMA_MODEL=nomic-embed-text:latest

# simple = BM25 (no vector DB), vector = Qdrant (semantic search)
MEMOVAULT_MEMORY_BACKEND=simple
MEMOVAULT_DATA_DIR=./memovault_data
```
3. Start
```shell
memovault service start
open http://localhost:8080/ui
```
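The `simple` backend mentioned in the `.env` above ranks memories lexically with BM25 instead of embeddings. As an illustration only (this is a generic Okapi BM25 sketch, not MemoVault's actual implementation):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    # document frequency per term
    df = Counter()
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            norm = tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores

docs = [
    "prefers dark mode in every editor",
    "project uses Python 3.12 and uv",
    "dark roast coffee, no sugar",
]
print(bm25_scores("dark mode editor", docs))
```

Because BM25 only matches tokens, it needs no model downloads or vector database, which is why it is the lighter default.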
```shell
MEMOVAULT_LLM_BACKEND=openai
MEMOVAULT_OPENAI_API_KEY=sk-...
MEMOVAULT_OPENAI_MODEL=gpt-4o-mini
MEMOVAULT_EMBEDDER_BACKEND=openai
MEMOVAULT_EMBEDDER_OPENAI_MODEL=text-embedding-3-small
MEMOVAULT_MEMORY_BACKEND=vector
```
Memory content is sent to OpenAI's API for scoring and embedding when using this backend.
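With the `vector` backend, retrieval ranks stored memories by embedding similarity to the query. A toy sketch of that idea, using hand-made 3-d vectors in place of real `text-embedding-3-small` outputs (illustrative only, not MemoVault's code):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy embeddings standing in for real model output
memories = {
    "uses uv for Python deps": [0.9, 0.1, 0.0],
    "likes dark mode":         [0.0, 0.8, 0.6],
}
query_vec = [0.85, 0.15, 0.05]  # pretend embedding of "how do I install deps?"
best = max(memories, key=lambda m: cosine(memories[m], query_vec))
print(best)
```

Real embeddings have hundreds of dimensions, and Qdrant performs this nearest-neighbor search at scale; the ranking principle is the same.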
Add to ~/.claude/claude.json:
Local (Ollama)
```json
{
  "mcpServers": {
    "memovault": {
      "command": "memovault",
      "args": ["mcp"],
      "env": {
        "MEMOVAULT_LLM_BACKEND": "ollama",
        "MEMOVAULT_OLLAMA_MODEL": "llama3.1:latest"
      }
    }
  }
}
```
OpenAI
```json
{
  "mcpServers": {
    "memovault": {
      "command": "memovault",
      "args": ["mcp"],
      "env": {
        "MEMOVAULT_LLM_BACKEND": "openai",
        "MEMOVAULT_OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
```
Hooks automatically inject memory context before each prompt and save session summaries on exit. Requires the REST API to be running.
```shell
memovault service start                  # start REST API
memovault plugins install claude-code    # install hooks
memovault plugins list                   # show status for all platforms
```

```shell
memovault plugins install claude-code
memovault plugins install cursor
memovault plugins install gemini
memovault plugins install codex
memovault plugins uninstall claude-code  # remove hooks
```
| Hook | Trigger | Action |
|---|---|---|
| `UserPromptSubmit` | Before every prompt | Fetches recent session summaries + relevant memories, prepends as context |
| `Stop` | When the tool exits | Summarizes the session and stores it to LTM |
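The `UserPromptSubmit` flow amounts to prepending retrieved text to the user's prompt. A hypothetical sketch of that assembly step (function and section names are assumptions, not MemoVault's API):

```python
def inject_context(prompt, summaries, memories):
    """Prepend recent session summaries and relevant memories to a prompt,
    mirroring what the UserPromptSubmit hook does conceptually."""
    sections = []
    if summaries:
        sections.append("## Recent sessions\n" + "\n".join(f"- {s}" for s in summaries))
    if memories:
        sections.append("## Relevant memories\n" + "\n".join(f"- {m}" for m in memories))
    sections.append(prompt)  # the original prompt always comes last
    return "\n\n".join(sections)

print(inject_context(
    "Fix the login bug",
    summaries=["Refactored auth module"],
    memories=["Project uses FastAPI"],
))
```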
Claude Code — writes hooks to ~/.claude/settings.json:
```json
{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": ".*",
      "command": "memovault hook prompt-submit --api http://localhost:8080"
    }],
    "Stop": [{
      "command": "memovault hook session-end --api http://localhost:8080"
    }]
  }
}
```
Cursor — writes memovault.hooks config to Cursor's settings.json.
Gemini CLI / Codex CLI — adds a shell wrapper function to ~/.zshrc. Run source ~/.zshrc once after install to activate.
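Since Gemini CLI and Codex CLI have no native hook system, the installer wraps the CLI in a shell function. The generated wrapper might look roughly like this (a sketch built from the `memovault hook` commands shown above; the actual generated function may differ):

```shell
# Hypothetical wrapper, approximating what the installer adds to ~/.zshrc
gemini() {
  memovault hook prompt-submit --api http://localhost:8080  # inject memory context
  command gemini "$@"                                       # run the real CLI
  memovault hook session-end --api http://localhost:8080    # save session summary
}
```

`command gemini` bypasses the function itself, so the wrapper calls the real binary instead of recursing.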
```shell
# Add to ~/.zshrc or ~/.bash_profile
memovault service start 2>/dev/null
```
```shell
memovault service start              # start REST API in background
memovault service start --port 9090  # custom port
memovault service status
memovault service stop

# Foreground (useful for debugging)
memovault api --host 127.0.0.1 --port 8080
```
MIT
Add this to claude_desktop_config.json and restart Claude Desktop.
```json
{
  "mcpServers": {
    "memovault": {
      "command": "memovault",
      "args": ["mcp"]
    }
  }
}
```