Q-learning memory for Claude Code. Persistent memory that learns which context helps you get work done. Memories that lead to productive sessions (commits, PRs, tests) earn higher retrieval rank automatically. Five MCP tools, hybrid BM25 + vector + recency scoring, local-first with Qdrant + FastEmbed.
How did this happen? — a hippocampus for AI agents.
Capture trajectories raw. Grade only when reality returns its verdict. Build a labeled corpus of human-AI decisions tied to grounded outcomes.
Quick Start · How It Works · Pipeline · Publish · MCP Tools · Status
When you close a deal, ship a feature, or lose a client — how did it happen? Which decisions, in what order, against which context, on which hypotheses? Today's AI agents can't answer that. They follow skills and instructions perfectly, but they don't accumulate grounded knowledge about how outcomes actually arrived.
OpenExp captures every human-AI decision as a step in a trajectory, links those steps into coherent journeys, and grades each journey retroactively when reality returns its verdict — a deal closes, a sprint ships, a payment lands. The result is a continuously growing labeled dataset of decisions tied to outcomes, ready to train domain-specific intuition.
We do not hand-craft features at step level (tone: urgent, signal: positive, hypothesis: probable). Pre-labeling injects the labeler's biases and corrupts the eventual training signal. Same hygiene as credit scoring: collect rich features per applicant, label only the terminal outcome (paid / didn't), let the model learn what predicts repayment from data alone.
Only terminal outcomes get labels:
- `outcome` — closed_won / closed_lost / failed / abandoned
- `grade` — 0.0 to 1.0, school-style

Steps are stored raw. Authors annotate their own intent, hypotheses, and decisions ("I believed X at this point", "I chose Y because Z"). They do not label the signal quality of individual events — that's what the eventual model learns.
Casual analogy: kids in school don't get annotations on every homework problem. They turn in work, get a grade at the end of the term, and develop intuition over hundreds of grades.
```shell
git clone https://github.com/anthroos/openexp.git
cd openexp
./setup.sh
```
That installs the four hooks into Claude Code, brings up Qdrant in Docker, and registers the MCP server.
Prerequisites: Python 3.11+, Docker, jq.
No API key required for core functionality. Embeddings run locally via FastEmbed. An Anthropic API key is optional and only powers the two-prompt pipeline (anonymize + extract experience) when you publish.
Four hooks run automatically inside Claude Code:
| Hook | When | What |
|---|---|---|
| `SessionStart` | Session opens | Searches Qdrant for relevant memories, injects top results as context |
| `UserPromptSubmit` | Every message | Lightweight per-prompt recall |
| `PostToolUse` | After Write / Edit / Bash | Captures observations as JSONL |
| `SessionEnd` | Session closes | Ingests transcript into Qdrant; extracts decisions via Opus 4.x (async) |
Retrieval ranks via semantic similarity + BM25 + recency. No magic numbers. No Q-value scoring component.
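The blend of the three signals can be sketched as a weighted sum. A minimal illustration only — the weights, half-life, and function names below are assumptions, not OpenExp's actual scoring code:

```python
import math
import time

def hybrid_score(semantic: float, bm25: float, created_at: float,
                 w_sem: float = 0.5, w_bm25: float = 0.3, w_rec: float = 0.2,
                 half_life_days: float = 30.0) -> float:
    """Blend semantic similarity, BM25, and recency into one rank score.

    `semantic` and `bm25` are assumed pre-normalized to [0, 1];
    recency decays exponentially, halving every `half_life_days`.
    All weights here are illustrative placeholders.
    """
    age_days = (time.time() - created_at) / 86400
    recency = math.exp(-math.log(2) * age_days / half_life_days)
    return w_sem * semantic + w_bm25 * bm25 + w_rec * recency
```

With equal text scores, a memory written today outranks one written three months ago, which is the behavior the recency term is there to buy.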
When you decide to publish an experience — turn a real, terminal trajectory into a shareable artifact — two prompts do the work:
prompts/anonymize.md — takes raw trajectory data (transcripts, emails, decisions) and produces an anonymized YAML trajectory. PII is replaced by category tokens (<counterparty_cto>, <dual_use_hardware>, <value:10k-100k>, <local_currency>, day_+5) while structural features are preserved.
prompts/extract_experience.md — wraps the anonymized trajectory with the terminal outcome, grade, and grade reason; produces a final experience.yaml artifact ready to share.
You run both prompts inside your own Claude Code, against your own Qdrant. Nothing is sent to a central server.
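The category-token substitution that `prompts/anonymize.md` performs can be shown mechanically. The real pipeline is LLM-driven and context-aware; the regex rules, names, and tokens below are invented purely to illustrate the output shape:

```python
import re

# Illustrative rules only. The actual anonymization prompt understands context
# (who the counterparty is, which day offset an absolute date maps to);
# these static patterns just demonstrate the token vocabulary.
RULES = [
    (re.compile(r"\$[\d,]+\b"), "<value:10k-100k>"),
    (re.compile(r"\b\d{4}-\d{2}-\d{2}\b"), "day_+N"),   # real output uses relative offsets like day_+5
    (re.compile(r"\bAcme Corp\b"), "<counterparty_org>"),
]

def anonymize(text: str) -> str:
    for pattern, token in RULES:
        text = pattern.sub(token, text)
    return text
```

The structural features (an amount in a band, a date offset, a counterparty role) survive; the identifying specifics do not.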
A published experience is three files in a UUID-named directory:
```
experiences/<uuid>/
├── experience.yaml             # wrapper: applies_when, terminal, searchable_summary, metadata
├── trajectory.anonymized.yaml  # full ordered timeline of N steps
└── README.md                   # human-readable face for the marketplace
```
`experience.yaml` shape (abridged from seed d49e0997):

```yaml
experience:
  id: d49e0997-8455-4d3c-90ca-d6cf54d0f662
  experience_type: acquisition
  domain: sales
  duration_days: 57
  applies_when: |
    A counterparty technical decision-maker books an inbound discovery
    call and you are running a "free pilot work → commercial contract"
    acquisition arc, with a counterparty-prepared NDA and a
    local-jurisdiction e-signing platform.
  terminal:
    outcome: closed_won
    grade: 1.0
    grade_source: manual
    grade_reason: |
      "Successful case. Exactly how my business should look — a clean,
      typical good case."
    closed_at: day_+57
  trajectory_ref: trajectory.anonymized.yaml
  searchable_summary: |
    An inbound discovery call from a counterparty technical decision-maker
    led to a counterparty-prepared mutual NDA and a free pilot batch — a
    sample dataset delivered 24 days after first contact. The counterparty
    went silent for ~3 weeks after pilot delivery, then re-engaged and
    introduced a project manager to own day-to-day. ...
  metadata:
    verified: false
    license: MIT
```
A published experience is a namespaced Claude Code skill:
```
openexp:<author-handle>:<experience-slug>
```

Drop the pack into `~/.claude/skills/openexp:<author>:<slug>/` (rename the directory to the skill-namespaced form on copy). Claude Code auto-discovers it on the next session.
```shell
# Install the seed pack as a skill
cp -r ~/openexp/experiences/d49e0997 \
  ~/.claude/skills/openexp:ivan-pasichnyk:inbound-acquisition-with-free-pilot
```
Two layers of identity: the skill-namespaced directory name identifies the pack to Claude Code, and `SKILL.md` inside the pack is the entry point — it tells the user's Claude when to invoke, how to use the trajectory, and what not to do (no fabrication, no de-anonymization, attribution required).
See docs/skill-architecture.md for the full naming convention, install flow, and design rationale.
The experiences/ directory in this repo is the seed of an eventual marketplace. The publication format works; seeds will accumulate. A directory of installable experiences is the eventual surface, not a built product today.
Five focused tools (hippocampus model — write everything, retrieve selectively):
| Tool | Description |
|---|---|
| `search_memory` | Hybrid search: semantic similarity + BM25 + recency |
| `add_memory` | Store a memory. Supports `client_id` for entity tagging |
| `log_prediction` | Track a prediction for later outcome resolution |
| `log_outcome` | Resolve a prediction with an outcome |
| `memory_stats` | Collection stats: point counts by source/type, session count |
```shell
openexp search -q "stalled enterprise procurement" -n 5
openexp ingest   # ingest pending transcripts into Qdrant
openexp stats    # Q-cache + collection stats
```
Environment variables (.env):
| Variable | Default | Description |
|---|---|---|
| `QDRANT_HOST` | `localhost` | Qdrant server host |
| `QDRANT_PORT` | `6333` | Qdrant server port |
| `OPENEXP_COLLECTION` | `openexp_memories` | Qdrant collection name |
| `OPENEXP_DATA_DIR` | `~/.openexp/data` | Predictions, retrieval logs |
| `OPENEXP_OBSERVATIONS_DIR` | `~/.openexp/observations` | Hook output |
| `OPENEXP_SESSIONS_DIR` | `~/.openexp/sessions` | Session summaries |
| `OPENEXP_EMBEDDING_MODEL` | `BAAI/bge-small-en-v1.5` | Embedding model (local, free) |
| `ANTHROPIC_API_KEY` | (optional) | Required only for the publishing pipeline |
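A minimal `.env` might look like the following (the values shown are just the documented defaults; adjust per deployment):

```shell
QDRANT_HOST=localhost
QDRANT_PORT=6333
OPENEXP_COLLECTION=openexp_memories
OPENEXP_EMBEDDING_MODEL=BAAI/bge-small-en-v1.5
# ANTHROPIC_API_KEY=sk-...   # optional; only needed for the publishing pipeline
```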
Pilot. Architecture freeze landed 2026-04-26. First experience seed published: experiences/d49e0997/ — a 57-day inbound acquisition that closed at grade 1.0 (author's own assessment), anonymized to category tokens.
Honest about what isn't done:
Step-level author-annotation fields (`author_intent`, `author_hypothesis`, `author_decision`) are a likely near-term addition.

See docs/redesign-2026-04-26.md for the full architecture freeze and docs/claude-design-brief.md for the v2 product framing.
This project is in early stages. See CONTRIBUTING.md for setup and workflow.
The most useful contribution right now is publishing a real experience. Take one of your own closed trajectories, run it through prompts/anonymize.md and prompts/extract_experience.md, and open a PR adding a new directory under experiences/.
MIT © Ivan Pasichnyk