Cortex MCP provides AI agents with long-term portfolio memory by transforming project history and developer decisions into a structured, queryable knowledge graph. It enables assistants to maintain context across different repositories by exposing historical patterns, architectural decisions, and technology preferences through the Model Context Protocol.
Portfolio memory for AI agents.
Transforms your real project history into structured context that any AI assistant can query in real time.
License: MIT Node.js 20+ TypeScript Strict CI
⚠️ Status: Active MVP — functional and tested, but under active development. Feedback and contributions are welcome.
Cortex MCP is a local server that implements the Model Context Protocol (MCP) — the open standard that lets AI assistants safely access external data.
It reads synthesized knowledge about your projects (a lightweight adjacency knowledge graph mapping relations between apps, technologies, and domains; patterns; observations; developer profile) and exposes it as tools, resources, and prompts consumable by any MCP-compatible agent.
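As an illustration, a "lightweight adjacency knowledge graph" of this kind can be modelled as typed entities plus relations. This is a minimal sketch; the entity names are hypothetical and the real schema lives in the project's YAML files:

```typescript
// Hypothetical shape of the adjacency knowledge graph (illustrative only).
type EntityKind = "app" | "tech" | "domain";

interface Relation {
  from: string;
  to: string;
  kind: string; // e.g. "uses", "belongs-to"
}

const entities: Record<string, EntityKind> = {
  "marketplace-backend": "app",
  "fastapi": "tech",
  "fintech": "domain",
};

const relations: Relation[] = [
  { from: "marketplace-backend", to: "fastapi", kind: "uses" },
  { from: "marketplace-backend", to: "fintech", kind: "belongs-to" },
];

// Adjacency lookup: which entities does a given entity connect to?
function neighbors(id: string): string[] {
  return relations.filter(r => r.from === id).map(r => r.to);
}

console.log(neighbors("marketplace-backend")); // ["fastapi", "fintech"]
```

An agent querying the graph walks these relations to answer questions like "which apps use FastAPI?".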
When you open Claude, Copilot, or Cursor in a project, the agent knows nothing about your history: your past stacks, your decisions, your recurring patterns. Every session starts from scratch — amnesiac pair programming.
Cortex gives AI agents the same long-term memory you have as a developer. It accumulates knowledge across repositories and makes it instantly accessible in every session.
"AI is your mirror — it reveals who you are faster. If you are incompetent, it will produce bad things faster. If you are competent, it will produce good things faster." — Akita
Cortex works like the "CLAUDE.md" of your entire portfolio — not of one project, but of your whole career.
| Problem | Without Cortex | With Cortex |
|---|---|---|
| Agent suggests a stack | Generic, based on popularity | Based on your real history |
| Agent solves a problem | Standard solution, may reinvent the wheel | "You already did this in X, here it is" |
| Architecture decision | Over-engineering (agent never says no) | "Your pattern is to simplify, see Y" |
| Context lost between sessions | Starts from zero every time | Accumulates decisions, patterns, pitfalls |
100% local, no cloud, no telemetry. Your data never leaves your machine.
Read this section before using. Most users only use Layer 1 and find the system "shallow". The real value is in Layers 2 and 3.
Cortex does not learn passively. It is a structured knowledge repository — the more you feed it, the more useful it becomes. The right mental model: a portfolio wiki that AI agents query in real time.
The `cortex-mcp scan` command automatically detects each project's stack, technologies, and git metadata.
Important limitation: the scanner only sees what is in the code and git history. It does not know why you chose a technology, what problems you encountered, or what you learned. A project with 1 commit (shallow clone) generates a very poor profile.
The real power of Cortex is fed by you during work sessions:
| Tool | When to use | Example |
|---|---|---|
| `add_observation` | When you learn something relevant, find a pitfall, measure a metric | "Accuracy bottleneck is dataset size, not architecture" |
| `track_decision` | Every architecture or product decision with rationale | "Chose REST over GraphQL because it is simple CRUD" |
| `add_pattern` | When you identify something you repeat across projects | "FastAPI + PostgreSQL + Redis for APIs with caching" |
| `start_session` | When starting work on any project | Automatically injects historical context |
| `end_session` | When finishing — with summary and next steps | Accumulates progression between sessions |
| `add_skill` | For prompt templates you reuse | "How to do a code review in Python projects" |
⚠️ The real challenge: Layer 2 is where 60% of the value lives, but it requires actively remembering to call `start_session`, `track_decision`, and `end_session` every day. Most developers will not sustain this habit without intentional effort. The gap between "installed" and "actually useful" is real — Cortex is only as good as the discipline you bring to curation.
The roadmap item "Dynamic prompts based on session history" points toward closing this gap, but it has not shipped yet. In the meantime: set up reminders, or make these calls part of your team's Definition of Done.
The file knowledge/operator-profile.yaml is the developer's personal context. It is generated automatically by the scanner but requires manual curation to have real substance. Edit it directly:
identity:
name: Your Name
role: Your Role
domain: Your areas of expertise
github: https://github.com/your-username
You can add third-party repositories (public or private) as sources for analysis and comparison.
`--name ref-repo-name`).

git clone https://github.com/BUGG1N/cortex-mcp.git cortex-mcp
cd cortex-mcp
npm install
npm run build
# 1. Initialize the knowledge directory
npx cortex-mcp init
# 2. Add repositories (local, public GitHub, or private with token)
npx cortex-mcp add ./my-project
npx cortex-mcp add https://github.com/user/public-repo
npx cortex-mcp add https://github.com/user/private-repo --token ghp_xxx
# 3. Scan everything → generates knowledge files automatically
npx cortex-mcp scan
# 4. Start the MCP server
npx cortex-mcp
Security: the token is only used to authenticate Git operations and is not written into the remote URL of the cloned repository.
⚠️ Repositories cloned from a URL arrive with a shallow history. This causes the scanner to report `1 commit` and degrades the quality of the generated profile. To fix:

cd repos/repo-name && git fetch --unshallow
cd ../.. && npx cortex-mcp scan
After the first scan, edit knowledge/operator-profile.yaml to add real context:
identity:
name: Your Real Name
role: Your Role (e.g. Founder CTO, Senior Engineer)
domain: Your domains (e.g. fintech, IoT, healthcare)
github: https://github.com/your-username
Without this, the profile defaults to `name: Developer`, with expertise inferred only from commit volume.
The scanner generates a starting point. Useful knowledge comes from curation — via MCP tools with the agent connected:
# Examples of prompts that feed Cortex automatically:
"Record that I chose FastAPI over Flask because I needed native async"
"Add an observation to project-x that the accuracy bottleneck is dataset size, not architecture"
"Start a work session on project-y focused on Sprint 0"
As the agent executes these actions, Cortex persists the knowledge in YAML/JSONL files and makes it available in all future sessions.
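The JSONL persistence mentioned above amounts to appending one self-contained JSON object per line. A rough sketch follows; the record fields (`app`, `text`, `ts`) are assumptions for illustration, not the actual schema:

```typescript
import { appendFileSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Hypothetical observation record — field names are illustrative.
interface Observation {
  app: string;
  text: string;
  ts: string;
}

// JSONL persistence: one JSON object per line, append-only.
function addObservation(file: string, obs: Observation): void {
  appendFileSync(file, JSON.stringify(obs) + "\n");
}

const file = join(tmpdir(), "observations.jsonl");
addObservation(file, {
  app: "project-x",
  text: "Accuracy bottleneck is dataset size, not architecture",
  ts: new Date().toISOString(),
});

// Every line parses independently, so records accumulate safely over time.
const records = readFileSync(file, "utf8")
  .trim()
  .split("\n")
  .map(line => JSON.parse(line));
console.log(records[records.length - 1].app); // "project-x"
```

Append-only JSONL is a good fit here: each session can add records without rewriting the file, and a partially written last line cannot corrupt earlier entries.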
Add to claude_desktop_config.json:
{
"mcpServers": {
"cortex": {
"command": "node",
"args": ["/absolute/path/to/cortex-mcp/dist/cli.js"]
}
}
}
Windows:
{
"mcpServers": {
"cortex": {
"command": "node",
"args": ["C:\\dev\\cortex-mcp\\dist\\cli.js"]
}
}
}
macOS/Linux:
{
"mcpServers": {
"cortex": {
"command": "node",
"args": ["/home/user/cortex-mcp/dist/cli.js"]
}
}
}
Restart Claude Desktop. You will see "cortex" available with tools and resources.
In .vscode/mcp.json at the Cortex project root:
{
"servers": {
"cortex": {
"command": "node",
"args": ["<path>/dist/cli.js"]
}
}
}
To use Cortex in any VS Code window (e.g., open another project and still have access to the full knowledge base), create the global MCP config file:
File locations:

- Windows: `%APPDATA%\Code\User\mcp.json`
- macOS: `~/Library/Application Support/Code/User/mcp.json`
- Linux: `~/.config/Code/User/mcp.json`

{
"servers": {
"cortex": {
"command": "node",
"args": ["<path>/dist/cli.js"],
"type": "stdio",
"env": {
"CORTEX_ROOT": "<path>"
}
}
}
}
⚠️ `CORTEX_ROOT` is required in the global configuration. Without it, Cortex tries to find the root from the current working directory (cwd). When VS Code opens another project, the cwd is that project — and Cortex cannot find the `knowledge/` folder, returning empty results. `CORTEX_ROOT` resolves this by explicitly pointing to the directory where data is stored.
Replace <path> with the absolute path to the Cortex root (e.g., C:\\dev\\CORTEX on Windows, /home/user/cortex on Linux).
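The resolution behaviour described above can be sketched roughly like this — an assumed model of the fallback logic, not the project's actual implementation:

```typescript
import { existsSync } from "node:fs";
import * as path from "node:path";

// Assumed resolution order: an explicit CORTEX_ROOT wins; otherwise walk
// up from the cwd looking for a knowledge/ directory.
function resolveRoot(
  cwd: string,
  env: Record<string, string | undefined>
): string | null {
  if (env.CORTEX_ROOT) return env.CORTEX_ROOT;
  let dir = path.resolve(cwd);
  for (;;) {
    if (existsSync(path.join(dir, "knowledge"))) return dir;
    const parent = path.dirname(dir);
    if (parent === dir) return null; // hit filesystem root: nothing found
    dir = parent;
  }
}

// With CORTEX_ROOT set, the current working directory is irrelevant:
console.log(resolveRoot("/some/other/project", { CORTEX_ROOT: "/home/user/cortex" }));
// "/home/user/cortex"
```

This is why the global config needs the env var: when VS Code launches the server from an unrelated project, only the explicit `CORTEX_ROOT` branch can succeed.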
Cortex is most useful when integrated into the natural development rhythm, not only used during initial setup.
[in the agent chat, Agent mode]
"Start a session on project X focused on [goal]"
start_session automatically injects: project stack, previous observations, relevant patterns, last recorded decisions, and operator context.
"Record that I found an N+1 query problem in the /observations endpoint"
"Add the decision to use Alembic instead of manual migrations, reason: traceability"
"Note that the current model accuracy is below the target threshold — the bottleneck is dataset size, not architecture"
"End the session with summary: [what was done], decisions: [list], next steps: [list]"
"What do you know about project X?" → the agent queries Cortex and summarizes current state
"What were the last decisions on project Y?" → returns curated decisions log
"What is my pattern for Python APIs?" → returns from the knowledge store
# Update GitHub sources
npx cortex-mcp sync
# Re-scan after stack changes
npx cortex-mcp scan
# Diagnostics if something seems wrong
npx cortex-mcp doctor
cortex-mcp [command] [options]
Commands:
init Initialize knowledge directory with empty files
add <source> Add a repository source (local path or GitHub URL)
scan Scan all sources and update the knowledge base
sync Update GitHub sources (pull latest)
sources List all configured sources
remove <name> Remove a source
serve Start the MCP server (default if no command given)
doctor Diagnose the environment and knowledge files
Adding sources:
cortex-mcp add ./local/path Local directory
cortex-mcp add https://github.com/u/repo Public GitHub repo (auto-clones)
cortex-mcp add https://github.com/u/repo --token ghp_xxx Private repo
cortex-mcp add <source> --name my-name Custom name
cortex-mcp add <source> --branch develop Specific branch
Options:
--root <path> Root directory (auto-detected from cwd)
--no-watch Disable file watching for hot-reload
--help, -h Show help
--version, -v Show version
Environment variables:
- `CORTEX_ROOT` — overrides root directory
- `CORTEX_KNOWLEDGE_PATH` — overrides knowledge files path
- `GITHUB_TOKEN` — GitHub token for private repositories (alternative to --token)

Read tools are used automatically by the agent when answering questions. Curation tools are how knowledge is accumulated — each call persists something to YAML/JSONL files.
Reading and Querying
| Tool | What it does |
|---|---|
| `search_portfolio` | Full-text search across projects, technologies, and patterns |
| `get_app_context` | Complete context for an app: stack, patterns, connections |
| `query_graph` | Navigate the knowledge graph from any entity |
| `who_uses` | List apps that use a given technology |
| `find_similar_apps` | Find similar apps by stack overlap |
| `get_portfolio_overview` | Overview: profile, stats, top techs, distribution |
| `find_patterns` | Architectural and workflow patterns identified |
| `get_conventions` | Code conventions for a specific context |
| `find_reusable` | Find reusable components/projects |
| `get_tech_radar` | Technology radar: adopt / experiment / assess / hold |
| `suggest_stack` | Suggest a stack for a new project with reasoning and risk analysis |
| `run_health` | Health check of the knowledge base |
| `get_portfolio_diff` | Portfolio changes in the last N days |
| `compare_stacks` | Compare stacks between two projects |
| `get_module_map` | Intra-repository import/module map |
| `export_context_bundle` | Export portfolio as a single bundle for onboarding |
| `get_file` | Read a file from a portfolio repository |
| `grep_codebase` | Regex search in a repository's code |
| `get_file_tree` | Directory structure of a repository |
| `list_skills` | List available skills/prompts (built-in + custom) |
| `get_skill` | Return a full skill |
| `invoke_skill` | Execute a skill with context injection |
Curation and Knowledge Accumulation (use actively — this is where the real value is)
| Tool | What it persists | When to use |
|---|---|---|
| `add_observation` | Observation in `observations.jsonl` | Learnings, real metrics, pitfalls encountered |
| `track_decision` | Decision with rationale in `observations.jsonl` | Every architecture, stack, or product choice |
| `add_pattern` | Reusable pattern in `patterns.yaml` | When you identify something you repeat across projects |
| `update_app_status` | Status/health in `registry.yaml` | After significant project state changes |
| `start_session` | Opens session in `sessions.jsonl` with injected context | When starting any work session |
| `end_session` | Closes session with summary and next steps | When finishing — do not skip this step |
| `add_skill` | Prompt template in `skills.yaml` | For prompts you reuse frequently |
| URI | Content |
|---|---|
| `cortex://portfolio` | Complete portfolio overview (JSON) |
| `cortex://graph` | Full knowledge graph |
| `cortex://registry` | All apps with metadata |
| `cortex://patterns` | All identified patterns |
| `cortex://profile` | Developer profile |
| `cortex://stats` | Quick numeric statistics |
| `cortex://app/{id}` | Complete context for any app |
| `cortex://app/{id}/file/{path}` | A repository file accessed via URI |
| `cortex://sessions/{appId}` | Session history for an app |
| `cortex://skills` | List all skills |
| `cortex://skill/{id}` | Content of a specific skill |
| Prompt | Function |
|---|---|
| `session-context` | Session bootstrap — injects profile, app, patterns, conventions |
| `code-review` | Code review with stack and pattern awareness |
| `new-project` | Plan a new project based on portfolio history |
Cortex reads these files from the knowledge/ directory. Each file has an origin and an expected utility level:
| File | Generated by | Curated by you | Content |
|---|---|---|---|
| `knowledge-graph.yaml` | auto (`scan`) | Not required | Entities + relations (apps, techs, domains) |
| `registry.yaml` | auto (`scan`) | `update_app_status` | App registry with metadata (stack, health, status) |
| `operator-profile.yaml` | auto (`scan`) | Manual editing recommended | Developer profile (name, domain, expertise) |
| `patterns.yaml` | `scan` (partial) | `add_pattern` | Recurring architectural and workflow patterns |
| `observations.jsonl` | Never automatic | `add_observation`, `track_decision` | Observations, decisions, pitfalls — the most valuable file |
| `sessions.jsonl` | Never automatic | `start_session` / `end_session` | Session history per app |
| `skills.yaml` | Never automatic | `add_skill` | User-defined skills/prompts |
| `sources.yaml` | `add` + `scan` | Not required | Registered sources (local paths, GitHub URLs) |
`observations.jsonl` is the most important file and the only one that is never populated automatically. A portfolio without observations only has detected stack — no memory of why things were done that way.

`operator-profile.yaml` is generated with `name: Developer` and expertise inferred from commit volume. For real substance, edit it manually, adding name, domain, and personal context.
The examples/knowledge/ directory contains a complete example portfolio with 4 apps, 14 technologies, 24 relations, 6 patterns, and 7 observations. Use it as a reference to understand the file structure or as a starting point for your own portfolio.
cortex-mcp/
├── src/
│ ├── cli.ts # CLI entry point (init, add, scan, sync, serve)
│ ├── config.ts # Config resolution (flags → yaml → env → defaults)
│ ├── index.ts # Public API exports
│ ├── server.ts # MCP server (stdio transport)
│ ├── types.ts # Core TypeScript types
│ ├── engine/
│ │ ├── knowledge-engine.ts # Orchestrator — load, index, query
│ │ ├── yaml-parser.ts # YAML/JSONL parsers
│ │ ├── search-index.ts # MiniSearch full-text index
│ │ ├── graph-traversal.ts # BFS/DFS graph queries
│ │ ├── file-reader.ts # Repository file reader
│ │ └── file-watcher.ts # Hot-reload via Chokidar
│ ├── mcp/
│ │ ├── tools/ # MCP tools
│ │ ├── resources/ # MCP resources
│ │ └── prompts/ # MCP prompts
│ ├── scanner/
│ │ ├── index.ts # Scanner orchestrator
│ │ └── detectors/ # Stack auto-detection
│ │ ├── package-json.ts # Node.js / TypeScript
│ │ ├── python.ts # Python (pip, poetry, pipenv)
│ │ ├── java.ts # Java (Maven, Gradle)
│ │ ├── dotnet.ts # .NET / C#
│ │ ├── go.ts # Go (go.mod)
│ │ ├── rust.ts # Rust (Cargo.toml)
│ │ ├── infra.ts # Terraform, Kubernetes, Helm
│ │ ├── docker.ts # Docker / containerization
│ │ ├── ci.ts # CI/CD (GitHub Actions, GitLab, Jenkins)
│ │ └── git.ts # Git metadata (commits, contributors)
│ ├── sources/
│ │ └── index.ts # Source manager (local, GitHub)
│ └── writer/
│ └── index.ts # Knowledge writer (YAML/JSONL persistence)
├── test/ # 35 test files, 90 tests
├── package.json
├── tsconfig.json
└── tsup.config.ts
| Component | Technology |
|---|---|
| Runtime | Node.js 20+ |
| Language | TypeScript (strict, zero any) |
| MCP SDK | @modelcontextprotocol/sdk |
| YAML | yaml (npm) |
| Search | MiniSearch (BM25 + lexical fuzzy) |
| Watch | Chokidar (debounce 500ms) |
| Build | tsup (ESM, Node 20 target) |
| Tests | Vitest |
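For context on the `graph-traversal.ts` component listed in the structure above: a minimal BFS over an adjacency list looks like the following. This is illustrative of the kind of query involved, not the project's actual code:

```typescript
// Breadth-first traversal over an adjacency list: returns every entity
// reachable from `start`, in visit order.
function bfs(graph: Record<string, string[]>, start: string): string[] {
  const visited = new Set<string>([start]);
  const queue: string[] = [start];
  const order: string[] = [];
  while (queue.length > 0) {
    const node = queue.shift()!;
    order.push(node);
    for (const next of graph[node] ?? []) {
      if (!visited.has(next)) {
        visited.add(next);
        queue.push(next);
      }
    }
  }
  return order;
}

// Hypothetical mini-portfolio: an app, its techs, and a transitive dependency.
const graph: Record<string, string[]> = {
  "app-a": ["fastapi", "postgres"],
  "fastapi": ["python"],
  "postgres": [],
  "python": [],
};
console.log(bfs(graph, "app-a")); // ["app-a", "fastapi", "postgres", "python"]
```

BFS suits questions like "everything connected to this app within N hops", while DFS is the natural fit for dependency-chain style queries.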
npm install # Install dependencies
npm run build # Compile with tsup
npm run typecheck # TypeScript check (zero errors expected)
npm run test # Run tests
npm run dev # Watch mode
Full contribution guide: CONTRIBUTING.md
- No `any`, no `@ts-ignore`, no `@ts-expect-error`
- `MAX_ITEMS=10` to avoid context overflow

| Element | Convention |
|---|---|
| Files | kebab-case.ts |
| Classes | PascalCase |
| Functions/vars | camelCase |
| Constants | UPPER_SNAKE_CASE |
| Imports | ESM (.js extension required) |
- `npm run build` without errors
- `npm run typecheck` zero errors
- `npm test` all pass
- No `any`, no `@ts-ignore`

| Tool | Multi-repo | Tech Graph | Dev Profile | Local MCP | Privacy | Cost |
|---|---|---|---|---|---|---|
| Cortex MCP | ✅ | ✅ | ✅ | ✅ | ✅ Total | Free |
| Anthropic Memory | ❌ | ⚠️ | ❌ | ✅ | ✅ | Free |
| Mem0/OpenMemory | ⚠️ | ⚠️ | ⚠️ | ✅ | ✅ Opt. | Freemium |
| Repomix | ⚠️ | ❌ | ❌ | ✅ | ✅ | Free |
| Sourcegraph Cody | ✅ | ⚠️ | ❌ | ❌ | ✅ Self | $$$ |
| Augment Code | ✅ | ✅ | ⚠️ | ❌ | ❌ Cloud | $$ |
This table compares features, not maturity. Cortex is an individual MVP-stage project; the other tools have teams, communities, and larger production histories.
Cortex combines multi-repo scanning + local technology graph + developer profile + delivery via local MCP.
Without Cortex:
You: "I need to build a real-time chat app"
Claude: "You can use Socket.IO with Express..."
(Generic advice, no context about your experience)
With Cortex:
You: "I need to build a real-time chat app"
Claude: "I can see in your portfolio that you have experience with Socket.IO
in 'collab-tool' and 'gaming-platform'. You prefer NestJS for structured APIs.
I can reuse your auth middleware from 'user-service'
and the Redis session pattern from 'marketplace-backend'."
Question: "GraphQL or REST for this project?"
Answer with Cortex: "Looking at your history: you used GraphQL in 2 of 12 projects, both complex dashboards with many data relations. For simple CRUD APIs, you consistently chose REST + Express. This project looks like CRUD, so REST aligns with your proven patterns."
Complete example of knowledge extraction from NestJS in examples/nestjs-analysis.md — demonstrating automatic detection of 18 technologies, enterprise patterns, and integration strategies.
"Will loading all this knowledge eat up my context window?"
No — and here are the numbers.
| Element | Approx. tokens |
|---|---|
| Tool call invocation (name + schema) | ~150 |
| `start_session` response (stack, last 3 decisions, patterns, operator profile) | ~1,200 |
| Additional tool calls during session (3–5 ×) | ~600–1,000 |
| Total session bootstrap | ~2,000–2,500 |
Token counts measured on real `start_session` responses serialised as JSON-RPC. Your figures will be slightly lower if your knowledge base is sparse, or higher if you have dense operator-profile notes.
| Model | Context window | Cortex budget | % used |
|---|---|---|---|
| Claude Sonnet 3.7 / 3.5 | 200,000 tokens | ~2,500 | ~1.25 % |
| GPT-4o | 128,000 tokens | ~2,500 | ~1.95 % |
| GitHub Copilot (GPT-4o) | 128,000 tokens | ~2,500 | ~1.95 % |
| Gemini 2.0 Flash | 1,048,576 tokens | ~2,500 | ~0.24 % |
For roughly 2 % of your context window, every session starts with your portfolio context already injected.
Compare that to a typical codebase dump with Repomix (50,000–300,000 tokens — 25–150 % of a 200k window). Cortex gives you the knowledge layer — the distilled "why" and "how" — at a fraction of the cost of pasting raw source files.
The ratio: ~1–2 % of context window → portfolio-wide memory. That is a return of 50–100× on context invested.
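The percentages in the model table follow directly from dividing the Cortex budget by each window size, which makes them easy to sanity-check:

```typescript
// Approximate figures from the tables above.
const cortexBudget = 2_500; // tokens injected per session bootstrap

const contextWindows: Record<string, number> = {
  "Claude Sonnet (200k)": 200_000,
  "GPT-4o (128k)": 128_000,
  "Gemini 2.0 Flash": 1_048_576,
};

for (const [model, window] of Object.entries(contextWindows)) {
  const pct = (cortexBudget / window) * 100;
  console.log(`${model}: ${pct.toFixed(2)} % of the context window`);
}
// Claude Sonnet (200k): 1.25 % of the context window
// GPT-4o (128k): 1.95 % of the context window
// Gemini 2.0 Flash: 0.24 % of the context window
```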
- `doctor` for first-use diagnostics
- Example portfolio (`examples/knowledge/`)
- Initialization (`npx cortex-mcp init`)

| Resource | Link |
|---|---|
| Model Context Protocol | https://modelcontextprotocol.io/ |
| Anthropic MCP Memory | https://github.com/modelcontextprotocol/servers/tree/main/src/memory |
| Graphiti (Zep) | https://github.com/getzep/graphiti |
| Mem0 | https://github.com/mem0ai/mem0 |
| Repomix | https://github.com/yamadashy/repomix |
If Cortex saves you time, consider buying me a coffee ☕
Crypto
| Network | Address |
|---|---|
| Bitcoin (BTC) | bc1qwvmzcy62c9kcd44zy67s57cn6pktmnctjk9zws |
| Ethereum / EVM (ETH) | 0x797eca0D88f92d08Ccc6dd10E3DEcFEacAc511Ce |
Tip: Use a wallet created specifically for donations — never your main or trading wallet. For Ethereum you can register a human-readable ENS name (e.g. `yourname.eth`) so the address is easy to share and verify.
PIX (Brazil)
Chave PIX: 4978dd10-e12d-42e8-8a32-257ad00594e3
You can also support the project by:
MIT