A persistent long-term memory system that enables AI clients to store and recall notes, code, and research via semantic search. It utilizes Google Gemini embeddings and Supabase pgvector to provide a secure, searchable 'Second Brain' for MCP-compatible applications.
A Second Brain powered by Model Context Protocol (MCP), Google Gemini Embedding 2, and Supabase pgvector — deployed on Vercel.
Connect any MCP-compatible AI client (Claude, Cursor, OpenCode, Copilot, etc.) and give it persistent long-term memory. Store text, images, PDFs, audio, and video — all embedded in a unified vector space for cross-modal semantic search.
```
AI Client (Claude / Cursor / OpenCode / Copilot)
        │
        ▼  MCP Protocol (Streamable HTTP + SSE)
        │  Authorization: Bearer <api-key>
┌──────────────────────────────────────────┐
│            Vercel (Next.js)              │
│          /api/mcp/[transport]            │
│                                          │
│   ┌── Auth Middleware ──┐                │
│   │  Bearer token check │                │
│   └─────────────────────┘                │
│                                          │
│   Tools:                                 │
│   • store_memory        (text)           │
│   • store_file          (base64 upload)  │
│   • store_file_from_url (URL fetch)      │
│   • search_memory       (cross-modal)    │
│   • get_file_url        (signed download)│
│   • list_memories                        │
│   • update_memory                        │
│   • delete_memory                        │
│   • get_stats                            │
│                                          │
│   REST Endpoint:                         │
│   • POST /api/upload    (direct file)    │
└──────────┬─────────────┬─────────────────┘
           │             │
     ┌─────┴─────┐   ┌───┴──────┐
     ▼           ▼   ▼          ▼
┌─────────┐ ┌──────────────┐ ┌───────────┐
│ Gemini  │ │  Supabase    │ │ Supabase  │
│ Embed 2 │ │  PostgreSQL  │ │ Storage   │
│  API    │ │  + pgvector  │ │ (files)   │
│         │ │  vector(768) │ │           │
└─────────┘ └──────────────┘ └───────────┘
```
Gemini Embedding 2 maps all modalities into the same 768-dimension vector space, so a single text query can retrieve matching images, PDFs, audio, and video. Supported file types:
| Modality | MIME Types | Limits |
|---|---|---|
| Image | `image/png`, `image/jpeg`, `image/webp`, `image/gif` | Up to 6 per request |
| PDF | `application/pdf` | Up to 6 pages |
| Audio | `audio/mpeg`, `audio/wav`, `audio/ogg`, `audio/mp3`, `audio/aac`, `audio/flac` | — |
| Video | `video/mp4`, `video/quicktime`, `video/webm` | Up to 120 seconds |
When you provide a description alongside a file, the system creates an interleaved embedding — a single vector that captures both the visual/audio content AND your text description. This produces significantly richer search results compared to embedding the file alone.
The `store_memory` tool embeds your content and writes it to the database. `search_memory` embeds your query, runs a cosine similarity search, and returns the matching memories. For files, the flow is the same — except the file bytes are sent to Gemini for multimodal embedding, and the raw file is stored in Supabase Storage with a signed download URL generated on retrieval.
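As a rough illustration of the scoring step, cosine similarity between two embedding vectors can be sketched in TypeScript — illustrative only, since the real search runs inside PostgreSQL via pgvector's HNSW index, not in application code:

```typescript
// Cosine similarity: dot product of the vectors divided by the
// product of their magnitudes. Returns a value in [-1, 1];
// identical directions score 1, orthogonal vectors score 0.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

pgvector's `<=>` operator computes cosine *distance* (1 − similarity), which is why the search function converts before comparing against the `threshold` parameter.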
The server uses Bearer token authentication on every request:
- Set multiple keys in `DIGITAL_BRAIN_API_KEYS` so each client gets its own key (and you can rotate them independently).
- The `memories` table is locked down — only `service_role` can access data. The anon key has zero access.
- The `brain-files` bucket is private — files are only accessible via time-limited signed URLs (1 hour expiry).

```bash
# Generate a strong 256-bit key
openssl rand -hex 32
```
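For illustration, the middleware's key check can be sketched as below — assuming `DIGITAL_BRAIN_API_KEYS` holds one or more keys separated by commas (the separator is an assumption, not confirmed by this README):

```typescript
// Hedged sketch of Bearer token validation against a multi-key env var.
// Assumes comma-separated keys; the real middleware may differ.
function isAuthorized(authHeader: string | null, keysEnv: string): boolean {
  if (!authHeader || !authHeader.startsWith("Bearer ")) return false;
  const token = authHeader.slice("Bearer ".length).trim();
  const validKeys = keysEnv
    .split(",")
    .map((k) => k.trim())
    .filter(Boolean);
  return validKeys.includes(token);
}
```

Because every tool call passes through this check, rotating a single client's key is just removing it from the env var and redeploying.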
| Component | Technology | Purpose |
|---|---|---|
| Embeddings | Gemini Embedding 2 (`gemini-embedding-2-preview`) | Multimodal embeddings — text, images, audio, video, PDF all in one vector space |
| Vector DB | Supabase + pgvector | PostgreSQL with vector similarity search (HNSW index, cosine distance) |
| File Storage | Supabase Storage | Private bucket for images, PDFs, audio, video with signed URL access |
| MCP Server | Next.js + `mcp-handler` | Exposes tools via MCP protocol with SSE transport |
| Hosting | Vercel | Serverless deployment, auto-scaling, scale-to-zero |
| Session Store | Upstash Redis (via Vercel KV) | Redis-backed SSE session management |
| Auth | Bearer token middleware | API key validation on every request |
Gemini Embedding 2 outputs 3072 dimensions by default but supports Matryoshka Representation Learning (MRL) — you can truncate to 768 with minimal quality loss. This saves ~75% storage and makes queries significantly faster, which matters a lot more for a personal knowledge base than that last fraction of accuracy.
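A minimal sketch of that truncation step: keep the first 768 dimensions and L2-renormalize so cosine similarities stay comparable. This is illustrative only — the project may perform it server-side or via an API parameter:

```typescript
// Matryoshka (MRL) truncation sketch: slice the leading dimensions,
// then renormalize to unit length so cosine math remains valid.
function truncateEmbedding(vec: number[], dims: number = 768): number[] {
  const head = vec.slice(0, dims);
  const norm = Math.sqrt(head.reduce((sum, x) => sum + x * x, 0));
  return norm === 0 ? head : head.map((x) => x / norm);
}
```

The resulting 768-dim vectors are what land in the `vector(768)` column, at a quarter of the storage of the full 3072 dimensions.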
### `store_memory`

Save text-based knowledge to the Digital Brain.

| Parameter | Type | Required | Description |
|---|---|---|---|
| `content` | string | ✅ | The text content to store |
| `source` | string | | Where it came from (e.g. `"conversation"`, `"web-research"`, a URL) |
| `tags` | string[] | | Tags for categorization (e.g. `["work", "azure", "ebr"]`) |
| `content_type` | enum | | One of `text`, `note`, `code`, `conversation`, `research`, `decision`, `reference` |
| `metadata` | object | | Arbitrary structured metadata |
### `store_file`

Store an image, PDF, audio, or video file via base64-encoded data. The file is embedded with Gemini Embedding 2 in the same vector space as text memories.

| Parameter | Type | Required | Description |
|---|---|---|---|
| `file_data` | string | ✅ | Base64-encoded file content |
| `file_name` | string | ✅ | Original filename with extension (e.g. `"diagram.png"`) |
| `mime_type` | string | ✅ | MIME type (see Supported File Types above) |
| `description` | string | | Text description — creates a richer interleaved embedding. Highly recommended. |
| `source` | string | | Source attribution |
| `tags` | string[] | | Tags for categorization |
| `metadata` | object | | Arbitrary structured metadata |
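A hedged sketch of preparing a `store_file` call in Node: base64-encode the bytes and fill the parameters from the table above. `buildStoreFilePayload` is a hypothetical helper, not part of this project:

```typescript
// Hypothetical helper: assemble the store_file arguments for a file
// already in memory. Field names match the parameter table above.
function buildStoreFilePayload(
  bytes: Uint8Array,
  fileName: string,
  mimeType: string,
  description?: string
): { file_data: string; file_name: string; mime_type: string; description?: string } {
  return {
    file_data: Buffer.from(bytes).toString("base64"), // Node Buffer for base64
    file_name: fileName,
    mime_type: mimeType,
    description,
  };
}
```

Base64 inflates the payload by roughly a third, which is why the direct `POST /api/upload` endpoint (below) can be more convenient for large files.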
### `store_file_from_url`

Fetch a file from a URL and store it with a multimodal embedding. Downloads the file, embeds it, and saves it to Supabase Storage.

| Parameter | Type | Required | Description |
|---|---|---|---|
| `url` | string | ✅ | URL of the file to download |
| `description` | string | | Text description for interleaved embedding |
| `file_name` | string | | Override filename (derived from URL if omitted) |
| `source` | string | | Source attribution (defaults to the URL) |
| `tags` | string[] | | Tags for categorization |
| `metadata` | object | | Arbitrary structured metadata |
### `search_memory`

Semantic search across ALL modalities — text, images, PDFs, audio, video. Your text query is embedded and matched against everything in the brain.

| Parameter | Type | Required | Description |
|---|---|---|---|
| `query` | string | ✅ | Natural language search query |
| `limit` | number | | Max results (default 10, max 50) |
| `threshold` | number | | Minimum similarity 0–1 (default 0.4) |
| `filter_tags` | string[] | | Only return memories with at least one matching tag |
| `filter_type` | enum | | Filter by type: `text`, `note`, `code`, `conversation`, `research`, `decision`, `reference`, `image`, `pdf`, `audio`, `video` |

File-based results include `file_name`, `file_mime_type`, `file_size_bytes`, and a signed `file_url` for download.
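To illustrate how `threshold` and `filter_tags` interact, here is a client-side re-implementation of filtering the server performs in SQL. The `SearchHit` shape and helper are assumptions for the sketch:

```typescript
// Assumed result shape for the sketch (real results carry more fields).
interface SearchHit {
  id: number;
  similarity: number;
  tags: string[];
}

// Keep hits at or above the similarity threshold that share at least one
// requested tag (if any were requested), best matches first.
function filterHits(hits: SearchHit[], threshold: number = 0.4, filterTags?: string[]): SearchHit[] {
  return hits
    .filter((h) => h.similarity >= threshold)
    .filter((h) => !filterTags || h.tags.some((t) => filterTags.includes(t)))
    .sort((a, b) => b.similarity - a.similarity);
}
```

Note that `filter_tags` is an OR over tags — one overlapping tag is enough — which matches the "at least one matching tag" wording above.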
### `get_file_url`

Get a temporary signed download URL for a stored file (valid 1 hour).

| Parameter | Type | Required | Description |
|---|---|---|---|
| `id` | number | ✅ | The memory ID that has a file attached |
### `list_memories`

Browse memories with optional filters. Includes both text and file-based memories.

| Parameter | Type | Required | Description |
|---|---|---|---|
| `content_type` | enum | | Filter by type (includes `image`, `pdf`, `audio`, `video`) |
| `tags` | string[] | | Filter by tags |
| `limit` | number | | Max results (default 20, max 100) |
| `offset` | number | | Pagination offset |
### `update_memory`

Modify an existing memory. If the content changes, a new embedding is generated automatically.

| Parameter | Type | Required | Description |
|---|---|---|---|
| `id` | number | ✅ | Memory ID (from search/list results) |
| `content` | string | | New content (re-embeds automatically) |
| `tags` | string[] | | Replace tags |
| `source` | string | | Update source |
| `metadata` | object | | Replace metadata |
### `delete_memory`

Permanently remove a memory by ID. If it has a file, the file is also deleted from Supabase Storage.

| Parameter | Type | Required | Description |
|---|---|---|---|
| `id` | number | ✅ | Memory ID to delete |
### `get_stats`

Get brain statistics: total count, breakdown by content type (including file types), and top tags.

No parameters.
```bash
git clone https://github.com/dswillden/digital-brain-mcp.git
cd digital-brain-mcp
npm install
```
Run the database migrations in order:

- `supabase/migrations/001_create_memories.sql` — creates the base schema
- `supabase/migrations/002_multimodal_upgrade.sql` — adds file columns and updates search functions

Create the Storage Bucket: a private bucket named `brain-files`.

Get your credentials from Supabase → Settings → API:

- `SUPABASE_URL` — the Project URL
- `SUPABASE_SERVICE_ROLE_KEY` — the `service_role` secret (NOT the anon key)
- `GEMINI_API_KEY`

Generate an API key:

```bash
openssl rand -hex 32
```

Save the output as `DIGITAL_BRAIN_API_KEYS`.
```bash
# Create .env.local with your keys
cp .env.example .env.local
# Edit .env.local with your actual values

# Start the dev server
npm run dev
```
The MCP endpoint will be at `http://localhost:3000/api/mcp/sse`.
Set these environment variables in Vercel:

- `DIGITAL_BRAIN_API_KEYS` — your generated key(s)
- `GEMINI_API_KEY` — your Google AI key
- `SUPABASE_URL` — your Supabase project URL
- `SUPABASE_SERVICE_ROLE_KEY` — your Supabase service role key
- `REDIS_URL`

Your production MCP endpoint: `https://digital-brain-mcp.vercel.app/api/mcp/sse`
Add to your Claude MCP config (`~/.claude/claude_desktop_config.json` or project `.mcp.json`):

```json
{
  "mcpServers": {
    "digital-brain": {
      "type": "stdio",
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://digital-brain-mcp.vercel.app/api/mcp/sse",
        "--header",
        "Authorization:Bearer YOUR_API_KEY_HERE"
      ]
    }
  }
}
```
Go to Settings → Cursor Settings → Tools & MCP → Add Server:

- URL: `https://digital-brain-mcp.vercel.app/api/mcp/sse`
- Header: `Authorization: Bearer YOUR_API_KEY_HERE`

Add to your OpenCode MCP config (`.opencode/config.json` or equivalent):
```json
{
  "mcp": {
    "servers": {
      "digital-brain": {
        "type": "remote",
        "url": "https://digital-brain-mcp.vercel.app/api/mcp/mcp",
        "headers": {
          "Authorization": "Bearer YOUR_API_KEY_HERE"
        }
      }
    }
  }
}
```
Use the SSE endpoint `https://digital-brain-mcp.vercel.app/api/mcp/sse` with an `Authorization: Bearer <key>` header.
```
digital-brain-mcp/
├── src/
│   ├── app/
│   │   ├── api/
│   │   │   ├── mcp/
│   │   │   │   └── [transport]/
│   │   │   │       └── route.ts      ← MCP endpoint (9 tools + auth)
│   │   │   └── upload/
│   │   │       └── route.ts          ← Direct file upload endpoint (POST /api/upload)
│   │   ├── layout.tsx                ← Root layout
│   │   └── page.tsx                  ← Landing page
│   └── lib/
│       ├── embeddings.ts             ← Gemini Embedding 2 multimodal client
│       └── supabase.ts               ← Supabase client + data helpers + file storage
├── docs/
│   ├── setup-guide.md                ← Step-by-step setup instructions
│   ├── technical-spec.md             ← Detailed spec for AI agents to understand/recreate
│   └── explainer.md                  ← Beginner-friendly guide with diagrams
├── supabase/
│   └── migrations/
│       ├── 001_create_memories.sql   ← Base schema (text only)
│       └── 002_multimodal_upgrade.sql ← File columns + updated functions
├── .env.example                      ← Template for environment variables
├── .mcp.json                         ← MCP client connection config
├── package.json
├── tsconfig.json
├── next.config.js
└── README.md                         ← This file
```
Once connected, you can say things like:
"Remember that the EBR system uses Azure Functions for the API layer"
→ Calls store_memory with appropriate tags
"Store this screenshot of the dashboard" (with image attached)
→ Calls store_file with the image, creates a multimodal embedding
"Save this PDF from https://example.com/report.pdf"
→ Calls store_file_from_url, downloads and embeds the PDF
Upload a local file directly (from terminal):
```bash
curl -X POST https://digital-brain-mcp.vercel.app/api/upload \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "file=@./diagram.png" \
  -F "description=System architecture diagram" \
  -F "tags=work,architecture"
```
"What do I know about authentication patterns?"
→ Calls search_memory, finds text AND image/PDF results across modalities
"Show me all my stored images"
→ Calls list_memories with content_type: "image"
"Get the download link for memory #42"
→ Calls get_file_url, returns a signed URL valid for 1 hour
"How many memories do I have?"
→ Calls get_stats, shows breakdown by type including file counts
| Service | Free Tier | Paid Threshold |
|---|---|---|
| Supabase | 500 MB database, 1 GB storage | ~650K text memories or ~1K large files before hitting limit |
| Vercel | Hobby plan (100 GB bandwidth) | Heavy team usage |
| Gemini API | Generous free quota | Thousands of embeddings/day |
| Upstash Redis | 10K commands/day | Heavy concurrent sessions |
For personal second-brain use, everything stays well within free tiers.
In addition to the MCP tools, there's a simple REST endpoint for uploading files directly from your terminal or any HTTP client — no base64 encoding needed:
```bash
curl -X POST https://digital-brain-mcp.vercel.app/api/upload \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "file=@/path/to/photo.jpg" \
  -F "description=Team photo from Q1 offsite" \
  -F "tags=team,photos" \
  -F "source=manual-upload"
```
| Field | Required | Description |
|---|---|---|
| `file` | Yes | The file to upload (multipart form) |
| `description` | No | Text description — improves search quality significantly |
| `tags` | No | Comma-separated tags |
| `source` | No | Where it came from (defaults to `"file-upload"`) |
| `metadata` | No | JSON string for extra structured data |
Your AI clients (Claude Code, Cursor, OpenCode) can also run this curl command on your behalf when you ask them to upload a local file.
Detailed docs are in the docs/ folder:
| Document | Audience | Description |
|---|---|---|
| Setup Guide | You | Step-by-step setup with full SQL, Vercel deploy, and client configs |
| Technical Spec | AI agents | Exhaustive specification — enough for an AI to understand, maintain, or recreate the system |
| Explainer | Beginners | What embeddings, vectors, MCP, and Supabase are, with diagrams and analogies |
MIT