Lets AI assistants understand what you're working on — current screen content, recent dictation, clipboard, and saved notes — running entirely on your own machine with nothing sent to the cloud.
Local-first ambient context for AI agents.
Screen capture, voice dictation, clipboard, keyboard/mouse activity. All local, all private.
Developer Preview (v0.1-alpha). ContextPulse is under active development. APIs and configuration may change between releases. Report issues.
ContextPulse is a desktop daemon that captures your screen, voice, and keyboard/mouse activity in real time, then delivers it to AI agents through the Model Context Protocol (MCP). One process, one tray icon, 35 MCP tools, zero cloud dependency.
Everything stays local. No cloud. No telemetry. Your data never leaves your machine.
```
┌─────────────────────────────────────────────────┐
│               ContextPulse Daemon               │
│                                                 │
│  ┌──────────┐   ┌──────────┐   ┌──────────┐     │
│  │  Sight   │   │  Voice   │   │  Touch   │     │
│  │  Screen  │   │ Dictate  │   │  Keys +  │     │
│  │  OCR     │   │ Whisper  │   │  Mouse   │     │
│  │ Clipboard│   │  Vocab   │   │  Bursts  │     │
│  └────┬─────┘   └────┬─────┘   └────┬─────┘     │
│       └──────────────┼──────────────┘           │
│                      ▼                          │
│               ┌──────────────┐                  │
│               │   EventBus   │                  │
│               │   (Spine)    │                  │
│               └──────┬───────┘                  │
│                      ▼                          │
│               ┌──────────────┐                  │
│               │ activity.db  │                  │
│               │ SQLite+FTS5  │                  │
│               └──────────────┘                  │
└────────────────────┬────────────────────────────┘
                     │ MCP (streamable-http :8420)
        ┌────────────┼────────────┐
        ▼            ▼            ▼
   Claude Code    Cursor       Any MCP
                               Client
```
> You: "What am I looking at right now?"
Claude calls get_screenshot → get_screen_text:
Active window: VS Code, src/auth/middleware.ts
OCR text: "export async function validateToken(req: Request)..."
Clipboard: "Bearer eyJhbGciOiJIUzI1NiIs..."
> Claude: "You're editing the auth middleware in VS Code.
> I can see the validateToken function, and you have
> a Bearer token in your clipboard. Want me to help
> debug the token validation?"
> You hold Ctrl+Space and say: "refactor this to use the new auth provider"
ContextPulse Voice transcribes → pastes into editor
Claude calls get_recent_transcriptions → get_screen_text:
> Claude: "I can see your dictation and the current file.
> Here's the refactored middleware using AuthProvider..."
> You: "What was I working on before lunch?"
Claude calls get_activity_summary(hours=4) → search_history("auth"):
10:15 AM VS Code, auth/middleware.ts (45 min)
11:00 AM Chrome, JWT.io (12 min)
11:15 AM VS Code, auth/tests/test_middleware.py (30 min)
11:45 AM Slack, #backend channel (15 min)
> Claude: "You spent the morning on auth middleware: writing
> the implementation, checking JWT specs, then adding
> tests. You were in Slack #backend right before lunch."
AI coding assistants are powerful but blind. They can't see your screen, hear your voice notes, or know what you were just doing. ContextPulse bridges this gap:
| Capability | ContextPulse | Typically Available? |
|---|---|---|
| Screen capture + OCR | Yes, native resolution | Common |
| Voice dictation | Yes, local Whisper | Rare as integrated feature |
| Keyboard + mouse tracking | Yes | Rare |
| Semantic memory | Yes, three-tier with hybrid search | Rare |
| All modalities in one daemon | Yes, single lightweight process | No, usually separate tools |
| MCP-native | Yes, 35 tools | Emerging |
| 100% local, zero cloud | Yes, privacy by architecture | Uncommon |
| Open source | AGPL-3.0 | Varies |
| Platform | Status |
|---|---|
| Windows 10+ | Full support |
| macOS 13+ (Apple Silicon and Intel) | Full support |
| Linux | Community contributions welcome -- core abstractions are in place, platform modules need implementation |
```bash
git clone https://github.com/ContextPulse/contextpulse
cd contextpulse
pip install -e packages/core -e packages/screen -e packages/voice -e packages/touch -e packages/project

# Optional: persistent memory + semantic search
pip install -e packages/memory
```
Configure your AI agent and install companion skills:
```bash
contextpulse --setup claude-code  # configures MCP + installs skills
# or: contextpulse --setup gemini   # for Gemini CLI
# or: contextpulse --setup all      # both
```
Start ContextPulse:
```bash
contextpulse      # starts the background daemon
contextpulse-mcp  # starts the MCP server on port 8420
```
That's it. Your AI agent now has tools for reading your screen, voice, activity, and memory.
Add to ~/.claude.json:
```json
{
  "mcpServers": {
    "contextpulse": {
      "type": "http",
      "url": "http://127.0.0.1:8420/mcp"
    }
  }
}
```
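If you prefer to script the change instead of editing by hand, a merge like the following adds the entry while preserving any other servers already configured. This is an illustrative sketch, not part of ContextPulse, and it demos against a scratch file rather than the real ~/.claude.json:

```python
import json
import os
import tempfile

def add_contextpulse_server(config_path):
    """Merge the ContextPulse MCP entry into a Claude config file,
    leaving any other configured servers untouched."""
    config = {}
    if os.path.exists(config_path):
        with open(config_path) as f:
            config = json.load(f)
    config.setdefault("mcpServers", {})["contextpulse"] = {
        "type": "http",
        "url": "http://127.0.0.1:8420/mcp",
    }
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)

# Demo against a temporary file, not your real config
path = os.path.join(tempfile.mkdtemp(), "claude.json")
add_contextpulse_server(path)
with open(path) as f:
    print(json.load(f)["mcpServers"]["contextpulse"]["url"])
```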
| Tool | What it does |
|---|---|
| get_screenshot | Capture screen (active monitor, all monitors, or a region) |
| get_recent | Recent frames from the rolling buffer (with diff filtering) |
| get_screen_text | OCR the current screen at native resolution |
| get_monitor_summary | Lightweight text summary of all monitors (low token cost) |
| get_buffer_status | Daemon health check + buffer stats |
| get_activity_summary | App usage breakdown over the last N hours |
| search_history | Full-text search across window titles + OCR text |
| get_context_at | Frame + metadata from N minutes ago |
| get_clipboard_history | Recent clipboard entries |
| search_clipboard | Search clipboard by text content |
| get_agent_stats | Which MCP clients are consuming context, and how often |
| Tool | What it does |
|---|---|
| get_recent_transcriptions | Recent voice dictation history (raw + cleaned) |
| get_voice_stats | Dictation count, duration, accuracy stats |
| get_vocabulary | Current word correction entries |
| Tool | What it does |
|---|---|
| get_recent_touch_events | Typing bursts, clicks, scrolls, drags |
| get_touch_stats | Keystroke count, WPM, click/scroll totals |
| get_correction_history | Voice-to-typing correction detections |
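For intuition on how a WPM figure can be derived from raw keystroke timestamps, the sketch below applies the common five-characters-per-word convention (each keystroke counted as one character). This is an illustrative formula, not ContextPulse's actual implementation:

```python
def words_per_minute(keystroke_times, chars_per_word=5):
    """Rough WPM from a sorted list of keystroke timestamps (seconds).

    Treats every keystroke as one typed character and five characters
    as one word, the usual typing-speed convention."""
    if len(keystroke_times) < 2:
        return 0.0  # can't measure a rate from a single keystroke
    elapsed_minutes = (keystroke_times[-1] - keystroke_times[0]) / 60
    words = len(keystroke_times) / chars_per_word
    return words / elapsed_minutes

# 301 keystrokes spread over one minute -> roughly 60 WPM
times = [i * 0.2 for i in range(301)]
print(round(words_per_minute(times), 1))
```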
| Tool | What it does |
|---|---|
| identify_project | Score text against all projects, return best match |
| get_active_project | Detect current project from CWD or window title |
| list_projects | All indexed projects with overviews |
| get_project_context | Full PROJECT_CONTEXT.md for a project |
| route_to_journal | Route an insight to the project journal |
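As a mental model for identify_project, a naive keyword-overlap scorer looks like the sketch below. The project names and keyword lists are invented, and the real scorer is presumably richer than bag-of-words matching:

```python
def identify_project(text, projects):
    """Score free text against each project's keyword set and return
    the best-matching project name, or None if nothing matches.

    `projects` maps project name -> list of keywords. Scoring is a
    plain token-overlap count, purely for illustration."""
    tokens = set(text.lower().split())

    def score(keywords):
        return len(tokens & {k.lower() for k in keywords})

    best_name, best_keywords = max(projects.items(),
                                   key=lambda kv: score(kv[1]))
    return best_name if score(best_keywords) > 0 else None

projects = {
    "auth-service": ["auth", "jwt", "middleware"],
    "billing": ["invoice", "stripe"],
}
print(identify_project("editing the auth middleware in VS Code", projects))
```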
Basic memory is free forever. No license required.
| Tool | Tier | What it does |
|---|---|---|
| memory_store | Free | Store a key-value memory with optional tags and TTL |
| memory_recall | Free | Retrieve a memory by exact key |
| memory_list | Free | List memories, optionally filtered by tag |
| memory_forget | Free | Delete a memory by key |
| memory_stats | Free | Storage statistics (entry counts, DB sizes, tiers) |
| memory_search | Pro | Hybrid/keyword/semantic search across all stored memories |
| memory_semantic_search | Pro | Pure vector search using all-MiniLM-L6-v2 embeddings |
Memory uses a three-tier hot/warm/cold architecture: an in-memory LRU cache → SQLite (WAL + FTS5) → a compressed archive. These tools ship in the optional contextpulse-memory package.
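For intuition, here is a toy two-tier cut of that lookup path: an in-process LRU cache in front of SQLite, with TTL checks on recall. The cold archive tier, tags, and FTS index are omitted, and nothing here is the real implementation:

```python
import sqlite3
import time
from collections import OrderedDict

class TieredMemory:
    """Toy hot/warm store: LRU cache (hot) in front of SQLite (warm)."""

    def __init__(self, db=":memory:", hot_size=128):
        self.hot = OrderedDict()       # hot tier: in-memory LRU cache
        self.hot_size = hot_size
        self.db = sqlite3.connect(db)  # warm tier: SQLite
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS mem "
            "(key TEXT PRIMARY KEY, value TEXT, expires REAL)")

    def store(self, key, value, ttl=None):
        expires = time.time() + ttl if ttl is not None else None
        self.db.execute("REPLACE INTO mem VALUES (?, ?, ?)",
                        (key, value, expires))
        self.hot.pop(key, None)        # invalidate any stale hot copy

    def recall(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)  # LRU bump on hot hit
            return self.hot[key]
        row = self.db.execute(
            "SELECT value, expires FROM mem WHERE key = ?",
            (key,)).fetchone()
        if row is None or (row[1] is not None and row[1] < time.time()):
            return None                # missing, or past its TTL
        self.hot[key] = row[0]         # promote to hot tier
        if len(self.hot) > self.hot_size:
            self.hot.popitem(last=False)  # evict least recently used
        return row[0]

m = TieredMemory()
m.store("current_branch", "feature/auth-provider")
print(m.recall("current_branch"))
```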
| Tool | What it does |
|---|---|
| memory_search | Hybrid/keyword/semantic search across stored memories |
| memory_semantic_search | Pure vector search using sentence embeddings |
| search_all_events | Cross-modal full-text search across screen, voice, clipboard, keys |
| get_event_timeline | Temporal view of all events across all modalities |
- Free forever: 27 tools (Sight × 11, Voice × 3, Touch × 3, Project × 5, Memory × 5)
- Pro: adds 4 search tools (semantic memory search plus cross-modal event queries)
- Trial: 30-day Pro trial on first use, no credit card required
Additionally, ContextPulse includes several background learning tools (vocabulary consolidation, correction detection) that run automatically to improve transcription quality over time.
ContextPulse is a monorepo with modular packages:
| Package | Purpose |
|---|---|
| contextpulse-core | Daemon, EventBus (spine), config, licensing, settings |
| contextpulse-sight | Screen capture, OCR, clipboard monitoring |
| contextpulse-voice | Hold-to-dictate, Whisper transcription, vocabulary |
| contextpulse-touch | Keyboard/mouse activity capture, correction detection |
| contextpulse-project | Project detection and journal routing |
| contextpulse-memory | Persistent key-value memory with semantic search (optional) |
All modules emit events to a shared EventBus (the "spine"), which writes to a local SQLite database with FTS5 full-text search. MCP servers are read-only processes that query this database.
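That pattern can be sketched in a few lines: modules publish to a bus, one subscriber writes rows into an FTS5 table, and readers issue plain SELECT/MATCH queries. Table and event shapes here are illustrative, not the real activity.db schema:

```python
import sqlite3

class EventBus:
    """Minimal publish/subscribe spine: every emitted event is handed
    to every subscriber."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, fn):
        self.subscribers.append(fn)

    def emit(self, source, text):
        for fn in self.subscribers:
            fn(source, text)

# Full-text-searchable event sink (FTS5 ships with CPython's sqlite3)
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE events USING fts5(source, text)")

bus = EventBus()
bus.subscribe(lambda source, text: db.execute(
    "INSERT INTO events VALUES (?, ?)", (source, text)))

bus.emit("sight", "VS Code auth/middleware.ts validateToken")
bus.emit("voice", "refactor this to use the new auth provider")

# A read-only query, as an MCP server process would issue it
rows = db.execute(
    "SELECT source, text FROM events WHERE events MATCH 'auth'").fetchall()
print(rows)
```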
```bash
git clone https://github.com/ContextPulse/contextpulse
cd contextpulse
uv venv
.venv\Scripts\activate  # Windows; on macOS/Linux: source .venv/bin/activate
uv pip install -e "packages/core[dev]" -e packages/screen -e packages/voice -e packages/touch -e packages/project
pytest packages/ -x -q
```
See CONTRIBUTING.md for guidelines.
A canary script exercises every exposed MCP tool and reports pass/fail. It runs automatically on a cron/Task Scheduler schedule to catch regressions before users do.
```bash
# Run manually
python scripts/canary_health_check.py

# Verbose (shows each tool as it runs)
python scripts/canary_health_check.py --verbose

# JSON output (for CI or external monitoring)
python scripts/canary_health_check.py --json
```
What it does:
- Writes results to logs/canary_results.json (last 100 runs retained)
- Exits 0 if all tools pass, 1 if any fail

Scheduling (Windows Task Scheduler):

- Program: <path-to-contextpulse>\.venv\Scripts\python.exe
- Arguments: scripts/canary_health_check.py
- Start in: <path-to-contextpulse>

ContextPulse is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).
For commercial licensing inquiries, visit contextpulse.ai.
ContextPulse's unified multi-modal context delivery system is patent pending.
Built by Jerard Ventures LLC
Add this to claude_desktop_config.json and restart Claude Desktop.
```json
{
  "mcpServers": {
    "contextpulse": {
      "command": "npx",
      "args": []
    }
  }
}
```