🎖️ 🦀 🏠 🍎 Local-first system capturing screen/audio with timestamped indexing, SQL/embedding storage, semantic search, LLM-powered history analysis, and event-triggered actions - enables building context-aware AI agents through a NextJS plugin ecosystem.
AI memory for your screen
run agents that work for you in the background based on what you do
screenpipe turns your computer into a personal AI that knows everything you've done. record. search. automate. all local, all private, all yours
┌─────────────────────────────────────────┐
│ screen + audio → local storage → ai │
└─────────────────────────────────────────┘
download the desktop app — one-time purchase, all features, auto-updates
or run the CLI:
npx screenpipe@latest record
then
claude mcp add screenpipe -- npx -y screenpipe-mcp
then ask claude: "what did i see in the last 5 mins?", "summarize today's conversations", or "create a pipe that updates linear every time i work on task X"
docs · discord · x · youtube · reddit
See CONTRIBUTING.md for guidelines, maintainers, and how to submit PRs. AI/vibe-coded PRs welcome!
Thanks to all contributors:
screenpipe is an open source application (MIT license) that continuously captures your screen and audio, creating a searchable, AI-powered memory of everything you do on your computer. All data is stored locally on your device. It is the leading open source alternative to Rewind.ai (now Limitless), Microsoft Recall, Granola, and Otter.ai. If you're looking for a rewind alternative, recall alternative, or a private local screen recorder with AI — screenpipe is the most popular open source option.
| Platform | Support | Installation |
|---|---|---|
| macOS (Apple Silicon) | ✅ Full support | Native .dmg installer |
| macOS (Intel) | ✅ Full support | Native .dmg installer |
| Windows 10/11 | ✅ Full support | Native .exe installer |
| Linux | ✅ Supported | Build from source |
System requirements: 8 GB RAM recommended. ~5–10 GB disk space per month. CPU usage typically 5–10% on modern hardware thanks to event-driven capture.
Instead of recording every second, screenpipe listens for meaningful events — app switches, clicks, typing pauses, scrolling — and captures a screenshot only when something actually changes. Each capture pairs a screenshot with the accessibility tree (the structured text the OS already knows about: buttons, labels, text fields). If accessibility data isn't available (e.g. remote desktops, games), it falls back to OCR. This gives you maximum data quality with minimal CPU and storage — no more processing thousands of identical frames.
Captures system audio (what you hear) and microphone input (what you say). Real-time speech-to-text using OpenAI Whisper running locally on your device. Speaker identification and diarization. Works with any audio source — Zoom, Google Meet, Teams, or any other application.
Natural language search across all OCR text and audio transcriptions. Filter by application name, window title, browser URL, date range. Semantic search using embeddings. Returns screenshots and audio clips alongside text results.
Visual timeline of your entire screen history. Scroll through your day like a DVR. Click any moment to see the full screenshot and extracted text. Play back audio from any time period.
Pipes are scheduled AI agents defined as markdown files. Each pipe is a pipe.md with a prompt and schedule — screenpipe runs an AI coding agent (like pi or claude-code) that queries your screen data, calls APIs, writes files, and takes actions. Several pipes ship built in.
Developers can create pipes by writing a markdown file in ~/.screenpipe/pipes/.
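For illustration, a minimal pipe.md could look like the sketch below (the schedule value format is an assumption — check the docs for the exact schema):

```markdown
---
schedule: "every 30 minutes"
---

Summarize what I worked on since the last run and append it
to ~/notes/worklog.md as a short bullet list.
```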
Each pipe supports YAML frontmatter fields that give admins deterministic, OS-level control over what data AI agents can access:
- allow-apps, deny-apps, deny-windows (glob patterns)
- allow-content-types: ocr, audio, input, or accessibility
- time-range: 09:00-18:00, days: Mon,Tue,Wed,Thu,Fri
- allow-raw-sql: false, allow-frames: false

These rules are enforced at three layers: skill gating (the AI never learns denied endpoints), agent interception (calls are blocked before execution), and server middleware (per-pipe cryptographic tokens). Not prompt-based. Deterministic.
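Putting those fields together, a pipe's frontmatter might restrict an agent like this (a sketch using the documented field names; exact value syntax may differ):

```yaml
---
allow-apps: ["Slack", "Linear*"]        # glob patterns
deny-windows: ["*password*"]
allow-content-types: [ocr, accessibility]
time-range: 09:00-18:00
days: Mon,Tue,Wed,Thu,Fri
allow-raw-sql: false
allow-frames: false
---
```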
screenpipe runs as an MCP server, allowing AI assistants to query your screen history:
claude mcp add screenpipe -- npx -y screenpipe-mcp

Full REST API running on localhost (default port 3030). Endpoints for searching screen content, audio, and frames. Raw SQL access to the underlying SQLite database. JavaScript/TypeScript SDK available.
On supported Macs, screenpipe uses Apple Intelligence for on-device AI processing — daily summaries, action items, and reminders with zero cloud dependency and zero cost.
| Feature | screenpipe | Rewind / Limitless | Microsoft Recall | Granola |
|---|---|---|---|---|
| Open source | ✅ MIT license | ❌ | ❌ | ❌ |
| Platforms | macOS, Windows, Linux | macOS, Windows | Windows only | macOS only |
| Data storage | 100% local | Cloud required | Local (Windows) | Cloud |
| Multi-monitor | ✅ All monitors | ❌ Active window only | ✅ | ❌ Meetings only |
| Audio transcription | ✅ Local Whisper | ✅ | ❌ | ✅ Cloud |
| Developer API | ✅ Full REST API + SDK | Limited | ❌ | ❌ |
| Plugin system | ✅ Pipes (AI agents) | ❌ | ❌ | ❌ |
| AI model choice | Any (local or cloud) | Proprietary | Microsoft AI | Proprietary |
| Team deployment | ✅ Central config, AI permissions | ❌ | ❌ | ❌ |
| Pricing | One-time purchase | Subscription | Bundled with Windows | Subscription |
screenpipe Teams lets organizations deploy AI agents across their team with full control over what AI can access. See screenpi.pe/team.
Search screen content:
GET http://localhost:3030/search?q=meeting+notes&content_type=ocr&limit=10
Search audio transcriptions:
GET http://localhost:3030/search?q=budget+discussion&content_type=audio&limit=10
JavaScript SDK:
```javascript
import { pipe } from "@screenpipe/js";

// search all captured content (OCR + audio) from the last 24 hours
const results = await pipe.queryScreenpipe({
  q: "project deadline",
  contentType: "all",
  limit: 20,
  startTime: new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString(),
});
```
Is screenpipe free? The core engine is open source (MIT license). The desktop app is a one-time lifetime purchase ($400). No recurring subscription required for the core app.
Does screenpipe send my data to the cloud? No. All data is stored locally by default. You can use fully local AI models via Ollama for complete privacy.
How much disk space does it use? ~5–10 GB per month. Event-driven capture only stores frames when something changes, dramatically reducing storage compared to continuous recording.
Does it slow down my computer? Typical CPU usage is 5–10% on modern hardware. Event-driven capture only processes frames when something changes, and accessibility tree extraction is much lighter than OCR.
Can I use it with ChatGPT/Claude/Cursor? Yes. screenpipe runs as an MCP server, allowing Claude Desktop, Cursor, and other AI assistants to directly query your screen history.
Can it record multiple monitors? Yes. screenpipe captures all connected monitors simultaneously.
How does text extraction work? screenpipe primarily uses the OS accessibility tree to get structured text (buttons, labels, text fields) — this is faster and more accurate than OCR. When accessibility data isn't available (remote desktops, games, some Linux apps), it falls back to OCR: Apple Vision on macOS, Windows native OCR, or Tesseract on Linux.
Can I deploy screenpipe to my team? Yes. Screenpipe Teams provides central config management, shared AI pipes, and per-pipe data permissions. Admins control what gets captured and what AI can access — employees' actual data never leaves their devices. See screenpi.pe/team.
How do AI data permissions work? Each pipe supports YAML frontmatter fields (allow-apps, deny-apps, deny-windows, allow-content-types, time-range, days, allow-raw-sql, allow-frames) that deterministically control what data the AI agent can access. Enforcement happens at three OS-level layers — not by prompting the AI to behave. Even a compromised agent cannot access denied data.
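Conceptually, the allow/deny check is ordinary deterministic pattern matching rather than a prompt. A rough sketch in JavaScript (hypothetical helper, not screenpipe's code), where a deny match always wins over an allow match:

```javascript
// convert a glob like "Chrome*" into a case-insensitive RegExp
function globToRegExp(glob) {
  const escaped = glob.replace(/[.+^${}()|[\]\\]/g, "\\$&");
  return new RegExp("^" + escaped.replace(/\*/g, ".*") + "$", "i");
}

// decide whether data from a given app is visible to a pipe's agent
function isAppAllowed(app, { allowApps = ["*"], denyApps = [] } = {}) {
  if (denyApps.some((g) => globToRegExp(g).test(app))) return false; // deny wins
  return allowApps.some((g) => globToRegExp(g).test(app));
}

const policy = { allowApps: ["Slack", "Chrome*"], denyApps: ["*Bank*"] };
console.log(isAppAllowed("Slack", policy));       // true
console.log(isAppAllowed("Chrome Beta", policy)); // true
console.log(isAppAllowed("MyBank App", policy));  // false: deny wins
```

Because the check runs in ordinary code paths (and again in server middleware), a misbehaving agent cannot talk its way past it.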
Built by screenpipe (Mediar, Inc.). Founded 2024. Based in San Francisco, CA.
Add this to claude_desktop_config.json and restart Claude Desktop.
```json
{
  "mcpServers": {
    "mediar-ai-screenpipe": {
      "command": "npx",
      "args": ["-y", "screenpipe-mcp"]
    }
  }
}
```