Open-source meeting bot API with MCP server. Search, retrieve, and analyze meeting transcripts from Google Meet, Zoom, and Microsoft Teams directly from your AI
Open-source meeting bot API with MCP server. Search, retrieve, and analyze meeting transcripts from Google Meet, Zoom, and Microsoft Teams directly from your AI tools.
Open-source meeting bot API & transcription API
meeting bots • real-time transcription • interactive bots • MCP server • self-hosted
Google Meet • Microsoft Teams • Zoom
What's new • Quickstart • API • Docs • Roadmap • Discord
Vexa is an open-source, self-hostable meeting bot API and meeting transcription API for Google Meet, Microsoft Teams, and Zoom. Alternative to Recall.ai, Otter.ai, and Fireflies.ai — self-host so meeting data never leaves your infrastructure, or use vexa.ai hosted.
- Data sovereignty — self-host so meeting data never leaves your infrastructure
- Cost — replace $20/seat SaaS with your own infrastructure
- Embed in your product — multi-tenant meeting bot API with scoped tokens
- AI agents — MCP server with 17 tools
| Feature | What it does |
|---|---|
| Meeting bot API | Send a bot to any meeting: auto-join, record, speak, chat, share screen. Open-source alternative to Recall.ai. |
| Meeting transcription API | Real-time transcripts via REST API and WebSocket. Self-hosted alternative to Otter.ai and Fireflies.ai. |
| Real-time transcription | Sub-second per-speaker transcripts during the call. 100+ languages via Whisper. WebSocket streaming. |
| Interactive bots | Make bots speak, send/read chat, share screen content, and set avatars in live meetings. |
| Browser bots | CDP + Playwright browser automation with persistent authenticated sessions via S3. |
| MCP server | 17 meeting tools for Claude, Cursor, Windsurf. AI agents join calls, read transcripts, speak in meetings. |
| Multi-tenant | Users, scoped API tokens, isolated containers. Deploy once, serve your team. |
| Dashboard | Open-source Next.js web UI for meetings, transcripts, agent chat, and browser sessions. Ready to use out of the box. |
| Self-hostable | Run on your own infrastructure, so meeting data never leaves it. |
Every feature is a separate service. Pick what you need, skip what you don't. Self-host everything or use vexa.ai hosted.
For regulated industries — banks, financial services, healthcare — meeting data can't leave your infrastructure. Self-hosting Vexa means zero external data transmission and full audit trail on your own infrastructure.
For cost-conscious teams — replace per-seat SaaS pricing. A team paying $17/seat/mo for meeting transcription can self-host Vexa and drop that to infrastructure cost.
For developers — embed a meeting bot API in your product. Multi-tenant, scoped API tokens, no per-user infrastructure.
Build meeting assistants like Otter.ai, Fireflies.ai, or Fathom — or build a meeting bot API like Recall.ai — self-hosted on your infrastructure.
Or use vexa.ai hosted — get an API key and start sending bots immediately, no infrastructure needed.
Meeting data never leaves your infrastructure. Self-host for complete control. Modular architecture scales from edge devices to millions of users.
1. **Hosted service**: use vexa.ai, get an API key, and start sending bots. No infrastructure needed, ready to integrate.
2. **Self-host with Vexa transcription**: run Vexa yourself and use vexa.ai for transcription. Minimal DevOps and no GPU needed; see deploy/ for setup guides.
3. **Fully self-host**: run everything, including your own GPU transcription service. Meeting data never leaves your infrastructure; see deploy/ for setup guides.
- **v0.10.4**: `platform=zoom` works out of the box, no SDK setup.
- **v0.10**: full architecture refactor.
- **v0.9** and earlier: see the full release notes.

Full release notes: https://github.com/Vexa-ai/vexa/releases
On a fresh Linux machine (Ubuntu 24.04):

```bash
apt-get update && apt-get install -y make git curl
curl -fsSL https://get.docker.com | sh
git clone https://github.com/Vexa-ai/vexa.git && cd vexa
```
Then choose:
| Command | What you get | Best for |
|---|---|---|
| `make lite` | Single container, all services | Quick evaluation, small teams |
| `make all` | Full stack, each service separate | Development, production |
Both prompt for a transcription token on first run. Get one at vexa.ai/account, or self-host transcription with a GPU.
Guides: Vexa Lite | Docker Compose | Helm (K8s)
Get your API key at vexa.ai/account and start sending bots immediately.
Send a bot, get real-time transcripts with per-speaker audio and interactive controls (speak, chat, share screen).
```bash
# Send a bot to Google Meet
curl -X POST "$API_BASE/bots" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: <API_KEY>" \
  -d '{"platform": "google_meet", "native_meeting_id": "abc-defg-hij"}'

# Get transcripts
curl -H "X-API-Key: <API_KEY>" \
  "$API_BASE/transcripts/google_meet/abc-defg-hij"
```
Works with Google Meet, Microsoft Teams, and Zoom. Set API_BASE to https://api.cloud.vexa.ai (hosted) or http://localhost:8056 (self-hosted).
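The same two calls can be made from Python with the standard library alone. This is a minimal sketch of the endpoints shown above; `API_BASE` and the API key are placeholders to fill in with your own values:

```python
import json
import urllib.request

API_BASE = "https://api.cloud.vexa.ai"  # or "http://localhost:8056" for self-hosted
API_KEY = "<API_KEY>"  # from vexa.ai/account

def build_request(method, path, payload=None):
    """Build an authenticated urllib request for the Vexa REST API."""
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        f"{API_BASE}{path}",
        data=data,
        method=method,
        headers={"Content-Type": "application/json", "X-API-Key": API_KEY},
    )

# Send a bot to a Google Meet call:
send_bot = build_request(
    "POST", "/bots",
    {"platform": "google_meet", "native_meeting_id": "abc-defg-hij"},
)
# urllib.request.urlopen(send_bot)  # uncomment to actually send it

# Fetch the transcript for the same meeting:
get_transcript = build_request("GET", "/transcripts/google_meet/abc-defg-hij")
# urllib.request.urlopen(get_transcript)
```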
For real-time WebSocket streaming, see the WebSocket guide. For full REST details, see the User API Guide.
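As an illustration of consuming the stream, here is a hedged Python sketch. The event fields (`speaker`, `text`) and the endpoint in the commented-out connection code are assumptions for illustration only; the real message schema, URL, and auth are defined in the WebSocket guide.

```python
import json

def handle_event(raw):
    """Format one (hypothetical) transcript event for display.
    The 'speaker' and 'text' fields are illustrative, not the
    actual Vexa schema -- check the WebSocket guide."""
    event = json.loads(raw)
    if "text" not in event:
        return None  # ignore non-transcript events (pings, status, etc.)
    return f'{event.get("speaker", "?")}: {event["text"]}'

# Connecting would look roughly like this (websockets package; the URL
# below is an assumption, not the documented endpoint):
#
# import asyncio, websockets
#
# async def stream(api_key, platform, meeting_id):
#     url = f"wss://api.cloud.vexa.ai/ws/{platform}/{meeting_id}"
#     async with websockets.connect(url) as ws:
#         async for raw in ws:
#             line = handle_event(raw)
#             if line:
#                 print(line)
```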
Remote browser containers with CDP + Playwright access and persistent session storage via S3. Agents get a real browser that stays logged in across restarts — Google, Microsoft, or any web session.
See features/browser-session/ and features/remote-browser/ for details.
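For a sense of how an agent attaches to such a browser, here is a sketch using Playwright's `connect_over_cdp`. The container hostname and port 9222 are illustrative assumptions (the real values are deployment-specific), and the Playwright part is commented out since it needs a running remote browser:

```python
def cdp_endpoint(host, port=9222):
    """Build the HTTP CDP endpoint for a remote browser container.
    Host and port are deployment-specific (see features/remote-browser/);
    9222 is just Chrome's conventional remote-debugging port."""
    return f"http://{host}:{port}"

# Attaching Playwright to the remote browser (requires `pip install playwright`):
#
# from playwright.sync_api import sync_playwright
#
# with sync_playwright() as p:
#     browser = p.chromium.connect_over_cdp(cdp_endpoint("browser-container.internal"))
#     context = browser.contexts[0]  # persisted, already-authenticated session
#     page = context.pages[0] if context.pages else context.new_page()
#     print(page.url)
```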
17 tools that let AI agents join meetings, read transcripts, speak, chat, and share screen. Works with Claude, Cursor, Windsurf, and any MCP-compatible client.
Your AI agent can join a meeting, listen to the conversation, and participate — all through MCP tool calls. See services/mcp/ for setup and tool reference.
Vexa is a toolkit, not a monolith. Every feature works independently. Use one or all — they compose when you need them to.
| You're building... | Features you need | Skip the rest |
|---|---|---|
| Self-hosted Otter replacement | transcription + multi-platform + webhooks | agent runtime, scheduler, MCP |
| Meeting data pipeline | transcription + webhooks + post-meeting | speaking-bot, chat, agent runtime |
| AI meeting assistant product | transcription + MCP + speaking-bot + chat | remote-browser, scheduler |
| Meeting bot API (like Recall.ai) | multi-platform + transcription + token-scoping | agent runtime, workspaces |
You don't pay complexity tax for features you don't use. Each service is a separate container. Don't need agents? Don't run agent-api. Don't need TTS? Don't run tts-service. Services communicate via REST and Redis, not tight coupling.
For the up-to-date roadmap and priorities, see GitHub Issues and Milestones. Issues are grouped by milestones to show what's coming next, in what order, and what's currently highest priority.
For discussion/support, join our Discord.
Each service and feature has its own README with architecture, DoD table, and evidence-based confidence scores.
We use GitHub Issues as our main feedback channel — triaged within 72 hours. Look for good-first-issue to get started. Join Discord to discuss ideas and get assigned.
Website • Docs • Discord • LinkedIn • X (@grankin_d) • Meet Founder
Related: vexa-lite-deploy • Vexa Dashboard
Add this to claude_desktop_config.json and restart Claude Desktop:
```json
{
  "mcpServers": {
    "vexa": {
      "command": "npx",
      "args": []
    }
  }
}
```