MCP server connecting Claude to Metabase with 28 tools for natural language data analysis, dashboard management, SQL queries, and automated insights. Features SQL guardrails, rate limiting, and audit logging.
The write-enabled, AI-augmented MCP server for Metabase — create dashboards, ask questions in plain English, and get automated insights through Claude, on any Metabase version.
Metabase shipped an official MCP server in v0.60 focused on read and search. This server complements it with write operations, AI-generated insights, production security controls, and support for Metabase versions older than v0.60.
| Capability | @ai-1luvc0d3/metabase-mcp | Metabase Official (v0.60+) | Other community servers |
|---|---|---|---|
| Read dashboards / cards / databases | ✅ | ✅ | ✅ |
| Write ops (create/update/delete cards, dashboards, collections) | ✅ | ❌ | partial |
| Batch execution (parallel multi-op in one call) | ✅ | ❌ | ❌ |
| Workflow pipelines (chained steps with output references) | ✅ | ❌ | ❌ |
| Natural language → SQL (+ explain / optimize / validate) | ✅ | partial | ❌ |
| Automated insights & trend analysis | ✅ | ❌ | ❌ |
| SQL injection guardrails | ✅ | n/a | ❌ |
| Tiered rate limiting (read / write / LLM) | ✅ | n/a | ❌ |
| Audit logging with risk levels | ✅ | n/a | ❌ |
| Token-optimized compact responses (default) | ✅ | ❌ | partial |
| Server modes (read / write / full) | ✅ | ❌ | ❌ |
| Works on Metabase < v0.60 (no upgrade required) | ✅ | ❌ | varies |
| OAuth per-user permission scoping | ❌ (API key) | ✅ | varies |
Use this if: you want Claude to create content in Metabase, you want AI-generated insights on query results, or you're on a Metabase version older than v0.60.
Use Metabase's official MCP if: you're on v0.60+, only need read/search, and want per-user permission scoping via OAuth.
Highlights:

- Workflow pipelines with `$stepName.path` output references between steps
- Token-optimized compact responses (`format: "default"`)
- Server modes: `read` (safe default), `write`, or `full` (with AI insights)
- Desktop extension: download `metabase-mcp-*.mcpb` from GitHub Releases

Run directly with npx:

```bash
npx @ai-1luvc0d3/metabase-mcp
```
Or install globally:

```bash
npm install -g @ai-1luvc0d3/metabase-mcp
metabase-mcp
```
Or run from source:

```bash
git clone https://github.com/1luvc0d3/metabase-mcp.git
cd metabase-mcp
npm install
npm run build
npm start
```
Set environment variables or create a .env file (see .env.example):
| Variable | Required | Default | Description |
|---|---|---|---|
| `METABASE_URL` | Yes | - | Your Metabase instance URL |
| `METABASE_API_KEY` | Yes | - | Metabase API key |
| `MCP_MODE` | No | `read` | Server mode: `read`, `write`, or `full` |
| `ANTHROPIC_API_KEY` | No | - | Enables NLQ and insight tools |
| `METABASE_TIMEOUT` | No | `30000` | Request timeout (ms) |
| `METABASE_MAX_ROWS` | No | `10000` | Max rows returned per query |
| `LOG_LEVEL` | No | `info` | Logging: `debug`, `info`, `warn`, `error` |
| `RATE_LIMIT_REQUESTS_PER_MINUTE` | No | `60` | Rate limit threshold |
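The same settings can live in a `.env` file. A minimal example with placeholder values (per the table above, only `METABASE_URL` and `METABASE_API_KEY` are required):

```shell
# Minimal .env example (placeholder values, not real credentials)
METABASE_URL=https://your-metabase.example.com
METABASE_API_KEY=mb_your_api_key_here
# Optional: read (default) | write | full
MCP_MODE=read
# Optional: enables NLQ and insight tools
# ANTHROPIC_API_KEY=
```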
Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
```json
{
  "mcpServers": {
    "metabase": {
      "command": "npx",
      "args": ["@ai-1luvc0d3/metabase-mcp"],
      "env": {
        "METABASE_URL": "https://your-metabase.example.com",
        "METABASE_API_KEY": "mb_your_api_key_here",
        "MCP_MODE": "read"
      }
    }
  }
}
```
| Mode | Tools | Description |
|---|---|---|
| `read` | 12 + NLQ | Read-only access, batch execution, and workflow pipelines |
| `write` | 22 + NLQ | Adds create/update/delete for cards, dashboards, collections |
| `full` | 30 | All tools including automated insights and trend analysis |
**Read (always available):** `list_dashboards`, `get_dashboard`, `list_cards`, `get_card`, `execute_card`, `list_databases`, `get_database_schema`, `execute_query`, `search_content`, `get_collections`

**Batch & Workflow (always available):** `batch_execute`, `run_workflow`

**Write (write/full modes):** `create_card`, `update_card`, `delete_card`, `create_dashboard`, `update_dashboard`, `delete_dashboard`, `add_card_to_dashboard`, `remove_card_from_dashboard`, `create_collection`, `move_to_collection`

**NLQ (requires `ANTHROPIC_API_KEY`):** `nlq_to_sql`, `explain_sql`, `optimize_sql`, `validate_sql`

**Insights (full mode + `ANTHROPIC_API_KEY`):** `ask_data`, `generate_insights`, `compare_metrics`, `trend_analysis`
You: What dashboards do we have related to customer retention?
Claude uses search_content to find retention-related dashboards, then get_dashboard to summarize the key metrics. You see a ranked list with the most relevant results.
You: Run the "Monthly Active Users" card for the last 90 days
Claude calls list_cards to locate the card, then execute_card with the appropriate time filter. Results come back as a table you can ask follow-up questions about ("what was the biggest dip and when?").
You: Show me the top 10 products by revenue last quarter from the sales database
Claude calls list_databases to find the sales database, get_database_schema to inspect the relevant tables, then generates and runs a SELECT query via execute_query. The query is validated against the SQL guardrails (no DROP/DELETE/UNION, single statement only) before execution. Audit log entry is written with the query and row count.
You: DROP TABLE users
Request is blocked. Claude surfaces: "Blocked SQL pattern detected: DROP — this operation is not allowed." The block is logged as a high-risk audit event.
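The guardrail behavior above can be sketched roughly like this. This is a hypothetical TypeScript illustration, not the server's actual implementation, and the real pattern list is more extensive:

```typescript
// Hypothetical sketch of SQL guardrails: block dangerous keywords and
// injection patterns first, then enforce single SELECT/WITH statements.
const BLOCKED = /\b(DROP|DELETE|INSERT|UPDATE|ALTER|TRUNCATE|GRANT|UNION)\b|--|\/\*/i;

function validateSql(sql: string): { ok: boolean; reason?: string } {
  const q = sql.trim().replace(/;\s*$/, ""); // tolerate one trailing semicolon
  const hit = q.match(BLOCKED);
  if (hit) return { ok: false, reason: `Blocked SQL pattern detected: ${hit[0].toUpperCase()}` };
  if (q.includes(";")) return { ok: false, reason: "Multiple statements are not allowed" };
  if (!/^(SELECT|WITH)\b/i.test(q)) return { ok: false, reason: "Only SELECT/WITH queries are allowed" };
  return { ok: true };
}

console.log(validateSql("SELECT * FROM products").ok); // true
console.log(validateSql("DROP TABLE users").reason);   // Blocked SQL pattern detected: DROP
```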
You: Which support agents closed the most tickets this week, and how does that compare to last week?
Claude uses nlq_to_sql with the database schema as context to generate a comparative SQL query. You can ask it to explain_sql in plain English before running, or optimize_sql to suggest performance improvements — all before hitting your database.
You: Save the MAU trend query we just ran as a card called "MAU — Last 90 Days" in the Growth collection
Claude calls get_collections to find "Growth", then create_card with your validated SQL. The card now lives in your Metabase library and can be re-executed by name in future conversations via execute_card — no LLM tokens spent on re-generating the query.
You: Get me the details for dashboards 1, 3, and 7, plus the schema for the sales database
Claude uses batch_execute to run all four operations in parallel in a single call:
```json
{
  "operations": [
    { "tool": "get_dashboard", "args": { "dashboard_id": 1 } },
    { "tool": "get_dashboard", "args": { "dashboard_id": 3 } },
    { "tool": "get_dashboard", "args": { "dashboard_id": 7 } },
    { "tool": "get_database_schema", "args": { "database_id": 2 } }
  ]
}
```
One tool call instead of four. Results come back with per-operation success/failure, so partial failures don't block the rest.
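Per-operation isolation like this is commonly built on `Promise.allSettled`. A minimal sketch of the assumed shape (not the server's actual code):

```typescript
// Hypothetical sketch of batch execution: run all operations in parallel and
// report per-operation success/failure so one failure doesn't block the rest.
type Op = { tool: string; args: Record<string, unknown> };
type OpResult = { ok: boolean; result?: unknown; error?: string };

async function batchExecute(
  ops: Op[],
  run: (op: Op) => Promise<unknown> // pluggable executor, e.g. a Metabase API call
): Promise<OpResult[]> {
  // allSettled never rejects as a whole, so partial failures are preserved.
  const settled = await Promise.allSettled(ops.map(run));
  return settled.map((s) =>
    s.status === "fulfilled"
      ? { ok: true, result: s.value }
      : { ok: false, error: String(s.reason) }
  );
}
```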
You: Find dashboards about revenue, get the first one's cards, and run the top card
Claude uses run_workflow to chain the steps with output references:
```json
{
  "steps": [
    { "name": "find", "tool": "search_content", "args": { "query": "revenue", "type": "dashboard" } },
    { "name": "dash", "tool": "get_dashboard", "args": { "dashboard_id": "$find.results[0].id" } },
    { "name": "data", "tool": "execute_card", "args": { "card_id": "$dash.dashcards[0].card_id" } }
  ]
}
```
Each step can reference results from previous steps using $stepName.path[index].field syntax. One round trip instead of three back-and-forth exchanges.
You: Run last quarter's revenue query and tell me what's interesting
Claude uses execute_query to run the query, then generate_insights which asks the Claude API to identify trends, outliers, and recommendations. You get a structured summary: headline number, 3-5 bullet points, and suggested follow-up questions.
Note on data privacy: `generate_insights`, `ask_data`, `compare_metrics`, and `trend_analysis` send query result rows to the Anthropic API for analysis. See the Data Privacy Note for details.
This server is designed for production use with multiple layers of protection. `SELECT` and `WITH` queries are allowed by default; DDL/DML statements (`DROP`, `DELETE`, `INSERT`, etc.) are blocked; injection patterns (`UNION`, comments, multi-statement, file ops, time-based attacks) are detected and rejected.

When using NLQ or insight tools (`ask_data`, `generate_insights`, etc.), query result data is sent to the Anthropic API for analysis. If your queries return sensitive data (PII, financial records, etc.), that data will be processed by Claude. Consider this when enabling NLQ features on databases containing sensitive information.
What this extension collects:
What this extension transmits:
Data retention: audit logs (written to `AUDIT_LOG_FILE`) are stored on your local filesystem only, with owner-only permissions (0600).

Third-party privacy policies:
Reporting security issues: See SECURITY.md for responsible disclosure.
Troubleshooting:

- Connection: check `METABASE_URL` is correct and reachable (test: `curl $METABASE_URL/api/health`) and that `METABASE_API_KEY` is valid (regenerate in Metabase Admin > Settings > API Keys if needed).
- Query blocked: only `SELECT` and `WITH` queries are allowed by default. Even within `SELECT`, patterns like `UNION SELECT`, SQL comments (`--`, `/* */`), `xp_cmdshell`, `INTO OUTFILE`, etc. are blocked. To run write SQL (`INSERT`, `UPDATE`, `DELETE`), you must run in `write` or `full` mode AND the SQL must still pass guardrails (it won't — by design).
- Rate limited: adjust the `RATE_LIMIT_REQUESTS_PER_MINUTE` env var.
- NLQ tools missing: they require `ANTHROPIC_API_KEY` — verify it's set, starts with `sk-`, and has remaining credits.
- Insight tools missing: they require `MCP_MODE=full`.
- Logs: see `~/Library/Logs/Claude/mcp*.log` on macOS.
- Runtime: check `node --version` is >= 18.

This project is young and your input shapes where it goes next — especially now that Metabase has shipped its own official MCP. A minute of your time helps a lot: when filing bugs, include your Metabase version, `MCP_MODE`, and reproduction steps.

```bash
npm install        # Install dependencies
npm run build      # Compile TypeScript
npm run dev        # Watch mode
npm test           # Run all tests
npm run type-check # Type checking
npm run lint       # Linting
```
See CONTRIBUTING.md for more details.
Add this to claude_desktop_config.json and restart Claude Desktop.
```json
{
  "mcpServers": {
    "1luvc0d3-metabase-mcp": {
      "command": "npx",
      "args": ["@ai-1luvc0d3/metabase-mcp"]
    }
  }
}
```