AI agent cost intelligence — track spend across providers, optimize model selection, manage budgets with enforcement, detect cost leaks, and prove ROI. 23 tools across 10 domains.
Your AI agents are wasting money. Metrx finds out how much, and fixes it.
The official MCP server for Metrx — the AI Agent Cost Intelligence Platform. Give any MCP-compatible agent (Claude, GPT, Gemini, Cursor, Windsurf) the ability to track its own costs, detect waste, optimize model selection, and prove ROI.
| Problem | What Metrx Does |
|---|---|
| No visibility into agent spend | Real-time cost dashboards per agent, model, and provider |
| Overpaying for LLM calls | Provider arbitrage finds cheaper models for the same task |
| Runaway costs | Budget enforcement with auto-pause when limits are hit |
| Wasted tokens | Cost leak scanner detects retry storms, context bloat, model mismatch |
| Can't prove AI ROI | Revenue attribution links agent actions to business outcomes |
```bash
npx @metrxbot/mcp-server --demo
```
This starts the server with sample data so you can explore all 23 tools instantly.
Option A — Interactive login (recommended):
```bash
npx @metrxbot/mcp-server --auth
```
Opens your browser to get an API key, validates it, and saves it to `~/.metrxrc` so you never need to set environment variables.
Option B — Environment variable:
```bash
METRX_API_KEY=sk_live_your_key_here npx @metrxbot/mcp-server --test
```
Get your free API key at app.metrxbot.com/sign-up.
If you used --auth, no env block is needed — the key is read from ~/.metrxrc automatically:
```json
{
  "mcpServers": {
    "metrx": {
      "command": "npx",
      "args": ["@metrxbot/mcp-server"]
    }
  }
}
```
Or pass the key explicitly via environment:
```json
{
  "mcpServers": {
    "metrx": {
      "command": "npx",
      "args": ["@metrxbot/mcp-server"],
      "env": {
        "METRX_API_KEY": "sk_live_your_key_here"
      }
    }
  }
}
```
For remote agents (no local install needed):
```http
POST https://metrxbot.com/api/mcp
Authorization: Bearer sk_live_your_key_here
Content-Type: application/json
```
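For reference, MCP over HTTP carries JSON-RPC 2.0 messages. A minimal Python sketch of the body a `tools/call` request would carry (the payload shape follows the MCP spec; only the endpoint, headers, and tool name above come from Metrx):

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize an MCP 'tools/call' request as a JSON-RPC 2.0 body."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# POST this body to the endpoint above with the Authorization
# and Content-Type headers shown.
body = build_tool_call("metrx_get_cost_summary", {"period_days": 7})
```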
```bash
npm install @metrxbot/mcp-server
```
| Tool | Description |
|---|---|
| `metrx_get_cost_summary` | Comprehensive cost summary — total spend, call counts, error rates, and optimization opportunities |
| `metrx_list_agents` | List all agents with status, category, cost metrics, and health indicators |
| `metrx_get_agent_detail` | Detailed agent info including model, framework, cost breakdown, and performance history |
| Tool | Description |
|---|---|
| `metrx_get_optimization_recommendations` | AI-powered cost optimization recommendations per agent or fleet-wide |
| `metrx_apply_optimization` | One-click apply an optimization recommendation to an agent |
| `metrx_route_model` | Model routing recommendation for a specific task based on complexity |
| `metrx_compare_models` | Compare LLM model pricing and capabilities across providers |
| Tool | Description |
|---|---|
| `metrx_get_budget_status` | Current status of all budget configurations with spend vs. limits |
| `metrx_set_budget` | Create or update a budget with hard, soft, or monitor enforcement |
| `metrx_update_budget_mode` | Change enforcement mode of an existing budget or pause/resume it |
| Tool | Description |
|---|---|
| `metrx_get_alerts` | Active alerts and notifications for your agent fleet |
| `metrx_acknowledge_alert` | Mark one or more alerts as read/acknowledged |
| `metrx_get_failure_predictions` | Predictive failure analysis — identify agents likely to fail before it happens |
| Tool | Description |
|---|---|
| `metrx_create_model_experiment` | Start an A/B test comparing two LLM models with traffic splitting |
| `metrx_get_experiment_results` | Statistical significance, cost delta, and recommended action |
| `metrx_stop_experiment` | Stop a running model routing experiment and lock in the winner |
| Tool | Description |
|---|---|
| `metrx_run_cost_leak_scan` | Comprehensive 7-check cost leak audit across your entire agent fleet |
| Tool | Description |
|---|---|
| `metrx_attribute_task` | Link agent actions to business outcomes for ROI tracking |
| `metrx_get_task_roi` | Calculate return on investment for an agent — costs vs. attributed outcomes |
| `metrx_get_attribution_report` | Multi-source attribution report with confidence scores and top contributors |
| Tool | Description |
|---|---|
| `metrx_configure_alert_threshold` | Set cost or operational alert thresholds with email, webhook, or auto-pause |
| Tool | Description |
|---|---|
| `metrx_generate_roi_audit` | Board-ready ROI audit report for your AI agent fleet |
| Tool | Description |
|---|---|
| `metrx_get_upgrade_justification` | ROI report for tier upgrades based on current usage patterns |
Pre-built prompt templates for common workflows:
| Prompt | Description |
|---|---|
| `analyze-costs` | Comprehensive cost overview — spend breakdown, top agents, optimization opportunities |
| `find-savings` | Discover optimization opportunities — model downgrades, caching, routing |
| `cost-leak-scan` | Scan for waste patterns — retry storms, oversized contexts, model mismatch |
```text
User: What was my AI cost this week?

→ metrx_get_cost_summary(period_days=7)

Total Spend: $234.56 | Calls: 2,450 | Error Rate: 0.2%
├── customer-support: $156.23 (1,800 calls)
└── code-generator: $78.33 (650 calls)

💡 Switch customer-support from GPT-4 to Claude Sonnet: Save $42/week
```
```text
User: Am I overpaying for my agents?

→ metrx_compare_models(models=["gpt-4o", "claude-3-5-sonnet", "gemini-1.5-pro"])

Model Comparison (per 1M tokens):
├── gpt-4o: $2.50 in / $10.00 out
├── claude-3-5-sonnet: $3.00 in / $15.00 out
└── gemini-1.5-pro: $3.50 in / $10.50 out
```
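Using the per-1M-token rates above, you can sanity-check a per-call cost yourself. A small Python helper; the prices are copied from the comparison output, and the token counts are illustrative:

```python
# Per-1M-token prices (USD), copied from the comparison above.
PRICES = {
    "gpt-4o":            {"in": 2.50, "out": 10.00},
    "claude-3-5-sonnet": {"in": 3.00, "out": 15.00},
    "gemini-1.5-pro":    {"in": 3.50, "out": 10.50},
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single call at the listed per-1M-token rates."""
    p = PRICES[model]
    return (input_tokens * p["in"] + output_tokens * p["out"]) / 1_000_000

# A typical call: 2,000 tokens in, 500 tokens out.
cost = call_cost("gpt-4o", 2_000, 500)  # 0.005 + 0.005 = $0.01
```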
```text
User: Test Claude 3.5 Sonnet against my GPT-4 setup

→ metrx_create_model_experiment(agent_id="agent_123",
     model_a="gpt-4o", model_b="claude-3-5-sonnet-20241022", traffic_split=10)

Experiment started: 90% GPT-4o, 10% Claude 3.5 Sonnet
Check back in 14 days for statistical significance.
```
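Under the hood, a percentage traffic split is just weighted random routing. An illustrative Python sketch; Metrx performs the actual routing server-side, and its real logic is not documented here:

```python
import random

def pick_model(model_a: str, model_b: str, traffic_split: int, rng=random) -> str:
    """Route traffic_split% of calls to model_b and the rest to model_a.

    Illustrative only: the parameter names mirror the
    metrx_create_model_experiment call above.
    """
    return model_b if rng.random() * 100 < traffic_split else model_a

# With traffic_split=10, roughly 1 in 10 calls goes to model_b.
random.seed(42)
picks = [pick_model("gpt-4o", "claude-3-5-sonnet-20241022", 10) for _ in range(10_000)]
share_b = picks.count("claude-3-5-sonnet-20241022") / len(picks)  # close to 0.10
```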
This repo also includes @metrxbot/cost-leak-detector — a free, offline CLI that scans your LLM API logs for wasted spend. No signup, no cloud, no data leaves your machine.
npx @metrxbot/cost-leak-detector demo
It runs 7 checks (idle agents, premium model overuse, missing caching, high error rates, context overflow, no budgets, arbitrage opportunities) and gives you a scored report in seconds. See the full docs.
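To make one of those checks concrete, here is a simplified retry-storm detector in Python. It is illustrative only: the CLI's real heuristics and log format are not documented here, and the `request_id` field is an assumption.

```python
from collections import Counter

def detect_retry_storms(log_entries: list, threshold: int = 3) -> list:
    """Flag request IDs that were attempted more than `threshold` times.

    A simplified stand-in for one of the detector's seven checks;
    the assumed log format is a list of dicts with a "request_id" key.
    """
    attempts = Counter(entry["request_id"] for entry in log_entries)
    return sorted(rid for rid, count in attempts.items() if count > threshold)

logs = [{"request_id": "req_1"}] * 5 + [{"request_id": "req_2"}]
storms = detect_retry_storms(logs)  # ["req_1"]: five attempts, one logical request
```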
The server looks for your API key in this order:
1. `METRX_API_KEY` environment variable
2. `~/.metrxrc` file (created by `--auth`)

Run `npx @metrxbot/mcp-server --auth` to save your key, or set the env var directly.
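The lookup order can be sketched in a few lines of Python. It assumes `~/.metrxrc` holds a bare key on one line; verify against the file `--auth` actually writes before relying on it.

```python
import os
from pathlib import Path

def resolve_api_key(env=os.environ, rc_path=Path.home() / ".metrxrc"):
    """Resolve the API key in the documented order: env var, then ~/.metrxrc.

    The rc-file format (a bare key on one line) is an assumption.
    """
    key = env.get("METRX_API_KEY")
    if key:
        return key
    if rc_path.exists():
        return rc_path.read_text().strip() or None
    return None
```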
| Variable | Required | Description |
|---|---|---|
| `METRX_API_KEY` | Yes* | Your Metrx API key (get one free) |
| `METRX_API_URL` | No | Override API base URL (default: `https://metrxbot.com/api/v1`) |
*Not required if you've run --auth — the key is read from ~/.metrxrc automatically.
| Flag | Description |
|---|---|
| `--demo` | Start with sample data — no API key or signup needed |
| `--auth` | Interactive login — opens browser, validates key, saves to `~/.metrxrc` |
| `--test` | Verify your API key and connection |
60 requests per minute per tool. For higher limits, contact [email protected].
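If your agent makes bursts of calls, a small client-side throttle helps you stay under that limit. A Python sketch (illustrative; the server enforces the real limit):

```python
import time
from collections import deque

class PerToolThrottle:
    """Client-side guard for a per-tool requests-per-minute limit.

    Sleeps when a tool's window is full instead of letting the call fail.
    """
    def __init__(self, limit: int = 60, window_s: float = 60.0):
        self.limit, self.window_s = limit, window_s
        self.calls: dict = {}  # tool name -> deque of call timestamps

    def acquire(self, tool: str, now=time.monotonic, sleep=time.sleep) -> None:
        q = self.calls.setdefault(tool, deque())
        while True:
            t = now()
            while q and t - q[0] >= self.window_s:
                q.popleft()          # drop calls older than the window
            if len(q) < self.limit:
                q.append(t)
                return
            sleep(self.window_s - (t - q[0]))  # wait for the oldest call to expire
```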
```bash
git clone https://github.com/metrxbots/mcp-server.git
cd mcp-server
npm install
npm run typecheck
npm test
```
See CONTRIBUTING.md for guidelines.
The product is Metrx (metrxbot.com). The npm scope is @metrxbot and the Smithery listing is metrxbot/mcp-server. The GitHub organization is metrxbots (with an s) because metrxbot was already taken on GitHub. If you see metrxbot vs metrxbots across platforms, they're the same project — just a GitHub namespace constraint.
MIT — see LICENSE.
Did Metrx work for you? We'd love to hear it — good or bad.
If you installed but hit a snag, tell us what happened — we read every report.
Add this to your claude_desktop_config.json and restart Claude Desktop.

```json
{
  "mcpServers": {
    "metrxbots-mcp-server": {
      "command": "npx",
      "args": ["@metrxbot/mcp-server"]
    }
  }
}
```