Token-cost telemetry for OpenClaw, queryable from Claude or any MCP-aware agent. Provides per-agent and per-provider cost attribution, anomaly detection, model-routing recommendations, and 30-day forecasts.
MCP server for AI-agent token-cost telemetry — built for the operator who woke up to a surprise bill. It catches the failure modes vendor dashboards miss: silent re-routing to API billing (the HERMES.md bug, 1,464 upvotes), unattended /loop overnight burns ($6k in 26 hours, a real story), per-agent attribution buried behind aggregate provider totals, and multi-day dashboard lag that surfaces the spike after the money is gone.

Cross-provider: Anthropic, OpenAI, Gemini, Ollama, AWS Bedrock. Per-agent and per-provider attribution, spend-spike anomaly detection (per-agent median × threshold), cheaper-routing recommendations with 30-day savings estimates, and a monthly forecast.

Reads any cost-log JSONL with the standard {request_id, timestamp, provider, model, agent_id, prompt_tokens, completion_tokens, cost_usd} schema. OpenClaw operators get native ~/.openclaw/cost-logs/ parsing; other deployments wrap their provider calls with the included logging shim or commission a Custom MCP Build adapter.

Keywords: Claude API cost surprise, /loop overnight burn, HERMES.md billing routing, dashboard lag, AI agent FinOps, LLM cost attribution, token spend monitoring.
Status: v1.1.0 · License: MIT · MCP · PyPI
Production AI deployments are leaking money in ways the vendor dashboards don't surface in time. Real stories from the last 30 days on r/ClaudeAI:

- A /loop command ran 26 hours unattended on Claude Opus 4.7 (1,175 upvotes, 314 comments, May 1 2026). The Anthropic dashboard lagged by days; the spike was invisible until the limit email landed. Verbatim from the OP: "By the time it shows a spike, the money is already spent."
- A claude-code-guide agent recommended a command that bypassed Max-plan billing (the HERMES.md bug).

In every case the dashboard lagged, per-agent attribution was invisible, and Anthropic's own hard-cap mechanism either failed or arrived too late. This MCP server surfaces per-agent and per-provider cost attribution live, queryable from inside Claude or any MCP-aware client, before the bill arrives:
> claude: where did our LLM spend go this week?
[MCP tools: cost_overview + top_cost_drivers]
Total spend last 7 days: $42.18
By provider:
Anthropic $30.40 (72%) — claude-sonnet-4 dominates
OpenAI $9.20 (22%) — chat-bot agent
Gemini $1.86 (4%) — cron-summarizer (cheap-route working)
Ollama $0.00 (local, free)
Top 3 cost drivers:
data-extraction-agent $28.50 (68%)
chat-bot $9.20 (22%)
cron-summarizer $1.86 (4%)
1 anomaly flagged — see find_cost_anomalies for details.
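The aggregation behind that by-provider answer is simple arithmetic. A minimal sketch, assuming cost-log entries in the {provider, agent_id, cost_usd, ...} JSONL schema described later in this README (the function name is illustrative, not the server's actual code):

```python
from collections import defaultdict

def spend_by_provider(entries):
    """Sum cost_usd per provider and compute each provider's share of total."""
    totals = defaultdict(float)
    for e in entries:
        totals[e["provider"]] += e["cost_usd"]
    grand = sum(totals.values())
    # (provider, dollars, share-of-total), largest spender first
    return sorted(
        ((p, t, t / grand) for p, t in totals.items()),
        key=lambda row: -row[1],
    )

entries = [
    {"provider": "anthropic", "agent_id": "data-extraction-agent", "cost_usd": 30.40},
    {"provider": "openai", "agent_id": "chat-bot", "cost_usd": 9.20},
    {"provider": "gemini", "agent_id": "cron-summarizer", "cost_usd": 1.86},
]
for provider, total, share in spend_by_provider(entries):
    print(f"{provider:10s} ${total:.2f} ({share:.0%})")
```

The same grouping with agent_id as the key yields the top-cost-drivers view.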
> claude: any cheaper-routing opportunities?
[MCP tool: model_routing_recommendations]
Recommendation: data-extraction-agent currently runs claude-sonnet-4
with avg 400-token completions — extraction-style work that
gemini-2.5-flash usually handles at ~95% quality for ~5% the cost.
Estimated savings: $27.10/30d if migrated. Test on a 10% slice first.
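The savings figure in that recommendation is plain arithmetic. A sketch under stated assumptions (function name hypothetical; the server's real estimate also weighs quality and token mix):

```python
def estimated_savings(spend_30d: float, cheaper_cost_ratio: float) -> float:
    """Savings over 30 days if an agent's traffic moves to a model costing
    `cheaper_cost_ratio` of the current one (0.05 means ~5% the cost)."""
    return spend_30d * (1 - cheaper_cost_ratio)

# data-extraction-agent: $28.50/30d on claude-sonnet-4, flash at ~5% the cost
print(f"${estimated_savings(28.50, 0.05):.2f}/30d")  # roughly $27/30d saved
```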
v1.1 — quota-window awareness. The cost tracker now also answers "will my agent hit a 429 before the window resets?" before it actually does. This is the HERMES.md / overnight-burn detector — projecting per-dimension exhaustion ETA from your active burn rate vs the rate-limit headers your provider already returns:
> claude: am I about to 429?
[MCP tool: predict_429_in_window]
severity: CRITICAL
provider: anthropic
burn_rate_tokens_per_min: 8000
dimension_predictions:
tokens : will 429 in ~6.5 min (resets in 22 min) ⚠
output_tokens : will 429 in ~11.7 min (resets in 22 min) ⚠
input_tokens : safe through reset (headroom 70k at reset)
requests : safe through reset (headroom 858 at reset)
summary: tokens projected to 429 in ~6.5 min at current burn (8000/min).
Window resets in 22 min — throttle now.
> claude: what burn rate keeps me safe?
[MCP tool: recommend_throttle_target target_buffer_pct=10]
target_buffer_pct: 10
targets:
tokens : current 8000/min, max safe 545/min — must throttle
output_tokens : current 3000/min, max safe 1136/min — must throttle
input_tokens : current 5000/min, max safe 6818/min — safe
requests : current 1/min, max safe 35/min — safe
summary: throttle required on tokens, output_tokens to keep 10% buffer at reset.
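The math behind both answers is straightforward once you have remaining quota and reset time from the provider's rate-limit headers. A hedged sketch (function names hypothetical, not the server's internals):

```python
def eta_to_429_min(remaining: float, burn_per_min: float) -> float:
    """Minutes until this quota dimension is exhausted at the current burn rate."""
    return remaining / burn_per_min if burn_per_min > 0 else float("inf")

def max_safe_rate(remaining: float, mins_to_reset: float,
                  buffer_pct: float = 10.0) -> float:
    """Highest burn rate that still leaves `buffer_pct`% of the remaining
    quota untouched when the window resets."""
    return remaining * (1 - buffer_pct / 100) / mins_to_reset

# e.g. 52k tokens left, burning 8000/min, window resets in 22 min
print(eta_to_429_min(52_000, 8_000))   # 6.5 -> will 429 well before reset
print(max_safe_rate(52_000, 22))       # max burn/min for a 10% buffer at reset
```

If the 429 ETA is shorter than the reset ETA, the dimension is flagged; severity scales with how far the safe rate is below the current one.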
openclaw-cost-tracker-mcp

Four things existing tools (provider billing dashboards, generic FinOps tools, custom scripts, even paid SaaS like Lava or AgentShield) don't do well together:
- **Per-agent attribution, not just per-provider totals.** Provider dashboards show "$X spent on Anthropic" — they can't tell you which of your six agents drove 78% of that. The cost tracker reads per-request cost-log JSONL and aggregates with agent_id intact.
- **Cost-spike anomaly detection per agent.** A single 120k-token paste into chat — or an unattended /loop running overnight — costs more than a week of normal traffic. The default 3×-median-per-agent threshold flags those before they show up in the month-end bill, catching the same shape as the $6k overnight thread and the silent re-routing patterns that vendor dashboards miss.
- **Routing recommendations grounded in actual usage.** Generic "use cheaper models" advice is useless. This server identifies specific agents whose volume and completion-length pattern suggest a cheaper provider would deliver the same outcome, with concrete 30-day savings estimates.
- **Local-only, MCP-native, no traffic interception.** Lava and similar paid gateways sit between your app and the provider — your traffic routes through them. AgentShield is a callback for LangChain/CrewAI/OpenAI SDK, not MCP. The cost tracker is read-only: it parses your existing cost logs and surfaces them via MCP tools your Claude conversation can query directly. No proxy, no traffic routing, no subscription, no vendor lock-in.
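The 3×-median-per-agent anomaly rule above fits in a few lines. A sketch assuming entries in the cost-log JSONL schema (an illustration, not the server's exact code):

```python
from collections import defaultdict
from statistics import median

def find_cost_anomalies(entries, threshold=3.0):
    """Flag requests costing more than `threshold` x their agent's median."""
    costs = defaultdict(list)
    for e in entries:
        costs[e["agent_id"]].append(e["cost_usd"])
    medians = {agent: median(c) for agent, c in costs.items()}
    return [e for e in entries
            if e["cost_usd"] > threshold * medians[e["agent_id"]]]

# three normal chat turns, then a single huge-context paste
entries = [{"agent_id": "chat-bot", "cost_usd": c}
           for c in (0.02, 0.03, 0.02, 2.40)]
print(find_cost_anomalies(entries))  # only the $2.40 outlier is flagged
```

Using the median (not the mean) keeps one giant request from raising the baseline and masking itself.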
Built for the production-AI operator running real workloads with real spend that matters — whether on OpenClaw, Claude Code with claude -p, raw Anthropic API, OpenAI Assistants, or any combination.
| Tool | What it returns |
|---|---|
| `cost_overview` | Total spend + by-provider + top agents + top models + anomaly count for a window |
| `costs_by_agent` | Per-agent breakdown with avg-cost-per-request + share of total |
| `costs_by_provider` | Per-provider breakdown with token counts |
| `find_cost_anomalies` | Requests flagged as 3×+ above their agent's median cost |
| `top_cost_drivers` | Top N spending agents + models, no other noise |
| `model_routing_recommendations` | Specific cheaper-model suggestions with 30d savings estimates |
| `forecast_monthly_cost` | Projects 30-day total + per-provider with confidence note |
| `current_quota_usage` (v1.1) | Per-dimension rate-limit utilization + most-pressured-dimension headline |
| `predict_429_in_window` (v1.1) | Projects per-dimension 429 ETA from recent burn rate; CRITICAL/WARNING/INFO severity |
| `recommend_throttle_target` (v1.1) | Max safe burn rate per dimension to land at target_buffer_pct headroom at reset |
Resources:

- cost://overview — 7-day snapshot
- cost://forecast — 30-day projection
- cost://anomalies — recent flagged anomalies
- cost://quota (v1.1) — current quota-state snapshot

Prompts:

- diagnose-cost-spike — walk a recent spike to its root cause + corrective action
- weekly-cost-digest — 200-word weekly cost digest
- diagnose-quota-pressure (v1.1) — walk current quota + burn rate, recommend a specific throttle action

Install:

pip install openclaw-cost-tracker-mcp
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
{
"mcpServers": {
"openclaw-cost": {
"command": "python",
"args": ["-m", "openclaw_cost_tracker_mcp"],
"env": {
"OPENCLAW_COST_BACKEND": "mock"
}
}
}
}
| Backend | Status | Description |
|---|---|---|
| `mock` | ✅ v1.0 | Sample data with deliberate anomalies + routing opportunities for protocol verification. v1.1: includes a near-exhausted token-quota snapshot + recent loop-burst entries so predict_429_in_window returns CRITICAL out of the box |
| `openclaw-jsonl` | ✅ v1.0 | Parses OpenClaw's native cost-log JSONL files (default ~/.openclaw/cost-logs/, configurable via OPENCLAW_COST_LOGS). v1.1: also parses optional quota-snapshots.jsonl in the same dir (one record per provider response; schema in backends/openclaw_jsonl.py docstring) |
| `provider-direct` | ⏳ v1.2 | Reads Anthropic + OpenAI billing APIs directly (no log shim required) |
Each line is one JSON record:
{"request_id":"req-abc123","timestamp":"2026-05-04T12:34:56Z","provider":"anthropic","model":"claude-sonnet-4","agent_id":"data-extraction-agent","skill_id":"extract-structured-data","prompt_tokens":8500,"completion_tokens":600,"total_tokens":9100,"cost_usd":0.0345,"duration_ms":4823}
If your OpenClaw deployment doesn't emit this format, wrap your provider calls with a small logging shim — sample shim in examples/.
| Version | Scope | Status |
|---|---|---|
| v1.0 | mock + openclaw-jsonl backends; 7 tools / 3 resources / 2 prompts; anomaly detection + routing + forecast; GitHub Actions CI matrix; PyPI Trusted Publishing | ✅ |
| v1.1 | Quota-window awareness — QuotaSnapshot data model, get_latest_quota_state() backend method, 3 new tools (current_quota_usage, predict_429_in_window, recommend_throttle_target), cost://quota resource, diagnose-quota-pressure prompt. Reads anthropic-ratelimit-* headers (or equivalents) and projects per-dimension 429 ETA from recent burn rate. Folded in from research-pass-2 P01 candidate after incumbent validation against Lava + AgentShield + Nornr | ✅ |
| v1.2 | provider-direct backend (Anthropic + OpenAI billing API integrations); backend federation; budget alerts + threshold breach detection; /loop overnight-burn detector (folded from P05 watch-list candidate) | ⏳ |
| v1.x | Per-channel cost attribution; webhook emitter for budget + quota alerts | ⏳ |
If your AI deployment doesn't use OpenClaw's cost-log format — different agent harness, custom logging, AWS Bedrock metering, vendor billing API — and you want the same attribution + anomaly + routing visibility, that's a Custom MCP Build engagement.
| Tier | Scope | Investment | Timeline |
|---|---|---|---|
| Simple | Single backend adapter for your existing cost-data source | $8,000–$10,000 | 1–2 weeks |
| Standard | Custom backend + custom anomaly rules + integration with your alerting | $15,000–$20,000 | 2–4 weeks |
| Complex | Multi-backend federation + budget enforcement + custom routing logic | $25,000–$35,000 | 4–8 weeks |
To engage: email [email protected] with subject Custom MCP Build inquiry.

This server is part of a production-AI infrastructure MCP suite — companion to silentwatch-mcp (cron silent-failure detection) and openclaw-health-mcp (deployment health). Install all three for full operational visibility.
If you're running production AI and want an outside practitioner to score readiness, find the failure patterns already present (cost overruns being one of the most common), and write the corrective-action plan:
| Tier | Scope | Investment | Timeline |
|---|---|---|---|
| Audit Lite | One system, top-5 findings, written report | $1,500 | 1 week |
| Audit Standard | Full audit, all 14 patterns, 5 Cs findings, 90-day follow-up | $3,000 | 2–3 weeks |
| Audit + Workshop | Standard audit + 2-day team workshop + first monthly audit included | $7,500 | 3–4 weeks |
Same email channel: [email protected] with subject AI audit inquiry.
PRs welcome. Backends are pluggable — see src/openclaw_cost_tracker_mcp/backends/ for the contract.
To add a new backend:

1. Subclass CostBackend in backends/<your_backend>.py
2. Implement get_entries() (cost telemetry); optionally override get_latest_quota_state() if your data source carries rate-limit headers (the default returns None for graceful degrade)
3. Register it in backends/__init__.py
4. Add tests/test_backend_<your_backend>.py

Bug reports + feature requests: open a GitHub issue.
MIT — see LICENSE.
Launch code LAUNCH50 is valid for the first 30 days.

Built by Temur Khan — independent practitioner on production AI systems. Contact: [email protected]