Scan codebases for LLM API calls and estimate monthly costs. Compare costs between git refs to catch cost regressions during code review.
Catch LLM cost changes in code review. Infracost for LLM spend.
A CLI tool and GitHub Action that statically analyzes your code for LLM API calls, estimates their cost, and shows you the cost impact of every change in your terminal or as a PR comment. Zero runtime dependencies.
A single model swap from gpt-4o-mini to gpt-4o increases costs 15x.
A new API call in a hot path can add $10,000/month to your bill.
These changes hide in normal code review.
tokentoll finds LLM API calls in your code, estimates their cost, and shows you the cost impact of every change before it hits production.
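The arithmetic behind claims like these is simple. A rough sketch, using assumed per-1M-token list prices (the numbers below are illustrative assumptions, not tokentoll's pricing data; check current prices):

```python
# (input, output) USD per 1M tokens -- assumed, roughly in line with
# published list prices at the time of writing.
PRICES = {
    "gpt-4o": (2.50, 10.00),
    "gpt-4o-mini": (0.15, 0.60),
}

def monthly_cost(model, input_tokens, output_tokens, calls_per_month=1000):
    """Estimated monthly cost for one call site."""
    price_in, price_out = PRICES[model]
    per_call = (input_tokens * price_in + output_tokens * price_out) / 1_000_000
    return per_call * calls_per_month

big = monthly_cost("gpt-4o", 2000, 500)        # $10.00/month
small = monthly_cost("gpt-4o-mini", 2000, 500)  # $0.60/month
```

At 2,000 input and 500 output tokens per call, the same call site costs more than 15x as much on gpt-4o as on gpt-4o-mini.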
```bash
pip install tokentoll
```

```bash
# Scan current directory for LLM API calls and their costs
tokentoll scan .

# Show cost impact of your last commit
tokentoll diff HEAD~1

# Compare two branches
tokentoll diff main..feature-branch
```
```yaml
name: LLM Cost Diff
on:
  pull_request:
    paths:
      - "**.py"
permissions:
  pull-requests: write
jobs:
  cost-diff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: Jwrede/[email protected]
```
| SDK | Patterns | Status |
|---|---|---|
| OpenAI | `chat.completions.create`, `responses.create` | Supported |
| Anthropic | `messages.create`, `messages.stream` | Supported |
| Google GenAI | `models.generate_content` | Supported |
| LiteLLM | `completion`, `acompletion` | Supported |
| LangChain | `ChatOpenAI`, `ChatAnthropic`, `init_chat_model` | Supported |
| Zhipu AI | `ZhipuAiClient`, `ZhipuAI` (GLM models) | Supported |
| JS/TS SDKs | | Planned |
Example `tokentoll scan` output:

```
LLM API Calls Detected
============================================================
File: src/agents/summarizer.py
  Line 42: openai client.chat.completions.create
    Model: gpt-4o | Max tokens: 4096
    Est. cost/call: $0.03 | Monthly (1000 calls/month per call site): $26.50
  Line 78: openai client.chat.completions.create
    Model: gpt-4o-mini | Max tokens: 1000
    Est. cost/call: $0.000301 | Monthly (1000 calls/month per call site): $0.30
--
Total estimated monthly cost: $26.80
(1000 calls/month per call site)
```
Example `tokentoll diff` output:

```
LLM Cost Diff: main..feature-branch
============================================================
+ ADDED src/agents/rewriter.py:35
    openai | Model: gpt-4o
    Est. cost/call: $0.03 | Monthly: +$26.50
~ MODIFIED src/agents/summarizer.py:42
    openai | Model: gpt-4o -> gpt-4o-mini
    Est. cost/call: $0.03 -> $0.000301 | Monthly: -$26.20
--
Monthly cost impact: +$0.30
Added: 1 | Changed: 1 | Removed: 0
(1000 calls/month per call site)
```
```
Source Code (.py files)
        |
        v
+-------------+      +------------------+
| AST Scanner |----->| SDK Detectors    |
| (ast.parse) |      | OpenAI, Anthropic|
+-------------+      | Google, LiteLLM  |
                     | LangChain        |
                     +------------------+
                              |
                              v
                     +------------------+
                     |  Pricing Engine  |
                     |   2200+ models   |
                     |   Auto-cached    |
                     +------------------+
                              |
                  +-----------+-----------+
                  |                       |
                  v                       v
           +-------------+        +--------------+
           | Scan Report |        | Diff Engine  |
           |   (costs)   |        | (old vs new) |
           +-------------+        +--------------+
                  |                       |
                  v                       v
           +-------------+        +--------------+
           | Table/JSON  |        | Table/JSON/  |
           |             |        | PR Comment   |
           +-------------+        +--------------+
```
tokentoll uses Python's `ast` module to find LLM API calls, and its constant-propagation engine resolves `os.getenv()` fallbacks, class attributes, constructor args, dict contents, and `**kwargs` unpacking.

```
tokentoll scan [PATH...] [--format table|json|markdown] [--calls-per-month N] [--config PATH]
tokentoll diff [REF] [--base REF] [--head REF] [--format table|json|markdown|github-comment] [--config PATH]
tokentoll update   # Update bundled pricing data
```
tokentoll includes an MCP (Model Context Protocol) server that lets Claude Code and other MCP hosts check the cost impact of LLM code changes directly from an agent conversation.
```bash
pip install "tokentoll[mcp]"
claude mcp add --transport stdio tokentoll -- tokentoll-mcp
```
| Tool | Description |
|---|---|
| `scan` | Find LLM API calls in a directory and estimate monthly costs. Accepts a path and optional `calls_per_month`. |
| `diff` | Compare LLM costs between two git refs. Accepts `base_ref` and optional `head_ref` (defaults to `HEAD`). |
Both tools return JSON output.
Claude Code can check the cost impact of its own changes before committing.
For example, after swapping a model from gpt-4o to gpt-4o-mini, the agent
can call the diff tool against HEAD to verify the cost reduction before
creating the commit.
Pricing is bundled and works offline. To update to the latest prices:

```bash
tokentoll update
```
Pricing data is sourced from LiteLLM's `model_prices_and_context_window.json` and covers 300+ models across OpenAI, Anthropic, Google, AWS Bedrock, Azure, and more.
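A pricing lookup against a LiteLLM-style entry reduces to two multiplications. The field names below (`input_cost_per_token`, `output_cost_per_token`) follow `model_prices_and_context_window.json`; the numbers are placeholders, not real prices:

```python
# Sketch of a per-call cost estimate from a LiteLLM-style pricing entry.
# The dollar values are illustrative placeholders.
entry = {"input_cost_per_token": 2.5e-06, "output_cost_per_token": 1.0e-05}

def cost_per_call(entry, input_tokens, output_tokens):
    return (input_tokens * entry["input_cost_per_token"]
            + output_tokens * entry["output_cost_per_token"])

estimate = cost_per_call(entry, 1200, 400)  # 0.003 + 0.004 = $0.007
```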
When tokentoll encounters a call where the model name is a variable it cannot resolve, it applies a sensible per-SDK default so you still get cost estimates:
| SDK | Default Model |
|---|---|
| OpenAI | gpt-4o |
| Anthropic | claude-sonnet-4-20250514 |
| Google GenAI | gemini-2.0-flash |
| LiteLLM | gpt-4o |
| LangChain | gpt-4o |
| Zhipu AI | zai/glm-4.6 |
These defaults are shown as `gpt-4o (default)` in scan output. You can override them per-project or per-path using a `.tokentoll.yml` config file (see below).
Create a `.tokentoll.yml` in your project root to customize behavior. tokentoll automatically finds this file by walking up from the scanned directory.

```yaml
# Default model for all dynamic (unresolved) calls
default_model: gpt-4o

# Per-SDK defaults (override the built-in defaults above)
default_models:
  openai: gpt-4o-mini
  anthropic: claude-haiku-3-20240307

# Assumed calls per month per call site
calls_per_month: 5000

# Skip cost estimation entirely for dynamic (unresolved) models. When true,
# calls whose model name cannot be resolved statically are reported with no
# cost rather than priced against a default. Useful for projects that prefer
# silence over a guess.
skip_dynamic_models: false

# Exclude paths from scanning (prefix match or glob pattern)
exclude:
  - tests/
  - examples/
  - docs/
  - "*_test.py"

# Per-path overrides (longest prefix match)
overrides:
  - path: src/agents/
    default_model: gpt-4o
    calls_per_month: 10000
  - path: src/azure/
    skip_dynamic_models: true
```
Resolution order for dynamic model defaults: per-SDK config (`default_models`) > generic config (`default_model`) > built-in SDK defaults.

You can also pass `--config path/to/.tokentoll.yml` to use a specific config file.
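That resolution order can be sketched as a simple chained lookup (an illustration of the documented behavior, not tokentoll's actual code; `BUILTIN_DEFAULTS` mirrors two rows of the table above):

```python
# Per-SDK config > generic config > built-in SDK default.
BUILTIN_DEFAULTS = {"openai": "gpt-4o", "anthropic": "claude-sonnet-4-20250514"}

def resolve_default(sdk, config):
    """Pick the default model for an unresolved call from SDK `sdk`."""
    return (config.get("default_models", {}).get(sdk)
            or config.get("default_model")
            or BUILTIN_DEFAULTS.get(sdk))

cfg = {"default_model": "gpt-4o", "default_models": {"openai": "gpt-4o-mini"}}
# resolve_default("openai", cfg)    -> "gpt-4o-mini"  (per-SDK wins)
# resolve_default("anthropic", cfg) -> "gpt-4o"       (generic fallback)
```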
By default, tokentoll estimates token counts using a characters/4 heuristic. For more accurate estimates, install tiktoken:

```bash
pip install tiktoken
```
When tiktoken is available, tokentoll uses the correct tokenizer encoding for
each model. Unknown models fall back to cl100k_base. Tiktoken is lazy-loaded
and encoders are cached, so there is no startup penalty if you don't need it.
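The fallback behavior can be sketched as follows (a simplified illustration of the described logic, not tokentoll's implementation):

```python
def estimate_tokens(text: str, model: str = "gpt-4o") -> int:
    """Count tokens with tiktoken when available, else use chars/4."""
    try:
        import tiktoken  # optional dependency, lazy-loaded
        try:
            enc = tiktoken.encoding_for_model(model)
        except KeyError:
            enc = tiktoken.get_encoding("cl100k_base")  # unknown-model fallback
        return len(enc.encode(text))
    except ImportError:
        return max(1, len(text) // 4)  # characters/4 heuristic
```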
Real codebases rarely pass model names as string literals. tokentoll's multi-pass constant-propagation engine follows chains like:

```python
DEFAULT_MODEL = os.getenv("MODEL", "gpt-4o")

class Config:
    model: str = DEFAULT_MODEL

config = Config()
kwargs = {"model": config.model, "max_tokens": 2000}
client.chat.completions.create(**kwargs)
# tokentoll resolves: model="gpt-4o", max_tokens=2000
```
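The core idea can be shown with the standard `ast` module. This is a much-simplified, single-pass sketch (tokentoll's engine is multi-pass and handles many more cases): collect top-level string constants, then resolve `model=` keyword arguments that reference them.

```python
import ast

source = '''
MODEL = "gpt-4o"
client.chat.completions.create(model=MODEL, max_tokens=2000)
'''

tree = ast.parse(source)

# Pass 1: collect simple `NAME = "literal"` assignments at module level.
constants = {
    target.id: node.value.value
    for node in tree.body
    if isinstance(node, ast.Assign) and isinstance(node.value, ast.Constant)
    for target in node.targets
    if isinstance(target, ast.Name)
}

# Pass 2: resolve `model=` keyword arguments that name a known constant.
resolved = None
for node in ast.walk(tree):
    if isinstance(node, ast.Call):
        for kw in node.keywords:
            if kw.arg == "model" and isinstance(kw.value, ast.Name):
                resolved = constants.get(kw.value.id)  # -> "gpt-4o"
```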
The engine resolves simple constants (e.g. `MODEL = "gpt-4o"`), `os.getenv()` / `os.environ.get()` fallback values, and `**kwargs` unpacking; anything it cannot resolve falls back to the per-SDK defaults (configurable in `.tokentoll.yml`). Call volume is always an assumption you control (`--calls-per-month`, `.tokentoll.yml`, or per-path overrides). Use the `exclude` option to skip test and example files.

License: MIT
Add this to `claude_desktop_config.json` and restart Claude Desktop:

```json
{
  "mcpServers": {
    "tokentoll": {
      "command": "tokentoll-mcp",
      "args": []
    }
  }
}
```