Self-improving agent governance: type thumbs-up or thumbs-down on any AI agent action. ThumbGate turns every mistake into a prevention rule and blocks the pattern from repeating. One thumbs-down, never again. 33 pre-action checks, budget enforcement, and more.
Your AI coding bill has a leak.
Stop paying twice for the same AI mistake.
Every retry loop, every hallucinated import, every "let me try a different approach" — those are billable tokens on every LLM vendor's bill. Thumbs-down once; ThumbGate blocks that exact mistake on every future call. Across Claude Code, Cursor, Codex, Gemini, Amp, Cline, OpenCode — any MCP-compatible agent, forever.
Under the hood, your thumbs-down becomes one of your Pre-Action Checks — a rule that physically blocks the pattern, permanently, on every future call: every session, every model, every agent. This is self-improving agent governance: each correction is promoted to a fresh prevention rule, and your library of prevention rules grows stronger with every lesson. Works with Claude Code, Cursor, Codex, Gemini CLI, Amp, Cline, OpenCode, and any MCP-compatible agent. Your monthly Anthropic / OpenAI bill stops paying for the same lesson over and over — local-first enforcement, zero tokens spent on repeats.
Prevent expensive AI mistakes. Make AI stop repeating mistakes. Turn a smart assistant into a reliable operator.
Mission: make AI coding affordable by making sure you never pay for the same mistake twice.
Watch the force-push scenario: agent tries to git push --force, one thumbs-down, next session it's blocked — zero tokens spent on the repeat.
▶ Watch the 90-second demo · Script · ElevenLabs narration: npm run demo:voiceover
If someone is not already bought into ThumbGate, do not lead with architecture. Lead with one repeated mistake.
- Type `thumbs down:` or `thumbs up:` with one concrete sentence. Native ChatGPT rating buttons are not the ThumbGate capture path; typed feedback is.
- Run `npx thumbgate init` where the agent executes, so the lesson can become a Pre-Action Check instead of another reminder.
- The buying question is simple: what repeated AI mistake would be worth blocking before the next tool call?
Frontier-model calls are not cheap. Sonnet 4.5 is ~$3 / 1M input tokens and ~$15 / 1M output tokens. Opus is 5× that. Every time your agent:

- spins through a retry loop
- imports a hallucinated package
- announces "let me try a different approach"

…you are paying for that round-trip. Twice if it retries. Three times if you re-prompt. And the agent has no memory across sessions, so the meter resets every Monday.
- Session 1: Agent force-pushes to main. You fix it. +4,200 tokens
- Session 2: Agent force-pushes again. You fix it. +4,200 tokens
- Session 3: Same mistake. Again. You lose 45m. +5,800 tokens
That's ~$0.21 in tokens just to fix the same mistake three times — multiplied by every developer, every repeated-mistake class, every week. The math gets ugly fast.
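To sanity-check that figure, here is the back-of-envelope math as a runnable sketch; it assumes, conservatively, that all 14,200 tokens bill at the Sonnet 4.5 output rate quoted above:

```ts
// Three wasted sessions from the example above, priced at the
// output rate of ~$15 per 1M tokens (input tokens are cheaper,
// so this is an upper bound for the quoted token counts).
const tokens = 4_200 + 4_200 + 5_800; // 14,200 tokens
const usdPerToken = 15 / 1_000_000;
console.log(`$${(tokens * usdPerToken).toFixed(2)}`); // "$0.21"
```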
- Session 1: Agent force-pushes to main. You 👎 it. +4,200 tokens
- Session 2: ⛔ Check blocks the force-push. Zero round-trip. +0 tokens
- Session 3+: Never happens again. +0 tokens
One thumbs-down. The PreToolUse hook intercepts the call before it reaches the model — no input tokens, no output tokens, no retry loop. The dashboard tracks tokens saved this week as a live counter so you can see exactly what your prevention rules are worth. Mark a review checkpoint once, and the dashboard narrows the next pass to only the feedback, lessons, and check blocks that landed since your last review.
ThumbGate doesn't make your agent smarter. It makes your agent cheaper to be wrong with.
```bash
npx thumbgate init   # auto-detects your agent, wires everything
npx thumbgate capture "Never run DROP on production tables"
```
That single command creates a prevention rule. Next time any AI agent tries to run DROP on production:
```
⛔ Check blocked: "Never run DROP on production tables"
   Pattern: DROP.*production
   Verdict: BLOCK
```
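Conceptually, that capture compiles down to a pattern plus a verdict. A minimal sketch of the idea in TypeScript; the `PreventionRule` shape and `evaluate` helper are illustrative, not ThumbGate's actual internals:

```ts
// Illustrative shape of a prevention rule derived from captured text.
interface PreventionRule {
  title: string;   // the sentence you captured
  pattern: RegExp; // compiled from that sentence
  verdict: "BLOCK";
}

const rule: PreventionRule = {
  title: "Never run DROP on production tables",
  pattern: /DROP.*production/i,
  verdict: "BLOCK",
};

// Evaluate an agent command against the rule before it executes.
function evaluate(command: string, r: PreventionRule): string {
  return r.pattern.test(command)
    ? `⛔ Check blocked: "${r.title}"`
    : "✅ allowed";
}

console.log(evaluate('psql -c "DROP TABLE production.users"', rule));
// ⛔ Check blocked: "Never run DROP on production tables"
```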
ThumbGate operates as a 4-layer enforcement stack between your AI agent and your codebase:
Your thumbs-up/down reactions are captured via MCP protocol, CLI, or the ChatGPT GPT surface. Each reaction is stored as a structured lesson with context, timestamp, and severity.
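As a rough picture of what a structured lesson could hold (only the signal, context, timestamp, and severity fields are named above; the exact shape is an illustrative assumption):

```ts
// Illustrative lesson record; ThumbGate's real schema may differ.
interface Lesson {
  signal: "up" | "down";               // the reaction that created it
  context: string;                     // what the agent was doing
  timestamp: string;                   // ISO-8601 capture time
  severity: "low" | "medium" | "high"; // how bad a repeat would be
}

const lesson: Lesson = {
  signal: "down",
  context: "agent ran `git push --force` on main",
  timestamp: new Date().toISOString(),
  severity: "high",
};
```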
The check engine converts lessons into enforceable rules using pattern matching, semantic similarity (via LanceDB vectors), and Thompson Sampling for adaptive rule selection. Rules stay in local ThumbGate runtime state.
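For the Thompson Sampling piece, a minimal sketch of the textbook technique: each rule keeps a Beta posterior over how often enforcing it was the right call, and the engine samples those posteriors to rank rules. This is the general method, not ThumbGate's code:

```ts
// Each rule tracks how often its block was upheld vs. overridden.
interface RuleArm { id: string; successes: number; failures: number }

// Gamma(k, 1) draw for integer shape k: sum of k exponentials.
function sampleGamma(shape: number): number {
  let sum = 0;
  for (let i = 0; i < shape; i++) sum += -Math.log(1 - Math.random());
  return sum;
}

// Beta(a, b) draw via two Gamma draws.
function sampleBeta(a: number, b: number): number {
  const x = sampleGamma(a);
  return x / (x + sampleGamma(b));
}

// Thompson Sampling: sample each rule's posterior, pick the best draw.
function pickRule(arms: RuleArm[]): RuleArm {
  let best = arms[0];
  let bestDraw = -Infinity;
  for (const arm of arms) {
    // Beta(successes + 1, failures + 1): uniform prior over usefulness.
    const draw = sampleBeta(arm.successes + 1, arm.failures + 1);
    if (draw > bestDraw) { bestDraw = draw; best = arm; }
  }
  return best;
}
```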
Before any agent action executes, ThumbGate's PreToolUse hook intercepts the command and evaluates it against all active checks. This happens at the MCP protocol level — the agent physically cannot bypass it.
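For a feel of the mechanism, here is a sketch of a blocking pre-action hook following Claude Code's published hook contract (the tool-call JSON arrives on stdin; exit code 2 blocks the call and routes stderr back to the model). The hard-coded regex stands in for ThumbGate's real check evaluation:

```ts
#!/usr/bin/env node
// Sketch of a blocking PreToolUse hook. Assumes Claude Code's hook
// contract: JSON event on stdin, exit code 2 = block the tool call.
import { readFileSync } from "node:fs";

const event = JSON.parse(readFileSync(0, "utf8"));
const command: string = event.tool_input?.command ?? "";

// Stand-in for ThumbGate's check engine: one hard-coded pattern.
if (/git\s+push\s+--force/.test(command)) {
  process.stderr.write('⛔ Check blocked: "force-push to protected branch"');
  process.exit(2); // blocked before any tokens are spent on the round-trip
}
process.exit(0); // allowed
```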
Checks are distributed across all connected agents via MCP stdio protocol. One correction in Claude Code protects Cursor, Codex, Gemini CLI, Cline, and any MCP-compatible agent.
Prompt engineering still matters, but it is only the starting point. ThumbGate adds prompt evaluation on top: proof lanes, benchmarks, and self-heal checks tell you whether your prompt and workflow actually held up under execution instead of leaving you to guess from vibes. Run npx thumbgate eval --from-feedback --write-report=.thumbgate/prompt-eval-proof.md to turn real thumbs-up/down feedback into reusable eval cases and a buyer-ready proof report.
When a new managed model drops, do not swap ThumbGate over on vendor claims alone. Rank it against the actual ThumbGate workload first:
```bash
npx thumbgate model-candidates --workload=pretool-gating --json
npx thumbgate model-candidates --workload=long-trace-review --provider=openai-compatible --gateway=tinker --json
```
The catalog currently includes the April 23, 2026 Tinker additions:
- `tinker/qwen3.6-35b-a3b` for pre-action gating, agentic coding, and tool use
- `tinker/qwen3.6-27b` for the cheap fast path
- `tinker/kimi-k2.6-128k` for long-trace review and multi-agent sessions

Each recommendation ships with the benchmark commands to run next: feedback-derived prompt eval, gate-eval, and `thumbgate bench`. That keeps model selection evidence-backed instead of hype-driven.
| Agent | Command |
|---|---|
| Claude Code | npx thumbgate init --agent claude-code |
| Cursor | npx thumbgate init --agent cursor |
| Codex | npx thumbgate init --agent codex |
| Gemini CLI | npx thumbgate init --agent gemini |
| Amp | npx thumbgate init --agent amp |
| Cline (Roo Code successor) | npx thumbgate init --agent cline |
| Claude Desktop | Download extension bundle |
| Any MCP agent | npx thumbgate serve |
Works with Claude Code, Cursor, Codex, Gemini CLI, Amp, Cline, OpenCode, and any MCP-compatible agent. Migrating from Roo Code (sunsetting 2026-05-15)? See adapters/cline/INSTALL.md.
Claude renders the live ThumbGate footer today. npx thumbgate init --agent codex now installs the full Codex hook bundle and writes the ThumbGate statusLine target into ~/.codex/config.json so you can test it on your local Codex build immediately.
Open the Codex plugin install page or download the standalone bundle from GitHub Releases. The Codex launcher resolves thumbgate@latest when MCP and hooks start, so published npm fixes reach active Codex installs without hand-editing ~/.codex/config.toml.
```
STEP 1              STEP 2                  STEP 3
────────            ────────                ────────
You react           ThumbGate learns        The check holds

👎 on a bad    ──►  Feedback becomes   ──►  Next time the
agent action        a saved lesson          agent tries the
                    and a block rule        same thing:

👍 on a good   ──►  Good pattern gets       ⛔ BLOCKED
agent action        reinforced              (or ✅ allowed)
```
No manual rule-writing. No config files. Your reactions teach the agent what your team actually wants.
ThumbGate sells three concrete outcomes:
- `thumbgate eval --from-feedback`, proof lanes, ThumbGate Bench, and `self-heal:check` to evaluate whether prompts and workflows actually improved behavior.
- Blocking dangerous actions, such as `git push --force` on protected branches, before they run.

Built-in checks include:

- ⛔ force-push → blocks `git push --force`
- ⛔ protected-branch → blocks direct push to main
- ⛔ unresolved-threads → blocks push with open reviews
- ⛔ package-lock-reset → blocks destructive lock edits
- ⛔ env-file-edit → blocks .env secret exposure
- plus custom prevention rules for project-specific failures (see the example below)
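Custom rules come from the same capture flow shown earlier; the rule texts below are hypothetical examples of project-specific failures:

```bash
npx thumbgate capture "Never edit files under db/migrations after they are merged"
npx thumbgate capture "Never run npm publish outside CI"
```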
```bash
npx thumbgate init                    # detect agent, wire hooks
npx thumbgate doctor                  # health check
npx thumbgate capture                 # create a check from text
npx thumbgate lessons                 # see what's been learned
npx thumbgate explore                 # terminal explorer for lessons, checks, stats
npx thumbgate native-messaging-audit  # inspect local browser bridges and extension hosts
npx thumbgate dashboard               # open local dashboard
npx thumbgate serve                   # start MCP server on stdio
npx thumbgate bench                   # run reliability benchmark
```
| | Free | Pro ($19/mo) | Team ($49/seat/mo) |
|---|---|---|---|
| Local CLI + enforced checks | ✅ | ✅ | ✅ |
| Feedback captures (lifetime) | 3 | Unlimited | Unlimited |
| Auto-promoted prevention rules | 1 | Unlimited | Unlimited |
| MCP agent integrations | All | All | All |
| Personal dashboard | — | ✅ | ✅ |
| DPO export (model fine-tuning) | — | ✅ | ✅ |
| Team lesson export/import | — | ✅ | ✅ |
| Shared hosted lesson DB | — | — | ✅ |
| Org-wide dashboard | — | — | ✅ |
| Approval + audit proof | — | — | ✅ |
The free tier gives you 3 lifetime feedback captures and 1 auto-promoted prevention rule — enough to prove the enforcement loop works. MCP integrations for all agents (Claude Code, Cursor, Codex, Gemini, Amp, Cline, OpenCode) ship free.
Pro ($19/mo or $149/yr) lifts those caps and adds history-aware lesson recall, lesson search, DPO export, and a personal dashboard. Team ($49/seat/mo) adds a shared hosted lesson DB, org dashboard, and shared enforcement across the org. Pro and Team include open_feedback_session, append_feedback_context, and finalize_feedback_session for structured multi-turn feedback capture.
Best first paid motion for teams: the Workflow Hardening Sprint — qualify one repeated failure before committing to a full rollout. Start intake →
Best first technical motion: install the CLI and let `init` wire hooks for the agent you already use.
Paid path for individual operators: ThumbGate Pro is the self-serve side lane for a personal dashboard and export-ready evidence.
Start free · See Pro · Team Sprint intake
One team's hard-won lessons shouldn't stay trapped on one laptop. ThumbGate Pro and Team can export lessons as portable bundles and import them into any other ThumbGate instance — so a mistake caught by Team A becomes a prevention rule for Team B.
Export lessons from one project:
```bash
curl -X POST http://localhost:3456/v1/lessons/export \
  -H "Authorization: Bearer $THUMBGATE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"outputPath": "./lessons-export.json"}'
```
Filter by signal or tags:
```bash
curl -X POST http://localhost:3456/v1/lessons/export \
  -H "Authorization: Bearer $THUMBGATE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"signal": "down", "tags": ["push-notifications", "ci"]}'
```
Import into another team's ThumbGate:
```bash
curl -X POST http://localhost:3456/v1/lessons/import \
  -H "Authorization: Bearer $THUMBGATE_API_KEY" \
  -H "Content-Type: application/json" \
  -d @lessons-export.json
```
What happens on import:
- Imported lessons are tagged `team-import` with the original source project, export timestamp, and original ID.

The export bundle includes full lesson metadata: signal, title, context, tags, failure type, skill, structured rules, and diagnosis. It's the same data you see in the lesson detail dashboard — portable as JSON.
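As a rough illustration, one lesson entry in a bundle might look like the following; the field names follow the list above, while the values and exact key spellings are assumptions:

```json
{
  "signal": "down",
  "title": "Stale lockfile reset broke CI",
  "context": "agent regenerated package-lock.json instead of installing",
  "tags": ["ci", "team-import"],
  "failureType": "destructive-edit",
  "skill": "dependency-management",
  "rules": [{ "pattern": "rm\\s+package-lock\\.json", "verdict": "BLOCK" }],
  "diagnosis": "lockfile resets invalidated the dependency cache",
  "sourceProject": "team-a/checkout-service",
  "exportedAt": "2026-04-01T12:00:00Z",
  "originalId": "lsn_0123"
}
```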
Every thumbs-up and thumbs-down becomes a training signal. ThumbGate Pro exports your captured feedback as DPO (Direct Preference Optimization) pairs — ready to feed into a LoRA fine-tune so your model stops repeating known mistakes at the weight level, not just the check level.
Export DPO pairs:
```bash
curl -X POST http://localhost:3456/v1/dpo/export \
  -H "Authorization: Bearer $THUMBGATE_API_KEY" \
  -o dpo-pairs.jsonl
```
What you get: JSONL where each line is a preference pair:

- `chosen` — the agent action you thumbed up
- `rejected` — the action you thumbed down for the same task context
- `prompt` — the originating user intent
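A single line of that JSONL might look like the following; the content is invented for illustration, and only the three field names come from the list above:

```json
{"prompt": "Deploy the hotfix to staging", "chosen": "Ran the staging deploy after the test suite passed", "rejected": "Force-pushed the fix directly to main"}
```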
Use cases: feed the pairs into a LoRA fine-tune, or use the KTO-format export (`/v1/kto/export`) where unpaired feedback fits better.

Why this matters: Checks block mistakes. Fine-tuning prevents them from being attempted. Combine both for belt-and-suspenders governance.
| Layer | Technology |
|---|---|
| Storage | SQLite + FTS5, LanceDB vectors, JSONL logs |
| Capture | 3 feedback captures lifetime (free), unlimited (Pro) |
| Intelligence | MemAlign dual recall, Thompson Sampling |
| Enforcement | PreToolUse hook engine, Checks config |
| Interfaces | MCP stdio, HTTP API, CLI (Node.js >=18) |
| Billing | Stripe |
| Execution | Railway, Cloudflare Workers, Docker Sandboxes |
| Governance | Workflow Sentinel, control plane, Docker Sandboxes |
Every Changeset is tied to the exact main merge commit and generates Verification Evidence for Release Confidence.
Popular buyer questions: AI search topical presence · Relational knowledge and AI recommendations · Stop repeated AI agent mistakes · Browser automation safety · Native messaging host security · Autoresearch agent safety · Cursor guardrails · Codex CLI guardrails · Gemini CLI memory + enforcement
Workflow Hardening Sprint · Live Dashboard
Give the agent more context when a thumbs-down isn't enough:
```
👎 thumbs down
 └─► open_feedback_session
      └─► "you lied about deployment"   (append_feedback_context)
      └─► "tests were actually failing" (append_feedback_context)
      └─► finalize_feedback_session
           └─► lesson inferred from full conversation
```
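Over MCP, that flow maps to three tool calls. The sketch below uses the MCP TypeScript SDK; only the tool names come from ThumbGate, while the argument shapes and the returned session id are assumptions:

```ts
// Multi-turn feedback capture over MCP. `client` is a connected
// @modelcontextprotocol/sdk Client; argument shapes are illustrative.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

async function fileDetailedFeedback(client: Client) {
  const opened = await client.callTool({
    name: "open_feedback_session",
    arguments: { signal: "down" },
  });
  // Assumption: the session id comes back in the tool's text content.
  const sessionId = (opened.content as any)[0].text;

  for (const note of [
    "you lied about deployment",
    "tests were actually failing",
  ]) {
    await client.callTool({
      name: "append_feedback_context",
      arguments: { sessionId, note },
    });
  }

  // Finalizing infers a lesson from the accumulated context.
  await client.callTool({
    name: "finalize_feedback_session",
    arguments: { sessionId },
  });
}
```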
Free and self-hosted users can invoke search_lessons directly through MCP, and via the CLI with npx thumbgate lessons. History-aware feedback sessions give the agent full context for each lesson.
Is ThumbGate a model fine-tuning tool? No. ThumbGate does not update model weights. It captures feedback, stores lessons, injects context at runtime, and blocks bad actions before they execute.
How is this different from CLAUDE.md or .cursorrules? Those are suggestions the agent can ignore. ThumbGate checks are enforced — they physically block the action before it runs. They also auto-generate from feedback instead of requiring manual writing.
Does it work with my agent? If it supports MCP or pre-action hooks, yes. Claude Code, Claude Desktop, Cursor, Codex, Gemini CLI, Amp, Cline, OpenCode all work out of the box.
Is it free? The free tier gives you 3 lifetime feedback captures and 1 auto-promoted prevention rule — enough to prove the enforcement loop works. MCP integrations ship free for every agent.
Pro ($19/mo or $149/yr) lifts those caps and adds history-aware lesson recall, lesson search, and a personal dashboard. Team ($49/seat/mo) adds a shared hosted lesson DB, org dashboard, and shared enforcement.
MIT. See LICENSE.
Add this to claude_desktop_config.json and restart Claude Desktop.
```json
{
  "mcpServers": {
    "thumbgate": {
      "command": "npx",
      "args": [
        "-y",
        "thumbgate"
      ]
    }
  }
}
```