Safe-upgrade advisor for OpenClaw. Detects current version, checks the deployment against a hand-curated catalog of version-specific known regressions, captures pre-upgrade snapshots, diffs them against post-upgrade state, and emits step-by-step upgrade + rollback guides.
MCP server for safe AI agent runtime upgrades — version-aware regression catalog, pre/post snapshot diffing, step-by-step upgrade + rollback guides. Captures deployment state before upgrade, re-runs detection checks after, and surfaces `new_failures` (caused by the upgrade) separately from `unchanged_failures` (pre-existing) and `recovered` (fixed by the upgrade). Read-only by design — never executes the upgrade itself; the operator retains full agency. v1.0 ships with the OpenClaw regression catalog (8 entries grounded in real field reports); the same machinery accepts a custom catalog for any AI runtime via Custom MCP Build adapters. Keywords: AI runtime upgrade, regression detection, safe deployment, version-specific bug catalog, AI agent ops.
Status: v1.2.0 · License: MIT · MCP · PyPI
Production AI runtime upgrades — OpenClaw, Claude Code, agent harnesses, runtime servers — carry recurring regressions that you only find after upgrading. Recent examples: 2026.4.24–.26 broke Discord-receive callbacks (`on_message` never fires); 2026.4.30+ surfaced OOM under sustained 200k-token contexts. The pattern: upgrade on Friday, hit a new failure mode on Tuesday, spend Wednesday and Thursday excavating release notes and field reports.

This MCP server moves the regression excavation upfront — before the upgrade, not after — and verifies the post-upgrade state by diffing against a snapshot you took beforehand. Read-only by design: it never executes the upgrade itself; the operator retains full agency.
> claude: should I upgrade my 2026.4.23 deployment?
[MCP tools: current_version + available_upgrades]
Current: 2026.4.23
Recommended target: 2026.5.2 (no CRITICAL regressions in path)
Available upgrades:
2026.4.24-.26 HIGH R-73421 Discord-receive breakage
2026.4.27 — clean
2026.4.30 HIGH R-OOM-DURING-LARGE-CONTEXT (unfixed)
2026.5.1-.2 HIGH R-OOM + R-LOG-ROTATION-DROP (unfixed)
> claude: walk me through upgrading to 2026.4.27.
[MCP tool: upgrade_guide]
2026.4.23 → 2026.4.27 — proceed with mitigations applied.
Applicable known regressions:
R-41372 (HIGH) — Cron --session web-search silent fail.
Mitigation: silentwatch-mcp covers detection until upgrade.
R-73421 (HIGH) — Discord-receive callbacks not firing.
Mitigation: `openclaw skill reload discord` after upgrade.
Pre-upgrade steps:
1. Capture pre-upgrade snapshot (call pre_upgrade_snapshot)
2. Verify backups: cp -r ~/.openclaw ~/.openclaw.backup-$(date +%Y%m%d)
Upgrade steps:
1. openclaw gateway stop
2. openclaw upgrade --to 2026.4.27
3. openclaw gateway start
Post-upgrade steps:
1. Run post_upgrade_verify(snapshot_id=<your-pre-upgrade-id>)
2. openclaw skill reload discord (R-73421 mitigation)
Rollback steps: stop → openclaw upgrade --to 2026.4.23 → restore backup → start.
Confidence: Path includes 2 HIGH regressions but no CRITICAL.
> claude: I just upgraded. Verify it.
[MCP tool: post_upgrade_verify(pre_snapshot_id="snap-...")]
Upgrade 2026.4.23 → 2026.4.27: SUCCESS.
0 new failures, 1 recovered (skills.discord_receive_registered),
0 unchanged failures.
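The classification in that verification report is set arithmetic over per-check pass/fail state. A minimal sketch, assuming snapshots reduce to a `check_id → passed` mapping (the check names and snapshot shape here are illustrative, not the server's actual types):

```python
def diff_snapshots(pre: dict[str, bool], post: dict[str, bool]) -> dict[str, list[str]]:
    """Classify each check by its pre/post pass state.

    pre/post map check_id -> passed (True/False).
    """
    checks = pre.keys() & post.keys()
    return {
        # failed now, passed before: the upgrade broke it
        "new_failures": sorted(c for c in checks if pre[c] and not post[c]),
        # failed both before and after: pre-existing, not the upgrade's fault
        "unchanged_failures": sorted(c for c in checks if not pre[c] and not post[c]),
        # failed before, passes now: the upgrade fixed it
        "recovered": sorted(c for c in checks if not pre[c] and post[c]),
    }

pre = {"gateway.port_open": True, "skills.discord_receive_registered": False}
post = {"gateway.port_open": True, "skills.discord_receive_registered": True}
result = diff_snapshots(pre, post)
# mirrors the transcript: 0 new failures, 1 recovered, 0 unchanged failures
```

Checks present in only one snapshot (e.g. added by the new version) are excluded from the diff rather than guessed at.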
v1.2 — provider-side regression detection. The other half of the upgrade-safety story: the hosted LLM provider silently changes model behavior with no upgrade event on your side. Source: Anthropic's April 23, 2026 post-mortem, which admitted silently changing Claude Code's default reasoning effort for five weeks (Mar 4 → Apr 7) without notification. Verbatim from a Phoenix user asking for the feature in their community discussions (#10442): "Does Phoenix have a way to detect this kind of silent drift where surface metrics look healthy but the model is actually failing?" This server now does.
> claude: has Anthropic regressed something on their end in the last hour?
[MCP tool: detect_provider_regression(provider="anthropic")]
Severity: CRITICAL
provider: anthropic
current_window_hours: 1 sample_count: 50
baseline_window_hours: 168 sample_count: 1000
Alerts:
[CRITICAL] latency_p95: 3,200ms vs 1,500ms baseline (+113%)
[HIGH] latency_median: 1,500ms vs 800ms baseline (+87%)
[MEDIUM] response_length_median: 350 vs 800 (-56%)
Summary: anthropic: 3 alerts — worst is CRITICAL on latency_p95:
latency_p95 is 113% higher than baseline (3200 vs 1500) — likely regression
> claude: capture the next 100 calls so I can see the fingerprint over time.
[MCP tool: record_provider_call (called by your LLM-client shim, once per response)]
After enough calls accumulate:
[MCP tool: get_provider_fingerprint(provider="anthropic", window_hours=24)]
provider: anthropic
window_hours: 24 sample_count: 240
fingerprint:
call_count: 240
median_latency_ms: 850
p95_latency_ms: 1620
median_response_length_tokens: 760
distinct_models: ["claude-sonnet-4-7"]
most_common_model_version: "claude-sonnet-4-7-20260301"
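A shim like the one mentioned above only needs to time the call and forward one observation per response. A minimal sketch — `mcp_call_tool` and `call_llm` are hypothetical placeholders for your MCP client and LLM client, and the response-dict keys are assumptions about your client's return shape:

```python
import time

def record_call(mcp_call_tool, provider: str, call_llm, *args, **kwargs):
    """Wrap one LLM call and forward an observation to record_provider_call.

    mcp_call_tool(tool_name, arguments) and call_llm(...) are placeholders;
    the observation fields mirror the fingerprint shown above.
    """
    start = time.monotonic()
    response = call_llm(*args, **kwargs)
    latency_ms = int((time.monotonic() - start) * 1000)
    mcp_call_tool("record_provider_call", {
        "provider": provider,
        "latency_ms": latency_ms,
        "response_length_tokens": response.get("output_tokens", 0),
        "model_version": response.get("model", "unknown"),
    })
    return response
```

Because the shim returns the response untouched, it can wrap an existing call site without changing downstream code.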
openclaw-upgrade-orchestrator-mcp

Three things existing tools (vendor changelogs, internal runbooks, generic CI/CD orchestrators) don't do:
Catalog-grounded regression awareness. A generic upgrade tool tells you the version exists. This server tells you which versions have known issues, which fix versions remediate them, and which mitigations apply if you have to use the affected version.
Pre/post snapshot diffing tied to the catalog. The same checks run before + after the upgrade. The diff highlights new_failures (caused by the upgrade) separately from unchanged_failures (pre-existing) and recovered (fixed by the upgrade). No more "did this break in 2026.4.27 or was it already broken?"
Read-only by design. Never runs openclaw upgrade --to ... for you. Never modifies state. Operators retain full agency over the actual upgrade — this server gives them the information to make the decision, then verifies it after they execute.
Built for the production-AI operator who owns OpenClaw deployments and has been through enough upgrade-day fire drills.
| Tool | What it returns |
|---|---|
| `current_version` | Currently-installed version + detection method |
| `available_upgrades` | Newer versions with regression-count flags + recommended target |
| `pre_upgrade_snapshot` | Captures every check's pass/fail state, persists with `snapshot_id` |
| `upgrade_guide` | Step-by-step plan: pre / upgrade / post / rollback steps + applicable regressions + confidence note |
| `post_upgrade_verify` | Diff post-upgrade against a stored pre-upgrade snapshot — `new_failures` / `recovered` / `unchanged` |
| `rollback_guide` | Recovery plan for a given snapshot — downgrade command + state-restore steps + risk note |
| `regression_catalog` | Full known-regression catalog, optionally filtered to one version |
| `list_snapshots` | All stored snapshots (id + version + summary) |
| `record_provider_call` (v1.2) | Append a single provider API call observation to the fingerprint history |
| `get_provider_fingerprint` (v1.2) | Aggregate fingerprint over a window — call count, latency p50/p95, response-length distribution, distinct models, most-common headers |
| `detect_provider_regression` (v1.2) | Compare current vs baseline window; flag distribution shifts with severity |
Resources:
- `upgrade://current` — current version info
- `upgrade://snapshots` — every stored snapshot
- `upgrade://catalog` — full regression catalog
- `upgrade://provider-fingerprint` (v1.2) — current Anthropic 1-hour fingerprint

Prompts:
- `plan-upgrade(target_version)` — walks through the upgrade decision
- `verify-upgrade(pre_snapshot_id)` — walks through post-upgrade verification
- `diagnose-provider-regression(provider)` (v1.2) — walks through a no-user-upgrade-event regression

Install: `pip install openclaw-upgrade-orchestrator-mcp`
{
"mcpServers": {
"openclaw-upgrade": {
"command": "python",
"args": ["-m", "openclaw_upgrade_orchestrator_mcp"],
"env": {
"OPENCLAW_UPGRADE_BACKEND": "mock"
}
}
}
}
| Backend | Status | Description |
|---|---|---|
| `mock` | ✅ v1.0 | 2026.4.23 deployment with active R-73421 Discord-receive breakage; in-memory snapshots; suitable for protocol verification + bundle demos. v1.2: also pre-populates a synthetic 7d baseline + last-hour regression burst on Anthropic so `detect_provider_regression` returns CRITICAL out of the box |
| `openclaw-system` | ✅ v1.0 | Reads `~/.openclaw/version` + `~/.openclaw/gateway.yaml`; persists snapshots as JSON in `~/.openclaw/upgrades/snapshots/`. Override via `OPENCLAW_VERSION_FILE`, `OPENCLAW_GATEWAY_CONFIG`, `OPENCLAW_UPGRADE_SNAPSHOT_DIR`. v1.2: also reads/writes provider-call records as JSONL in `~/.openclaw/upgrades/provider-calls.jsonl`. Override via `OPENCLAW_PROVIDER_CALLS_FILE` |
8 hand-curated entries covering documented OpenClaw regressions:
- R-41372-CRON-WEB-SEARCH-SILENT-FAIL (HIGH, 2026.4.20–2026.5.1)
- R-63002-POST-UPGRADE-CPU-SPIKE (CRITICAL, 2026.4.8–2026.4.10)
- R-73421-DISCORD-RECEIVE-BREAKAGE (HIGH, 2026.4.23–2026.4.27)
- R-GATEWAY-PORT-CONFLICT-2026.4.15 (MEDIUM, 2026.4.15–2026.4.18)
- R-OOM-DURING-LARGE-CONTEXT-2026.4.30 (HIGH, 2026.4.30–unfixed)
- R-STATUS-RECONCILIATION-DRIFT-2026.4.5 (LOW, 2026.4.5–2026.4.10)
- R-CLAWHUB-CACHE-POISONING-2026.3.28 (HIGH, 2026.3.28–2026.4.2)
- R-LOG-ROTATION-DROP-2026.5.1 (MEDIUM, 2026.5.1–unfixed)

Use `regression_catalog` for the full, queryable list.
available_upgrades flags every version reachable from current and computes a recommended_target:
For each available version V > current:
applicable_regressions = regressions_in_path(current, V)
has_known_critical = any(r.severity == CRITICAL for r in applicable_regressions)
recommended_target = highest V with has_known_critical == False
regressions_in_path(current, target) includes a regression if its affected-version range covers either endpoint — the current version or the target version.
OpenClaw upgrades atomically (no execution on intermediate versions), so a regression strictly between current and target without affecting either endpoint is NOT included. This avoids over-conservative recommendations.
| Version | Scope | Status |
|---|---|---|
| v1.0 | mock + openclaw-system backends, 8 tools / 3 resources / 2 prompts, 8-entry regression catalog, 6 detection checks, GitHub Actions CI matrix, PyPI Trusted Publishing | ✅ |
| v1.2 | Provider-side regression detection — `ProviderCallRecord` data model, 3 new tools (`record_provider_call`, `get_provider_fingerprint`, `detect_provider_regression`), `upgrade://provider-fingerprint` resource, `diagnose-provider-regression` prompt. Detects passive distribution shifts in latency / response shape / model version when the provider silently changes things on their end. Folded in from research-pass-3 P08 candidate after incumbent validation against Phoenix + Langfuse + Galileo | ✅ |
| v1.3 | Catalog auto-fetch from upstream changelog feed; richer detection checks tied to OpenClaw's `/healthz` endpoint; multi-step upgrade pathing | ⏳ |
| v1.4 | Custom catalog packs (operator can ship internal-only regression entries alongside the canonical catalog); rule overrides | ⏳ |
| v1.x | Webhook emit on detected regression; integration with CI to gate merges of OpenClaw-version bumps | ⏳ |
If your AI deployment uses a different runtime (custom agent harness, internal fork of OpenClaw, vendor-locked deployment) and you want the same regression-aware upgrade discipline, that's a Custom MCP Build engagement.
| Tier | Scope | Investment | Timeline |
|---|---|---|---|
| Simple | Single backend adapter for your existing version-source | $8,000–$12,000 | 1–2 weeks |
| Standard | Custom backend + custom regression catalog (initial 10-15 entries from your incident history) + integration with your alerting | $15,000–$25,000 | 2–4 weeks |
| Complex | Multi-deployment fleet view + auto-catalog ingestion from internal changelog + per-environment recommendation tuning | $30,000–$45,000 | 4–8 weeks |
To engage: email [email protected] with subject Custom MCP Build inquiry — upgrade orchestration.

This server is part of a production-AI infrastructure MCP suite — companion to silentwatch-mcp, openclaw-health-mcp, openclaw-cost-tracker-mcp, and openclaw-skill-vetter-mcp. Install all five for full operational visibility.
If you're running production AI and want an outside practitioner to score readiness, find the failure patterns already present (upgrade regression cycles being one of the most damaging), and write the corrective-action plan:
| Tier | Scope | Investment | Timeline |
|---|---|---|---|
| Audit Lite | One system, top-5 findings, written report | $1,500 | 1 week |
| Audit Standard | Full audit, all 14 patterns, 5 Cs findings, 90-day follow-up | $3,000 | 2–3 weeks |
| Audit + Workshop | Standard audit + 2-day team workshop + first monthly audit included | $7,500 | 3–4 weeks |
Same email channel: [email protected] with subject AI audit inquiry.
PRs welcome. Detection checks are pluggable — see src/openclaw_upgrade_orchestrator_mcp/checks/__init__.py for the contract.
To add a check:
1. Implement `def run(state: DeploymentState) -> CheckResult` in the checks module
2. Register it in `CHECKS: dict[str, callable]`
3. Reference its `check_id` from a regression's `detection_check_id` in `catalog.py`
4. Add a test in `tests/test_checks.py`

To add a backend:
1. Subclass `UpgradeBackend` in `backends/<your_backend>.py`
2. Implement `collect_state`, `save_snapshot`, `load_snapshot`, `list_snapshots`
3. Register it in `backends/__init__.py`
4. Add tests in `tests/test_backends.py`

To add a regression entry:
1. Append to `CATALOG` in `catalog.py` with a stable `regression_id`
2. Point it at an existing `detection_check_id` (or set to `None` for advisory-only)
3. Add a test in `tests/test_catalog.py`

Bug reports + feature requests: open a GitHub issue.
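A check following that contract might look like the sketch below. The `DeploymentState` and `CheckResult` stand-ins, the config keys, and the `check_id` are illustrative assumptions — consult the actual types in the checks module:

```python
from dataclasses import dataclass

# Illustrative stand-ins for the real types exported by the checks module.
@dataclass
class DeploymentState:
    gateway_config: dict

@dataclass
class CheckResult:
    check_id: str
    passed: bool
    detail: str

def run(state: DeploymentState) -> CheckResult:
    """Example check: gateway log rotation is configured.

    A check like this could back a catalog entry such as R-LOG-ROTATION-DROP.
    """
    rotation = state.gateway_config.get("log_rotation", {})
    ok = bool(rotation.get("enabled")) and rotation.get("max_files", 0) > 0
    return CheckResult(
        check_id="gateway.log_rotation_configured",
        passed=ok,
        detail="enabled" if ok else f"log_rotation config: {rotation or 'missing'}",
    )
```

Checks stay side-effect-free: they read `DeploymentState` and return a result, which is what lets the same check run identically before and after an upgrade.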
MIT — see LICENSE.
Discount code LAUNCH50 applies for the first 30 days.

Built by Temur Khan — independent practitioner on production AI systems. Contact: [email protected]