Self-hosted memory and governance layer for AI coding agents. 28 MCP tools with structured knowledge capture, hybrid search (semantic + BM25 + cross-encoder reranking), behavioral documentation nudges, cold-start codebase analyzer, and git-native storage. Single Docker container, zero cloud dependencies.
Flaiwheel MCP server — available on Glama.
Self-hosted memory & governance layer for AI coding agents. Turn every bug fix into permanent knowledge. Zero cloud. Zero lock-in.
AI coding agents forget everything between sessions. That leads to repeated bugs, lost architectural decisions, and knowledge decay.
Flaiwheel ensures:
Every bug fixed makes the next bug cheaper.
It does not replace your AI assistant. It makes it reliable at scale.
📄 Whitepaper (PDF) — Vision, architecture, and design in depth.
Flaiwheel is a self-contained Docker service that operates on three levels:
Pull — agents search before they code (search_docs, get_file_context)
Push — agents document as they work (write_bugfix_summary, write_architecture_doc, …)
Capture — git commits auto-capture knowledge via a post-commit hook, even without an AI agent
Key capabilities:

- Indexes 9 file formats (.md, .pdf, .html, .docx, .rst, .txt, .json, .yaml, .csv) into a vector database
- get_file_context(filename) — pre-loads spatial knowledge for any file the agent is about to edit (complements get_recent_sessions for full temporal + spatial context)
- Auto-captures every fix:, feat:, refactor:, perf:, and docs: commit as a structured knowledge doc
- Structured test scenarios (Given, When, Then) for QA automation
- validate_doc() checks freeform markdown before it enters the knowledge base
- /api/impact-metrics computes estimated time saved + regressions avoided; CI pipelines can post guardrail outcomes to /api/telemetry/ci-guardrail-report
- analyze_codebase(path) scans a source code directory entirely server-side (zero tokens, zero cloud). It uses Python's built-in ast module for Python, regex for TypeScript/JavaScript, and the existing MiniLM embedding model for classification and duplicate detection. Returns a single bootstrap_report.md with language distribution, a category map, the top 20 files to document first (ranked by documentability score), duplicate pairs, and coverage gaps. Reduces cold-start token cost by ~90% on legacy codebases.

Recent fixes:

- AuthManager no longer crashes on a read-only /data before the MCP server can start (the real reason Glama saw 0 tools); it is skipped in stdio cold-start mode.
- Every print() in the watcher, indexer, readers, and bootstrap was replaced with diag() (stderr). Verified: a full MCP handshake returns all 28 tools over stdio.
- config.save() is now resilient — a read-only filesystem logs a warning instead of crashing.
- LICENSE file (BSL 1.1) added for correct GitHub/Glama detection; all docs and headers point to LICENSE (not LICENSE.md).
- [inspect] deps and cold-start stdio path for lightweight MCP directory builds.

Claude skill:

Copy .skills/skills/flaiwheel/SKILL.md to your project. When you open the project in Claude (Cowork), the skill is auto-available — no extra setup needed. The skill drives session-start context restore, pre-coding knowledge search, mandatory post-bugfix documentation, and session-end summarisation. See skills/flaiwheel/SKILL.md in this repo for reference and manual install.

WSL2 / Linux installer hardening:

- Switches iptables to the legacy backend (fixes Docker networking / DNAT errors). The iptables-nft backend is not supported, so the installer switches to iptables-legacy via update-alternatives before starting Docker.
- Adds the current user to the docker group (no more permission denied).
- Uses service instead of systemctl. WSL2 has no systemd, so systemctl start docker silently failed; the installer now detects WSL2 via /proc/version and uses sudo service docker start instead. If Docker still isn't running after install, a clear WSL2-specific error shows the exact fix command and a tip to add it to ~/.bashrc for auto-start on login.
- Adds an auto-start entry to ~/.bashrc (idempotent, runs on every WSL2 login).
- python3 is now checked as prerequisite #0 and auto-installed via apt/dnf/yum/pacman/brew if missing. The installer uses python3 extensively for JSON manipulation; on minimal Linux/WSL2 systems without it, config file writes silently failed (/dev/fd/63: line N: python3: command not found).
- Every displayed install/re-run command throughout the script (error messages, AGENTS.md, Cursor rules, etc.) now uses bash <(curl ...) process substitution to avoid WSL2 pipe issues. curl | bash can fail with curl: (23) Failure writing output on WSL2 due to pipe/tmp permission issues; the primary install command in the README is now bash <(curl ...), which avoids the pipe entirely, and the re-exec block tries $HOME as a fallback temp dir when /tmp writes fail. The error message explicitly recommends the bash <(curl ...) form.
- When sudo curl | bash was used, the curl: (23) pipe error truncated the script before the previous sudo guard (which sat after colors/functions) was ever reached. The guard is now the very first executable line (set -euo pipefail aside), so it fires even on a truncated download; the duplicate guard after colors was removed.
- After service docker start, the installer polls docker info every 2 seconds for up to 30 seconds and shows the actual output of service docker start, so startup errors are visible instead of silently swallowed.
- Running the installer as root via sudo curl | bash or sudo bash install.sh breaks GitHub CLI authentication: gh auth stores credentials in /root/.config/gh/ instead of the real user's home, making every subsequent gh call fail (it also caused curl: (23) pipe errors on WSL). The installer now detects SUDO_USER at startup and exits immediately with a clear message to re-run without sudo; privilege escalation for package installs is handled internally.
- gh auth login must not be run with sudo. After auto-installing gh on Linux/WSL, the installer explicitly tells the user to run gh auth login without sudo; if auth was previously done with sudo, credentials ended up in /root/.config/gh/ and were invisible to the current user, causing the auth check to fail. The error messages at both the post-install and auth-check steps now warn clearly: do not use sudo for gh auth.
- Package managers (apt-get, dnf, yum, zypper, pacman), the Docker convenience script, and systemctl calls now automatically use sudo when the installer is not running as root. Root installs are unaffected. Fixes permission denied / lock-file errors on WSL and for standard Linux desktop users.

Cold-start analyzer:

- analyze_codebase(path) — new 28th MCP tool for zero-token cold-start analysis of legacy codebases. Runs entirely server-side in Docker using Python ast, regex, MiniLM embeddings, and nearest-centroid classification, and returns a ranked bootstrap_report.md with language distribution, category map, top 20 files by documentability score, near-duplicate pairs, and recommended next steps. Reduces cold-start token cost by ~90%.
- The report is cached to /data/coldstart-<project>.md after the first run; subsequent calls return it instantly (<1s). The installer also writes the cache during install so the very first MCP call by any agent is instant. Call with force=True to regenerate after major codebase changes.
- analyze_codebase() is now step 3 of Session Setup in all agent templates — AGENTS.md, .cursor/rules/flaiwheel.mdc, CLAUDE.md, and .github/copilot-instructions.md — so agents automatically get the codebase overview before starting work.
- Broken HTTP calls to the MCP SSE endpoint were replaced with a direct docker exec python3 invocation; analysis now works reliably in ~20s.
- Answering y always re-runs the analysis, even when a cached report exists.
- _run_coldstart/_do_coldstart_analysis moved to the top of the script so the fast-path can call them; _run_coldstart() is called from fast-path, update, and fresh install, with smart cache detection.
- LATEST_VERSION now uses _FW_VERSION directly (no CDN fetch) and is fetched from the main branch, so stale cached installers no longer silently skip updates.
- The install.sh cold-start question is asked right after the embedding model selection (before the Docker rebuild), so all interactive questions are gathered first and the user never misses the prompt after a long rebuild.
- The installer waits up to 90s for analyze_codebase() after the container starts.

Other changes:

- Docs can be indexed on review via the reindex() MCP tool, keeping the vector DB clean until the repo has been reviewed.
- VS Code / Copilot setup via .vscode/mcp.json and .github/copilot-instructions.md; Claude Desktop via mcp-remote.
- search_bugfixes calls no longer inflate the miss rate above 100%.
- _path_category_hint now uses a unified token-based approach across all categories.
- CHANGELOG.md added to the repo root.

Prerequisites: GitHub CLI authenticated (gh auth login), Docker running.
Platform support: macOS and Linux work out of the box. On Windows, run the installer from WSL or Git Bash (Docker Desktop must be running with WSL 2 backend enabled).
Run this from inside your project directory:
bash <(curl -sSL https://raw.githubusercontent.com/dl4rce/flaiwheel/main/scripts/install.sh)
WSL2 / Linux note: Use the bash <(curl ...) form above — it avoids curl: (23) pipe write errors that occur with curl | bash on some WSL2 setups. Never prefix with sudo.
That's it. The installer automatically:
- Creates a <project>-knowledge repo with the standard folder structure
- Configures Cursor: .cursor/mcp.json and .cursor/rules/flaiwheel.mdc
- Configures VS Code / Copilot: .vscode/mcp.json (native SSE, VS Code 1.99+) and .github/copilot-instructions.md
- Configures Claude Desktop: claude_desktop_config.json via the mcp-remote bridge (requires Node.js)
- Configures Claude Code: mcp.json + CLAUDE.md, and runs claude mcp add automatically if the CLI is on PATH
- Installs .skills/skills/flaiwheel/SKILL.md so the full Flaiwheel workflow is available as a native Claude skill
- Writes AGENTS.md for all other agents
- If existing .md docs are found, creates a migration guide — the AI will offer to organize them into the knowledge repo

After install:
| Agent | What to do |
|---|---|
| Cursor | Restart Cursor → Settings → MCP → enable flaiwheel toggle |
| Claude Desktop (macOS app) | Quit and reopen Claude for Mac — hammer icon appears when connected |
| Claude Code CLI | Already registered automatically — run /mcp inside Claude Code to verify |
| VS Code | Open project → Command Palette → MCP: List Servers → start flaiwheel |
| Claude (Cowork) | Skill auto-loads from .skills/skills/flaiwheel/SKILL.md — no further action needed |
The installer also sets up a post-commit git hook that automatically captures every fix:, feat:, refactor:, perf:, and docs: commit as a structured knowledge doc — no agent or manual action required.
Once connected, the AI has access to all Flaiwheel tools. If you have existing docs, tell the AI: "migrate docs".
If you also use Open WebUI's Open Terminal integration, this repo includes helper installers for a local open-terminal daemon.
Third-party write-ups (for example AI·Collab — Open Terminal) may mirror only the Linux script; macOS uses scripts/macos/install-open-terminal-launchagent.sh below. After any mirror update, re-check the file with shasum -a 256 against the same revision on GitHub.
Install via curl (no git clone). Use main or pin a commit SHA / tag in the URL for reproducible bytes.
Linux / WSL2 (systemd --user):
curl -fsSL -o install-open-terminal-systemd-user.sh \
https://raw.githubusercontent.com/dl4rce/flaiwheel/main/scripts/install-open-terminal-systemd-user.sh
/bin/chmod +x install-open-terminal-systemd-user.sh
/bin/bash ./install-open-terminal-systemd-user.sh
macOS (LaunchAgent; do not use sudo):
curl -fsSL -o install-open-terminal-launchagent.sh \
https://raw.githubusercontent.com/dl4rce/flaiwheel/main/scripts/macos/install-open-terminal-launchagent.sh
/bin/chmod +x install-open-terminal-launchagent.sh
/bin/bash ./install-open-terminal-launchagent.sh
If chmod or bash are “not found”, your PATH is broken (often Conda base); the /bin/… paths above still work.
Linux / WSL2 (systemd --user): ./scripts/install-open-terminal-systemd-user.sh installs the com.flaiwheel.open-terminal-local.service user unit, which serves http://localhost:8000. On WSL2, make sure systemd=true is enabled in /etc/wsl.conf.

Useful commands:
systemctl --user status com.flaiwheel.open-terminal-local.service
journalctl --user -u com.flaiwheel.open-terminal-local.service -f
macOS (launchctl LaunchAgent): ./scripts/macos/install-open-terminal-launchagent.sh installs the com.flaiwheel.open-terminal-local agent, which serves http://localhost:8000. The working directory defaults to $HOME so Open Terminal does not start in /private/tmp.

launchd does not load ~/.zshrc, so the daemon used to see only /usr/bin:/bin:… and miss Homebrew / Supabase CLI. The generated wrapper prepends /opt/homebrew/bin, /usr/local/bin, ~/.local/bin, and ~/.npm-global/bin. Re-run the installer (menu → 1 Update) after pulling this change so the wrapper is regenerated.

The initial working directory is saved in ~/.config/flaiwheel/open-terminal-working-directory (fresh-install prompt, or menu → 5 when re-running the script). Update (menu → 1) keeps using that saved path. One-off override: set OPEN_TERMINAL_WORKING_DIRECTORY for that run only.

Environment overrides (both scripts):
HOST=127.0.0.1 PORT=8000 OPEN_TERMINAL_CORS_ALLOWED_ORIGINS='https://your-openwebui.example' ./scripts/install-open-terminal-systemd-user.sh
HOST=127.0.0.1 PORT=8000 OPEN_TERMINAL_CORS_ALLOWED_ORIGINS='https://your-openwebui.example' ./scripts/macos/install-open-terminal-launchagent.sh
macOS only — custom initial folder for Open Terminal (must exist before you save it):
OPEN_TERMINAL_WORKING_DIRECTORY="$HOME/projects/my-repo" ./scripts/macos/install-open-terminal-launchagent.sh
Re-run the same script and choose 5 to change or clear the saved folder (or edit ~/.config/flaiwheel/open-terminal-working-directory). Non-interactive install: set the env var above or create that file with a single line (path); use AUTO_INSTALL_DEPS=1 to skip the first-run path prompt.
Run the same install command again from your project directory:
bash <(curl -sSL https://raw.githubusercontent.com/dl4rce/flaiwheel/main/scripts/install.sh)
The installer detects the existing container, asks for confirmation, then:
Your knowledge base, index, and credentials are preserved — only the code is updated.
# On GitHub, create: <your-project>-knowledge (private repo)
mkdir -p architecture api bugfix-log best-practices setup changelog
echo "# Project Knowledge Base" > README.md
git add -A && git commit -m "init" && git push
git clone https://github.com/dl4rce/flaiwheel.git /tmp/flaiwheel-build
docker build -t flaiwheel:latest /tmp/flaiwheel-build
docker run -d \
--name flaiwheel \
-p 8080:8080 \
-p 8081:8081 \
-e MCP_GIT_REPO_URL=https://github.com/you/yourproject-knowledge.git \
-e MCP_GIT_TOKEN=ghp_your_token \
-v flaiwheel-data:/data \
flaiwheel:latest
Cursor — add to .cursor/mcp.json:
{
"mcpServers": {
"flaiwheel": {
"type": "sse",
"url": "http://localhost:8081/sse"
}
}
}
VS Code / GitHub Copilot (1.99+) — add to .vscode/mcp.json:
{
"servers": {
"flaiwheel": {
"type": "sse",
"url": "http://localhost:8081/sse"
}
}
}
Then: Command Palette → MCP: List Servers → start flaiwheel.
Claude Desktop (macOS app) — add to ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"flaiwheel": {
"command": "npx",
"args": ["-y", "mcp-remote", "http://localhost:8081/sse"]
}
}
}
Requires Node.js. Restart Claude for Mac after editing.
Claude Code CLI — run once in your project directory:
claude mcp add --transport sse --scope project flaiwheel http://localhost:8081/sse
yourproject-knowledge/
├── README.md ← overview / index
├── architecture/ ← system design, decisions, diagrams
├── api/ ← endpoint docs, contracts, schemas
├── bugfix-log/ ← auto-generated bugfix summaries
│ └── 2026-02-25-fix-payment-retry.md
├── best-practices/ ← coding standards, patterns
├── setup/ ← deployment, environment setup
├── changelog/ ← release notes
└── tests/ ← test cases, scenarios, regression patterns
Flaiwheel indexes 9 file formats. All non-markdown files are converted to markdown-like text in memory at index time — no generated files on disk, no repo clutter.
| Format | Extension(s) | How it works |
|---|---|---|
| Markdown | .md | Native (pass-through) |
| Plain text | .txt | Wrapped in # filename heading |
| PDF | .pdf | Text extracted per page via pypdf |
| HTML | .html, .htm | Headings/lists/code converted to markdown, scripts stripped |
| reStructuredText | .rst | Heading underlines converted to # levels, code blocks preserved |
| Word | .docx | Paragraphs + heading styles mapped to markdown |
| JSON | .json | Pretty-printed in fenced json code block |
| YAML | .yaml, .yml | Wrapped in fenced yaml code block |
| CSV | .csv | Converted to markdown table |
Quality checks (structure, completeness, bugfix format) apply only to .md files. Other formats are indexed as-is.
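The CSV conversion in the table above is the simplest reader to picture. Flaiwheel's actual reader code is not shown here; a minimal sketch of the same idea — CSV text in, markdown table out, nothing written to disk — could look like:

```python
import csv
import io

def csv_to_markdown(raw: str) -> str:
    """Convert CSV text to a markdown table: header row, separator, then data rows."""
    rows = list(csv.reader(io.StringIO(raw)))
    header, *data = rows
    out = ["| " + " | ".join(header) + " |",
           "|" + "---|" * len(header)]
    out += ["| " + " | ".join(r) + " |" for r in data]
    return "\n".join(out)

print(csv_to_markdown("name,role\nada,engineer\n"))
```

Because the conversion happens in memory at index time, the repo itself only ever contains the original .csv file.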
All config via environment variables (MCP_ prefix), Web UI (http://localhost:8080), or .env file.
| Variable | Default | Description |
|---|---|---|
| MCP_DOCS_PATH | /docs | Path to .md files inside container |
| MCP_EMBEDDING_PROVIDER | local | local (free, private) or openai |
| MCP_EMBEDDING_MODEL | all-MiniLM-L6-v2 | Embedding model name |
| MCP_CHUNK_STRATEGY | heading | heading, fixed, or hybrid |
| MCP_RERANKER_ENABLED | false | Enable cross-encoder reranker for higher precision |
| MCP_RERANKER_MODEL | cross-encoder/ms-marco-MiniLM-L-6-v2 | Reranker model name |
| MCP_RRF_K | 60 | RRF k parameter (lower = more weight on top ranks) |
| MCP_RRF_VECTOR_WEIGHT | 1.0 | Vector search weight in RRF fusion |
| MCP_RRF_BM25_WEIGHT | 1.0 | BM25 keyword search weight in RRF fusion |
| MCP_MIN_RELEVANCE | 0 | Minimum relevance % to return (0 = no filter) |
| MCP_GIT_REPO_URL | | Knowledge repo URL (enables git sync) |
| MCP_GIT_BRANCH | main | Branch to sync |
| MCP_GIT_TOKEN | | GitHub token for private repos |
| MCP_GIT_SYNC_INTERVAL | 300 | Pull interval in seconds (0 = disabled) |
| MCP_GIT_AUTO_PUSH | true | Auto-commit + push bugfix summaries |
| MCP_WEBHOOK_SECRET | | GitHub webhook secret (enables /webhook/github HMAC verification) |
| MCP_TRANSPORT | sse | MCP transport: sse or stdio |
| MCP_SSE_PORT | 8081 | MCP SSE endpoint port |
| MCP_WEB_PORT | 8080 | Web UI port |
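The default MCP_CHUNK_STRATEGY=heading splits each document at its markdown headings so every chunk covers one coherent section. Flaiwheel's actual chunker is not shown here; the core idea can be sketched as:

```python
import re

def chunk_by_heading(markdown: str) -> list[str]:
    """Split a markdown doc at headings so each chunk is one self-contained section."""
    chunks: list[str] = []
    current: list[str] = []
    for line in markdown.splitlines():
        # Start a new chunk at every heading (# through ######)
        if re.match(r"^#{1,6} ", line) and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

doc = "# A\ntext a\n## B\ntext b\n"
print(len(chunk_by_heading(doc)))  # → 2
```

A fixed strategy would instead cut at a constant token/character budget; hybrid presumably combines both (heading boundaries with a size cap), though that detail is an assumption here.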
A single Flaiwheel container can manage multiple knowledge repositories — one per project. Each project gets its own ChromaDB collection, git watcher, index lock, health tracker, and quality checker, while sharing one embedding model in RAM and one MCP/Web endpoint.
How it works:
- The first install.sh run creates the Flaiwheel container with project A
- Further install.sh runs from other project directories detect the running container and register the new project via the API — no additional containers
- Every tool accepts a project parameter (e.g., search_docs("query", project="my-app"))
- Call set_project("my-app") at the start of every conversation to bind all subsequent calls to that project (sticky session)
- Without an explicit project parameter, the active project (set via set_project) is used; if none is set, the first project is used
- Call list_projects() via MCP to see all registered projects (shows active marker)

Adding/removing projects:
- MCP: setup_project(name="my-app", git_repo_url="...") — registers, clones, indexes, and auto-binds
- CLI: run install.sh from a new project directory (auto-registers)
- API: POST /api/projects with {name, git_repo_url, git_branch, git_token}
- Remove: DELETE /api/projects/{name} or the "Remove" button in the Web UI

Backward compatibility: existing single-project setups continue to work without changes. If no projects.json exists but MCP_GIT_REPO_URL is set, Flaiwheel auto-creates a single project from the env vars.
When you change the embedding model via the Web UI, Flaiwheel re-embeds all documents in the background using a shadow collection. Search remains fully available on the old model while the migration runs. Once complete, the new index atomically replaces the old one — zero downtime.
The Web UI shows a live progress bar with file count and percentage. You can cancel at any time.
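The shadow-collection swap boils down to a classic zero-downtime pattern: build the new index on the side while reads keep hitting the old one, then replace a single reference. A minimal sketch (the Index/ShadowMigrator names and the elided embedding call are illustrative, not Flaiwheel's actual classes):

```python
class Index:
    """Minimal stand-in for a vector collection tied to one embedding model."""

    def __init__(self, model: str, docs: list[str]):
        self.model, self.docs = model, docs

class ShadowMigrator:
    """Build a shadow index in the background, then swap it in atomically."""

    def __init__(self, live: Index):
        self.live = live

    def migrate(self, new_model: str) -> None:
        shadow = Index(new_model, [])
        for doc in self.live.docs:
            # Re-embed each doc with the new model (embedding call elided here);
            # searches keep hitting self.live the whole time.
            shadow.docs.append(doc)
        # Atomic swap: a single reference assignment, so readers never see a half-built index.
        self.live = shadow

idx = ShadowMigrator(Index("all-MiniLM-L6-v2", ["doc1", "doc2"]))
idx.migrate("BAAI/bge-m3")
print(idx.live.model, len(idx.live.docs))  # → BAAI/bge-m3 2
```

Cancelling mid-migration is cheap in this scheme: the shadow is simply discarded and the live index was never touched.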
| Model | RAM | Quality | Best for |
|---|---|---|---|
| all-MiniLM-L6-v2 | 90MB | 78% | Large repos, low RAM |
| nomic-ai/nomic-embed-text-v1.5 | 520MB | 87% | Best English quality |
| BAAI/bge-m3 | 2.2GB | 86% | Multilingual (DE/EN) |
Select via Web UI or MCP_EMBEDDING_MODEL env var. Full list in the Web UI.
The reranker is a second-stage model that rescores the top candidates from hybrid search. It reads the full (query, document) pair together, which produces much more accurate relevance scores than independent embeddings — especially for vocabulary-mismatch queries where the user and the document use different words for the same concept.
How it works:
- Hybrid search first fetches an expanded candidate pool (top_k × 5)
- The cross-encoder rescores every (query, document) pair
- The best top_k results are returned

Enable via Web UI (Search & Retrieval card) or environment variable:
docker run -d \
-e MCP_RERANKER_ENABLED=true \
-e MCP_RERANKER_MODEL=cross-encoder/ms-marco-MiniLM-L-6-v2 \
...
| Reranker Model | RAM | Speed | Quality |
|---|---|---|---|
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 90MB | Fast | Good — best speed/quality balance |
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 130MB | Medium | Better — higher precision |
| BAAI/bge-reranker-base | 420MB | Slower | Best — state-of-the-art accuracy |
The reranker is off by default (zero overhead). When enabled, it adds ~50ms latency per search but typically improves precision by 10-25% on vocabulary-mismatch queries.
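The rescore-and-truncate stage itself is simple; all the intelligence lives in the scoring model. A sketch with an injectable scorer (the overlap scorer below is a deliberately dumb stand-in for a real cross-encoder such as sentence_transformers.CrossEncoder(...).predict, which reads query and document jointly):

```python
def rerank(query: str, candidates: list[str], score_fn, top_k: int = 3) -> list[str]:
    """Second-stage rerank: rescore each (query, doc) pair, keep the best top_k."""
    scores = score_fn([(query, doc) for doc in candidates])
    ranked = sorted(zip(candidates, scores), key=lambda p: p[1], reverse=True)
    return [doc for doc, _ in ranked[:top_k]]

# Toy scorer: raw word overlap. A real cross-encoder would score
# "retry payment on timeout" high even with different vocabulary.
def overlap(pairs):
    return [len(set(q.split()) & set(d.split())) for q, d in pairs]

docs = ["payment retry logic", "frontend css fix", "retry payment on timeout"]
print(rerank("payment retry", docs, overlap, top_k=2))
```

In production the candidate list is the top_k × 5 pool from hybrid search, so the expensive cross-encoder only ever sees a few dozen pairs per query.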
Instead of waiting for the 300s polling interval, configure a GitHub webhook for instant reindex on push:
- Payload URL: http://your-server:8080/webhook/github
- Content type: application/json
- Secret: your MCP_WEBHOOK_SECRET value

The webhook endpoint verifies the HMAC signature if MCP_WEBHOOK_SECRET is set. Without a secret, any POST triggers a pull + reindex.
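GitHub signs each delivery with an HMAC-SHA256 of the raw body and sends it in the X-Hub-Signature-256 header. The verification on the receiving side looks like this (a generic sketch of the standard GitHub scheme, not Flaiwheel's exact code):

```python
import hashlib
import hmac

def verify_github_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """Check GitHub's X-Hub-Signature-256 header against the shared webhook secret."""
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side-channels on the comparison
    return hmac.compare_digest(expected, signature_header)

body = b'{"ref": "refs/heads/main"}'
sig = "sha256=" + hmac.new(b"s3cret", body, hashlib.sha256).hexdigest()
print(verify_github_signature("s3cret", body, sig))   # → True
print(verify_github_signature("wrong", body, sig))    # → False
```

Note the signature is computed over the raw request bytes, which is why the endpoint must verify before any JSON parsing or re-serialization.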
Track non-vanity engineering impact directly in Flaiwheel:
- /api/telemetry/ci-guardrail-report — CI reports guardrail findings/fixes per PR
- /api/impact-metrics?project=<name>&days=30 — returns estimated time saved + regressions avoided

Example payload:
{
"project": "my-app",
"violations_found": 4,
"violations_blocking": 1,
"violations_fixed_before_merge": 2,
"cycle_time_baseline_minutes": 58,
"cycle_time_actual_minutes": 43,
"pr_number": 127,
"branch": "feature/payment-fix",
"commit_sha": "abc1234",
"source": "github-actions"
}
Flaiwheel persists telemetry on disk (<vectorstore>/telemetry) so metrics survive container restarts and updates.
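How the server aggregates these reports into impact metrics is internal; the obvious baseline-minus-actual reading of the payload fields above can be sketched as (this exact formula is an assumption, not Flaiwheel's published one):

```python
def time_saved_minutes(reports: list[dict]) -> int:
    """Estimated time saved: baseline minus actual cycle time, summed over PRs."""
    return sum(r["cycle_time_baseline_minutes"] - r["cycle_time_actual_minutes"]
               for r in reports)

# Using the example payload: 58 baseline - 43 actual = 15 minutes saved on PR #127
reports = [{"cycle_time_baseline_minutes": 58, "cycle_time_actual_minutes": 43}]
print(time_saved_minutes(reports))  # → 15
```

The violations_fixed_before_merge counter plays the same role for the "regressions avoided" figure.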
Reindexing is incremental by default — only files whose content changed since the last run are re-embedded. On a 500-file repo, this means a typical reindex after a single-file push takes <1s instead of re-embedding everything.
Use reindex(force=True) via MCP or the Web UI "Reindex" button to force a full rebuild (e.g. after changing the embedding model).
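Incremental reindexing rests on comparing a content hash per file against the hashes recorded on the previous run. A minimal sketch of that change-detection step (the function and state names are illustrative):

```python
import hashlib

def changed_files(files: dict[str, str], seen_hashes: dict[str, str]) -> list[str]:
    """Return only files whose content hash differs from the last indexed run."""
    changed = []
    for path, text in files.items():
        digest = hashlib.sha256(text.encode()).hexdigest()
        if seen_hashes.get(path) != digest:
            changed.append(path)
            seen_hashes[path] = digest  # remember for the next run
    return changed

state: dict[str, str] = {}
repo = {"api/auth.md": "v1", "setup/install.md": "v1"}
print(changed_files(repo, state))   # first run: every file needs embedding
repo["api/auth.md"] = "v2"
print(changed_files(repo, state))   # → ['api/auth.md']
```

reindex(force=True) corresponds to clearing the recorded hashes, which makes every file look changed and triggers a full re-embed.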
┌─────────────────────────────────────────────────────────────┐
│ Docker Container (single process, N projects) │
│ │
│ ┌───────────────────────────────────────────────────────┐ │
│ │ Web-UI (FastAPI) Port 8080 │ │
│ │ Project CRUD, config, monitoring, search, health │ │
│ └─────────────────────┬─────────────────────────────────┘ │
│ │ shared state (ProjectRegistry) │
│ ┌─────────────────────┴─────────────────────────────────┐ │
│ │ MCP Server (FastMCP) Port 8081 │ │
│ │ 28 tools (search, write, classify, manage, projects) │ │
│ └─────────────────────┬─────────────────────────────────┘ │
│ │ │
│ ┌─────────────────────┴─────────────────────────────────┐ │
│ │ Shared Embedding Model (1× in RAM) │ │
│ └─────────────────────┬─────────────────────────────────┘ │
│ │ │
│ ┌──────────────────────┴────────────────────────────────┐ │
│ │ Per-Project Contexts (isolated) │ │
│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ │
│ │ │ Project A │ │ Project B │ │ Project C │ │ │
│ │ │ collection │ │ collection │ │ collection │ │ │
│ │ │ watcher │ │ watcher │ │ watcher │ │ │
│ │ │ lock │ │ lock │ │ lock │ │ │
│ │ │ health │ │ health │ │ health │ │ │
│ │ │ quality │ │ quality │ │ quality │ │ │
│ │ └─────────────┘ └─────────────┘ └─────────────┘ │ │
│ └───────────────────────────────────────────────────────┘ │
│ │
│ /docs/{project}/ ← per-project knowledge repos │
│ /data/ ← shared vectorstore + config + projects │
└─────────────────────────────────────────────────────────────┘
query
│
├──► Vector Search (ChromaDB/HNSW, cosine similarity)
│ fetch top_k (or top_k×5 if reranker enabled)
│
├──► BM25 Keyword Search (bm25s, English stopwords)
│ fetch top_k (or top_k×5 if reranker enabled)
│
├──► RRF Fusion (configurable k, vector/BM25 weights)
│ merge + rank candidates
│
├──► [optional] Cross-Encoder Reranker
│ rescore (query, doc) pairs for higher precision
│
├──► Min Relevance Filter (configurable threshold)
│
└──► Return top_k results with relevance scores
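The RRF fusion step in the pipeline above combines the two ranked lists without needing comparable raw scores: each document earns weight / (k + rank) from every list it appears in. A self-contained sketch using the configurable k and weights from the config table:

```python
def rrf_fuse(vector_hits: list[str], bm25_hits: list[str],
             k: int = 60, w_vec: float = 1.0, w_bm25: float = 1.0) -> list[str]:
    """Weighted Reciprocal Rank Fusion: score(d) = sum over lists of w / (k + rank)."""
    scores: dict[str, float] = {}
    for weight, hits in ((w_vec, vector_hits), (w_bm25, bm25_hits)):
        for rank, doc in enumerate(hits, start=1):
            scores[doc] = scores.get(doc, 0.0) + weight / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vec = ["a", "b", "c"]   # vector-search ranking
bm25 = ["b", "d"]       # BM25 keyword ranking
print(rrf_fuse(vec, bm25))  # 'b' wins: it is ranked in both lists
```

A lower k makes top ranks dominate (1/(k+1) vs 1/(k+2) diverge more), which is exactly what the MCP_RRF_K description means by "more weight on top ranks".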
Access at http://localhost:8080 (HTTP Basic Auth — credentials shown on first start).
Features:
# Clone
git clone https://github.com/dl4rce/flaiwheel.git
cd flaiwheel
# Install
pip install -e ".[dev]"
# Run tests (259 tests covering readers, quality checker, indexer, reranker, health tracker, MCP tools, model migration, multi-project, bootstrap, classification, file-context, cold-start analyzer)
pytest
# Run locally (needs /docs and /data directories)
mkdir -p /tmp/flaiwheel-docs /tmp/flaiwheel-data
MCP_DOCS_PATH=/tmp/flaiwheel-docs MCP_VECTORSTORE_PATH=/tmp/flaiwheel-data python -m flaiwheel
Business Source License 1.1 (BSL 1.1)
Flaiwheel is source-available under the Business Source License 1.1.
You may use Flaiwheel for free if:
Commercial use beyond these limits (e.g., teams of 11+ or commercial deployment) requires a paid license.
See LICENSE for full terms.