Context-aware MCP server advisor. Tells you what to install for your specific project — and why.
Glama has 19,000+ MCP servers. You have a project. Nobody bridges the gap.
Ask an LLM directly and it hallucinates servers that don't exist, or recommends from stale training data. Directories give you search, not advice.
mcpilot closes that gap: not "here are 19,000 options" but "for your specific project, right now, here's what you need and why."
Project start: "I'm building a Python data pipeline with DuckDB and FastAPI" → what do I install right now
Mid-project: "I just added an auth layer / I need to handle PDF ingestion" → what do I add now that I've reached this point
The second moment is more valuable. At project start, people can Google. Mid-project they're in flow.
recommend_for_project(description)
→ top MCP servers for your stack with rationale
recommend_next(current_stack, new_context)
→ what to add as your project evolves
explain_why(server_name, project_description)
→ why a specific server fits your project
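Under the hood, a recommendation like this boils down to ranking indexed server descriptions by semantic similarity to the project description. Here is a minimal pure-Python sketch of that ranking step — the real project uses sentence-transformers embeddings and a DuckDB index; the toy vectors and helper names below are illustrative assumptions, not mcpilot's actual code:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_servers(query_vec: list[float], index: dict[str, list[float]], top_k: int = 3):
    """Score every (server_name, embedding) pair against the query and keep the best."""
    scored = [(name, cosine(query_vec, vec)) for name, vec in index.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

# Toy 3-dimensional "embeddings" standing in for all-MiniLM-L6-v2 output.
index = {
    "postgres":   [0.9, 0.1, 0.0],
    "filesystem": [0.1, 0.9, 0.1],
    "github":     [0.2, 0.3, 0.9],
}
query = [0.8, 0.2, 0.1]  # embedding of the project description

print(rank_servers(query, index, top_k=1)[0][0])  # → postgres
```

The rationale text ("why this server fits") would then be generated from the matched description, not invented by the model — which is what keeps the advice grounded in servers that actually exist.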
Prerequisites: uv
git clone https://github.com/yahiaklk/mcpilot
cd mcpilot
uv sync
Build the index (first run, ~30s):
uv run python -m mcpilot.indexer
claude mcp add --scope user mcpilot -- uv run --directory /path/to/mcpilot python -m mcpilot.server
Add to your claude_desktop_config.json:
{
  "mcpServers": {
    "mcpilot": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/mcpilot", "python", "-m", "mcpilot.server"]
    }
  }
}
Once connected, ask your AI assistant:
recommend_for_project("Python FastAPI backend with PostgreSQL and JWT auth")
recommend_next("github,filesystem", "adding Stripe payments and PDF invoices")
explain_why("postgres", "multi-tenant SaaS with row-level security")
Embeddings: all-MiniLM-L6-v2 (local, no API cost). To rebuild the index from scratch: uv run python -m mcpilot.indexer --force
Multi-stage image with the embedding model + DuckDB index baked in — no runtime network dependency.
docker build -t mcpilot:0.1.0 .
docker run --rm -i mcpilot:0.1.0 # stdio transport, for local MCP clients
Wire into Claude Desktop:
{
  "mcpServers": {
    "mcpilot": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "mcpilot:0.1.0"]
    }
  }
}
Non-root user (uid=10001), pinned Python 3.12, deps resolved from uv.lock, model cached under /app/.hf_cache with HF_HUB_OFFLINE=1 at runtime.
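A multi-stage Dockerfile with those properties might look roughly like this — stage layout, cache paths, and the exact uv invocations are illustrative assumptions, not the repository's actual Dockerfile:

```dockerfile
# Build stage: resolve deps from the lockfile and bake in the model + index.
FROM python:3.12-slim AS build
RUN pip install uv
WORKDIR /app
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen
COPY . .
# Downloading the model here means the runtime image needs no network access.
ENV HF_HOME=/app/.hf_cache
RUN uv run python -m mcpilot.indexer

# Runtime stage: non-root, offline.
FROM python:3.12-slim
RUN useradd --uid 10001 --create-home mcpilot
WORKDIR /app
COPY --from=build /app /app
ENV HF_HUB_OFFLINE=1 HF_HOME=/app/.hf_cache
USER mcpilot
CMD ["/app/.venv/bin/python", "-m", "mcpilot.server"]
```

Setting HF_HUB_OFFLINE=1 in the final stage is what enforces the "no runtime network dependency" guarantee: if the cached model were missing, the server would fail fast instead of silently phoning home.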
Python · FastMCP · DuckDB · sentence-transformers · uv