Converts a plain-English goal into a structured plan document: Gantt chart, risks, stakeholders, SWOT, and role descriptions. It is a detailed first draft; verify budgets and timelines before use.
Turn your idea into a comprehensive plan in minutes, not months.
PlanExe is the premier planning tool for AI agents.
Create an account | Generate a free plan | Getting started guide
PlanExe is an open-source tool and the premier planning tool for AI agents. It turns a single plain-English goal statement into a 40-page strategic plan in ~15 minutes using local or cloud models. It is an accelerator for outlines, not a silver bullet for polished plans.
Typical output is well-structured and domain-aware: correct terminology, logical task sequencing, and coherent sections. For technical topics (engineering programs, regulated industries), it often gets the vocabulary and structure right. Think of it as a first-draft scaffold that gives you something concrete to critique and refine.
However, the output has consistent weaknesses that matter: budgets are assumed rather than derived, timeline estimates are not grounded in real resource constraints, risk mitigations tend toward generic advice, and legal/regulatory details are plausible-sounding but unverified. The output should be treated as a structured starting point, not a deliverable. How much work it saves depends heavily on the project. For brainstorming or a first outline, it can save hours. For a client-ready plan, expect significant rework on every number, timeline, and risk section.
PlanExe exposes an MCP server for AI agents at https://mcp.planexe.org/
This assumes you have an MCP-compatible client (Claude, Cursor, Codex, LM Studio, Windsurf, OpenClaw, Antigravity).
The Tool workflow
- example_plans (optional: preview what PlanExe output looks like)
- example_prompts
- model_profiles (optional: helps choose a model_profile)
- plan_create
- plan_status (poll every 5 minutes until done)
- plan_retry
- plan_file_info

Concurrency note: each plan_create call returns a new plan_id; server-side global per-client concurrency is not capped, so clients should track their own parallel plans.
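The polling step above can be sketched as a small helper. This is a minimal sketch in Python: `plan_status` stands in for the MCP plan_status tool call, and the status strings `"done"` and `"failed"` are illustrative assumptions, not values confirmed by the server documentation.

```python
import time

def wait_for_plan(plan_id, plan_status, poll_seconds=300, timeout_seconds=3600):
    """Poll plan_status(plan_id) until the plan finishes or we time out.

    plan_status is any callable wrapping the MCP plan_status tool;
    the terminal states "done"/"failed" are assumed for illustration.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = plan_status(plan_id)
        if status in ("done", "failed"):
            return status
        time.sleep(poll_seconds)  # default 300s matches the suggested 5-minute poll
    raise TimeoutError(f"plan {plan_id} not finished after {timeout_seconds}s")
```

Because each plan_create returns a fresh plan_id, an agent running plans in parallel can keep a dict of plan_id to such a poller and enforce its own concurrency cap.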
Get an API key (pex_...) from your account, then use this endpoint directly in your MCP client:
{
  "mcpServers": {
    "planexe": {
      "url": "https://mcp.planexe.org/mcp",
      "headers": {
        "X-API-Key": "pex_your_api_key_here"
      }
    }
  }
}
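If you register PlanExe across several MCP clients, the entry above can be merged into an existing config file programmatically. A minimal sketch in Python; the config file path varies by client, so `config_path` here is whatever your client uses:

```python
import json
from pathlib import Path

PLANEXE_ENTRY = {
    "url": "https://mcp.planexe.org/mcp",
    "headers": {"X-API-Key": "pex_your_api_key_here"},
}

def add_planexe_server(config_path):
    """Merge the planexe entry into an MCP client config file,
    preserving any servers that are already registered."""
    path = Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    config.setdefault("mcpServers", {})["planexe"] = PLANEXE_ENTRY
    path.write_text(json.dumps(config, indent=2))
    return config
```

The setdefault call keeps existing entries under "mcpServers" intact, so running this against a populated config only adds or replaces the "planexe" key.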
Create a .env file with OPENROUTER_API_KEY. Start the full stack:
docker compose up --build
Make sure that you can create plans in the web interface before proceeding to MCP.
Then connect your client to:
http://localhost:8001/mcp

For local Docker defaults, auth is disabled in docker-compose.yml.
Server manifest (server.json): mcp_cloud/server.json
llms.txt: https://mcp.planexe.org/llms.txt

If you have a local Python environment set up and want to invoke the pipeline directly (without the Flask UI), use the planexe CLI script at the repo root.
# Create a plan from a text prompt
./planexe create_plan \
--plan-text "Small coffee shop in Copenhagen, Denmark" \
--output-dir ./planexe-outputs/1984-12-31/MyCoffeeShop_v1
# Or: read the plan prompt from a file
./planexe create_plan \
--plan-file my_plan.txt \
--output-dir ./planexe-outputs/1984-12-31/MyCoffeeShop_v1
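When scripting many runs, the invocation above can be assembled programmatically. A sketch in Python; the dated run-directory layout mirrors the examples and is a convention here, not a CLI requirement:

```python
import datetime
import shlex

def create_plan_command(plan_text, run_name, base_dir="./planexe-outputs"):
    """Build a ./planexe create_plan command line using the flags
    documented above, with a <date>/<run_name> output directory."""
    run_dir = f"{base_dir}/{datetime.date.today().isoformat()}/{run_name}"
    args = [
        "./planexe", "create_plan",
        "--plan-text", plan_text,
        "--output-dir", run_dir,
    ]
    # shlex.join quotes the prompt safely for a POSIX shell
    return shlex.join(args)
```

Passing the result to subprocess.run(..., shell=True) from the repo root, or splitting the args list and skipping the join, would launch the run.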
The script creates the run directory if it does not exist and writes:
| File | Contents |
|---|---|
| start_time.json | {"server_iso_utc": "<current UTC ISO timestamp>"} |
| plan.txt | The plain-text plan prompt |
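Since start_time.json records the run's start as an ISO-8601 UTC timestamp, a small Python sketch can report how long a run has been going. It assumes the timestamp parses with datetime.fromisoformat, which holds for standard ISO-8601 output:

```python
import datetime
import json
from pathlib import Path

def run_elapsed_seconds(run_dir):
    """Read start_time.json from a run directory and return the
    number of seconds since the run started."""
    payload = json.loads((Path(run_dir) / "start_time.json").read_text())
    started = datetime.datetime.fromisoformat(payload["server_iso_utc"])
    if started.tzinfo is None:
        # treat a naive timestamp as UTC, per the field name
        started = started.replace(tzinfo=datetime.timezone.utc)
    now = datetime.datetime.now(datetime.timezone.utc)
    return (now - started).total_seconds()
```

This is handy for deciding whether a long-running local pipeline has stalled versus simply still working through its ~15-minute budget.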
PlanExe is designed as infrastructure for AI agents. If you are an AI agent reading this:
- MCP endpoint: https://mcp.planexe.org/mcp (connect with an API key from home.planexe.org).
- Call example_prompts to learn the expected prompt format, draft a detailed prompt (~300-800 words of flowing prose), then call plan_create.
- Key output files: wbs_level1.json (work packages), wbs_level2.json (tasks), pre_project_assessment.json (feasibility).

Prerequisite: Docker with Docker Compose installed; you only need basic Docker knowledge. No local Python setup is required because everything runs in containers.
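Before calling plan_create, an agent can sanity-check its draft against the suggested word range. A minimal Python sketch; the 300-800 bounds come from the guidance above, and the messages are illustrative:

```python
def check_prompt_length(prompt, low=300, high=800):
    """Rough pre-flight check that a draft prompt falls in the
    ~300-800 word range suggested for plan_create."""
    n = len(prompt.split())
    if n < low:
        return f"too short ({n} words): add more detail"
    if n > high:
        return f"too long ({n} words): trim to the core goal"
    return f"ok ({n} words)"
```

A simple whitespace split is a crude word count, but it is enough to catch one-line prompts or pasted documents before spending a ~15-minute plan run on them.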
git clone https://github.com/PlanExeOrg/PlanExe.git
cd PlanExe
Configure an LLM provider: copy .env.docker-example to .env and fill in OPENROUTER_API_KEY with your key from OpenRouter. The containers mount .env and llm_config/; pick a model profile there. For host-side Ollama, use the docker-ollama-llama3.1 entry and ensure Ollama is listening on http://host.docker.internal:11434.
Start the stack (first run builds the images):
docker compose up worker_plan frontend_multi_user
The worker listens on http://localhost:8000 and the UI comes up on http://localhost:5001 after the Postgres and worker healthchecks pass.
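A deploy script can wait for those ports to come up instead of polling by hand. A Python sketch that blocks until a TCP port accepts connections; the port numbers are the docker-compose defaults quoted above:

```python
import socket
import time

def wait_for_port(host, port, timeout_seconds=120):
    """Return True once something accepts TCP connections on
    host:port (e.g. worker on 8000, UI on 5001), else False."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(1)  # container may still be starting or in healthcheck
    return False
```

Note this only confirms the socket is open; the UI may still be finishing its healthcheck dance with Postgres for a few more seconds.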
Open the UI at http://localhost:5001 (port set in .env), enter your idea, and watch progress with:

docker compose logs -f worker_plan
Outputs are written to run/ on the host (mounted into both containers).
Stop the stack with Ctrl+C (or docker compose down). Rebuild after code/dependency changes:

docker compose build --no-cache worker_plan frontend_multi_user
For compose tips, alternate ports, or troubleshooting, see docs/docker.md or docker-compose.md.
Config A: Run a model in the cloud using a paid provider. Follow the instructions in OpenRouter.
Config B: Run models locally on a high-end computer. Follow the instructions for either Ollama or LM Studio. When using host-side tools with Docker, point the model URL at the host (for example http://host.docker.internal:11434 for Ollama).
Recommendation: Config A offers the most straightforward path to getting PlanExe working reliably.
Add this to claude_desktop_config.json and restart Claude Desktop.
The args below use the mcp-remote bridge to reach the remote endpoint; that package name and its --header flag are an assumption here, so verify them against your client's documentation.

{
  "mcpServers": {
    "planexe": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.planexe.org/mcp",
        "--header",
        "X-API-Key: pex_your_api_key_here"
      ]
    }
  }
}