Multi-round AI brainstorming debates between multiple models (GPT, Gemini, DeepSeek, Groq, Ollama, etc.). Pit different LLMs against each other to explore ideas from diverse perspectives.
Multi-model AI brainstorming MCP server. Orchestrates debates between GPT, Gemini, DeepSeek, and Claude with structured synthesis. Includes instant quick mode, multi-model code review with verdicts, and red-team/Socratic styles. Hosted mode needs zero API keys.
Don't trust one AI. Make them argue.
Demo: three models debate, cross-examine, and produce a structured verdict, all inside Claude Code.
Add to your project's `.mcp.json`:

```json
{
  "mcpServers": {
    "brainstorm": {
      "command": "npx",
      "args": ["-y", "brainstorm-mcp"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "GEMINI_API_KEY": "AIza...",
        "DEEPSEEK_API_KEY": "sk-..."
      }
    }
  }
}
```
Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "brainstorm": {
      "command": "npx",
      "args": ["-y", "brainstorm-mcp"],
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "DEEPSEEK_API_KEY": "sk-..."
      }
    }
  }
}
```
```shell
npm install -g brainstorm-mcp
brainstorm-mcp
```
Hosted mode requires no API keys — just install and go. The host (Claude Code) executes prompts using its own model access.
```shell
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=AIza...
DEEPSEEK_API_KEY=sk-...
```
Set `BRAINSTORM_CONFIG` to point to a JSON config:

```json
{
  "providers": {
    "openai": { "model": "gpt-5.4", "apiKeyEnv": "OPENAI_API_KEY" },
    "gemini": { "model": "gemini-2.5-flash", "apiKeyEnv": "GEMINI_API_KEY" },
    "deepseek": { "model": "deepseek-chat", "apiKeyEnv": "DEEPSEEK_API_KEY" },
    "ollama": { "model": "llama3.1", "baseURL": "http://localhost:11434/v1" }
  }
}
```
Known providers (openai, gemini, deepseek, groq, mistral, together) don't need a baseURL.
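The baseURL rule above can be sketched in TypeScript. This is illustrative, not brainstorm-mcp's actual source; the endpoint URLs are each provider's documented OpenAI-compatible base URL (a few known providers shown), and the function name is hypothetical.

```typescript
// Default OpenAI-compatible endpoints for a few known providers.
// NOTE: illustrative sketch — verify URLs against current provider docs.
const DEFAULT_BASE_URLS: Record<string, string> = {
  openai: "https://api.openai.com/v1",
  deepseek: "https://api.deepseek.com/v1",
  groq: "https://api.groq.com/openai/v1",
  mistral: "https://api.mistral.ai/v1",
  together: "https://api.together.xyz/v1",
};

function resolveBaseURL(provider: string, explicit?: string): string {
  // An explicit baseURL always wins, e.g. Ollama's local endpoint.
  if (explicit) return explicit;
  const known = DEFAULT_BASE_URLS[provider];
  if (!known) {
    throw new Error(`Provider "${provider}" is not known; set a baseURL explicitly`);
  }
  return known;
}
```

Unknown providers therefore fail fast at config time instead of at the first API call.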
| Tool | Description | Annotation |
|---|---|---|
| `brainstorm` | Multi-round debate between AI models (API or hosted mode) | readOnly |
| `brainstorm_quick` | Instant multi-model perspectives — parallel, no rounds | readOnly |
| `brainstorm_review` | Multi-model code review with findings, severity, verdict | readOnly |
| `brainstorm_respond` | Submit Claude's response in an interactive session | readOnly |
| `brainstorm_collect` | Submit model responses in a hosted session | readOnly |
| `list_providers` | Show configured providers and API key status | readOnly |
| `add_provider` | Add a new AI provider at runtime | non-destructive |
Prompt: "Use brainstorm_quick to compare Redis vs PostgreSQL for session storage"
Tool called: brainstorm_quick
```json
{ "topic": "Redis vs PostgreSQL for session storage in a Node.js app" }
```
Output: Each configured model responds independently in parallel. You get a side-by-side comparison in under 10 seconds with model names, responses, timing, and cost.
Error handling: If a model fails (rate limit, timeout), the tool continues with remaining models and shows which ones failed.
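The continue-on-failure fan-out described above can be sketched with `Promise.allSettled`. This is an illustrative pattern, not brainstorm-mcp's actual implementation; `quickBrainstorm` and the `ask` callback are hypothetical names.

```typescript
// One entry per model: either its answer or the error that took it out.
type ModelResult =
  | { model: string; ok: true; answer: string }
  | { model: string; ok: false; error: string };

async function quickBrainstorm(
  models: string[],
  ask: (model: string) => Promise<string>, // caller supplies the provider call
): Promise<ModelResult[]> {
  // allSettled never rejects: a rate-limited or timed-out model
  // becomes a "rejected" entry instead of failing the whole batch.
  const settled = await Promise.allSettled(models.map((m) => ask(m)));
  return settled.map((s, i) =>
    s.status === "fulfilled"
      ? { model: models[i], ok: true, answer: s.value }
      : { model: models[i], ok: false, error: String(s.reason) },
  );
}
```

The caller can then render successful answers side by side and list the failed models separately.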
Prompt: "Review this diff for security issues" (with a git diff pasted)
Tool called: brainstorm_review
```json
{
  "diff": "diff --git a/src/auth.ts ...",
  "title": "Add JWT authentication middleware",
  "focus": ["security", "correctness"]
}
```
Output: A structured verdict (approve / approve with warnings / needs changes) with a findings table showing severity, category, file, line numbers, and suggestions. Includes model agreement analysis — issues flagged by multiple models have higher confidence.
Error handling: If synthesis fails, raw model reviews are still returned.
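The "model agreement" idea above can be sketched as a de-duplication pass over findings. This is a hypothetical sketch, not the tool's actual code; keying on file, line, and category is an assumed scheme for deciding that two models flagged the same issue.

```typescript
interface Finding {
  model: string;
  file: string;
  line: number;
  category: string;
}

// Confidence for each issue = how many distinct models flagged it.
function agreementConfidence(findings: Finding[]): Map<string, number> {
  const byIssue = new Map<string, Set<string>>();
  for (const f of findings) {
    const key = `${f.file}:${f.line}:${f.category}`;
    if (!byIssue.has(key)) byIssue.set(key, new Set());
    byIssue.get(key)!.add(f.model);
  }
  return new Map([...byIssue].map(([key, models]) => [key, models.size]));
}
```

Issues with a count of 2 or more would be surfaced as higher-confidence findings in the verdict.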
Prompt: "Brainstorm using opus, sonnet, and haiku about whether we should use GraphQL or REST"
Tool called: brainstorm
```json
{
  "topic": "GraphQL vs REST for our public API",
  "models": ["opus", "sonnet", "haiku"],
  "mode": "hosted",
  "rounds": 2,
  "style": "redteam"
}
```
Output: The tool returns prompts for each model. The host (Claude Code) spawns sub-agents with different models, collects responses, and feeds them back via brainstorm_collect. After all rounds, a synthesis model produces a 3-bullet verdict: Recommendation, Key Tradeoffs, Strongest Disagreement.
Error handling: Sessions expire after 10 minutes. If a session is not found, a clear error message is returned with instructions to start a new one.
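The session expiry behaviour can be sketched as an in-memory store with a 10-minute TTL. This is a minimal illustration, not the server's actual implementation; the class and method names are hypothetical, and the injectable `now` parameter exists only to make the sketch testable.

```typescript
const TTL_MS = 10 * 60 * 1000; // sessions expire after 10 minutes

interface Session {
  topic: string;
  createdAt: number;
}

class SessionStore {
  private sessions = new Map<string, Session>();

  create(id: string, topic: string, now = Date.now()): void {
    this.sessions.set(id, { topic, createdAt: now });
  }

  // Expired or unknown sessions raise a clear, actionable error.
  get(id: string, now = Date.now()): Session {
    const s = this.sessions.get(id);
    if (!s || now - s.createdAt > TTL_MS) {
      this.sessions.delete(id);
      throw new Error(`Session "${id}" not found or expired; start a new brainstorm.`);
    }
    return s;
  }
}
```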
brainstorm-mcp runs entirely on your machine and does not collect, store, or transmit any personal data, telemetry, or analytics.
In API mode, prompts are sent directly from your machine to the model providers you configure (OpenAI, Gemini, DeepSeek, etc.) using your own API keys. In hosted mode, no external API calls are made.
Debate sessions are stored in-memory only with a 10-minute TTL. No data is written to disk unless you explicitly save results.
Full privacy policy: PRIVACY.md
```shell
git clone https://github.com/spranab/brainstorm-mcp.git
cd brainstorm-mcp
npm install
npm run build
npm start
```
MIT
Add this to `claude_desktop_config.json` and restart Claude Desktop:

```json
{
  "mcpServers": {
    "spranab-brainstorm-mcp": {
      "command": "npx",
      "args": ["-y", "brainstorm-mcp"]
    }
  }
}
```