MCP server for bidirectional AI agent collaboration — 6 tools for spawning and communicating with any agent CLI
MCP server for bidirectional AI agent collaboration. Spawn and communicate with any AI coding agent CLI — Claude Code, Codex, Gemini, Aider, and more.
Your primary agent keeps failing on the same issue? Ask another agent:
# Claude Code is stuck on a TypeScript error it can't resolve.
# It spawns Codex for a second opinion:
spawn_agent("codex", "This TypeScript error keeps appearing. How do I fix it?", {
error: "Type 'string' is not assignable to type 'number'",
files: ["src/utils.ts"]
})
Have another model review your agent's code changes:
spawn_agent("claude", "Review these changes for bugs and edge cases", {
files: ["src/api.ts", "src/handler.ts"],
intent: "Code review before merge"
})
Build a pipeline where agents handle different stages:
# Agent 1: Research
spawn_agent("gemini", "Find the best approach for WebSocket reconnection")
# Agent 2: Implementation (using Agent 1's advice)
spawn_agent("codex", "Implement WebSocket reconnection with exponential backoff", {
files: ["src/ws-client.ts"]
})
# Agent 3: Review
spawn_agent("claude", "Review this implementation for production readiness", {
files: ["src/ws-client.ts"]
})
Agents can ask questions back. The host answers, and work continues:
Host: spawn_agent("codex", "Add caching to the API layer")
Codex: [QUESTION] Should I use Redis or in-memory cache?
Host: reply("codex-a1b2c3", "Use Redis, we have it in our docker-compose")
Codex: [RESULT] Added Redis caching with 5-minute TTL...
AI coding agents get stuck sometimes. Instead of waiting for you, they can ask another agent for help. agent-link-mcp lets any MCP-compatible agent spawn other agent CLIs as collaborators, exchange questions, and get results back — all through standard MCP tools.
agent-link-mcp spawns other AI agents as CLI subprocesses. You need to install and authenticate the agent CLIs you want to collaborate with:
| Agent | Install | Auth |
|---|---|---|
| Claude Code | `npm install -g @anthropic-ai/claude-code` | `claude login` |
| Codex | `npm install -g @openai/codex` | `codex login` |
| Gemini CLI | `npm install -g @google/gemini-cli` | `gemini login` |
| Aider | `pip install aider-chat` | Set `OPENAI_API_KEY` or `ANTHROPIC_API_KEY` |
You only need the ones you plan to use. agent-link-mcp auto-detects which CLIs are installed.
# Claude Code
claude mcp add agent-link npx agent-link-mcp
# Codex
codex mcp add agent-link npx agent-link-mcp
# Any MCP client
npx agent-link-mcp
Note: Only the agent you're working in needs this MCP server installed. The other agents are spawned as subprocesses — they don't need agent-link-mcp.
`spawn_agent` — Spawn an agent and send it a task.
{
"agent": "codex",
"task": "Refactor this function for better performance",
"context": {
"files": ["src/utils.ts"],
"error": "TypeError: Cannot read property 'x' of undefined",
"intent": "Performance improvement"
},
"model": "o3",
"timeoutMs": 7200000
}
| Parameter | Type | Default | Description |
|---|---|---|---|
| `agent` | string | required | Agent name (`"claude"`, `"codex"`, `"gemini"`, `"aider"`) |
| `task` | string | required | Task description |
| `context` | object | — | Optional `{ files, error, intent, diff }`. `diff: true` includes git diff output; `diff: "staged"` for staged changes only. |
| `cwd` | string | cwd | Working directory for the agent process |
| `model` | string | — | Model to use (e.g. `"o3"`, `"gpt-5.4"`, `"claude-sonnet-4"`, `"gemini-2.5-pro"`). Passed via the `--model` flag. |
| `thinking` | string | — | Thinking/reasoning depth (`"low"`, `"medium"`, `"high"`, `"max"`). Claude: `--effort`, Codex: `-c reasoning_effort`, Aider: `--reasoning-effort`. |
| `retry` | boolean | false | Auto-retry on failure (up to 3 attempts) |
| `escalate` | boolean | false | On retry, automatically increase the thinking level. Requires `retry: true`. |
| `timeoutMs` | number | 3600000 | Timeout in ms (default: 1 hour) |
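Several of these parameters compose. As an illustrative sketch (same notation as the examples above), a call that includes the staged diff and retries with escalating reasoning might look like:

```
# Include the staged git diff as context; retry up to 3 times,
# raising the thinking level on each retry
spawn_agent("codex", "Fix the failing type checks", {
  diff: "staged",
  thinking: "medium",
  retry: true,
  escalate: true
})
```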
Returns one of:

- `{ status: "done", agentId: "codex-a1b2c3", result: "..." }` — task completed
- `{ status: "waiting_for_reply", agentId: "codex-a1b2c3", question: "..." }` — agent needs clarification
- `{ error: "...", agentId: "codex-a1b2c3" }` — something went wrong

`spawn_agents` — Run multiple agents in parallel. Returns all results together.
{
"agents": [
{ "agent": "codex", "task": "Review for bugs", "context": { "diff": true } },
{ "agent": "claude", "task": "Review for security", "context": { "diff": true } }
],
"cwd": "/path/to/project"
}
Returns { summary: { total, succeeded, failed, waiting }, results: [...] }.
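The summary fields can be understood as simple counts over the per-agent results. A minimal TypeScript sketch of that relationship (illustrative — `summarize` is not part of agent-link-mcp's API; the result shapes mirror the `spawn_agent` return values documented above):

```typescript
// Illustrative: derive { total, succeeded, failed, waiting } from results.
type AgentResult =
  | { status: "done"; agentId: string; result: string }
  | { status: "waiting_for_reply"; agentId: string; question: string }
  | { error: string; agentId: string };

function summarize(results: AgentResult[]) {
  return {
    total: results.length,
    succeeded: results.filter(r => "status" in r && r.status === "done").length,
    failed: results.filter(r => "error" in r).length,
    waiting: results.filter(r => "status" in r && r.status === "waiting_for_reply").length,
  };
}
```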
`reply` — Answer a spawned agent's question and continue the conversation.
{
"agentId": "codex-a1b2c3",
"message": "Yes, you can remove the side effects"
}
`kill_agent` — Abort a running agent session.
{
"agentId": "codex-a1b2c3"
}
`list_agents` — List available agent CLIs.
{
"agents": [
{ "name": "claude", "command": "claude", "source": "auto", "available": true },
{ "name": "codex", "command": "codex", "source": "auto", "available": true },
{ "name": "gemini", "command": "gemini", "source": "auto", "available": false }
]
}
`get_status` — Get active agent sessions.
{
"sessions": [
{ "agentId": "codex-a1b2c3", "agent": "codex", "status": "waiting_for_reply", "startedAt": "..." }
]
}
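A host could poll `get_status` to find sessions blocked on a question and unblock them with `reply` (illustrative sketch, same notation as the examples above):

```
# Find sessions waiting on a question, then answer them
status = get_status()
for s in status.sessions:
    if s.status == "waiting_for_reply":
        reply(s.agentId, "...")  # the answer depends on the agent's question
```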
You (using Claude Code)
↓
"Ask Codex to help with this refactoring"
↓
Claude Code → spawn_agent("codex", task, context)
↓
agent-link-mcp server → spawns `codex` CLI as subprocess
↓
Codex processes the task...
↓
Codex: "[QUESTION] Should I remove the side effects?"
↓
agent-link-mcp → parses response → returns to Claude Code
↓
Claude Code → reply("codex-a1b2c3", "Yes, remove them")
↓
agent-link-mcp → re-invokes Codex with accumulated context
↓
Codex: "[RESULT] Refactoring complete. Here's what I changed..."
↓
Claude Code receives the result and continues working
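The same round trip, written as host-side tool calls in the notation used above (illustrative):

```
# Host-side view of the flow above (field names as documented)
res = spawn_agent("codex", "Refactor this function", { files: ["src/utils.ts"] })
# res.status == "waiting_for_reply", res.question holds Codex's question
res = reply(res.agentId, "Yes, remove them")
# res.status == "done", res.result holds the final answer
```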
agent-link-mcp automatically detects installed agent CLIs:
| Agent | CLI Command |
|---|---|
| Claude Code | claude |
| Codex | codex |
| Gemini | gemini |
| Aider | aider |
Add custom agents via config file at ~/.agent-link/config.json:
{
"agents": {
"codex": {
"command": "/usr/local/bin/codex",
"args": ["--full-auto"],
"promptFlag": null,
"outputFormat": "text"
},
"my-local-llm": {
"command": "ollama",
"args": ["run", "codellama"],
"promptFlag": null,
"outputFormat": "text"
}
}
}
Override config path with AGENT_LINK_CONFIG environment variable.
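For example, to point the server at a project-local config instead of the default (the path here is illustrative):

```shell
# Use a project-local config instead of ~/.agent-link/config.json
AGENT_LINK_CONFIG=./agent-link.config.json npx agent-link-mcp
```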
You can specify which model the spawned agent should use via the model parameter:
# Use a specific model for Codex
spawn_agent("codex", "Debug this issue", { model: "o3" })
# Use a specific model for Claude
spawn_agent("claude", "Review this code", { model: "claude-sonnet-4" })
The model name is passed to the agent CLI via its --model flag. If omitted, the agent uses its default model.
Control how deeply the agent reasons with the thinking parameter:
# High reasoning for complex debugging
spawn_agent("codex", "Debug this race condition", { thinking: "high" })
# Max effort for Claude
spawn_agent("claude", "Architect a new auth system", { thinking: "max" })
| Agent | Flag | Values |
|---|---|---|
| Claude | `--effort` | low, medium, high, max |
| Codex | `-c reasoning_effort` | low, medium, high |
| Aider | `--reasoning-effort` | low, medium, high |
If omitted, the agent uses its default reasoning level.
Default timeout is 1 hour (3,600,000ms). You can override per-call:
# 2 hour timeout for complex tasks
spawn_agent("codex", "Refactor the entire auth system", { timeoutMs: 7200000 })
Spawned agents receive instructions to format their responses:
- `[QUESTION] ...` — needs clarification from the host agent
- `[RESULT] ...` — task completed

If the agent doesn't follow the format, the entire output is treated as a result.
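The classification rule this describes can be sketched in TypeScript (illustrative — `parseAgentOutput` is not agent-link-mcp's actual code, just the rule stated above):

```typescript
// Classify raw agent output by its leading marker.
// Anything without a marker falls through as a result.
type Parsed =
  | { status: "waiting_for_reply"; question: string }
  | { status: "done"; result: string };

function parseAgentOutput(raw: string): Parsed {
  const text = raw.trim();
  if (text.startsWith("[QUESTION]")) {
    return { status: "waiting_for_reply", question: text.slice("[QUESTION]".length).trim() };
  }
  if (text.startsWith("[RESULT]")) {
    return { status: "done", result: text.slice("[RESULT]".length).trim() };
  }
  return { status: "done", result: text };
}
```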
MIT
Add this to `claude_desktop_config.json` and restart Claude Desktop.
{
  "mcpServers": {
    "agent-link-mcp": {
      "command": "npx",
      "args": ["agent-link-mcp"]
    }
  }
}