Transforms idle LAN machines into a unified compute cluster for AI agents to offload CPU-intensive tasks like simulations and backtesting. It provides a broker-worker architecture that integrates with MCP-compatible tools to distribute workloads across a local network.
Distributed compute MCP server — pool idle LAN machines into a compute cluster for AI agents.

Running CPU-intensive agentic workloads (backtesting, simulations, hyperparameter sweeps) can peg your host machine at 100% with just 6-7 subagents. Meanwhile, other machines on your LAN sit idle with dozens of cores unused.
hive-mcp turns idle machines on your LAN into a unified compute pool, accessible via MCP from Claude Code, Cursor, Copilot, or any MCP-compatible AI tool.
```
       Host                     Worker A              Worker B
+-----------------+        +----------------+    +----------------+
|   Claude Code   |        |  hive worker   |    |  hive worker   |
| hive-mcp broker |<------>|     daemon     |    |     daemon     |
|   (MCP + WS)    |   ws   | auto-discovered|    | auto-discovered|
+-----------------+        +----------------+    +----------------+
     8 cores                   14 cores              6 cores
                                          = 28 total cores
```
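To make the broker-worker shape concrete, here is a self-contained sketch using only the standard library. This is illustrative, not hive-mcp's actual implementation — the real protocol runs over WebSockets with mDNS discovery and a shared secret; this version just shows the flow: the broker queues tasks, a connecting worker receives one, executes it, and sends the result back.

```python
import asyncio
import json

# Illustrative only — hive-mcp's real transport is WebSockets + mDNS.
# Broker: hands each connecting worker a queued task, records its result.
async def broker(queue, results, port=0):
    async def handle(reader, writer):
        task = await queue.get()
        writer.write((json.dumps(task) + "\n").encode())
        await writer.drain()
        reply = json.loads(await reader.readline())
        results[task["task_id"]] = reply
        writer.close()

    return await asyncio.start_server(handle, "127.0.0.1", port)

# Worker: connect, receive one task, "execute" it, report the result.
async def worker(port):
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    task = json.loads(await reader.readline())
    output = str(eval(task["code"]))  # stand-in for real task execution
    writer.write((json.dumps({"stdout": output}) + "\n").encode())
    await writer.drain()
    writer.close()

async def main():
    queue, results = asyncio.Queue(), {}
    await queue.put({"task_id": "t1", "code": "6 * 7"})
    server = await broker(queue, results)
    port = server.sockets[0].getsockname()[1]
    await worker(port)
    while "t1" not in results:        # let the broker record the result
        await asyncio.sleep(0.01)
    server.close()
    await server.wait_closed()
    return results

results = asyncio.run(main())
print(results)  # {'t1': {'stdout': '42'}}
```

The same shape scales to many workers: each connection pulls the next queued task, which is how idle cores on other machines drain the backlog.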
```
pip install hive-mcp
hive broker
# Prints the shared secret and starts listening
```
Worker machines are headless compute — they only need Python and hive-mcp. No Claude Code, no AI tools, no API keys. They just execute tasks and return results.
```
# Copy the secret from the broker, then:
hive join --secret <token>
# Auto-discovers the broker via mDNS — no address needed!
```
Register hive-mcp as an MCP server:
```
claude mcp add hive-mcp -- hive broker
```
This writes the config to ~/.claude.json scoped to your current project directory.
Now Claude Code can submit compute tasks to your cluster:
```
You: "Run backtests for these 20 parameter combinations"

Claude Code: I'll run 6 locally and submit 14 to hive...
  submit_task(code="run_backtest(params_7)", priority=1)
  submit_task(code="run_backtest(params_8)", priority=1)
  ...
```
| Tool | Description |
|---|---|
| `create_batch` | Create a named batch for grouping related tasks |
| `close_batch` | Close a batch after all tasks are submitted |
| `get_batch_status` | Get task counts and status for a batch |
| `get_batch_results` | Retrieve all results from a batch in one call |
| `submit_task` | Submit a Python or shell task (optionally to a batch) |
| `get_task_status` | Check if a task is queued, running, or complete |
| `get_task_result` | Retrieve the output of a completed task |
| `pull_task` | Pull a queued task back for local execution |
| `report_local_result` | Report the result of a locally executed pulled task |
| `cancel_task` | Cancel a pending or running task |
| `list_workers` | See all connected workers and their capacity |
| `get_cluster_status` | Get an overview of the entire cluster |
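The batch tools are meant to be called in a fixed order: create a batch, submit tasks into it, close it, then poll until everything finishes. The sketch below is a hypothetical in-memory model of that lifecycle (my own illustration, not hive-mcp internals) — a batch only reads as done once it is closed *and* every task in it has completed.

```python
# Hypothetical model of the batch lifecycle:
# create_batch -> submit_task(batch=...) -> close_batch -> get_batch_results
class Batch:
    def __init__(self, name):
        self.name, self.closed, self.tasks = name, False, {}

    def submit(self, task_id, code):
        # Mirrors close_batch semantics: no submissions after close.
        if self.closed:
            raise RuntimeError("cannot submit to a closed batch")
        self.tasks[task_id] = {"code": code, "status": "queued", "result": None}

    def close(self):
        self.closed = True

    def finish(self, task_id, result):
        self.tasks[task_id].update(status="complete", result=result)

    def status(self):
        done = sum(t["status"] == "complete" for t in self.tasks.values())
        return {"total": len(self.tasks), "complete": done,
                "done": self.closed and done == len(self.tasks)}

batch = Batch("backtests")
batch.submit("t1", "run_backtest(params_1)")
batch.submit("t2", "run_backtest(params_2)")
batch.close()
batch.finish("t1", {"sharpe": 1.2})
batch.finish("t2", {"sharpe": 0.9})
print(batch.status())  # {'total': 2, 'complete': 2, 'done': True}
```

Closing the batch is what lets `get_batch_results` distinguish "all submitted tasks finished" from "tasks finished so far, more may be coming."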
```
hive broker                      # Start broker + MCP server
hive join                        # Join as worker (auto-discover broker)
hive join --broker-addr IP:PORT  # Join with explicit address
hive join --max-cpu 60           # Limit CPU usage to 60%
hive join --max-tasks 4          # Hard cap at 4 concurrent tasks
hive status                      # Show cluster status
hive secret                      # Show/generate shared secret
hive context                     # Output machine + cluster info (for hooks)
hive tls-setup                   # Generate self-signed TLS certificates
```
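A cap like `--max-tasks 4` is commonly enforced with a semaphore. The sketch below is the standard asyncio pattern for this (an illustration, not hive-mcp's source): ten tasks are submitted, but at most four ever run at once.

```python
import asyncio

MAX_TASKS = 4  # analogous to: hive join --max-tasks 4

async def run_task(sem, task_id, active, peak):
    async with sem:                # blocks once MAX_TASKS are running
        active[0] += 1
        peak[0] = max(peak[0], active[0])
        await asyncio.sleep(0.01)  # stand-in for real compute work
        active[0] -= 1
        return task_id

async def main():
    sem = asyncio.Semaphore(MAX_TASKS)
    active, peak = [0], [0]
    results = await asyncio.gather(
        *(run_task(sem, i, active, peak) for i in range(10)))
    return results, peak[0]

results, peak = asyncio.run(main())
print(peak)  # never exceeds MAX_TASKS
```

The queue above the cap simply waits, which is why an overloaded worker degrades to longer queue times rather than pegging its own CPU.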
Add automatic cluster awareness to every prompt:
```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "command": "hive context",
        "timeout": 3000
      }
    ]
  }
}
```
This injects:
```
[hive-mcp] Local machine: 8 cores / 16 threads, CPU: 45%, RAM: 14GB free / 32GB total
[hive-mcp] Cluster: 2 workers online (20 cores), 0 queued, 3 active
[hive-mcp] Tip: 20 remote cores available via hive. Use submit_task() for overflow.
```
```python
import asyncio

from hive_mcp.client.sdk import HiveClient

async def main():
    async with HiveClient("192.168.1.100", 7933, secret="...") as client:
        task = await client.submit("print('hello from hive')")
        result = await client.wait(task["task_id"])
        print(result["stdout"])  # "hello from hive"

asyncio.run(main())
```
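Because `submit()` and `wait()` are coroutines, fanning out many tasks is a plain `asyncio.gather`. The sketch below uses a stand-in client with the same `submit`/`wait` surface so it runs anywhere; against a real cluster you would swap in `HiveClient` (the stand-in and its behavior are hypothetical, only the submit/wait shape comes from the SDK example above).

```python
import asyncio
import itertools

# Stand-in with HiveClient's submit()/wait() surface (hypothetical —
# replace with HiveClient("192.168.1.100", 7933, secret="...") for real use).
class FakeClient:
    def __init__(self):
        self._ids, self._tasks = itertools.count(), {}

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        return False

    async def submit(self, code):
        task_id = f"t{next(self._ids)}"
        self._tasks[task_id] = code
        return {"task_id": task_id}

    async def wait(self, task_id):
        await asyncio.sleep(0.01)  # pretend a worker is computing
        return {"stdout": f"done: {self._tasks[task_id]}"}

async def run_one(client, code):
    task = await client.submit(code)
    return (await client.wait(task["task_id"]))["stdout"]

async def main():
    async with FakeClient() as client:  # swap in HiveClient here
        jobs = [f"run_backtest(params_{i})" for i in range(5)]
        return await asyncio.gather(*(run_one(client, c) for c in jobs))

outputs = asyncio.run(main())
print(outputs[0])  # "done: run_backtest(params_0)"
```

`gather` preserves submission order, so results line up with the parameter list even though tasks complete on different workers at different times.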
Run `hive tls-setup` to generate self-signed certificates.

There is a known Claude Code bug where Windows drive-letter casing (`c:/` vs `C:/`) creates duplicate project entries in `~/.claude.json`. The MCP config ends up under one casing while Claude Code looks up the other.
Fix: open `~/.claude.json`, search for your project path in the `"projects"` object, and ensure both case variants have identical `mcpServers` config. Or re-run `claude mcp add` from the same terminal type you use for Claude Code sessions.
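One way to script that fix is a small hypothetical helper (back up `~/.claude.json` first — this is an illustration, not a hive-mcp tool): group project keys that differ only by case and give every variant the union of their `mcpServers` entries.

```python
# Hypothetical helper for the drive-letter casing issue: merge mcpServers
# across ~/.claude.json project keys that differ only by case.
def merge_case_variants(config):
    projects = config.get("projects", {})
    by_lower = {}
    for path in projects:                      # group case variants
        by_lower.setdefault(path.lower(), []).append(path)
    for variants in by_lower.values():
        merged = {}
        for path in variants:                  # union of all variants
            merged.update(projects[path].get("mcpServers", {}))
        for path in variants:                  # write it back everywhere
            projects[path]["mcpServers"] = dict(merged)
    return config

cfg = {"projects": {
    "c:/work/proj": {"mcpServers": {"hive-mcp": {"command": "hive"}}},
    "C:/work/proj": {"mcpServers": {}},
}}
fixed = merge_case_variants(cfg)
print(fixed["projects"]["C:/work/proj"]["mcpServers"])  # now has hive-mcp
```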
Check `~/.hive/broker.log` for startup errors.

MIT
Add this to `claude_desktop_config.json` and restart Claude Desktop.
```json
{
  "mcpServers": {
    "hive-mcp": {
      "command": "hive",
      "args": ["broker"]
    }
  }
}
```