An orchestration platform for deploying parallel swarms of tool-enabled AI agents to perform complex tasks like code generation, system monitoring, and multi-perspective analysis. It features a unique chunked write pattern that enables the creation of large-scale documents and code files by assembling parallel agent outputs.
AI organism evolution and parallel task execution with tool-enabled agents. Now with Chunked Write Pattern for generating large documents and code files!
| Role | Model | VRAM | Purpose |
|---|---|---|---|
| Scout | qwen3:4b | 2.5GB | Reconnaissance |
| Worker | qwen3:4b | 2.5GB | Task execution |
| Memory | qwen3:4b | 2.5GB | Context retention |
| Guardian | qwen3:4b | 2.5GB | System monitoring |
| Learner | qwen3:4b | 2.5GB | Pattern acquisition |
| Synthesizer | qwen2.5:14b | 8.99GB | Result synthesis |
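Assuming one bug per role is loaded simultaneously, the table above implies a combined VRAM budget that can be sanity-checked directly:

```python
# VRAM figures copied from the role table above (one bug per role).
ROLE_VRAM_GB = {
    "scout": 2.5, "worker": 2.5, "memory": 2.5,
    "guardian": 2.5, "learner": 2.5, "synthesizer": 8.99,
}

total = sum(ROLE_VRAM_GB.values())
print(f"Full farm needs about {total:.2f} GB of VRAM")  # about 21.49 GB
```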
- `spawn_colony` - Create bug colony (standard/fast/heavy/hybrid)
- `list_colonies` - List active colonies
- `colony_status` - Detailed colony info
- `quick_colony` - Quick health check
- `dissolve_colony` - Remove colony
- `cleanup_idle` - Remove idle colonies
- `farm_stats` - Comprehensive statistics
- `deploy_swarm` - Deploy tasks to colony
- `quick_swarm` - One-shot spawn + deploy
- `code_review_swarm` - 4-perspective code review
- `code_gen_swarm` - Generate code + tests + docs
- `file_swarm` - Parallel file operations
- `exec_swarm` - Parallel shell commands
- `api_swarm` - Parallel HTTP requests
- `kmkb_swarm` - Multi-angle knowledge queries
- `tool_swarm` - Deploy bugs with real system tools
- `system_health_swarm` - Quick system health check
- `recon_swarm` - Directory/codebase reconnaissance
- `deep_analysis_swarm` - Deep disk/file analysis
- `worker_task` - Single worker with full tools
- `heavy_write` - Direct file write (bypasses LLM for large content)
- `synthesize` - Standalone synthesis of any JSON results
- `chunked_write` - Generate large documents via parallel section writing
- `chunked_code_gen` - Generate code files with functions written in parallel
- `chunked_analysis` - Multi-perspective analysis with synthesis

| Role | Tools |
|---|---|
| Scout | read_file, list_dir, file_exists, system_status, process_list, disk_usage, check_service, exec_cmd |
| Worker | read_file, write_file, list_dir, exec_cmd, http_get, http_post, system_status, disk_usage, check_service |
| Memory | read_file, kmkb_search, kmkb_ask, list_dir, system_status, process_list, disk_usage, check_service, exec_cmd |
| Guardian | system_status, process_list, disk_usage, check_service, read_file, list_dir, exec_cmd |
| Learner | read_file, analyze_code, list_dir, kmkb_search, system_status, process_list, disk_usage, check_service, exec_cmd |
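The per-role allowlists above lend themselves to a simple membership check before any tool is dispatched. A minimal sketch, assuming a role-to-tools mapping like the table (the function name and dict are illustrative, not the actual agent-farm API; only two roles shown):

```python
# Hypothetical per-role tool allowlists, copied from the table above
# (truncated to two roles for brevity).
ROLE_TOOLS = {
    "scout": {"read_file", "list_dir", "file_exists", "system_status",
              "process_list", "disk_usage", "check_service", "exec_cmd"},
    "guardian": {"system_status", "process_list", "disk_usage",
                 "check_service", "read_file", "list_dir", "exec_cmd"},
}

def is_allowed(role: str, tool: str) -> bool:
    """Return True if the given role may invoke the given tool."""
    return tool in ROLE_TOOLS.get(role, set())
```

An unknown role gets an empty allowlist, so unlisted roles are denied by default.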
Agent Farm v3.3 uses Ollama's structured output feature to enforce JSON schemas on model responses:
```
# Bug responds with guaranteed-valid JSON:
{"tool": "system_status", "arg": ""}
{"tool": "exec_cmd", "arg": "df -h"}
{"tool": "check_service", "arg": "ollama"}
```
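A minimal sketch of how such a reply might be consumed, including a regex fallback for when structured output is unavailable. This is illustrative, not the actual agent-farm implementation; the tool registry is a hypothetical stand-in:

```python
import json
import re

# Regex fallback for replies where the JSON is wrapped in extra prose.
TOOL_CALL_RE = re.compile(r'"tool"\s*:\s*"([^"]*)"\s*,\s*"arg"\s*:\s*"([^"]*)"')

def parse_tool_call(text):
    """Return (tool, arg, mode); mode matches the result `mode` field."""
    try:
        call = json.loads(text)            # schema-constrained path
        return call["tool"], call["arg"], "structured"
    except (ValueError, KeyError, TypeError):
        pass
    m = TOOL_CALL_RE.search(text)          # fallback path
    if m:
        return m.group(1), m.group(2), "regex"
    raise ValueError("no tool call found")

def dispatch(text, registry):
    """Parse the reply and invoke the named tool from the registry."""
    tool, arg, mode = parse_tool_call(text)
    return registry[tool](arg), mode

registry = {"exec_cmd": lambda arg: f"ran: {arg}"}  # stand-in tool
print(dispatch('{"tool": "exec_cmd", "arg": "df -h"}', registry))
```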
Constrained decoding (via a GBNF grammar) masks invalid tokens during generation, guaranteeing that every response conforms to the tool-call schema.
Results now include a mode field showing which method was used:
- `structured` - JSON schema enforced
- `structured+autoformat` - JSON + simple result formatting
- `structured+deep` - JSON with multi-step reasoning
- `regex` - Fallback regex parsing
- `regex+autoformat` - Regex + simple result formatting

The chunked write pattern solves the ~500 char output limitation of small models by decomposing large tasks:
```
1. PLANNER BUG (qwen2.5:14b)
|-- Creates structured JSON outline
|-- {"sections": [{"title": "...", "description": "..."}]}

2. WORKER BUGS (qwen3:4b) - IN PARALLEL
|-- Each writes one section (~300-500 chars)
|-- 4 workers = 4 sections simultaneously

3. PYTHON CONCATENATION (NO LLM)
|-- header + separator.join(sections)
|-- Zero token cost, instant assembly

4. DIRECT FILE WRITE (NO LLM)
|-- tool_write_file() saves result
|-- Bypasses any output corruption
```
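Steps 3 and 4 above can be sketched in a few lines of plain Python. The function name is illustrative, not the actual agent-farm code:

```python
import tempfile
from pathlib import Path

def assemble(header, sections, out_path, separator="\n\n"):
    """Concatenate section outputs (step 3) and write the file directly
    (step 4), with no LLM in the loop. Returns the document length."""
    document = header + separator + separator.join(sections)
    Path(out_path).write_text(document)
    return len(document)

out = Path(tempfile.gettempdir()) / "demo_guide.md"
size = assemble("# Security Guide", ["## Intro\n...", "## Firewall\n..."], out)
print(f"wrote {size} chars to {out}")
```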
| Tool | Output Size | Sections | Time |
|---|---|---|---|
| chunked_write | 9.6 KB | 5 | 78s |
| chunked_code_gen | 1.9 KB | 4 functions | 88s |
| chunked_analysis | Varies | 4 perspectives | ~60s |
```
agent-farm:system_health_swarm
```

```
agent-farm:tool_swarm
colony_type: "heavy"
tasks: [
  {"prompt": "Check CPU temperature"},
  {"prompt": "List top 5 memory processes"},
  {"prompt": "Check if docker is running"}
]
```

```
agent-farm:heavy_write
path: "/tmp/large_output.txt"
content: "... large content ..."
```

```
agent-farm:recon_swarm
target_path: "/home/kyle/repos/my-project"
```

```
agent-farm:chunked_write
output_path: "/tmp/security_guide.md"
spec: "Linux server security hardening guide"
num_sections: 5
doc_type: "markdown"
```

Output: 9KB+ document with 5 coherent sections

```
agent-farm:chunked_code_gen
output_path: "/tmp/utils.py"
spec: "File utilities: read, write, copy, delete"
language: "python"
num_functions: 4
```

Output: Complete Python module with 4 functions

```
agent-farm:chunked_analysis
target: "/home/kyle/repos/project"
question: "What are the architectural patterns?"
num_perspectives: 4
```

Output: Analysis from Structure, Patterns, Quality, Performance perspectives
```bash
cd ~/repos/agent-farm
uv venv
uv pip install -e .
```
```json
{
  "mcpServers": {
    "agent-farm": {
      "command": "/home/kyle/repos/agent-farm/.venv/bin/python",
      "args": ["-m", "agent_farm.server"]
    }
  }
}
```
Add this to claude_desktop_config.json and restart Claude Desktop.