Integrates the Reachy Mini robot or its simulation with Claude Code to provide interactive physical feedback through emotions, speech, and celebratory animations. It features advanced capabilities like sentiment analysis, semantic problem search, and cross-project memory to enhance the developer experience.
MCP server that brings Reachy Mini to life as your coding companion in Claude Code.
Reachy reacts to your coding sessions with emotions, speech, and celebratory dances - making coding more interactive and fun!
| Feature | Basic | + LLM | + Memory |
|---|---|---|---|
| Robot emotions & animations | ✅ | ✅ | ✅ |
| Text-to-speech (Piper TTS) | ✅ | ✅ | ✅ |
| Session tracking (SQLite) | ✅ | ✅ | ✅ |
| Smart sentiment analysis | ❌ | ✅ | ✅ |
| AI-generated responses | ❌ | ✅ | ✅ |
| Semantic problem search | ❌ | ❌ | ✅ |
| Cross-project memory | ❌ | ❌ | ✅ |
| Platform | Basic | LLM (MLX) | LLM (Ollama) | Memory |
|---|---|---|---|---|
| macOS Apple Silicon | ✅ | ✅ | ✅ | ✅ |
| macOS Intel | ✅ | ❌ | ✅ | ✅ |
| Linux | ✅ | ❌ | ✅ | ✅ |
| Windows | ⚠️ Experimental | ❌ | ✅ | ✅ |
Install the package:
pip install reachy-claude-mcp
Start Reachy Mini simulation (if you don't have the physical robot):
# On macOS with Apple Silicon
mjpython -m reachy_mini.daemon.app.main --sim --scene minimal
# On other platforms
python -m reachy_mini.daemon.app.main --sim --scene minimal
Add to Claude Code (~/.mcp.json):
{
"mcpServers": {
"reachy-claude": {
"command": "reachy-claude"
}
}
}
Start Claude Code and Reachy will react to your coding!
(Optional) Add instructions for Claude - Copy examples/CLAUDE.md to your project root or ~/projects/CLAUDE.md. This teaches Claude when and how to use Reachy's tools effectively.
pip install reachy-claude-mcp
Without LLM features, Reachy uses keyword matching for sentiment - still works great!
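Conceptually, the keyword fallback looks like the sketch below. The word lists here are illustrative; the package's actual keyword lists may differ:

```python
# Illustrative keyword-based sentiment fallback (word sets are examples only).
POSITIVE = {"passed", "fixed", "success", "done", "works", "resolved"}
NEGATIVE = {"error", "failed", "exception", "broken", "crash", "bug"}

def keyword_sentiment(text: str) -> str:
    """Classify text by counting positive vs. negative keywords."""
    words = set(text.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```

For example, `keyword_sentiment("all tests passed")` returns `"positive"`.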
Option A: MLX (Apple Silicon only - fastest)
pip install "reachy-claude-mcp[llm]"
Option B: Ollama (cross-platform)
# Install Ollama from https://ollama.ai
ollama pull qwen2.5:1.5b
# Then just use the basic install - Ollama is auto-detected
pip install reachy-claude-mcp
The system automatically picks the best available backend: MLX → Ollama → keyword fallback.
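A minimal sketch of that fallback chain, assuming MLX is detected by module presence and Ollama by its default root endpoint (the real detection logic lives in `llm_backends.py` and may differ):

```python
import importlib.util
import urllib.request

def mlx_installed() -> bool:
    # MLX only installs on Apple Silicon, so module presence is the signal.
    return importlib.util.find_spec("mlx") is not None

def ollama_running(host: str = "http://localhost:11434") -> bool:
    # A running Ollama server answers plain GET requests at its root URL.
    try:
        with urllib.request.urlopen(host, timeout=1):
            return True
    except OSError:
        return False

def select_backend() -> str:
    """Priority order: MLX -> Ollama -> keyword fallback."""
    if mlx_installed():
        return "mlx"
    if ollama_running():
        return "ollama"
    return "keyword"  # always available
```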
pip install "reachy-claude-mcp[all]"
# Start Qdrant vector database
docker compose up -d
git clone https://github.com/mchardysam/reachy-claude-mcp.git
cd reachy-claude-mcp
# Install with all features
pip install -e ".[all]"
# Or specific features
pip install -e ".[llm]" # MLX sentiment analysis (Apple Silicon)
pip install -e ".[memory]" # Qdrant vector store
You don't need a physical Reachy Mini to use this. The simulation works great:
# On macOS with Apple Silicon, use mjpython for the MuJoCo GUI
mjpython -m reachy_mini.daemon.app.main --sim --scene minimal
# On Linux/Windows/Intel Mac
python -m reachy_mini.daemon.app.main --sim --scene minimal
The simulation dashboard will be available at http://localhost:8000.
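To verify the daemon is actually up before starting Claude Code, you can probe the dashboard URL; a small sketch (the port is the default from above):

```python
import urllib.request

def dashboard_up(url: str = "http://localhost:8000") -> bool:
    """Return True if the simulation dashboard answers at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=2):
            return True
    except OSError:
        return False
```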
Follow the Reachy Mini setup guide to connect to your physical robot.
| Variable | Default | Description |
|---|---|---|
| `REACHY_CLAUDE_HOME` | `~/.reachy-claude` | Data directory for database, memory, voice models |
| **LLM Settings** | | |
| `REACHY_LLM_MODEL` | `mlx-community/Qwen2.5-1.5B-Instruct-4bit` | MLX model (Apple Silicon) |
| `REACHY_OLLAMA_HOST` | `http://localhost:11434` | Ollama server URL |
| `REACHY_OLLAMA_MODEL` | `qwen2.5:1.5b` | Ollama model name |
| **Memory Settings** | | |
| `REACHY_QDRANT_HOST` | `localhost` | Qdrant server host |
| `REACHY_QDRANT_PORT` | `6333` | Qdrant server port |
| **Voice Settings** | | |
| `REACHY_VOICE_MODEL` | (auto-download) | Path to custom Piper voice model |
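Reading these follows the usual env-with-default pattern; a sketch with defaults mirroring the table (the package's `config.py` may structure this differently):

```python
import os
from pathlib import Path

# Any REACHY_* environment variable overrides the default shown here.
HOME = Path(os.environ.get("REACHY_CLAUDE_HOME", "~/.reachy-claude")).expanduser()
OLLAMA_HOST = os.environ.get("REACHY_OLLAMA_HOST", "http://localhost:11434")
OLLAMA_MODEL = os.environ.get("REACHY_OLLAMA_MODEL", "qwen2.5:1.5b")
QDRANT_HOST = os.environ.get("REACHY_QDRANT_HOST", "localhost")
QDRANT_PORT = int(os.environ.get("REACHY_QDRANT_PORT", "6333"))
```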
| Tool | Description |
|---|---|
| `robot_respond` | Speak a summary (1-2 sentences) + play emotion |
| `robot_emotion` | Play emotion animation only |
| `robot_celebrate` | Success animation + excited speech |
| `robot_thinking` | Thinking/processing animation |
| `robot_wake_up` | Start-of-session greeting |
| `robot_sleep` | End-of-session goodbye |
| `robot_oops` | Error acknowledgment |
| `robot_acknowledge` | Quick nod without speaking |
| Tool | Description |
|---|---|
| `robot_dance` | Perform a dance move |
| `robot_dance_respond` | Dance while speaking |
| `robot_big_celebration` | Major milestone celebration |
| `robot_recovered` | After fixing a tricky bug |
| Tool | Description |
|---|---|
| `process_response` | Auto-analyze output and react appropriately |
| `get_project_greeting` | Context-aware greeting based on history |
| `find_similar_problem` | Search past solutions across projects |
| `store_solution` | Save problem-solution pairs for future use |
| `link_projects` | Mark relationships between projects |
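To illustrate the intended store-then-search flow, here is a stubbed, runnable sketch of `store_solution` and `find_similar_problem`. The argument names are assumptions, and the real tools do semantic search in Qdrant rather than the word matching used here:

```python
# Stub implementations -- illustrative only; the real tools are MCP calls.
_memory: list[dict] = []

def store_solution(problem: str, solution: str) -> None:
    _memory.append({"problem": problem, "solution": solution})

def find_similar_problem(query: str) -> list[dict]:
    # The real tool embeds the query and searches Qdrant; this stub
    # matches shared words so the flow can run anywhere.
    q = set(query.lower().split())
    return [m for m in _memory if q & set(m["problem"].lower().split())]

store_solution(
    problem="pydantic model rejects extra fields",
    solution="set model_config = ConfigDict(extra='ignore')",
)
matches = find_similar_problem("extra fields validation error")
```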
| Tool | Description |
|---|---|
| `list_robot_emotions` | List available emotions |
| `list_robot_dances` | List available dance moves |
| `get_robot_stats` | Memory statistics across sessions |
| `list_projects` | All projects Reachy remembers |
amazed, angry, anxious, attentive, bored, calm, celebrate, come, confused,
curious, default, disgusted, done, excited, exhausted, frustrated, go_away,
grateful, happy, helpful, inquiring, irritated, laugh, lonely, lost, loving,
neutral, no, oops, proud, relieved, sad, scared, serene, shy, sleep, success,
surprised, thinking, tired, uncertain, understanding, wake_up, welcoming, yes
- **Celebrations:** celebrate, victory, playful, party
- **Acknowledgments:** nod, agree, listening, acknowledge
- **Reactions:** mind_blown, recovered, fixed_it, whoa
- **Subtle:** idle, processing, waiting, thinking_dance
- **Expressive:** peek, glance, sharp, funky, smooth, spiral
Claude can call these tools during coding sessions:
# After completing a task
robot_respond(summary="Done! Fixed the type error.", emotion="happy")
# When celebrating a win
robot_celebrate(message="Tests are passing!")
# Big milestone
robot_big_celebration(message="All tests passing! Ship it!")
# When starting to think
robot_thinking()
# Session start
robot_wake_up(greeting="Good morning! Let's write some code!")
# Session end
robot_sleep(message="Great session! See you tomorrow.")
src/reachy_claude_mcp/
├── server.py # MCP server with tools
├── config.py # Centralized configuration
├── robot_controller.py # Reachy Mini control
├── tts.py # Piper TTS (cross-platform)
├── memory.py # Session memory manager
├── database.py # SQLite project tracking
├── vector_store.py # Qdrant semantic search
├── llm_backends.py # LLM backend abstraction (MLX, Ollama)
└── llm_analyzer.py # Sentiment analysis and summarization
The voice model auto-downloads on first use. If you have issues:
# Manual download
mkdir -p ~/.reachy-claude/voices
curl -L -o ~/.reachy-claude/voices/en_US-lessac-medium.onnx \
https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/lessac/medium/en_US-lessac-medium.onnx
curl -L -o ~/.reachy-claude/voices/en_US-lessac-medium.onnx.json \
https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/lessac/medium/en_US-lessac-medium.onnx.json
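After a manual download, you can sanity-check that both files Piper needs are in place; a small sketch assuming the default `REACHY_CLAUDE_HOME`:

```python
from pathlib import Path

VOICE_DIR = Path.home() / ".reachy-claude" / "voices"

def voice_model_present(name: str = "en_US-lessac-medium") -> bool:
    # Piper needs the .onnx model and its .onnx.json config side by side.
    onnx = VOICE_DIR / f"{name}.onnx"
    cfg = VOICE_DIR / f"{name}.onnx.json"
    return onnx.exists() and cfg.exists()
```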
Install PulseAudio or ALSA utilities:
# Ubuntu/Debian
sudo apt install pulseaudio-utils
# Fedora
sudo dnf install pulseaudio-utils
Check which backend is available:
- **MLX**: requires Apple Silicon and an install with `pip install "reachy-claude-mcp[llm]"`
- **Ollama**: make sure the server is running (`ollama serve`) and you've pulled a model (`ollama pull qwen2.5:1.5b`)

If neither is available, the system falls back to keyword-based sentiment detection (still works, just less smart).
Make sure Qdrant is running:
docker compose up -d
Or point to a remote Qdrant instance:
export REACHY_QDRANT_HOST=your-qdrant-server.com
If mjpython isn't found, you may need to install MuJoCo separately or use regular Python:
# Try without mjpython
python -m reachy_mini.daemon.app.main --sim --scene minimal
On Linux, you may need to set MUJOCO_GL=egl or MUJOCO_GL=osmesa for headless rendering.
MIT
Add this to claude_desktop_config.json and restart Claude Desktop.
{
"mcpServers": {
"reachy-claude-mcp": {
"command": "reachy-claude"
}
}
}