An MCP (Model Context Protocol) server designed to autonomously manage, launch, and shutdown local mlx-lm instances on Apple Silicon (Mac) environments.
This tool empowers AI agents (like Cline, Claude Desktop, etc.) to start local LLM servers on demand, check their status, prepare environments, and gracefully shut them down when no longer needed, saving system resources.
Runs an mlx-lm server with any supported model in the background.

Prerequisite: mlx-lm installed in your environment (pip install mlx-lm).

# Clone the repository
git clone https://github.com/YOUR_USERNAME/mcp-mlx-launcher.git
cd mcp-mlx-launcher
# Install dependencies
pip install -e .
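Before installing, you can sanity-check the prerequisites from Python. This is a stdlib-only sketch (not part of the launcher itself); `arm64` is the machine string macOS reports on Apple Silicon:

```python
import importlib.util
import platform

def verify_prerequisites():
    # mlx-lm requires an Apple Silicon Mac
    on_apple_silicon = platform.system() == "Darwin" and platform.machine() == "arm64"
    # Check whether mlx-lm is importable without actually loading it
    mlx_lm_installed = importlib.util.find_spec("mlx_lm") is not None
    return {"apple_silicon": on_apple_silicon, "mlx_lm": mlx_lm_installed}
```

If either check fails, launch_llm_server calls from the agent will not be able to start a model.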
To use this server with your MCP client (e.g., Claude Desktop or Cline), add the following to your MCP configuration file:
{
  "mcpServers": {
    "mcp-mlx-launcher": {
      "command": "python",
      "args": [
        "-m",
        "mcp_mlx_launcher.server"
      ]
    }
  }
}
Once connected, the MCP server provides the following tools to the AI agent:
- check_system_environment(): Diagnoses the current system environment, returning available unified memory (GB) and architecture details.
- check_llm_status(port: int): Returns true if a server is currently running on the specified port.
- list_running_servers(): Retrieves a list of all local LLM servers (ports and models) currently running in the background.
- search_mlx_models(search_query: str = "", limit: int = 10): Searches Hugging Face for available MLX-format models and lists their details (such as download count and model ID).
- download_model(model_name: str): Pre-downloads a specified MLX model from Hugging Face and caches it locally. Useful for preparing large models before launching.
- launch_llm_server(model_name: str, port: int, memory_requirement_gb: float = 4.0): Launches an mlx_lm.server instance in the background. Includes an optional memory-requirement check to prevent out-of-memory errors.
- restart_llm_server(port: int, model_name: str = None, memory_requirement_gb: float = 4.0): Gracefully stops the running server on the given port and restarts it. If model_name is omitted, it restarts with the currently loaded model.
- shutdown_llm_server(port: int): Gracefully terminates the running LLM server on the given port.

Add this to claude_desktop_config.json and restart Claude Desktop.
{
  "mcpServers": {
    "mcp-mlx-launcher": {
      "command": "python",
      "args": [
        "-m",
        "mcp_mlx_launcher.server"
      ]
    }
  }
}
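To illustrate what two of the tools above do under the hood, here is a minimal, hypothetical sketch (not the actual launcher source). check_system_environment() can be approximated with POSIX sysconf, and launch_llm_server() presumably shells out to mlx-lm's built-in server; the `--model` and `--port` flags come from the mlx_lm.server CLI:

```python
import os
import platform
import subprocess

def check_system_environment():
    # Total physical memory via POSIX sysconf (works on macOS and Linux);
    # on Apple Silicon this is the unified memory pool.
    total_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3
    return {"memory_gb": round(total_gb, 1), "arch": platform.machine()}

def launch_llm_server(model_name: str, port: int, dry_run: bool = False):
    # Hypothetical: spawn mlx-lm's OpenAI-compatible server as a
    # background process the launcher can later track and terminate.
    cmd = ["python", "-m", "mlx_lm.server", "--model", model_name, "--port", str(port)]
    if dry_run:
        return cmd  # return the command for inspection instead of spawning
    return subprocess.Popen(cmd)
```

The real tools add memory-requirement checks and process bookkeeping on top of this, but the core mechanism is simply process management around mlx_lm.server.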