Enables AI assistants to manage Nutanix infrastructure via Prism Central and Prism Element APIs, including VM operations, cluster management, networking, and as-built report generation.
> [!WARNING]
> Use at your own risk. MCP servers grant AI models the ability to execute actions against your infrastructure. AI-driven management of production environments carries inherent risk — models can misinterpret intent, hallucinate parameters, or trigger destructive operations. This software is provided "as is" without warranty of any kind. The authors accept no liability for data loss, downtime, or any damages arising from use of this tool. Always review AI-proposed actions before execution and maintain proper backups.
An MCP (Model Context Protocol) server that exposes Nutanix Prism Central and Prism Element APIs as tools for AI assistants like GitHub Copilot, Claude, and others.
| Tool | Description |
|---|---|
| `list_vms` | List all VMs with OData filtering (auto-paginates) |
| `get_vm` | Get full VM config — CPU, memory, disks, NICs |
| `power_on_vm` | Power on a VM |
| `power_off_vm` | Power off a VM (ACPI guest shutdown or force) |
| `create_vm` | Create a new VM with name, cluster, CPU, memory, disk |
| `update_vm` | Update VM config — CPU, memory, name, description |
| `delete_vm` | Permanently delete a VM (requires confirmation) |
| `clone_vm` | Clone a VM with a new name |
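The auto-pagination noted for `list_vms` can be sketched as a loop that requests page after page until a short page signals the end. Everything below is illustrative: `fake_fetch` stands in for the real HTTP call, and the endpoint and `$page`/`$limit` parameter names in the comment are assumptions about the v4 API, not this server's code.

```python
from typing import Callable

def list_all(fetch_page: Callable[[int, int], list[dict]], limit: int = 50) -> list[dict]:
    """Collect every entity by requesting successive pages until one comes back short."""
    entities: list[dict] = []
    page = 0
    while True:
        batch = fetch_page(page, limit)
        entities.extend(batch)
        if len(batch) < limit:  # last (possibly empty) page reached
            return entities
        page += 1

# Stub standing in for a real Prism Central request such as
# GET /api/vmm/v4.0/ahv/config/vms?$page={page}&$limit={limit}
def fake_fetch(page: int, limit: int) -> list[dict]:
    vms = [{"name": f"vm-{i}"} for i in range(120)]
    return vms[page * limit : (page + 1) * limit]

print(len(list_all(fake_fetch)))  # 120
```

The short-page check doubles as the termination condition, so no separate "total count" request is needed.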
| Tool | Description |
|---|---|
| `snapshot_vm` | Create an on-demand recovery point of a VM |
| `list_vm_snapshots` | List all recovery points for a VM |
| `restore_vm_snapshot` | Restore a VM to a previous recovery point |
| Tool | Description |
|---|---|
| `list_clusters` | List all registered Nutanix clusters |
| `get_cluster` | Get cluster config, network, storage, and health details |
| `list_hosts` | List all hypervisor hosts across clusters |
| `get_host` | Get host hardware specs, hypervisor info, and resource usage |
| `list_storage_containers` | List storage containers across clusters |
| Tool | Description |
|---|---|
| `list_subnets` | List subnets/VLANs with CIDR, VLAN ID, and cluster |
| `get_subnet` | Get subnet details including IP pools and DHCP config |
| `list_images` | List disk images (ISOs, QCOW2) in the image library |
| `get_image` | Get image details — size, type, source, cluster placement |
| `list_categories` | List all category keys and values |
| `get_category` | Get all values for a specific category key |
| Tool | Description |
|---|---|
| `assign_category` | Tag a VM with a category key:value pair |
| `remove_category` | Remove a category assignment from a VM |
| `list_entities_by_category` | Find all VMs tagged with a specific category |
| Tool | Description |
|---|---|
| `list_alerts` | List all alerts from Prism Central |
| `get_alert` | Get full alert details — entities, resolution guidance |
| `acknowledge_alert` | Acknowledge or resolve an alert |
| `list_tasks` | List recent async tasks with status |
| `get_task` | Get task completion status and error details |
| Tool | Description |
|---|---|
| `pe_get_cluster_info` | Cluster AOS version, capacity, and health |
| `pe_list_hosts` | Hosts with hardware specs and CVM info |
| `pe_get_host_disks` | Per-host physical disk inventory (model, serial, firmware, tier) |
| `pe_get_host_nics` | Per-host NIC details — speed, link state, MAC, LLDP |
| `pe_list_cvms` | Controller VMs — IP, memory, power state |
| `pe_get_cluster_health` | Data resiliency and fault tolerance status |
| `pe_list_health_checks` | NCC-style health check results |
| `pe_list_alerts` | Active/resolved alerts on a PE cluster |
| Tool | Description |
|---|---|
| `pe_list_containers` | Storage containers with replication factor and policies |
| `pe_list_storage_pools` | Storage pools and disk composition |
| `pe_list_disks` | Physical disk inventory — type, status, capacity |
| `pe_list_volume_groups` | Volume groups — iSCSI IQN, attached VMs, CHAP |
| `pe_get_volume_group` | Detailed volume group config |
| Tool | Description |
|---|---|
| `pe_list_vms` | VMs on a specific cluster |
| `pe_list_networks` | VLANs — managed/unmanaged, IP pool config |
| `pe_list_images` | Disk images and ISOs on a cluster |
| Tool | Description |
|---|---|
| `pe_list_protection_domains` | Protection domains — schedules, replication state |
| `pe_get_protection_domain` | Detailed PD config — consistency groups, VMs, schedules |
| `pe_list_snapshots` | Snapshots for a protection domain |
| `pe_list_remote_sites` | DR partner clusters — addresses, capabilities |
| `pe_get_replication_status` | Active replication progress, lag, and bandwidth |
| `pe_list_dr_snapshots` | DR snapshots across remote sites |
| `pe_list_pd_replications` | All active PD replications cluster-wide |
| `pe_list_unprotected_vms` | VMs not in any protection domain (compliance gaps) |
| Tool | Description |
|---|---|
| `pe_get_auth_config` | Auth types, directory services (LDAP/AD) |
| `pe_get_smtp_config` | SMTP relay server configuration |
| `pe_get_snmp_config` | SNMP traps, users, and community strings |
| `pe_get_syslog_config` | Remote syslog targets and severity levels |
| `pe_get_alert_email_config` | Alert email recipients and notification rules |
| `pe_get_nfs_whitelists` | Global NFS export ACLs |
| `pe_get_licensing_info` | License type (Starter/Pro/Ultimate) and features |
| `pe_get_metro_witness` | Metro Availability witness server config |
| Tool | Description |
|---|---|
| `generate_asbuilt` | Generate a comprehensive infrastructure report from a PE cluster — overview, system config, hosts, storage, VMs, networks, data protection, alerts, health checks, and Mermaid topology diagram |
| `export_asbuilt_html` | Convert AsBuilt Markdown to self-contained HTML with interactive TOC sidebar and print-optimized CSS for PDF export |
| `get_project_architecture` | Get the Nutanix MCP Server project architecture documentation |
AsBuilt reports include 9 sections: overview, system, hosts (with per-host disk inventory), VMs, networks, storage, data protection (with remote sites and unprotected VM detection), alerts, and health checks. Hypervisor names are mapped automatically (kKvm → AHV). The HTML export features an interactive table of contents with scroll-spy that is hidden when printing to PDF.
The server exposes resources via nutanix:// URIs, allowing LLMs to browse
entities without explicit tool calls:
| URI Pattern | Description |
|---|---|
| `nutanix://vms` | Browse all VMs |
| `nutanix://vms/{uuid}` | Get a specific VM |
| `nutanix://clusters` | Browse all clusters |
| `nutanix://clusters/{uuid}` | Get a specific cluster |
| `nutanix://hosts/{uuid}` | Get a specific host |
| `nutanix://subnets/{uuid}` | Get a specific subnet |
| `nutanix://images/{uuid}` | Get a specific image |
| Prompt | Description |
|---|---|
| `set_credentials` | Interactive credential configuration (for clients without env var support) |
| `nutanix_overview` | Guided environment overview — clusters, hosts, storage, alerts |
```bash
cd mcp/nutanix-mcp-server
pip install -e .
```

Or with dev dependencies:

```bash
pip install -e ".[dev]"
```
Copy `.env.example` to `.env` and fill in your credentials:

```bash
cp .env.example .env
```

```bash
NUTANIX_HOST=your-prism-central.example.com
NUTANIX_PORT=9440
NUTANIX_USERNAME=your-username
NUTANIX_PASSWORD=your-password
NUTANIX_VERIFY_SSL=true
NUTANIX_TIMEOUT=30
```
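Loading these variables could look like the sketch below — a hypothetical `NutanixSettings` helper with the documented defaults baked in, not the server's actual configuration code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NutanixSettings:
    host: str
    port: int = 9440
    username: str = ""
    password: str = ""
    verify_ssl: bool = True
    timeout: int = 30

def settings_from_env(env: dict[str, str]) -> NutanixSettings:
    """Read NUTANIX_* variables, falling back to the documented defaults."""
    return NutanixSettings(
        host=env["NUTANIX_HOST"],  # required; there is no sensible default
        port=int(env.get("NUTANIX_PORT", "9440")),
        username=env.get("NUTANIX_USERNAME", ""),
        password=env.get("NUTANIX_PASSWORD", ""),
        verify_ssl=env.get("NUTANIX_VERIFY_SSL", "true").lower() == "true",
        timeout=int(env.get("NUTANIX_TIMEOUT", "30")),
    )

cfg = settings_from_env({"NUTANIX_HOST": "pc.example.com", "NUTANIX_VERIFY_SSL": "false"})
print(cfg.port, cfg.verify_ssl)  # 9440 False
```

In the real process you would pass `os.environ` instead of a literal dict.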
```bash
nutanix-mcp
```

Or directly:

```bash
python -m nutanix_mcp
```
This server uses stdio transport — it communicates via stdin/stdout. Each client configures a command to launch the server process.
Tip: Store credentials in environment variables or a `.env` file, never in config files committed to source control.
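Under the hood, stdio transport means newline-delimited JSON-RPC 2.0 messages over the process's stdin/stdout. You never need to write this framing yourself — clients handle it — but a minimal sketch makes the wire format concrete (the `protocolVersion` value is one published MCP revision, used here purely for illustration):

```python
import json

def jsonrpc_message(method: str, params: dict, msg_id: int) -> str:
    """Frame one JSON-RPC 2.0 request as a single newline-terminated line,
    which is how MCP's stdio transport delimits messages."""
    return json.dumps(
        {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params}
    ) + "\n"

# The first message any MCP client sends after launching the server process:
line = jsonrpc_message(
    "initialize",
    {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1"},
    },
    msg_id=1,
)
msg = json.loads(line)
print(msg["method"])  # initialize
```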
Add the server to your project with the claude mcp add command:
```bash
claude mcp add nutanix -- python -m nutanix_mcp
```
Or manually create/edit .mcp.json in your project root:
```json
{
  "mcpServers": {
    "nutanix": {
      "type": "stdio",
      "command": "python",
      "args": ["-m", "nutanix_mcp"],
      "cwd": "/path/to/mcp/nutanix-mcp-server",
      "env": {
        "NUTANIX_HOST": "your-prism-central.example.com",
        "NUTANIX_USERNAME": "your-username",
        "NUTANIX_PASSWORD": "your-password",
        "NUTANIX_VERIFY_SSL": "true"
      }
    }
  }
}
```
For user-wide availability (all projects), add to ~/.claude.json instead.
Edit the config file at:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`

```json
{
  "mcpServers": {
    "nutanix": {
      "type": "stdio",
      "command": "python",
      "args": ["-m", "nutanix_mcp"],
      "cwd": "/path/to/mcp/nutanix-mcp-server",
      "env": {
        "NUTANIX_HOST": "your-prism-central.example.com",
        "NUTANIX_USERNAME": "your-username",
        "NUTANIX_PASSWORD": "your-password",
        "NUTANIX_VERIFY_SSL": "true"
      }
    }
  }
}
```
Restart Claude Desktop fully after editing.
Add to .vscode/mcp.json in your workspace:
```json
{
  "servers": {
    "nutanix": {
      "type": "stdio",
      "command": "python",
      "args": ["-m", "nutanix_mcp"],
      "cwd": "${workspaceFolder}/mcp/nutanix-mcp-server",
      "env": {
        "NUTANIX_HOST": "your-prism-central.example.com",
        "NUTANIX_USERNAME": "your-username",
        "NUTANIX_PASSWORD": "your-password",
        "NUTANIX_VERIFY_SSL": "true"
      }
    }
  }
}
```
Add to opencode.json (or opencode.jsonc) in your project root:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "nutanix": {
      "type": "local",
      "command": ["python", "-m", "nutanix_mcp"],
      "environment": {
        "NUTANIX_HOST": "your-prism-central.example.com",
        "NUTANIX_USERNAME": "your-username",
        "NUTANIX_PASSWORD": "your-password",
        "NUTANIX_VERIFY_SSL": "true"
      },
      "enabled": true
    }
  }
}
```
Note: OpenCode uses "command" as an array and "environment" instead of "env".
The Docker MCP Gateway can proxy this server inside a container. Two approaches:
Build a container image and reference it in your MCP client config:
```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY mcp/nutanix-mcp-server/ .
RUN pip install --no-cache-dir -e .
CMD ["python", "-m", "nutanix_mcp"]
```
Then in any MCP client config:
```json
{
  "mcpServers": {
    "nutanix": {
      "type": "stdio",
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "NUTANIX_HOST=your-prism-central.example.com",
        "-e", "NUTANIX_USERNAME=your-username",
        "-e", "NUTANIX_PASSWORD=your-password",
        "-e", "NUTANIX_VERIFY_SSL=true",
        "nutanix-mcp-server"
      ]
    }
  }
}
```
If you have Docker Desktop with the MCP Toolkit:
```bash
docker mcp gateway run
```
Configure the gateway profile to include the nutanix server. The gateway then exposes all registered MCP servers as a single unified endpoint.
In your AI client, point to the gateway:
```json
{
  "mcpServers": {
    "MCP_DOCKER": {
      "command": "docker",
      "args": ["mcp", "gateway", "run"]
    }
  }
}
```
The gateway handles routing, lifecycle management, and credential isolation.
| Version | Endpoint Pattern | Use Case |
|---|---|---|
| v4 (preferred) | `/api/{namespace}/v4.0/{path}` | VMs, clusters, hosts, networking |
| v3 (fallback) | `/api/nutanix/v3/{resource}/list` | Resources not yet in v4 |
| v2 (PE direct) | `https://{pe_ip}:9440/api/nutanix/v2.0/{resource}` | Per-cluster storage, disks, alerts |
Use list_clusters to find cluster UUIDs, then list_hosts to find CVM IPs.
Those CVM IPs can be used as pe_host in the Prism Element tools.
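That discovery flow, sketched with a hypothetical `call_tool` helper and invented result shapes (real tool output will differ):

```python
# Hypothetical stand-in for however your MCP client invokes tools; the tool
# names match the tables above, but the data below is made up.
def call_tool(name: str) -> list[dict]:
    fake = {
        "list_clusters": [{"name": "cluster-a", "uuid": "c-1"}],
        "list_hosts": [{"cluster_uuid": "c-1", "cvm_ip": "10.0.0.31"}],
    }
    return fake[name]

# 1. Find cluster UUIDs, 2. find that cluster's CVM IPs, 3. use one as pe_host.
clusters = call_tool("list_clusters")
hosts = [h for h in call_tool("list_hosts") if h["cluster_uuid"] == clusters[0]["uuid"]]
pe_host = hosts[0]["cvm_ip"]
print(pe_host)  # 10.0.0.31
```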
```bash
# Lint
ruff check src/

# Type check
mypy src/

# Test
pytest
```