Enables AI assistants to interact with VAST Data clusters for monitoring, listing, and management operations. It provides both read-only and read-write modes for cluster and tenant administration tasks.
VASTOps MCP Server is a Model Context Protocol (MCP) server for VAST Data administration tasks. It provides AI assistants with tools to interact with VAST clusters for monitoring, listing, and management operations, and supports both cluster and tenant administrators.

Install vastops-mcp:
# Install from PyPI
pip install vastops-mcp
Configure your VAST cluster connection:
# If installed via pip
vastops-mcp setup
This will prompt you for:
- Cluster address (IP, FQDN, or URL like `https://host:port`)
- Username and password
- Tenant (for tenant admins)
- Tenant (for super admins - which tenant context to use)
Use mcpsetup to get setup instructions for common AI assistant tools:
# Generate the configuration for popular AI assistants (built-in support for cursor, claude-desktop, windsurf, vscode)
vastops-mcp mcpsetup vscode
🔧 Configuring MCP server for: vscode
Detected command: vastops-mcp
Detected args: ['mcp']
📋 VSCode Configuration Instructions
Config file location: /Users/user/.vscode/mcp.json
Create a new file if not exists, or add the VASTOps MCP entry to the existing 'servers' section:
{
  "servers": {
    "VASTOps MCP": {
      "command": "vastops-mcp",
      "args": [
        "mcp"
      ]
    }
  }
}
📝 Next steps:
1. Edit or create the config file at the location shown above
2. Restart VSCode
3. The MCP server should be available in VSCode's MCP tools
4. Test by asking VSCode to list VAST clusters
Note: Add the --read-write flag as a second argument to be able to make updates in VAST clusters.
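For example, a read-write variant of the VSCode entry shown above would add the flag to args (this mirrors the documented config; only the extra argument changes):

```json
{
  "servers": {
    "VASTOps MCP": {
      "command": "vastops-mcp",
      "args": [
        "mcp",
        "--read-write"
      ]
    }
  }
}
```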
List all VAST clusters
List all views on cluster cluster1
Show me all tenants across all clusters
Create bandwidth and iops graph for cluster1 over the last hour
Create dataflow diagram for cluster1 for /path view on the tenant3 tenant for the last hour
Show me dataflow diagram for 172.21.224.139 on cluster1
Show me the hardware topology for cluster cluster1
Are there any issues with my configured data protection relationships?
Create mini support bundle on cluster1 and name it bundle1. Timeframe should be yesterday at midnight for 4m. Generate it only for cnodes prefixed by cnode-128 and upload it to support without private data.
Find all users prefixed with "s3" on cluster cluster1 tenant tenant1
Are there any critical alerts on my clusters that were not acknowledged?
List all snapshots for view path /data/app1 on cluster cluster1 tenant tenant1
Show me all quotas configured for tenant tenant1 on cluster cluster1
Get performance metrics for cnodes on cluster cluster1 over the last 7 days
Show me all view policies on cluster cluster1 that support S3
First, get all available clusters. Then compare views with path "/" across all clusters, showing capacity information
Show me all tenants on cluster cluster1, for each tenant show me the 5 views with the highest used capacity
Get performance metrics for cluster cluster1, then get metrics for all cnodes, and finally get metrics for top 3 views. Show me a summary of IOPS and bandwidth for each object type
Find all views where logical used capacity is greater than 1TB. For each of these views, get their performance metrics over the last 24 hours and show which views have the highest IOPS
Create a new NFS view on cluster cluster1 with path /data/newview in tenant tenant1
Create a view on cluster cluster1 with path /shared/data in tenant tenant1 that supports both NFS and S3 protocols
Create a snapshot named "backup-2024-01-15" for view path /data/app1 on cluster cluster1, tenant tenant1 and keep it for 24h
Create a clone from snapshot "backup-2024-01-15" of view /data/app1. The clone should be at path /data/app1-clone in tenant tenant1 on cluster cluster1
Set a hard quota of 10TB for view path /data/app1 on cluster cluster1, tenant tenant1
Create 3 new views for VMware based on a template.
Create an indestructible snapshot named restore-point_<view name> for all VMware views on cluster1
Refresh a clone from most recent snapshot of view /data/app1 at path /data/app1-clone in tenant tenant1 on cluster cluster1
macOS:
brew install jq
Linux (Ubuntu/Debian):
sudo apt-get install jq
Linux (RHEL/CentOS):
sudo yum install jq
pip install vastops-mcp
For a full step-by-step walkthrough (prereqs, vastops-mcp setup, wiring
into Claude Desktop / Claude Code, smoke tests), see
docs/user-guide/installation.md.
You can test the functions from the command line:
vastops-mcp list
# Or
./vastops-mcp.sh list
# List views
vastops-mcp list views --cluster vast3115-var
# List tenants with JSON output
vastops-mcp list tenants --format json
# List views with filters
vastops-mcp list views --cluster cluster1 --tenant mytenant
# Save output to file
vastops-mcp list views --cluster cluster1 --output views.csv --format csv
# List clusters
vastops-mcp clusters
# List performance metrics
vastops-mcp performance --object-name tenant --cluster vast3115-var
# Query users
vastops-mcp query-users --cluster vast3115-var --prefix user
# Create a view
vastops-mcp create view --cluster cluster1 --path /myview --protocols NFS
# Create a view from template
vastops-mcp create view-from-template --cluster cluster1 --template-name mytemplate
# Create a snapshot
vastops-mcp create snapshot --cluster cluster1 --path /myview --name mysnapshot
# Create a clone
vastops-mcp create clone --cluster cluster1 --source-path /myview --source-snapshot mysnapshot --destination-path /myclone
# Create or update quota
vastops-mcp create quota --cluster cluster1 --path /myview --hard-limit 10GB
- `table` (default): Human-readable table format
- `json`: JSON output
- `csv`: CSV format

Additional list tools are automatically registered from the YAML template file located at ~/.vastops-mcp/mcp_list_cmds_template.yaml. These tools follow the naming pattern list_{command_name}_vast.
Note: Commands with create_mcp_tool: false in the YAML template will not be registered as standalone MCP tools. They can still be used in merged commands and via CLI, but won't appear in the MCP tool list.
The following create tools are available when the MCP server is started with --read-write:
Note: Create tools are always registered (visible to LLMs) but will raise an error if called when the server is not in read-write mode.
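That pattern (tools always registered, but guarded at call time) can be sketched as follows. This is an illustrative Python sketch with hypothetical names, not the server's actual code:

```python
READ_WRITE = False  # hypothetically set True when started with --read-write


def requires_read_write(func):
    """Register the tool unconditionally, but fail fast in read-only mode."""
    def wrapper(*args, **kwargs):
        if not READ_WRITE:
            raise PermissionError(
                f"{func.__name__} requires the server to run with --read-write")
        return func(*args, **kwargs)
    return wrapper


@requires_read_write
def create_view(cluster: str, path: str) -> str:
    # Placeholder body; the real tool would call the VAST API.
    return f"created {path} on {cluster}"


try:
    create_view("cluster1", "/data/newview")
except PermissionError as exc:
    print(exc)  # in read-only mode, the error is relayed back to the LLM user
```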
- ~/.vastops-mcp/config.json (cluster configurations, no env var override)
- mcp_list_cmds_template.yaml in project root (shipped template)
- ~/.vastops-mcp/mcp_list_template_modifications.yaml (user customizations)
- ~/.vastops-mcp/view_templates.json (for view template-based creation). This file can be modified based on the template example view_templates_example.yaml in project root (shipped template)
- ~/.vastops-mcp/vastops_mcp.log

Template file paths can be overridden using environment variables:
- VASTOPS_MCP_DEFAULT_TEMPLATE_FILE: Override default template file path
- VASTOPS_MCP_TEMPLATE_MODIFICATIONS_FILE: Override template modifications file path
- VASTOPS_MCP_VIEW_TEMPLATE_FILE: Override view templates file path

Example:
export VASTOPS_MCP_DEFAULT_TEMPLATE_FILE=/custom/path/default_template.yaml
export VASTOPS_MCP_TEMPLATE_MODIFICATIONS_FILE=/custom/path/modifications.yaml
export VASTOPS_MCP_VIEW_TEMPLATE_FILE=/custom/path/view_templates.json
vastops-mcp list views
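The override behavior described above ("env var wins, otherwise the documented default") can be sketched like this. This is an assumption about how the overrides are resolved, not the server's actual implementation:

```python
import os
from pathlib import Path


def template_path(env_var: str, default: str) -> Path:
    """Return the env var override if set, otherwise the default path."""
    return Path(os.environ.get(env_var) or default)


# Default taken from the file list above; resolution logic is illustrative.
path = template_path("VASTOPS_MCP_VIEW_TEMPLATE_FILE",
                     str(Path.home() / ".vastops-mcp" / "view_templates.json"))
print(path)
```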
The server supports HTTP/HTTPS and SOCKS proxies for reaching VAST clusters through corporate or enterprise network environments. Proxies are configured via standard environment variables:
- HTTPS_PROXY or https_proxy — highest precedence (recommended for VAST since the API uses HTTPS)
- HTTP_PROXY or http_proxy — fallback
- ALL_PROXY or all_proxy — catch-all, recommended for SOCKS proxies

Bypassing the proxy (NO_PROXY):
Use NO_PROXY (or no_proxy) to list hosts that should connect directly, without going through the proxy. Separate multiple entries with commas. A wildcard * bypasses the proxy for all hosts.
# Skip proxy for internal VAST clusters
export NO_PROXY=vast-cluster1.internal,10.0.0.5
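The matching rules typically applied by HTTP clients for NO_PROXY (exact host, domain suffix, or the * wildcard) can be sketched as below. This is a simplified illustration, not the server's actual proxy logic:

```python
import os


def bypass_proxy(host: str) -> bool:
    """Return True if `host` should connect directly per NO_PROXY."""
    no_proxy = os.environ.get("NO_PROXY") or os.environ.get("no_proxy") or ""
    for entry in (e.strip() for e in no_proxy.split(",") if e.strip()):
        if entry == "*":
            return True  # wildcard: bypass the proxy for every host
        if host == entry or host.endswith("." + entry.lstrip(".")):
            return True  # exact match or domain-suffix match
    return False


os.environ["NO_PROXY"] = "vast-cluster1.internal,10.0.0.5"
print(bypass_proxy("vast-cluster1.internal"))  # True
print(bypass_proxy("proxy.example.com"))       # False
```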
HTTP/HTTPS proxy examples:
# Basic HTTP proxy
export HTTPS_PROXY=http://proxy.example.com:8080
# Proxy with authentication
export HTTPS_PROXY=http://username:[email protected]:8080
# Run commands as normal — proxy is picked up automatically
vastops-mcp clusters
vastops-mcp list views --cluster cluster1
SOCKS proxy support:
SOCKS proxies (SOCKS4, SOCKS4a, SOCKS5, SOCKS5h) are supported but require the optional PySocks library:
# Install PySocks for SOCKS proxy support
pip install 'vastops-mcp[socks]'
# — or directly —
pip install pysocks
# SOCKS5 proxy (client-side DNS resolution)
export ALL_PROXY=socks5://proxy.example.com:1080
# SOCKS5h proxy (remote DNS resolution — recommended for internal hostnames)
export ALL_PROXY=socks5h://proxy.example.com:1080
# SOCKS5 with authentication
export ALL_PROXY=socks5h://username:[email protected]:1080
# SOCKS4 proxy
export ALL_PROXY=socks4://proxy.example.com:1080
Proxy types at a glance:
| Type | Description | Env var | Dependency |
|---|---|---|---|
| HTTP/HTTPS | Standard corporate proxies | HTTPS_PROXY / HTTP_PROXY | Built-in |
| SOCKS5 | SOCKS5 with client-side DNS | ALL_PROXY | PySocks |
| SOCKS5h | SOCKS5 with remote DNS (recommended for privacy) | ALL_PROXY | PySocks |
| SOCKS4 | Legacy SOCKS4 protocol | ALL_PROXY | PySocks |
| SOCKS4a | SOCKS4 with remote DNS | ALL_PROXY | PySocks |
Note: Any proxy env var will work with any proxy type, but using ALL_PROXY for SOCKS proxies follows standard conventions and keeps your configuration clear.
The API whitelist provides security by restricting which VAST API endpoints and HTTP methods can be accessed. It is configured in the YAML template file's api_whitelist section.
- Simple format (- views): Defaults to GET only
- With methods (- views: [post]): Allows GET + the specified methods
  - views: [post] enables both GET and POST for the views endpoint
  - quotas: [post, patch] enables GET, POST, and PATCH for the quotas endpoint

The whitelist is defined in the YAML template file:
api_whitelist:
  # Simple format - GET only
  - clusters
  - tenants
  # With methods - GET + specified methods
  - views: [post]          # GET + POST for create operations
  - snapshots: [post]      # GET + POST for create operations
  - quotas: [post, patch]  # GET + POST + PATCH for create/update operations
When a parent endpoint is whitelisted (e.g., monitors), all sub-endpoints are allowed (e.g., monitors.ad_hoc_query). All API calls are validated against the whitelist. This ensures that only GET is permitted by default and that write methods must be explicitly enabled per endpoint (e.g., - views: [post]).

The YAML template file defines dynamic list functions. See TEMPLATE_STRUCTURE.md for complete documentation.
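A minimal sketch of how such a whitelist check could behave (GET implied by default, extra methods opt-in, parent entries covering sub-endpoints). This is illustrative only, not the server's actual validation code:

```python
def is_allowed(whitelist: dict, endpoint: str, method: str) -> bool:
    """whitelist maps endpoint -> extra methods; GET is always implied.

    A whitelisted parent (e.g. 'monitors') also covers sub-endpoints
    such as 'monitors.ad_hoc_query'.
    """
    method = method.upper()
    for name, extra in whitelist.items():
        if endpoint == name or endpoint.startswith(name + "."):
            return method == "GET" or method in (m.upper() for m in extra)
    return False


# Parsed form of the YAML example above: simple entries carry no extra methods.
wl = {"clusters": [], "tenants": [], "views": ["post"], "quotas": ["post", "patch"]}
print(is_allowed(wl, "views", "POST"))    # True  (explicitly enabled)
print(is_allowed(wl, "tenants", "GET"))   # True  (GET is always implied)
print(is_allowed(wl, "tenants", "DELETE"))  # False (not whitelisted)
```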
Each command in the YAML file defines its behavior, including field references using the $field_name syntax. See TEMPLATE_STRUCTURE.md for detailed examples and best practices.
The server uses:
The server includes create functions for creating VAST objects. These functions are available when the MCP server is started with the --read-write flag:
Important: Create functions require the MCP server to be started with --read-write flag. If called in read-only mode, the LLM user will be notified that read-write mode is required.
Security: All create functions use API whitelisting to ensure only allowed endpoints and HTTP methods can be accessed. See API Whitelist section for details.
VASTOps MCP Server welcomes questions, feedback, and feature requests. Join the conversation on https://community.vastdata.com/
Apache License 2.0
See LICENSE file for details.
Haim Marko [email protected]
Add this to claude_desktop_config.json and restart Claude Desktop.
{
  "mcpServers": {
    "vastops-mcp-server": {
      "command": "vastops-mcp",
      "args": [
        "mcp"
      ]
    }
  }
}