Exposes the complete Zabbix API to MCP-compatible AI assistants, enabling natural language management of hosts, problems, and templates across multiple instances. It provides 220 tools for comprehensive monitoring and configuration with support for read-only modes and secure authentication.
Developed and maintained by initMAX and the community.
Overview: What is this? · Features
Install: Quick Start · Installation · Upgrade · Installer CLI
Configure: Reference · TLS / HTTPS · Token Budget
Use: Client Wizard · AI Clients · Prompts · Tools · Parameters · PDF Reports
More: Compatibility · Development · Related Projects · License · About initMAX
MCP (Model Context Protocol) is an open standard that lets AI assistants (ChatGPT, Claude, VS Code Copilot, JetBrains AI, Codex, and others) use external tools. This server exposes the entire Zabbix API as MCP tools — allowing any compatible AI assistant to query hosts, check problems, manage templates, acknowledge events, and perform any other Zabbix operation.
The server runs as a standalone HTTP service. AI clients connect to it over the network.
- Extension tools: graph_render (PNG export), anomaly_detect (z-score analysis), capacity_forecast (linear regression), report_generate (PDF reports), action_prepare/action_confirm (two-step write approval)
- MCP authentication tokens generated via the installer (generate-token), the admin portal, or config.toml
- Tool filtering by group (monitoring, alerts, users, extensions, etc.) or individual API prefix to reduce the tool catalog size and stay under LLM context limits (see Token Budget below)
- Compact responses by default — pass extend for full details
- zabbix_raw_api_call tool for any API method not explicitly defined

git clone https://github.com/initMAX/zabbix-mcp-server.git
cd zabbix-mcp-server
sudo ./deploy/install.sh
sudo nano /etc/zabbix-mcp/config.toml # fill in your Zabbix URL + API token
sudo systemctl start zabbix-mcp-server
sudo systemctl enable zabbix-mcp-server
Done. The server is running on http://127.0.0.1:8080/mcp.
Detailed guide: See INSTALL.md for step-by-step instructions for both on-prem (systemd) and Docker deployments, including uninstall, security checklist, and TLS setup.
git clone https://github.com/initMAX/zabbix-mcp-server.git
cd zabbix-mcp-server
sudo ./deploy/install.sh
The install script will:
- Create a dedicated system user zabbix-mcp (no login shell)
- Set up a Python virtualenv at /opt/zabbix-mcp/venv
- Install the config at /etc/zabbix-mcp/config.toml
- Register a systemd service (zabbix-mcp-server)
- Configure log rotation for /var/log/zabbix-mcp/*.log (daily, 30 days retention)

cd zabbix-mcp-server
sudo ./deploy/install.sh update
That's the whole procedure — no manual steps afterwards. From v1.15+ the update command handles git sync, package reinstall, systemd reload, validation, and service restart in one shot.
What update does:
- Syncs the git checkout (fetch + reset --hard origin/<branch> if history diverged), then re-executes itself from the updated script.
- Reinstalls the package into /opt/zabbix-mcp/venv.
- Validates config.toml — aborts if the config is invalid.
- Runs systemctl restart zabbix-mcp-server and performs an HTTP health check on the configured port.

What is preserved (never overwritten):
- /etc/zabbix-mcp/config.toml — your Zabbix URL, API token, MCP tokens, scopes, TLS settings, etc.
- Data under /var/lib/zabbix-mcp/.

You'll see ✓ Config preserved at /etc/zabbix-mcp/config.toml (not overwritten) during the update. Check config.example.toml afterwards for any new options added in the release.
PDF reporting during update:
By default update keeps your current reporting state — if PDF reporting was installed, it stays; if it wasn't, it is not added. To change that:
# Enable PDF reporting on an existing install that didn't have it
sudo ./deploy/install.sh update --with-reporting
# Update without PDF reporting dependencies (smaller install)
sudo ./deploy/install.sh update --without-reporting
The --with-reporting flag pulls in weasyprint, jinja2, and system libs (cairo, pango, gdk-pixbuf). See PDF Reports for what you get.
Upgrading from very old versions (pre-v1.15)? If update fails, do a one-time manual sync first:

git fetch origin && git reset --hard origin/main
sudo ./deploy/install.sh update

Troubleshooting: if something goes wrong, inspect:
sudo ./deploy/install.sh test-config   # validate config.toml
sudo journalctl -u zabbix-mcp-server -n 50 --no-pager
Edit the config file with your Zabbix server details:
sudo nano /etc/zabbix-mcp/config.toml
Minimal configuration - just fill in your Zabbix URL and API token:
[server]
transport = "http"
host = "127.0.0.1"
port = 8080
[zabbix.production]
url = "https://zabbix.example.com"
api_token = "your-api-token"
read_only = true
verify_ssl = true
All available options with detailed descriptions are documented in config.example.toml.
The config file contains two different types of tokens that serve different purposes:
┌────────────┐  MCP token (Bearer)  ┌──────────────────┐   api_token   ┌───────────────┐
│ MCP Client ├──────────────────────►│   MCP Server     ├──────────────►│ Zabbix Server │
│ (AI / IDE) │      (optional)      │   (zabbix-mcp)   │   (required)  │               │
└────────────┘                      │                  │               └───────────────┘
                                    │   Admin Portal   │
                                    │ :9090 (optional) │
                                    └──────────────────┘
api_token (in [zabbix.*]) — required — authenticates the MCP server to your Zabbix instance. This is a Zabbix API token that you create in the Zabbix frontend.
How to create one:
The token inherits the permissions of the Zabbix user it belongs to:
| Use case | Recommended Zabbix role | read_only config |
|---|---|---|
| Read-only monitoring (problems, hosts, dashboards) | User role with read access to needed host groups | true |
| Full management (create hosts, templates, triggers) | Admin role with read-write access to target host groups | false |
| Complete API access (users, settings, global scripts) | Super admin role | false |
Use the principle of least privilege — create a dedicated Zabbix user for the MCP server with only the permissions it needs.
MCP token (Bearer, in [tokens.*] or auth_token) — optional — protects the MCP server from unauthorized access. When configured, MCP clients must include a bearer token in every request: Authorization: Bearer <token>.
Recommended: Multi-token system (v1.16+) — generate tokens via installer, admin portal, or manually:
# Generate a token via installer
sudo ./deploy/install.sh generate-token claude
# Or generate manually
python3 -c "import secrets,hashlib; t='zmcp_'+secrets.token_hex(32); print(f'Token: {t}\nHash: sha256:{hashlib.sha256(t.encode()).hexdigest()}')"
Then add to config.toml:
[tokens.claude]
name = "Claude Code"
token_hash = "sha256:<paste hash>"
scopes = ["*"] # or specific: ["monitoring", "alerts"]
read_only = true
Each token can have independent scopes, IP restrictions, server binding, and expiry. See config.example.toml for all options.
Legacy: Single auth_token — still supported for backward compatibility:
[server]
auth_token = "your-secret-token-here"
Legacy auth_token is automatically migrated to [tokens.legacy] on the first v1.16 start.
When no tokens are configured, the server accepts unauthenticated connections. This is safe when bound to 127.0.0.1 (the default), but a token must be configured when the server is exposed to the network (0.0.0.0).
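For reference, the sha256:<hex> hash stored in config.toml can be reproduced with a few lines of Python — a minimal sketch of the hashing scheme, not the server's actual implementation (the hash_token helper is illustrative):

```python
import hashlib
import secrets

def hash_token(token: str) -> str:
    """Return the sha256:<hex> form that config.toml stores in token_hash."""
    return "sha256:" + hashlib.sha256(token.encode()).hexdigest()

# Generate a token the same way the installer's one-liner does
token = "zmcp_" + secrets.token_hex(32)
print(hash_token(token))  # paste into token_hash; give the raw token to the client
```

Because only the hash is stored, the server can verify a presented bearer token by hashing it and comparing, without ever keeping the plaintext on disk.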
You can connect to multiple Zabbix instances. Each tool has a server parameter to select which one to use (defaults to the first defined):
[zabbix.production]
url = "https://zabbix.example.com"
api_token = "prod-token"
read_only = true
[zabbix.staging]
url = "https://zabbix-staging.example.com"
api_token = "staging-token"
read_only = false
The first server (production) is used as the default. To target a specific instance, just mention it naturally in your prompt:
| Prompt | Target server | What happens |
|---|---|---|
| "Show me hosts with high CPU usage" | production (default) | Queries the first defined server automatically |
| "Show me hosts in our staging Zabbix instance" | staging | AI recognizes "staging" and routes to the matching server |
| "What are the top triggers in the last hour on production?" | production | Explicit mention of "production" confirms the default |
| "Compare trigger counts between production and staging" | both | AI queries both servers and combines the results |
| "Create a maintenance window on staging for tonight" | staging | Write operation routed to staging (requires read_only = false) |
| "Acknowledge all disaster problems on production" | production | Write operation on production (blocked if read_only = true) |
| "Export the 'Linux by Zabbix agent' template from production" | production | Read-only export, works even with read_only = true |
| "Import this template to staging" | staging | Write operation routed to staging |
| "Migrate host 'web-01' from production to staging" | both | AI reads from production, creates on staging |
The AI assistant maps your natural language to the correct server parameter automatically — no need to use technical syntax like server = "staging" in your prompts.
The MCP server itself is stateless — there is no shared state between instances. You can run multiple MCP server instances behind a reverse proxy (nginx, HAProxy, Caddy) using round-robin load balancing. Each instance connects to Zabbix independently.
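A round-robin setup might look like the following nginx sketch — the hostname, second instance port, and TLS details are placeholders for illustration, not part of the shipped deployment:

```nginx
upstream zabbix_mcp {
    # Two stateless MCP server instances, round-robin by default
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

server {
    listen 443 ssl;
    server_name zabbix-mcp.example.com;   # placeholder hostname
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location /mcp {
        proxy_pass http://zabbix_mcp;
        proxy_buffering off;        # needed for streaming (SSE / Streamable HTTP)
        proxy_read_timeout 3600s;   # keep long-lived sessions open
        proxy_set_header Host $host;
        proxy_set_header Authorization $http_authorization;  # pass the bearer token through
    }
}
```

Because the instances share no state, adding capacity is just another `server` line in the upstream block.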
Note: When your Zabbix runs in HA mode with multiple frontends, the API is available on each frontend. Currently the MCP server connects to a single url per [zabbix.<name>] entry. Multi-frontend failover (connecting to multiple URLs for the same Zabbix instance) is a planned feature.
sudo systemctl start zabbix-mcp-server
sudo systemctl enable zabbix-mcp-server
Verify the server is running:
sudo systemctl status zabbix-mcp-server
The server exposes two health check mechanisms:
| Method | Endpoint | Auth required | Returns |
|---|---|---|---|
| HTTP endpoint | GET /health | No | {"status": "ok"} — confirms the HTTP server is running |
| MCP tool | health_check | Yes (if auth_token set) | Full connectivity status of each configured Zabbix server |
Quick check from the command line:
# Simple HTTP health check (no authentication needed)
curl http://localhost:8080/health
# → {"status":"ok"}
Use the HTTP /health endpoint for load balancer probes, uptime monitoring, and container orchestration readiness checks. Use the health_check MCP tool for deeper diagnostics including Zabbix server connectivity.
The application writes to the log file configured in config.toml (log_file). Startup errors before logging initialization go to the systemd journal.
# Live log stream (application log)
tail -f /var/log/zabbix-mcp/server.log
# Via journalctl (startup errors + fallback)
sudo journalctl -u zabbix-mcp-server -f
Web-based administration portal for managing MCP tokens, users, report templates, and server settings. Runs on a separate port (default: 9090) — the MCP port (8080) serves only the MCP protocol, no admin UI.
[admin]
enabled = true
port = 9090
The installer generates an admin password automatically. To reset: sudo ./deploy/install.sh set-admin-password
Features:
| Feature | Description |
|---|---|
| Dashboard | System overview with MCP health status (green/red dot), Zabbix server connectivity with async token validation, uptime, recent audit activity |
| MCP Tokens | Create, revoke, per-token scope control (group + individual tool level), per-token Zabbix server binding, IP restrictions, expiry, read-only flag; legacy token migration with tooltip |
| Tool Exposure | Drag & drop bubble UI for enabling/disabling tools globally and per-token; groups + individual tool prefixes; globally disabled tools shown as locked in token scopes |
| Zabbix Servers | Connection status with API + token validation (detects "API online but token invalid"), version display, test connection, add/edit/delete |
| Client MCP Wizard (beta) | Point-and-click generator: pick a Zabbix server -> pick a token (or skip auth) -> pick one of 14 AI clients -> get a copy-paste-ready config snippet + per-client install instructions. Handles URL composition, 0.0.0.0 host override, transport picker, token substitution in the snippet and curl test. Feedback wanted - please report issues at https://github.com/initMAX/zabbix-mcp-server/issues. |
| Users | Admin / operator / viewer roles; password complexity enforcement (10+ chars, uppercase, digit) |
| Report Templates | Built-in + custom templates, GrapesJS visual editor with Zabbix blocks, HTML code editor, variable picker, server-side Jinja2 preview |
| Settings | All config.toml sections editable — MCP Server, TLS & Security, Tool Exposure (allowlist + denylist), PDF Reports & Branding, Admin Portal |
| Audit Log | All admin actions logged (JSON lines), filterable by date/action/user, CSV export |
| Restart Management | Blinking "Restart needed" badge in header after config changes; click to restart with progress bar polling until MCP is back online |
| Design | initMAX branded, dark/light/auto mode, Rubik font, instant CSS tooltips, responsive mobile layout |
All changes are written back to config.toml (preserving comments and formatting via tomlkit). Every config change triggers a "Restart needed" indicator.
Beta - introduced in v1.20 with 14 supported clients and wide test coverage, but we are still collecting real-world feedback on the per-client snippets, the OAuth-vs-Bearer handling (especially Claude Desktop + ChatGPT), and edge cases around Docker / NAT / reverse-proxy host overrides. Please report issues at https://github.com/initMAX/zabbix-mcp-server/issues so we can graduate it out of beta.
A standalone page at /wizard (sidebar entry Client MCP Wizard) that replaces hand-editing JSON / TOML config files for 14 AI clients. Single-page progressive disclosure in four steps:
- Server — lists the [zabbix.*] entries from config.toml.
- Token — shows tokens whose allowed_servers includes the chosen server, plus per-token scope chips (groups + individual prefixes), IP restrictions, and expiry. When the MCP server is in no-auth mode, a Continue without token card generates a tokenless snippet; when auth is enabled, the + Create new token card chains into /tokens/create?return_to=/wizard and comes back with the new token pre-filled via a URL fragment (never sent to the server).
- Client — pick one of the 14 supported AI clients.
- Snippet — host override when [server].host = 0.0.0.0 (Docker container IPs are de-emphasized with a manual-entry input on top), transport picker with a "detected" badge on the running transport, per-client install instructions on the left, syntax-highlighted snippet on the right with a copy-on-hover overlay icon, a download-as-file button, and a matching curl quick-test block. Both code blocks substitute a pasted Bearer token live so the operator can verify before copying.

Every snippet and instruction set comes from a single-source-of-truth catalog (src/zabbix_mcp/admin/wizard_clients.py) cross-checked against each client's current official documentation (Claude Desktop via mcp-remote wrapper for Bearer tokens, Claude Code with the --transport / --header flag rename from 2025, ChatGPT Developer-mode Apps & Connectors path, Gemini CLI httpUrl vs url key split, Goose Streamable HTTP YAML schema, Open WebUI native MCP since v0.6.31, etc.).
Port separation: the MCP endpoint (/mcp, /health) runs exclusively on the MCP port (default 8080). The admin portal runs exclusively on the admin port (default 9090). No admin API is exposed on the MCP port. Firewall both ports independently.
git clone https://github.com/initMAX/zabbix-mcp-server.git
cd zabbix-mcp-server
cp config.example.toml config.toml
nano config.toml # fill in your Zabbix details
cp .env.example .env # optional: customize port, host, auth token
docker compose up -d
The config file is mounted read-write into the container (admin portal writes changes back). Logs are stored in a Docker volume.
Customizing the port and host interface — create a .env file (copy from .env.example) and set:
MCP_HOST=127.0.0.1 # interface to bind on the Docker host (default: 127.0.0.1)
MCP_PORT=8080 # port used inside the container and exposed on the host (default: 8080)
MCP_AUTH_TOKEN=... # bearer token for MCP server authentication (optional)
MCP_PORT controls both the container-internal port and the host-side binding — no need to edit docker-compose.yml. The port setting in config.toml is ignored when running via Docker (overridden by MCP_PORT).
Security: Docker deployments are typically exposed to the network. Generate an MCP token (
sudo ./deploy/install.sh generate-token <name>) or add a[tokens.*]section inconfig.tomlto require authentication. See MCP Authentication above.
Upgrade:
git pull
docker compose up -d --build
Logs:
docker compose logs -f
If you prefer to install manually without the deploy script:
python3 -m venv /opt/zabbix-mcp/venv
/opt/zabbix-mcp/venv/bin/pip install /path/to/zabbix-mcp-server
/opt/zabbix-mcp/venv/bin/zabbix-mcp-server --config /path/to/config.toml
Recommended (beta): use the Client MCP Wizard in the admin portal at
/wizard. It generates copy-paste-ready config snippets for 14 AI clients (Claude Desktop, Codex, Cursor, Cline, VS Code Copilot, JetBrains AI, Goose, Open WebUI, 5ire, Gemini CLI, n8n, Claude Code, ChatGPT, Generic) with the correct URL, transport, and Bearer header substitution. Still beta - feedback welcome at https://github.com/initMAX/zabbix-mcp-server/issues. The manual instructions below stay for reference.
The server uses the Streamable HTTP transport by default and listens on http://127.0.0.1:8080/mcp. SSE transport is also available (http://127.0.0.1:8080/sse) for clients that do not support Streamable HTTP session management.
MCP (Model Context Protocol) is an open standard that lets AI assistants use external tools. Any MCP-compatible client can connect to this server - ChatGPT, VS Code, Claude, Codex, JetBrains, and others.
To connect an MCP client, you need three values from your server configuration — transport, address, and token. Check your admin portal (Settings → MCP Server) or config.toml:
Transport → determines the client URL path and the "type" field in client config:
| Your transport | Client "type" | Client URL |
|---|---|---|
| HTTP (Streamable HTTP — recommended) | "type": "http" | http://your-server:port/mcp |
| SSE (Server-Sent Events) | "type": "sse" | http://your-server:port/sse |
| STDIO (subprocess mode) | (not applicable) | (no URL — client launches server locally) |
Host + Port → your server's IP address and port (e.g. 10.0.0.5:8888). If host is 0.0.0.0, use your server's actual IP.
If auth_token exists in your config.toml or you see tokens in the admin portal (MCP Tokens page), clients must include the token in the Authorization header. If no tokens are configured, skip this step — no header needed.
Optional: You can generate new tokens via sudo ./deploy/install.sh generate-token <name> or in the admin portal → MCP Tokens → Create Token. The token value is shown only once at creation. The auth_token value from config.toml can also be used directly.
# HTTP transport, no token
claude mcp add zabbix -t http -e http://your-server:8080/mcp
# HTTP transport, with token
claude mcp add zabbix -t http -e http://your-server:8080/mcp -h "Authorization: Bearer zmcp_your-token-here"
# SSE transport, no token
claude mcp add zabbix -t sse -e http://your-server:8080/sse
# SSE transport, with token
claude mcp add zabbix -t sse -e http://your-server:8080/sse -h "Authorization: Bearer zmcp_your-token-here"
# STDIO transport (local subprocess)
claude mcp add zabbix -t command -- /opt/zabbix-mcp/venv/bin/zabbix-mcp-server --config /etc/zabbix-mcp/config.toml
Config file location:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json

HTTP transport, no token:
{
"mcpServers": {
"zabbix": {
"type": "http",
"url": "http://your-server:8080/mcp"
}
}
}
HTTP transport, with token:
{
"mcpServers": {
"zabbix": {
"type": "http",
"url": "http://your-server:8080/mcp",
"headers": {
"Authorization": "Bearer zmcp_your-token-here"
}
}
}
}
SSE transport, with token:
{
"mcpServers": {
"zabbix": {
"type": "sse",
"url": "http://your-server:8080/sse",
"headers": {
"Authorization": "Bearer zmcp_your-token-here"
}
}
}
}
Add .vscode/mcp.json to your workspace:
HTTP transport, no token:
{
"servers": {
"zabbix": {
"type": "http",
"url": "http://your-server:8080/mcp"
}
}
}
HTTP transport, with token:
{
"servers": {
"zabbix": {
"type": "http",
"url": "http://your-server:8080/mcp",
"headers": {
"Authorization": "Bearer zmcp_your-token-here"
}
}
}
}
Via CLI:
# HTTP transport, no token
codex mcp add zabbix --url http://your-server:8080/mcp
# HTTP transport, with token (reads token from environment variable)
export ZABBIX_MCP_TOKEN="zmcp_your-token-here"
codex mcp add zabbix --url http://your-server:8080/mcp --bearer-token-env-var ZABBIX_MCP_TOKEN
# SSE transport, no token
codex mcp add zabbix --url http://your-server:8080/sse
Or add directly to ~/.codex/config.toml:
HTTP transport, no token:
[mcp_servers.zabbix]
url = "http://your-server:8080/mcp"
HTTP transport, with token:
[mcp_servers.zabbix]
url = "http://your-server:8080/mcp"
http_headers = { Authorization = "Bearer zmcp_your-token-here" }
SSE transport, with token:
[mcp_servers.zabbix]
url = "http://your-server:8080/sse"
http_headers = { Authorization = "Bearer zmcp_your-token-here" }
Cursor, JetBrains IDEs, ChatGPT — use the same URL and optional Authorization header in their respective MCP server settings.
Once connected, you can ask your AI assistant things like:
| Prompt | What it does |
|---|---|
| "Show me all current problems" | Calls problem_get to list active alerts |
| "Which hosts are down?" | Calls host_get with status filter |
| "Acknowledge event 12345 with message 'investigating'" | Calls event_acknowledge |
| "What triggers fired in the last hour?" | Calls trigger_get with time filter and only_true |
| "List all hosts in group 'Linux servers'" | Calls hostgroup_get then host_get with group filter |
| "Show me CPU usage history for host 'web-01'" | Calls host_get, item_get, then history_get |
| "Put host 'db-01' into maintenance for 2 hours" | Calls maintenance_create |
| "Export the template 'Template OS Linux'" | Calls configuration_export |
| "How many items does host 'app-01' have?" | Calls item_get with countOutput |
| "Check the health of the MCP server" | Calls health_check |
The AI chains multiple tools automatically when needed.
All tools accept an optional server parameter to target a specific Zabbix instance (defaults to the first configured server).
| Category | Tool | Description |
|---|---|---|
| Monitoring | problem_get | Get active problems and alerts — the primary tool for checking what is wrong right now |
| | event_get / event_acknowledge | Retrieve events and acknowledge, close, or comment on them |
| | history_get / trend_get | Query raw historical metric data or aggregated trends for capacity planning |
| | sla_get / sla_getsli | Manage SLAs and retrieve calculated service availability (SLI) data |
| | dashboard_* / map_* | Create, update, and manage dashboards and network maps |
| Data Collection | host_* / hostgroup_* | Manage monitored hosts, host groups, and their membership |
| | item_* / trigger_* / graph_* | Manage data collection items, trigger expressions, and graphs |
| | template_* / templategroup_* | Manage monitoring templates and template groups |
| | maintenance_* | Schedule and manage maintenance periods to suppress alerts |
| | discoveryrule_* / *prototype_* | Low-level discovery rules and item/trigger/graph prototypes |
| | configuration_export / _import | Export or import full Zabbix configuration (YAML, XML, JSON) |
| Alerts | action_* / mediatype_* | Configure automated alert actions and notification channels (email, Slack, webhook, ...) |
| | alert_get | Query the history of sent notifications and remote commands |
| | script_execute | Execute global scripts on hosts (SSH, IPMI, custom commands) |
| Users & Access | user_* / usergroup_* / role_* | Manage user accounts, permission groups, and RBAC roles |
| | token_* | Create, list, and manage API tokens for service accounts |
| Administration | proxy_* / proxygroup_* | Manage Zabbix proxies and proxy groups for distributed monitoring |
| | auditlog_get | Query the audit trail of all configuration changes and logins |
| | settings_get / _update | View and modify global Zabbix server settings |
| Generic | zabbix_raw_api_call | Call any Zabbix API method directly by name — use for methods not covered above |
| | health_check | Verify MCP server status and connectivity to all configured Zabbix servers |
The report_generate tool produces professional PDF reports from Zabbix data. Reports are rendered server-side with Jinja2 templates and WeasyPrint - the LLM only chooses the report type and parameters, so the output is deterministic and consistent across runs.
Beta status: Reporting (templates, custom template authoring, admin editor) is a first-concept feature shipped in v1.16. Built-in templates are stable, but the authoring API and template inventory may change. Feedback welcome at issues.
Built-in templates:
| Type | Contents | Required input |
|---|---|---|
| availability | Host availability with SLA gauge, event count, per-host availability table | host group, period |
| capacity_host | CPU / memory / disk usage (avg, min, max) per host from trend data | host group, period |
| capacity_network | Network bandwidth (Mbit/s) per interface + per-host CPU stats | host group, period |
| backup | Daily success/fail matrix (hosts x days), auto-detects backup item keys (veeam, bacula, borg, restic, ...) | host group, period |
| showcase | Demonstrates every widget the v1.23 visual editor ships with (gauge, metric cards, bars, two/three-column layout, page breaks, note callout, hosts loop, backup matrix, network interfaces) - duplicate and trim as a starting point for your own template | host group, period |
Enabling reports:
PDF generation requires two extra Python packages. The installer pulls them in automatically when the optional [reporting] extra is selected; for manual installs:
pip install zabbix-mcp-server[reporting]
# or
pip install weasyprint jinja2
Branding is configured in config.toml:
[server]
report_logo = "/etc/zabbix-mcp/logo.png" # PNG, JPG, or SVG
report_company = "ACME Corp" # appears in report title
report_subtitle = "IT Monitoring Service" # header subtitle
Example prompts:
| Prompt | What it does |
|---|---|
| "Generate an availability report for host group 5 for the last 30 days" | Calls report_generate with report_type=availability |
| "Create a capacity report for the Linux servers group, last 7 days" | Calls report_generate with report_type=capacity_host |
| "Generate a backup report for the Database servers group for last month" | Calls report_generate with report_type=backup |
The tool returns the PDF as a base64-encoded data URI. Most clients (Claude Desktop, Claude Code) render or save the file automatically.
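If your client does not save the file automatically, the data URI can be decoded with a few lines of Python — a sketch assuming the standard data:application/pdf;base64,<payload> shape (the save_report helper and the result key are illustrative, not part of the tool's API):

```python
import base64

def save_report(data_uri: str, path: str) -> int:
    """Decode a base64 data URI (as returned by report_generate) and write it to disk."""
    header, _, payload = data_uri.partition(",")
    if "base64" not in header:
        raise ValueError("expected a base64-encoded data URI")
    pdf_bytes = base64.b64decode(payload)
    with open(path, "wb") as fh:
        fh.write(pdf_bytes)
    return len(pdf_bytes)

# save_report(tool_result, "availability.pdf")  # tool_result: the data URI string
```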
Custom templates can be authored three ways - pick whichever fits your workflow:
Visual editor in the admin portal (/templates/create) - drag-and-drop widgets from three categories:
Plus a Use logo toolbar button on any image component that swaps it for the Logo widget (so you don't have to type {{ logo_base64 }} by hand), a live Preview button, and a built-in Insert variable dropdown for HTML mode.

AI-assisted generation (new in v1.23, beta) - click "Generate with AI" on the template editor, describe the report in plain English, and an LLM produces a validated Jinja2 template. Seven providers supported (Anthropic Claude, OpenAI GPT, Google Gemini, Azure OpenAI, Ollama self-hosted, Mistral, Groq) configurable from the admin portal at /settings -> AI Template Generation - no need to hand-edit config.toml. Output is rendered through a SandboxedEnvironment before hitting the editor; malformed templates come back with a specific error instead of silently getting saved. Admin + operator roles only (viewer cannot generate).

Hand-written HTML in /etc/zabbix-mcp/templates/ registered in config.toml:
[report_templates.my_custom]
display_name = "My Custom Report"
description = "Short description"
template_file = "/etc/zabbix-mcp/templates/my_custom.html"
All three paths write to the same /etc/zabbix-mcp/templates/ directory and are validated against the same SandboxedEnvironment before save in v1.23+, so a broken template never reaches disk. See docs/REPORTING.md for the full authoring guide: available Jinja2 context variables per report type, base CSS classes provided by base.html, and a worked example.
By default the server exposes all ~232 Zabbix API tools. Each tool's JSON schema (name, description, 20-40 optional parameters) adds roughly 400-500 tokens to the MCP tool catalog that is sent to the LLM at the start of every session. With the default "all tools" configuration, the catalog alone costs ~100k tokens before your first prompt even reaches the model. This is the single largest driver of token usage - far more than compact vs. extended response mode.
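A back-of-envelope check of those numbers, using the document's own 400-500 token estimate per tool schema:

```python
tools = 232                 # default catalog size (~232 tools)
tokens_per_tool = 450       # midpoint of the 400-500 token estimate per JSON schema
catalog_cost = tools * tokens_per_tool
print(catalog_cost)         # 104400 — roughly the ~100k tokens quoted above
```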
Fix: add a tools allowlist in [server] to expose only what you need:
[server]
# Tight allowlist for problem triage / host inspection (~15 tools, ~7k tokens)
tools = ["host", "hostgroup", "problem", "trigger", "event", "item"]
# Broader set including templates and dashboards (~30 tools, ~15k tokens)
# tools = ["host", "hostgroup", "problem", "trigger", "event", "item",
# "template", "dashboard", "maintenance"]
Or use group names as shortcuts (pulls in more tools per group):
| Group | Tools (approx) | Contains |
|---|---|---|
| monitoring | ~31 | host, hostgroup, item, trigger, problem, event, history, trend, graph, sla, discovery, httptest, ... |
| alerts | ~16 | action, alert, mediatype, script |
| data_collection | ~107 | template, templategroup, templatedashboard, valuemap, dashboard |
| users | ~30 | user, usergroup, userdirectory, usermacro, token, role, mfa |
| administration | ~39 | settings, housekeeping, authentication, maintenance, map, proxy, ... |
| extensions | ~9 | graph_render, anomaly_detect, capacity_forecast, report_generate, ... |
The same mechanism works per-token via [tokens.*].scopes - see MCP Authentication.
| Parameter | Description |
|---|---|
| server | Target Zabbix server name — defaults to the first configured server when omitted |
| output | Fields to return — by default returns a compact set of key fields; pass extend for all fields, or comma-separated field names (e.g. hostid,name,status) |
| filter | Exact match filter as JSON object — e.g. {"status": 0} returns only enabled objects |
| search | Pattern match filter as JSON object — e.g. {"name": "web"} finds all objects containing "web" in the name |
| limit | Maximum number of results to return — use to avoid large responses |
| sortfield / sortorder | Sort results by a field name in ASC (ascending) or DESC (descending) order |
| countOutput | Return the count of matching objects instead of the actual data — useful for statistics |
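These parameters map directly onto the underlying Zabbix JSON-RPC call. A host_get invocation combining several of them corresponds roughly to this request body:

```json
{
  "jsonrpc": "2.0",
  "method": "host.get",
  "params": {
    "output": ["hostid", "name", "status"],
    "filter": { "status": 0 },
    "search": { "name": "web" },
    "limit": 10,
    "sortfield": "name",
    "sortorder": "ASC"
  },
  "id": 1
}
```

This returns up to 10 enabled hosts whose name contains "web", sorted ascending by name.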
All available options with detailed descriptions are in config.example.toml. Quick overview:
| Section | Parameter | Description |
|---|---|---|
| [server] | transport | "http" (recommended), "sse", or "stdio" |
| | host | HTTP bind address — 127.0.0.1 (localhost only) or 0.0.0.0 (all interfaces) |
| | port | HTTP port, 1–65535 (default: 8080) |
| | log_level | debug, info, warning, error, or critical |
| | log_file | Path to log file (parent directory must exist) |
| | auth_token | Bearer token for HTTP/SSE authentication (supports ${ENV_VAR}) |
| | rate_limit | Max Zabbix API calls per minute per client (default: 300, set to 0 to disable) |
| | tools | Filter exposed tools by category or prefix — e.g. ["monitoring", "alerts"] (default: all ~231 tools) |
| | disabled_tools | Denylist counterpart to tools — exclude specific tool groups or prefixes |
| | tls_cert_file / tls_key_file | Enable native HTTPS — paths to TLS certificate and private key (see TLS / HTTPS below) |
| | cors_origins | List of allowed CORS origins (default: disabled) |
| | allowed_hosts | IP allowlist — IPs and CIDR ranges (e.g. ["10.0.0.0/24"]) |
| | allowed_import_dirs | Directories for source_file imports (default: disabled) |
| | compact_output | Return only key fields from get methods (default: true); set to false to always return all fields |
| | response_max_chars | Maximum characters per tool response before truncation (default: 50000, min: 5000). Increase for template export workflows: 200000 for medium templates, 500000 for large built-in templates. See Token Budget |
| [zabbix.<name>] | url | Zabbix frontend URL (must start with http:// or https://) |
| | api_token | API token (supports ${ENV_VAR}) |
| | read_only | Block write operations (default: true) |
| | verify_ssl | Verify TLS certificates (default: true) |
| | skip_version_check | Skip zabbix-utils version compatibility check (default: false) |
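As a minimal sketch of how these options fit together in config.toml (the server name, URL, and environment-variable names are placeholders — see config.example.toml for the authoritative reference):

```toml
[server]
transport = "http"
host = "127.0.0.1"
port = 8080
auth_token = "${MCP_AUTH_TOKEN}"
tools = ["monitoring", "alerts"]

[zabbix.production]
url = "https://zabbix.example.com"
api_token = "${ZABBIX_API_TOKEN}"
read_only = true
```

Additional [zabbix.<name>] sections can be added for each monitored instance; tools here select only the monitoring and alerts categories to keep the catalog small.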
The server supports native HTTPS via tls_cert_file and tls_key_file in config.toml.
Certificate requirements depend on your MCP client:
| Client type | Self-signed cert | Publicly trusted cert (Let's Encrypt, etc.) |
|---|---|---|
| Local CLI clients (Claude Code, Cursor, etc.) | Works | Works |
| Remote MCP connections (Claude Desktop cloud, web clients) | Does not work | Required |
Why? Remote MCP connections from Claude Desktop are brokered through Anthropic's cloud infrastructure — the request comes from Anthropic's servers to your MCP server, not from your local machine. Self-signed certificates will be rejected because they can't be verified by a trusted Certificate Authority.
Recommended production setup: Use a reverse proxy (nginx, Caddy) with Let's Encrypt for automatic TLS certificate management:
Client → Caddy (HTTPS, Let's Encrypt) → MCP Server (HTTP, localhost:8080)
This way the MCP server runs plain HTTP on localhost while the reverse proxy handles TLS termination with a publicly trusted certificate.
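With that layout, the proxy side needs only a few lines. A minimal Caddyfile sketch (the hostname is a placeholder; Caddy obtains and renews the Let's Encrypt certificate for it automatically):

```
mcp.example.com {
    # Forward all traffic to the MCP server listening on localhost
    reverse_proxy 127.0.0.1:8080
}
```

Keep host = "127.0.0.1" in config.toml so the plain-HTTP port is never exposed directly.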
sudo ./deploy/install.sh [COMMAND] [OPTIONS]
| Command / Option | Description |
|---|---|
| install | Fresh installation (default) |
| update | Update existing installation, preserve config |
| uninstall | Complete removal — service, config, logs, virtualenv, system user |
| --dry-run | Check prerequisites (Python, firewall, SELinux) without installing |
| --install-python | Automatically install Python 3.12 if no suitable version found |
| -h, --help | Show help |
The installer automatically detects the best available Python (>=3.10). If none is found, it asks whether to install Python 3.12 automatically (or use --install-python to skip the prompt). It also checks for firewall/SELinux issues and verifies the health endpoint after installation.
| Zabbix Version | Status | Notes |
|---|---|---|
| 8.0 | Experimental | Works with skip_version_check = true — core API methods tested, some 8.0-specific methods may not be covered yet |
| 7.0 LTS, 7.2, 7.4 | Fully supported | All API methods match this version — complete feature coverage |
| 6.0 LTS, 6.2, 6.4 | Supported | Core methods work, some newer API methods (e.g. proxy groups, MFA) may return errors |
| 5.0 LTS, 5.2, 5.4 | Basic support | Core monitoring and data collection work, newer features unavailable |
The server uses the standard Zabbix JSON-RPC API. Methods not available in your Zabbix version will return an error from the Zabbix server — the MCP server itself does not enforce version checks.
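Since everything ultimately flows through that JSON-RPC endpoint, a request body is straightforward to construct by hand — useful when comparing a tool's output against a raw API call. A minimal sketch (the method and parameters are illustrative; in recent Zabbix versions the API token travels in an Authorization header, not in the payload):

```python
import json


def make_rpc_request(method: str, params: dict, req_id: int = 1) -> str:
    """Build a Zabbix JSON-RPC 2.0 request body as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": req_id,
    })


# The payload behind a "list enabled hosts" style query
body = make_rpc_request("host.get", {"filter": {"status": 0}, "limit": 10})
```

POSTing such a body to api_jsonrpc.php on a version that lacks the method yields a JSON-RPC error object, which the MCP server passes through unchanged.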
```bash
git clone https://github.com/initMAX/zabbix-mcp-server.git
cd zabbix-mcp-server
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
```
Test with MCP Inspector:

```bash
npx @modelcontextprotocol/inspector zabbix-mcp-server --config config.toml
```
| Project | Description |
|---|---|
| Zabbix AI Skills | 35 ready-to-use AI workflows for Zabbix — maintenance windows, host onboarding, template upgrades, audits, and more |
AGPL-3.0 - see LICENSE.
initMAX is an international Zabbix Premium Partner and Certified Trainer with offices in the United States, the Czech Republic, and Slovakia. We build, deploy, and support Zabbix infrastructure for organizations across North America and Europe, and this server is part of a wider effort to integrate Zabbix into modern AI-assisted operations workflows.
Add this to claude_desktop_config.json and restart Claude Desktop. As a sketch, the config below bridges Claude Desktop to the HTTP server through the mcp-remote package; the http://localhost:8080/mcp endpoint is a placeholder — substitute your server's actual URL.

```json
{
  "mcpServers": {
    "zabbix-mcp-server": {
      "command": "npx",
      "args": ["mcp-remote", "http://localhost:8080/mcp"]
    }
  }
}
```