Open-source agentic schema layer. Define metrics once in YAML, query governed data from any warehouse (Snowflake, BigQuery, Databricks, PostgreSQL, DuckDB) via MCP.
Self-hosted semantic layer for AI agents.
Docs · CLI · Discord · Website
Bonnard is an agent-native semantic layer — one set of metric definitions, every consumer (AI agents, apps, dashboards) gets the same governed answer. This repo is the self-hosted Docker deployment: run Bonnard on your own infrastructure with no cloud account needed.
# 1. Scaffold project
npx @bonnard/cli init --self-hosted
# 2. Configure your data source
# Edit .env with your database credentials
# 3. Start the server
docker compose up -d
# 4. Define your semantic layer
# Add cube/view YAML files to bonnard/cubes/ and bonnard/views/
# 5. Deploy models to the server
bon deploy
# 6. Verify your semantic layer
bon schema
# 7. Connect AI agents
bon mcp
Requires Node.js 20+ and Docker.
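Cube and view definitions follow Cube's YAML data-model schema. A minimal sketch of a cube file (the table and field names here are hypothetical, not part of any generated scaffold):

```yaml
cubes:
  - name: orders
    sql_table: public.orders

    measures:
      - name: count
        type: count
      - name: total_amount
        type: sum
        sql: amount

    dimensions:
      - name: status
        sql: status
        type: string
      - name: created_at
        sql: created_at
        type: time
```

Save it as something like `bonnard/cubes/orders.yml`, then run `bon deploy` to push it to the server.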
- Admin UI at http://localhost:3000
- `bon deploy` pushes model changes without restarting containers
- `GET /health` for uptime monitoring

Run `bon mcp` to see the connection config for your setup. Examples below.
{
  "mcpServers": {
    "bonnard": {
      "url": "https://bonnard.example.com/mcp",
      "headers": {
        "Authorization": "Bearer your-secret-token-here"
      }
    }
  }
}
{
  "mcpServers": {
    "bonnard": {
      "type": "url",
      "url": "https://bonnard.example.com/mcp",
      "headers": {
        "Authorization": "Bearer your-secret-token-here"
      }
    }
  }
}
# MCPServerAdapter lives in the crewai-tools package (not crewai itself)
# and takes a single server-params mapping rather than keyword arguments.
from crewai_tools import MCPServerAdapter

server_params = {
    "url": "https://bonnard.example.com/mcp",
    "transport": "streamable-http",
    "headers": {"Authorization": "Bearer your-secret-token-here"},
}

mcp = MCPServerAdapter(server_params)
Protect your endpoints by setting ADMIN_TOKEN in .env:
ADMIN_TOKEN=your-secret-token-here
All API and MCP endpoints will require Authorization: Bearer <token>. The /health endpoint remains open for monitoring.
Restart after changing .env:
docker compose up -d
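Rather than inventing a token by hand, you can generate a strong random one locally before pasting it into `.env` (assumes `openssl` is available on your machine):

```shell
# 32 random bytes, hex-encoded: yields a 64-character token
openssl rand -hex 32
```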
Caddy provides automatic HTTPS via Let's Encrypt.
Create a Caddyfile next to your docker-compose.yml:
bonnard.example.com {
    reverse_proxy localhost:3000
}
Add Caddy to your docker-compose.yml:
caddy:
  image: caddy:2
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./Caddyfile:/etc/caddy/Caddyfile:ro
    - caddy_data:/data
  restart: unless-stopped
Add the volume at the top level:
volumes:
  models: {}
  caddy_data: {}
Then remove the Bonnard port mapping (ports: - "3000:3000") since Caddy handles external traffic.
# Copy project files to your server
scp -r . user@your-server:~/bonnard/
# SSH in and start
ssh user@your-server
cd ~/bonnard
docker compose up -d
| Variable | Description | Default |
|---|---|---|
| `CUBEJS_DB_TYPE` | Database driver (`postgres`, `duckdb`, `snowflake`, `bigquery`, `databricks`, `redshift`, `clickhouse`) | `duckdb` |
| `CUBEJS_DB_*` | Database connection settings (host, port, name, user, pass) | — |
| `CUBEJS_DATASOURCES` | Comma-separated list for multi-datasource setups | `default` |
| `CUBEJS_API_SECRET` | HS256 secret for Cube JWT auth (auto-generated by `bon init`) | — |
| `ADMIN_TOKEN` | Bearer token for API/MCP authentication | — (open) |
| `CUBE_PORT` | Cube API port | `4000` |
| `BONNARD_PORT` | Bonnard server port | `3000` |
| `CORS_ORIGIN` | Allowed CORS origins | `*` |
| `CUBE_VERSION` | Cube Docker image tag | `v1.6` |
| `BONNARD_VERSION` | Bonnard Docker image tag | `latest` |
See .env.example for a full annotated configuration file.
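As an illustration, a `.env` for a PostgreSQL source might look like the fragment below (every value is a placeholder; `.env.example` is the authoritative template):

```env
CUBEJS_DB_TYPE=postgres
CUBEJS_DB_HOST=db.example.com
CUBEJS_DB_PORT=5432
CUBEJS_DB_NAME=analytics
CUBEJS_DB_USER=bonnard
CUBEJS_DB_PASS=change-me
ADMIN_TOKEN=your-secret-token-here
```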
| Service | Image | Role |
|---|---|---|
| `cube` | `cubejs/cube` | Semantic layer engine — executes queries against your warehouse |
| `cubestore` | `cubejs/cubestore` | Pre-aggregation cache — stores materialized results for fast reads |
| `bonnard` | `ghcr.io/bonnard-data/bonnard` | MCP server, admin UI, deploy API — the interface layer for agents and tools |
All three services communicate over an internal Docker network. Only bonnard (port 3000) and optionally cube (port 4000) are exposed externally.
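In docker-compose terms, that topology can be sketched roughly as follows (the compose file generated by `bon init` is the source of truth; this fragment only illustrates the shape):

```yaml
services:
  cube:
    image: cubejs/cube          # semantic layer engine
  cubestore:
    image: cubejs/cubestore     # pre-aggregation cache
  bonnard:
    image: ghcr.io/bonnard-data/bonnard
    ports:
      - "3000:3000"             # the only port exposed by default
```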
# Health check
curl http://localhost:3000/health
# View logs
docker compose logs -f
# View active MCP sessions
curl -H "Authorization: Bearer <token>" http://localhost:3000/api/mcp/sessions
From your development machine:
bon deploy
This pushes your cube/view YAML files to the running server. No restart needed — Cube picks up changes automatically.
Control image versions via .env:
CUBE_VERSION=v1.6
BONNARD_VERSION=latest
Warehouses: Snowflake, Google BigQuery, Databricks, PostgreSQL (including Supabase, Neon, RDS), Amazon Redshift, DuckDB (including MotherDuck), ClickHouse
See the full documentation for connection guides.