Agent-native semantic layer, letting AI agents query databases by specifying intent instead of writing SQL, then compiling structured queries into correct, dialect-aware SQL. Dynamic and expressive, supporting multi-stage queries, time-shifts, and complex join schemas.
SLayer is a semantic layer that lets AI agents query your database correctly.
If you find SLayer useful, a ⭐ helps others discover it!
SLayer sits between your database and whatever consumes the data – AI agents, internal tools, dashboards, or scripts. You define your data models (or let SLayer auto-generate them from the schema), and query using a structured API of measures, dimensions, and filters instead of writing SQL directly.
SLayer compiles these queries into the correct SQL for your database, handling joins, aggregations, time-based calculations, and dialect differences so that consumers don't have to.
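For intuition, here is a minimal sketch of the idea. The JSON shape matches the query examples later in this README; the generated SQL is illustrative only and will vary by dialect and model definition:

```python
# A structured "intent" query: no SQL, just measures, dimensions, and filters.
intent = {
    "model": "orders",
    "dimensions": ["status"],
    "fields": [{"formula": "revenue:sum"}],
    "filters": ["status == 'completed'"],
}

# SLayer compiles it into dialect-aware SQL, roughly equivalent to:
#   SELECT status, SUM(amount) AS revenue
#   FROM public.orders
#   WHERE status = 'completed'
#   GROUP BY status
```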
See also: automatic model ingestion, queries-as-models, auto-applied filters, and more.
Why not just let agents write SQL? Because they get it wrong often enough to matter – see our blog post and dbt's benchmark analysis.
We recommend using uv, especially if you don't work in a Python project.
To run the server:
```bash
# Instant demo — spins up the bundled Jaffle Shop DuckDB and ingests it
uvx --from 'motley-slayer[all]' slayer serve --demo

# Or run without --demo and connect your own data afterwards
uvx --from 'motley-slayer[all]' slayer serve
```
Or to add the MCP server:
```bash
# With the Jaffle Shop demo preloaded (zero-config quickstart)
claude mcp add slayer -- uvx --from 'motley-slayer[all]' slayer mcp --demo

# Or without the demo
claude mcp add slayer -- uvx --from 'motley-slayer[all]' slayer mcp
```
The --demo flag additionally requires jafgen — install hints are printed if it's missing.
Then configure a datasource or ask your agent to help you do it.
Read more on getting started with MCP, the CLI, the REST API, or Python in the docs.
```bash
# Query
curl -X POST http://localhost:5143/query \
  -H "Content-Type: application/json" \
  -d '{"model": "orders", "fields": [{"formula": "*:count"}], "dimensions": [{"name": "status"}]}'

# List models (returns name + description)
curl http://localhost:5143/models

# Get a single datasource (credentials masked)
curl http://localhost:5143/datasources/my_postgres
```
See more in the docs.
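If you are calling the REST API from Python, the same query looks like this (a sketch using the requests library; endpoint and payload exactly as above):

```python
import requests

# POST the same structured query to the /query endpoint shown above.
resp = requests.post(
    "http://localhost:5143/query",
    json={
        "model": "orders",
        "fields": [{"formula": "*:count"}],
        "dimensions": [{"name": "status"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```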
SLayer supports two MCP transports: HTTP (served alongside the API) and stdio (serverless, spawned by the agent).
```bash
# 1. stdio-based, does not require a running server
claude mcp add slayer -- slayer mcp

# 1b. Same, but preload the Jaffle Shop demo on startup
claude mcp add slayer -- slayer mcp --demo

# 2. HTTP-based (SSE), provided the SLayer server is already running
claude mcp add slayer-remote --transport sse --url http://localhost:5143/mcp/sse
```
Once a datasource is created, SLayer never exposes its credentials to consumers.
Both transports expose the same tools, allowing agents to inspect, create, and update datasources and models, and to run queries. More info in the docs.
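To sanity-check the stdio transport outside of an agent, you can list the exposed tools with the official MCP Python SDK. This is a sketch: it assumes the `mcp` package is installed and a `slayer` executable is on your PATH.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Spawn `slayer mcp` as a stdio MCP server and list its tools.
    params = StdioServerParameters(command="slayer", args=["mcp", "--demo"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```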
Useful for agents working in code-execution environments (e.g., AI data analytics), as well as for any Python app.
```python
from slayer.client.slayer_client import SlayerClient
from slayer.core.query import SlayerQuery, ColumnRef

# Remote mode (connects to running server)
client = SlayerClient(url="http://localhost:5143")

# Or local mode (no server needed)
from slayer.storage.yaml_storage import YAMLStorage
client = SlayerClient(storage=YAMLStorage(base_dir="./my_models"))

# Query data
query = SlayerQuery(
    model="orders",
    fields=[{"formula": "*:count"}, {"formula": "revenue:sum"}],
    dimensions=[ColumnRef(name="status")],
    limit=10,
)
df = client.query_df(query)
print(df)
```
```bash
# Run a query directly from the terminal
slayer query '{"model": "orders", "fields": [{"formula": "*:count"}], "dimensions": [{"name": "status"}]}'

# Or from a file
slayer query @query.json --format json
```
These commands do not depend on a running server.
By default, models are defined as YAML files. Add an optional description to help users and agents understand complex models:
```yaml
name: orders
sql_table: public.orders
data_source: my_postgres
description: "Core orders table with revenue metrics"

dimensions:
  - name: id
    sql: id
    type: number
    primary_key: true
  - name: status
    sql: status
    type: string
  - name: created_at
    sql: created_at
    type: time

measures:
  - name: revenue
    sql: amount
  - name: quantity
    sql: qty
```
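Assuming the file above lives in your models directory, a local-mode client can query it directly. This reuses the Python API from the previous section; the directory path is an assumption:

```python
from slayer.client.slayer_client import SlayerClient
from slayer.core.query import SlayerQuery, ColumnRef
from slayer.storage.yaml_storage import YAMLStorage

# Local mode: read model definitions straight from YAML, no server involved.
client = SlayerClient(storage=YAMLStorage(base_dir="./my_models"))

query = SlayerQuery(
    model="orders",
    fields=[{"formula": "revenue:sum"}, {"formula": "quantity:sum"}],
    dimensions=[ColumnRef(name="status")],
)
print(client.query_df(query))
```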
The fields parameter specifies which data columns the query returns:
```json
{
  "model": "orders",
  "dimensions": ["status"],
  "time_dimensions": [{"dimension": "created_at", "granularity": "month"}],
  "fields": [
    {"formula": "*:count"},
    {"formula": "revenue:sum"},
    {"formula": "revenue:sum / *:count", "name": "aov", "label": "Average Order Value"},
    {"formula": "cumsum(revenue:sum)"},
    {"formula": "change_pct(revenue:sum)"},
    {"formula": "last(revenue:sum)", "name": "latest_rev"},
    {"formula": "time_shift(revenue:sum, -1, 'year')", "name": "rev_last_year"},
    {"formula": "time_shift(revenue:sum, -2)", "name": "rev_2_periods_ago"},
    {"formula": "lag(revenue:sum, 1)", "name": "rev_prev_row"},
    {"formula": "rank(revenue:sum)"},
    {"formula": "change(cumsum(revenue:sum))", "name": "cumsum_delta"}
  ]
}
```
Available functions: cumsum, time_shift, change, lag, and more – see docs. Formulas support arbitrary nesting — e.g., change(cumsum(revenue:sum)) or cumsum(revenue:sum) / *:count.
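As a concrete example of composing nesting with arithmetic, here is a year-over-year comparison built from the functions above. This is a sketch: the time_dimensions keyword mirrors the JSON query shape and is assumed to be accepted by SlayerQuery.

```python
from slayer.client.slayer_client import SlayerClient
from slayer.core.query import SlayerQuery

client = SlayerClient(url="http://localhost:5143")

query = SlayerQuery(
    model="orders",
    time_dimensions=[{"dimension": "created_at", "granularity": "month"}],
    fields=[
        {"formula": "revenue:sum"},
        # Nested + arithmetic: this period's revenue vs. the same period last year.
        {"formula": "revenue:sum / time_shift(revenue:sum, -1, 'year')", "name": "yoy_ratio"},
    ],
)
print(client.query_df(query))
```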
Filters use simple formula strings — no verbose JSON objects:
```json
{
  "model": "orders",
  "fields": [{"formula": "*:count"}, {"formula": "revenue:sum"}],
  "filters": [
    "status == 'completed'",
    "amount > 100"
  ]
}
```
Filters support a variety of operators, composition, and pattern matching. Transforms and computed columns can also be used for filtering. See the docs for more.
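With the Python client, the same filter strings attach directly to the query. A sketch; the JSON example above suggests that listing several filters applies all of them:

```python
from slayer.client.slayer_client import SlayerClient
from slayer.core.query import SlayerQuery

client = SlayerClient(url="http://localhost:5143")

query = SlayerQuery(
    model="orders",
    fields=[{"formula": "revenue:sum"}],
    # Filter strings, as in the JSON example; listing several applies them all.
    filters=["status == 'completed'", "amount > 100"],
)
print(client.query_df(query))
```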
Connect to a database and generate models automatically. SLayer introspects the schema, detects foreign key relationships, and creates models with explicit join metadata.
For example, given tables orders → customers → regions (via FKs), the orders model will automatically include:
- customers.name, regions.name, etc. (dotted syntax)
- customers.*:count_distinct, regions.*:count_distinct

```bash
# Via CLI
slayer ingest --datasource my_postgres --schema public

# Via API
curl -X POST http://localhost:5143/ingest \
  -d '{"datasource": "my_postgres", "schema_name": "public"}'
```
Via MCP, agents can do this conversationally:
1. `create_datasource(name="mydb", type="postgres", host="localhost", database="app", username="user", password="pass")`
2. `ingest_datasource_models(datasource_name="mydb", schema_name="public")`
3. `models_summary(datasource_name="mydb")` → `inspect_model(model_name="orders")` → `query(...)`

The fastest way is from the CLI: pass a connection URL and optionally ingest models in one step:

```bash
slayer datasources create postgresql://user:${DB_PASSWORD}@localhost/analytics --ingest
```
Or configure datasources as individual YAML files in the datasources/ directory:
```yaml
# datasources/my_postgres.yaml
name: my_postgres
type: postgres
host: ${DB_HOST}
port: 5432
database: ${DB_NAME}
username: ${DB_USER}
password: ${DB_PASSWORD}
```
Environment variable references (${VAR}) are resolved at read time.
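Conceptually, the substitution works like this (an illustrative sketch, not SLayer's actual implementation):

```python
import os
import re

def resolve_env(value: str) -> str:
    # Replace each ${VAR} occurrence with the VAR environment variable.
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

os.environ["DB_HOST"] = "db.internal"
print(resolve_env("${DB_HOST}"))  # -> db.internal
```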
See more in the docs.
SLayer ships with two storage backends (YAML file storage is the default). You can also implement your own, which is useful for features such as tenant isolation.
See the documentation page for storage backends for more.
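For a flavor of what a custom backend can look like, here is a purely hypothetical tenant-scoped variant built on the YAMLStorage class shown earlier. The subclassing approach and everything beyond the base_dir argument are assumptions; see the storage docs for the real interface.

```python
from slayer.storage.yaml_storage import YAMLStorage

class TenantYAMLStorage(YAMLStorage):
    """Hypothetical backend that isolates each tenant's models in its own directory."""

    def __init__(self, base_dir: str, tenant_id: str):
        # Scope all reads and writes to the tenant's subdirectory.
        super().__init__(base_dir=f"{base_dir}/{tenant_id}")
```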
| # | Feature | Status |
|---|---|---|
| 1 | Dynamic joins | ✅ |
| 2 | Multi-stage queries | ✅ |
| 3 | Cross-model measures | ✅ |
| 4 | Aggregation at query time | ✅ |
| 5 | Smart output formatting (currency, percentages) | ✅ |
| 6 | Unpivoting | ❌ |
| 7 | Auto-propagating filters | ❌ |
| 8 | Asof joins | ❌ |
| 9 | Chart generation (eCharts) | ❌ |
The examples/ directory contains runnable examples that also serve as integration tests:
| Example | Description |
|---|---|
| embedded | SQLite, no server needed |
| postgres | Docker Compose with Postgres + REST API |
| mysql | Docker Compose with MySQL + REST API |
| clickhouse | Docker Compose with ClickHouse + REST API |
The docs/examples/ directory contains Jupyter notebooks that walk through SLayer's features step by step.
| Notebook | Topic |
|---|---|
| SQL vs DSL | How model SQL and query DSL stay cleanly separated |
| Auto-Ingestion | Schema introspection, FK graph discovery, automatic model generation |
| Time Operations | change, change_pct, time_shift, lag, lead, last — composable time transforms |
| Joins | Dot syntax, multi-hop dimensions, diamond join disambiguation |
| Joined Measures | Cross-model measures with sub-query isolation |
| Multistage Queries | Query chaining, queries-as-models, ModelExtension |
SLayer includes Claude Code skills in .claude/skills/ to help Claude understand the codebase.
SLayer currently has no caching or pre-aggregation engine. If you need to serve many requests against large databases at sub-second latency, consider putting a caching or pre-aggregation layer in front.
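If that matters for your workload, a thin cache in front of the client is one option. An illustrative sketch, not a SLayer feature; it assumes queries can be keyed by their serialized JSON payload:

```python
import json
from functools import lru_cache

from slayer.client.slayer_client import SlayerClient
from slayer.core.query import SlayerQuery

client = SlayerClient(url="http://localhost:5143")

@lru_cache(maxsize=256)
def cached_query_df(payload: str):
    # Key the cache on the serialized query; identical queries hit the cache.
    return client.query_df(SlayerQuery(**json.loads(payload)))

df = cached_query_df(json.dumps(
    {"model": "orders", "fields": [{"formula": "*:count"}]},
    sort_keys=True,
))
```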
MIT — see LICENSE.
Add this to claude_desktop_config.json and restart Claude Desktop.
```json
{
  "mcpServers": {
    "slayer": {
      "command": "uvx",
      "args": ["--from", "motley-slayer[all]", "slayer", "mcp"]
    }
  }
}
```