MCP server for the Coalesce Transform API; run tools support Snowflake Key Pair and PAT auth
MCP server for Coalesce. Built for Snowflake Cortex Code (CoCo) - with first-class support for every other MCP client (Claude Code, Claude Desktop, Cursor, VS Code, Windsurf). Manage nodes, pipelines, environments, jobs, and runs, and drive the local-first Coalesce CLI from the same server: validate a project, preview DDL/DML, plan a deployment, and apply it to a cloud environment.
| | Task | Jump to |
|---|---|---|
| 🚀 | Get running in 2 minutes | Quick Start |
| 🎛️ | Customize agent behavior | Skills |
| 🔍 | Find a specific tool | Tools |
| 📦 | Walk through the full setup | Full Installation |
| 🔑 | Authenticate (env var or `~/.coa/config`) | Credentials |
| 🌐 | Run against multiple Coalesce environments | Multiple environments |
| 🔒 | Lock prod down to read-only | Safety model |
Each link below opens a short install guide with a click-to-install button (where supported) and the manual config.
> [!TIP]
> ❄️ **Snowflake Cortex Code + coalesce-transform-mcp.** CoCo is Snowflake's AI coding CLI - it already knows your warehouse, role, and data. Drop this MCP in and an agent can plan pipelines, create nodes, run DML, and verify results in a single session, all under Snowflake's auth model. Install in Cortex Code →
Why this pairing? Cortex Code is Snowflake's AI coding CLI - it already authenticates to your warehouse, runs under your Snowflake role, and has native tools for querying live data. Add coalesce-transform-mcp and a single agent session can plan pipelines, create nodes, run DML, and verify results against real rows without leaving the terminal.
One-liner (after installing the Cortex Code CLI):
```shell
cortex mcp add coalesce-transform npx coalesce-transform-mcp
```
Or edit ~/.snowflake/cortex/mcp.json directly:
```json
{
  "mcpServers": {
    "coalesce-transform": {
      "type": "stdio",
      "command": "npx",
      "args": ["coalesce-transform-mcp"],
      "env": {
        "COALESCE_ACCESS_TOKEN": "<YOUR_TOKEN>"
      }
    }
  }
}
```
Drop the env block if you're using ~/.coa/config - Cortex Code and Coalesce can both pick the token up from the same profile. Full walkthrough: docs/installation-guides/cortex-code.md.
Click-to-install: Install in Cursor
Manual: paste into .cursor/mcp.json in your project root (or ~/.cursor/mcp.json for global):
```json
{
  "mcpServers": {
    "coalesce-transform": {
      "command": "npx",
      "args": ["coalesce-transform-mcp"],
      "env": {
        "COALESCE_ACCESS_TOKEN": "<YOUR_TOKEN>"
      }
    }
  }
}
```
Cursor does not expand ${VAR} - paste the literal token, or drop the env block and use ~/.coa/config (see Credentials).
Click-to-install: Install in VS Code
Manual: follow the VS Code MCP install guide and use this config:
```json
{
  "name": "coalesce-transform",
  "command": "npx",
  "args": ["coalesce-transform-mcp"]
}
```
Add the COALESCE_ACCESS_TOKEN via VS Code's secret input prompt, or drop the token and use ~/.coa/config. Reload the VS Code window after install.
Click-to-install: Install in VS Code Insiders
Manual: identical to the stable VS Code install - Insiders reads the same MCP config.
One-liner:
```shell
claude mcp add coalesce-transform -- npx coalesce-transform-mcp
```
Pass env vars inline if you need them:
```shell
claude mcp add coalesce-transform \
  --env COALESCE_ACCESS_TOKEN=$COALESCE_ACCESS_TOKEN \
  -- npx coalesce-transform-mcp
```
Manual: paste into .mcp.json in your project root (or ~/.claude.json for global):
```json
{
  "mcpServers": {
    "coalesce-transform": {
      "command": "npx",
      "args": ["coalesce-transform-mcp"],
      "env": {
        "COALESCE_ACCESS_TOKEN": "${COALESCE_ACCESS_TOKEN}"
      }
    }
  }
}
```
Claude Code does expand ${VAR} from your shell env at load time - .mcp.json can stay safely committed to git with variable references. Omit the env block if using ~/.coa/config.
No deeplink yet - paste manually.
File: ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows).
```json
{
  "mcpServers": {
    "coalesce-transform": {
      "command": "npx",
      "args": ["coalesce-transform-mcp"],
      "env": {
        "COALESCE_ACCESS_TOKEN": "<YOUR_TOKEN>"
      }
    }
  }
}
```
Claude Desktop does not expand ${VAR} - paste the literal token, or drop the env block and use ~/.coa/config. Fully quit Claude Desktop (Cmd+Q) and relaunch after editing.
No deeplink yet - paste manually.
File: ~/.codeium/windsurf/mcp_config.json.
```json
{
  "mcpServers": {
    "coalesce-transform": {
      "command": "npx",
      "args": ["coalesce-transform-mcp"],
      "env": {
        "COALESCE_ACCESS_TOKEN": "<YOUR_TOKEN>"
      }
    }
  }
}
```
Windsurf does not expand ${VAR} - paste the literal token, or drop the env block and use ~/.coa/config. Restart Windsurf after editing.
> [!TIP]
> 🚀 **Just installed?** Just say "help me get set up" - or run `/coalesce-setup`. Your agent will check your credentials and project setup, then walk you through fixing whatever's missing.
Skills are editable markdown that shapes how the agent reasons about your Coalesce project. Ship your team's naming conventions, grain definitions, and layering patterns as context - every agent on the server instantly picks them up. No fine-tuning, no prompt engineering, just markdown you edit and commit.
Set COALESCE_MCP_SKILLS_DIR to make skills editable on disk. Each skill resolves to default content, user-augmented content, or a full user override - see docs/context-skills.md for the resolution order and customization walkthrough.
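The three-way resolution order (bundled default, user-augmented, full override) can be sketched like this. The file naming scheme (`<skill>.override.md` vs `<skill>.md`) is a hypothetical illustration - see docs/context-skills.md for the actual layout; only the default → augment → override precedence is taken from this document.

```python
from pathlib import Path

def resolve_skill(name: str, skills_dir: Path, defaults: dict[str, str]) -> str:
    """Sketch of skill resolution: full override wins, else default (+ optional augment)."""
    override = skills_dir / f"{name}.override.md"   # hypothetical file name
    if override.exists():
        return override.read_text()                 # full user override replaces the default
    content = defaults[name]                        # bundled default content
    augment = skills_dir / f"{name}.md"             # hypothetical file name
    if augment.exists():
        content += "\n\n" + augment.read_text()     # user-augmented: default + team additions
    return content
```

The point of the design: your team's conventions live in plain markdown files, and every agent session picks up the merged result without any retraining.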
25 skills, grouped into 6 families:
- `overview` - General Coalesce concepts, response guidelines, and operational constraints
- `tool-usage` - Best practices for tool batching, parallelization, and SQL conversion
- `id-discovery` - Resolving project, workspace, environment, job, run, node, and org IDs
- `storage-mappings` - Storage location concepts, `{{ ref() }}` syntax, and reference patterns
- `ecosystem-boundaries` - Scope of this MCP vs adjacent data-engineering MCPs (Snowflake, Fivetran, dbt, Catalog)
- `data-engineering-principles` - Node type selection, layered architecture, methodology detection, materialization strategies
- `sql-platform-selection` - Determining the active SQL platform from project metadata
- `setup-guide` - First-time MCP setup flow driven by `diagnose_setup` (pairs with the `/coalesce-setup` prompt)

- `sql-snowflake` - Snowflake-specific SQL conventions for node SQL
- `sql-databricks` - Databricks-specific SQL conventions for node SQL
- `sql-bigquery` - BigQuery-specific SQL conventions for node SQL

- `node-creation-decision-tree` - Choosing between predecessor-based creation, updates, and full replacements
- `node-payloads` - Working with workspace node bodies, metadata, config, and array-replacement risks
- `hydrated-metadata` - Coalesce hydrated metadata structures for advanced node payload editing
- `intelligent-node-configuration` - How intelligent config completion works, schema resolution, automatic field detection
- `node-operations` - Editing existing nodes: joins, columns, config fields, and SQL-to-graph conversion
- `aggregation-patterns` - JOIN ON generation, GROUP BY detection, and join-to-aggregation conversion

- `node-type-selection-guide` - When to use each Coalesce node type (Stage/Work vs Dimension/Fact vs specialized)
- `node-type-corpus` - Node type discovery, corpus search, and metadata patterns

- `pipeline-workflows` - Building pipelines end-to-end: node type selection, multi-node sequences, execution
- `intent-pipeline-guide` - Using `build_pipeline_from_intent` to create pipelines from natural language
- `pipeline-review-guide` - Using `review_pipeline` for pipeline analysis and optimization
- `pipeline-workshop-guide` - Using pipeline workshop tools for iterative, conversational pipeline building

- `run-operations` - Starting, retrying, polling, diagnosing, and canceling Coalesce runs
- `run-diagnostics-guide` - Using `diagnose_run_failure` to analyze failed runs and determine fixes

> [!NOTE]
> **Legend**
> - ⚠️ Destructive - the tool needs `confirmed: true` before it will run.
> - 🧰 Bundled `coa` CLI - runs locally against a project directory. The tool needs a `projectPath` pointing at a folder that contains `data.yml`.
> - Preflight validation - destructive 🧰 tools run a safety check before shelling out. See Safety model.
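The confirmation gate described in the legend can be sketched as a small dispatch guard. This is an illustrative sketch, not the server's actual implementation; the `guard` function and its return shape are hypothetical, but the `confirmed: true` requirement and the destructive tool names come from this document.

```python
# Tools the README marks ⚠️ (destructive) - they refuse to run without confirmed: true.
DESTRUCTIVE = {
    "delete_workspace_node", "cancel_run", "clear_data_cache",
    "coa_create", "coa_run", "coa_deploy", "coa_refresh",
}

def guard(tool: str, args: dict) -> dict:
    """Hypothetical dispatch guard: block destructive tools unless confirmed=True."""
    if tool in DESTRUCTIVE and args.get("confirmed") is not True:
        return {"error": f"{tool} is destructive; re-call with confirmed: true"}
    return {"ok": True}
```

An agent that forgets the flag gets the error back and must ask the user before retrying with `confirmed: true`.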

**Environments, workspaces, projects**

- `list_environments` - List all available environments
- `get_environment` - Get details of a specific environment
- `list_workspaces` - List all workspaces
- `get_workspace` - Get details of a specific workspace
- `list_projects` - List all projects
- `get_project` - Get project details

**Nodes**

- `list_environment_nodes` - List nodes in an environment
- `list_workspace_nodes` - List nodes in a workspace
- `get_environment_node` - Get a specific environment node
- `get_workspace_node` - Get a specific workspace node
- `analyze_workspace_patterns` - Detect package adoption, pipeline layers, methodology, and generate recommendations
- `list_workspace_node_types` - List distinct node types observed in current workspace nodes

**Jobs, subgraphs, runs**

- `list_environment_jobs` - List all jobs for an environment
- `get_environment_job` - Get details of a specific job
- `get_workspace_subgraph` - Get details of a specific subgraph by UUID (the public API has no subgraph list endpoint - look up UUIDs via the repo's `subgraphs/` folder or the local cache populated on create)
- `list_runs` - List runs with optional filters
- `get_run` - Get details of a specific run
- `get_run_results` - Get results of a completed run
- `get_run_details` - Run metadata plus results in one call

**Search**

- `search_workspace_content` - Search node SQL, column names, descriptions, and config values
- `audit_documentation_coverage` - Scan all workspace nodes/columns for missing descriptions

**Local project & cloud CLI**

- `coa_list_project_nodes` - List all nodes defined in a local project (pre-deploy)
**Plan & build**

- `plan_pipeline` - Plan a pipeline from SQL or a natural-language goal without mutating the workspace; ranks best-fit node types from the local repo
- `create_pipeline_from_plan` - Execute an approved pipeline plan using predecessor-based creation
- `create_pipeline_from_sql` - Plan and create a pipeline directly from SQL
- `build_pipeline_from_intent` - Build a pipeline from a natural language goal with automatic entity resolution and node type selection
- `review_pipeline` - Analyze an existing pipeline for redundant nodes, missing joins, layer violations, naming issues, and optimization opportunities
- `parse_sql_structure` - Parse a SQL statement into structural components (CTEs, source tables, projected columns) without touching the workspace
- `select_pipeline_node_type` - Rank and select the best Coalesce node type for a pipeline step

**Workshop (iterative, conversational)**

- `pipeline_workshop_open` - Open an iterative pipeline builder session with workspace context pre-loaded
- `pipeline_workshop_instruct` - Send a natural language instruction to modify the current workshop plan
- `get_pipeline_workshop_status` - Get the current state of a workshop session
- `pipeline_workshop_close` - Close a workshop session and release resources

**Local project validation & planning**

- `coa_validate` - Validate YAML schemas and scan a local project for configuration problems
- `coa_plan` - Generate a deployment plan JSON by diffing a local project against a cloud environment (non-destructive)
**Create**

- `create_workspace_node_from_scratch` - Create a workspace node with no predecessors
- `create_workspace_node_from_predecessor` - Create a node from predecessor nodes with column coverage verification
- `create_node_from_external_schema` - Create a workspace node whose columns match an existing warehouse table or external schema

**Update**

- `set_workspace_node` - Replace a workspace node with a full body
- `update_workspace_node` - Safely update selected fields of a workspace node
- `replace_workspace_node_columns` - Replace `metadata.columns` wholesale
- `delete_workspace_node` - Delete a node from a workspace ⚠️

**Configure**

- `complete_node_configuration` - Intelligently complete a node's configuration by analyzing context
- `apply_join_condition` - Auto-generate and write a FROM/JOIN/ON clause for a multi-predecessor node
- `convert_join_to_aggregation` - Convert a join-style node into an aggregated fact-style node

**Subgraphs & jobs**

- `create_workspace_subgraph` - Create a subgraph to group nodes visually
- `update_workspace_subgraph` - Update a subgraph's name and node membership
- `delete_workspace_subgraph` - Delete a subgraph (nodes are NOT deleted) ⚠️
- `create_workspace_job` - Create a job in a workspace with node include/exclude selectors
- `update_workspace_job` - Update a job's name and node selectors
- `delete_workspace_job` - Delete a job ⚠️
**Runs**

- `start_run` - Start a new run; requires Snowflake auth
- `run_and_wait` - Start a run and poll until completion
- `run_status` - Check status of a running job
- `retry_run` - Retry a failed run
- `retry_and_wait` - Retry a failed run and poll until completion
- `cancel_run` - Cancel a running job ⚠️
- `diagnose_run_failure` - Classify errors, surface root cause, suggest actionable fixes
- `get_environment_overview` - Environment details with full node list
- `get_environment_health` - Dashboard: node counts, run statuses, failed runs in last 24h, stale nodes, dependency health

**Local execution (bundled CLI)**

- `coa_dry_run_create` - Preview DDL without executing (does not validate columns/types exist in warehouse)
- `coa_dry_run_run` - Preview DML without executing (same caveat)
- `coa_create` - Run DDL (CREATE/REPLACE) against the warehouse for selected nodes ⚠️
- `coa_run` - Run DML (INSERT/MERGE) to populate selected nodes ⚠️
- `coa_deploy` - Apply a plan JSON to a cloud environment ⚠️
- `coa_refresh` - Run DML for selected nodes in an already-deployed environment (no local project required) ⚠️

**Lineage & impact**

- `get_upstream_nodes` - Walk the full upstream dependency graph for a node
- `get_downstream_nodes` - Walk the full downstream dependency graph for a node
- `get_column_lineage` - Trace a column through the pipeline upstream and downstream
- `analyze_impact` - Downstream impact of changing a node or specific column - impacted counts, grouped by depth, and critical path
- `propagate_column_change` - Update all downstream columns after a column rename or data type change ⚠️

**Repo & node types**

- `list_repo_packages` - List package aliases and enabled node-type coverage from a committed Coalesce repo
- `list_repo_node_types` - List exact resolvable committed node-type identifiers from `nodeTypes/`
- `get_repo_node_type_definition` - Resolve one node type and return its outer definition plus parsed `nodeMetadataSpec`
- `generate_set_workspace_node_template` - Generate a YAML-friendly `set_workspace_node` body template
- `search_node_type_variants` - Search the committed node-type corpus by normalized family, package, primitive, or support status
- `get_node_type_variant` - Load one exact node-type corpus variant by variant key
- `generate_set_workspace_node_template_from_variant` - Generate a template from a committed corpus variant

**Projects, environments & git accounts**

- `create_environment` - Create a new environment within a project
- `delete_environment` - Delete an environment ⚠️
- `create_project` - Create a new project
- `update_project` - Update a project
- `delete_project` - Delete a project ⚠️
- `list_git_accounts` - List all git accounts
- `get_git_account` - Get git account details
- `create_git_account` - Create a new git account
- `update_git_account` - Update a git account
- `delete_git_account` - Delete a git account ⚠️

**Users & roles**

- `list_org_users` - List all organization users
- `get_user_roles` - Get roles for a specific user
- `list_user_roles` - List all user roles
- `set_org_role` - Set organization role for a user
- `set_project_role` - Set project role for a user
- `delete_project_role` - Remove project role from a user ⚠️
- `set_env_role` - Set environment role for a user
- `delete_env_role` - Remove environment role from a user ⚠️
**Cache snapshots**

- `cache_workspace_nodes` - Fetch every page of workspace nodes, write a full snapshot, and return cache metadata
- `cache_environment_nodes` - Fetch every page of environment nodes, write a full snapshot
- `cache_runs` - Fetch every page of run results, write a full snapshot
- `cache_org_users` - Fetch every page of organization users, write a full snapshot
- `clear_data_cache` - Delete all cached snapshots, auto-cached responses, and plan summaries ⚠️

**Skills & setup**

- `personalize_skills` - Export bundled skill files to a local directory for customization
- `diagnose_setup` - Stateless probe reporting configured setup pieces; pairs with the `/coalesce-setup` MCP prompt
- `coa_doctor` - Check config, credentials, and warehouse connectivity end-to-end for a local project
- `coa_describe` - Fetch a section of COA's self-describing documentation by topic + optional subtopic (also available as `coalesce://coa/describe/*` resources)

Requirements:

- Snowflake credentials for `coa_create` / `coa_run` (see Credentials)
- The bundled `@coalescesoftware/coa` CLI ships its own runtime; the MCP tarball itself is under 1 MB

**1. Clone your project.** If your team already has a Coalesce project in Git, clone it locally - the bundled `coa` CLI operates on a project directory, so most local create/run tools require one on disk:
```shell
git clone <your-coalesce-project-repo-url>
cd my-project
```
Don't have a Git-linked project yet? In the Coalesce UI, open your workspace → Settings → Git and connect a repo (or create one via your Git provider and paste the URL). Coalesce will commit the project skeleton on first push; clone that repo locally once it's populated.
```
my-project/
├── data.yml          # Root metadata (fileVersion, platformKind)
├── locations.yml     # Storage location manifest
├── nodes/            # Pipeline nodes (.yml for V1, .sql for V2)
├── nodeTypes/        # Node type definitions with templates
├── environments/     # Environment configs with storage mappings
├── macros/           # Reusable SQL macros
├── jobs/             # Job definitions
└── subgraphs/        # Subgraph definitions
```
V1 vs V2 - the format is pinned by fileVersion in data.yml. V1 (fileVersion: 1 or 2) stores each node as a single YAML file with columns, transforms, and config inline. V2 (fileVersion: 3) is SQL-first: the node body lives in a .sql file using @id / @nodeType annotations and {{ ref() }} references, with YAML retained for config. New projects default to V2; existing V1 projects keep working unchanged.
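The fileVersion pinning above reduces to a small lookup, sketched here for clarity (the `project_format` helper is illustrative, not part of the CLI):

```python
def project_format(file_version: int) -> str:
    """Map data.yml's fileVersion to the node storage format described above."""
    if file_version in (1, 2):
        return "V1"  # one YAML file per node: columns, transforms, config inline
    if file_version == 3:
        return "V2"  # SQL-first: .sql body with @id/@nodeType and {{ ref() }}, YAML for config
    raise ValueError(f"unknown fileVersion: {file_version}")
```

So a repo whose `data.yml` declares `fileVersion: 3` stores node bodies as `.sql` files, while `fileVersion: 1` or `2` repos keep everything in YAML.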
Point the MCP at this directory by setting repoPath in ~/.coa/config or COALESCE_REPO_PATH in your env block.
**2. Create `workspaces.yml`.** This file is required for `coa_create` / `coa_run` and their dry-run variants. It maps each storage location declared in `locations.yml` to a physical database + schema for local development. It's typically gitignored (per-developer), so cloning the project does not give it to you - you have to create it.
The /coalesce-setup prompt detects a missing workspaces.yml and walks you through it. If you'd rather do it directly, pick one of:
Ask your agent to bootstrap it (easiest): prompt the agent to call the coa_bootstrap_workspaces tool (it needs confirmed: true, so the agent will ask before running).
> [!WARNING]
> **The generated file contains placeholder values.** The bootstrap tool seeds `database` / `schema` with defaults that won't match your real warehouse. Ask the agent to open the file with you and replace every placeholder before calling `coa_create` / `coa_run` - otherwise the generated DDL/DML will target the wrong (or non-existent) database.
Hand-write it. Ask the agent to fetch the authoritative schema via the coa_describe tool (topic: "schema", subtopic: "workspaces") - no top-level wrapper, no fileVersion.
```yaml
# workspaces.yml - keys are workspace names; `dev` is the default if --workspace is omitted
dev:
  connection: snowflake            # required - name of the connection block COA should use
  locations:                       # optional - one entry per storage location name from locations.yml
    SRC_INGEST_TASTY_BITES:
      database: JESSE_DEV          # required
      schema: INGEST_TASTY_BITES   # required
    ETL_STAGE:
      database: JESSE_DEV
      schema: ETL_STAGE
    ANALYTICS:
      database: JESSE_DEV
      schema: ANALYTICS
```
Ask your agent to verify the setup - e.g. "Run coa_doctor on my project and summarize the results." It checks data.yml, workspaces.yml, credentials, and warehouse connectivity end to end.
**3. Pick an auth path:**

| Option A - env var | Option B - reuse `~/.coa/config` |
|---|---|
| Simplest for first-time MCP users. Generate a token and set `COALESCE_ACCESS_TOKEN` in your client's env block. | Best if you already use the `coa` CLI - the MCP reads the same profile. See Credentials for the profile schema. |
When both sources set a field, the env var wins.
**4. Install the server** via one of the Quick Start paths above.

**5. Restart your client,** then run the `/coalesce-setup` prompt to verify everything is wired up.
If you have more than one Coalesce environment to manage, see Multiple environments.
The server reads credentials from two sources and merges them with env-wins precedence - a matching env var always overrides the profile value, so you can pin a single field per session without editing the config file. Call diagnose_setup to see which source supplied each value.
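The env-wins merge can be sketched as a per-field override pass. This is an illustrative sketch of the documented precedence, not the server's code; the two-key mapping shown is abbreviated (the full mapping appears in the table below).

```python
import os

# Abbreviated profile-key -> env-var mapping (see the full table in this README).
ENV_KEYS = {"token": "COALESCE_ACCESS_TOKEN", "domain": "COALESCE_BASE_URL"}

def effective_config(profile: dict, environ=os.environ) -> dict:
    """Merge a ~/.coa/config profile with env vars; a set env var wins per field."""
    merged = dict(profile)
    for key, var in ENV_KEYS.items():
        if environ.get(var):           # only a non-empty env var overrides the profile
            merged[key] = environ[var]
    return merged
```

Because the override is per field, you can keep a full profile on disk and pin just one value (say, the token) for a single session.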
**`~/.coa/config` (shared with the `coa` CLI)**

COA stores credentials in a standard INI file. You create it by hand, or let `coa` write it as you use the CLI. The MCP reads the profile selected by `COALESCE_PROFILE` (default `[default]`) and maps the keys below onto their matching env vars.
```ini
[default]
token=<your-coalesce-refresh-token>
domain=https://your-org.app.coalescesoftware.io
snowflakeAccount=<your-snowflake-account>  # e.g., abc12345.us-east-1 - required by coa CLI
snowflakeUsername=YOUR_USER
snowflakeRole=YOUR_ROLE
snowflakeWarehouse=YOUR_WAREHOUSE
snowflakeKeyPairKey=/Users/you/.coa/rsa_key.p8
snowflakeAuthType=KeyPair
orgID=<your-org-id>                        # optional; fallback for cancel-run
repoPath=/Users/you/path/to/repo           # optional; for repo-backed tools
cacheDir=/Users/you/.coa/cache             # optional; per-profile cache isolation

[staging]
# …additional profiles; select with COALESCE_PROFILE
```
Key mapping - each profile key maps to an env var of the same concept:
| Profile key | Env var |
|---|---|
| `token` | `COALESCE_ACCESS_TOKEN` |
| `domain` | `COALESCE_BASE_URL` |
| `snowflake*` (all keys) | `SNOWFLAKE_*` (matching suffix) |
| `orgID` | `COALESCE_ORG_ID` |
| `repoPath` | `COALESCE_REPO_PATH` |
| `cacheDir` | `COALESCE_CACHE_DIR` |
Notes:

- `snowflakeAuthType` is read by COA itself (no env var) - include it when using key-pair auth.
- `orgID`, `repoPath`, and `cacheDir` are MCP-specific - the COA CLI ignores them.
- Run `npx @coalescesoftware/coa describe config` for the authoritative reference. Unknown keys are ignored.
- If `~/.coa/config` doesn't exist the server runs env-only - startup never fails on a missing or malformed profile file; it just logs a stderr warning.
| Variable | Description | Default |
|---|---|---|
| `COALESCE_ACCESS_TOKEN` | Bearer token from the Coalesce Deploy tab. Optional when `~/.coa/config` provides a token. | — |
| `COALESCE_PROFILE` | Selects which `~/.coa/config` profile to load. | `default` |
| `COALESCE_BASE_URL` | Region-specific base URL. | `https://app.coalescesoftware.io` (US) |
| `COALESCE_ORG_ID` | Fallback org ID for cancel-run. Also readable from `orgID` in the active `~/.coa/config` profile. | — |
| `COALESCE_REPO_PATH` | Local repo root for repo-backed tools and pipeline planning. Also readable from `repoPath` in the active `~/.coa/config` profile. | — |
| `COALESCE_CACHE_DIR` | Base directory for the local data cache. When set, cache files are written here instead of the working directory. Also readable from `cacheDir` in the active `~/.coa/config` profile. | — |
| `COALESCE_MCP_AUTO_CACHE_MAX_BYTES` | JSON size threshold before auto-caching to disk. | `32768` |
| `COALESCE_MCP_LINEAGE_TTL_MS` | In-memory lineage cache TTL in milliseconds. | `1800000` |
| `COALESCE_MCP_MAX_REQUEST_BODY_BYTES` | Max outbound API request body size. | `524288` |
| `COALESCE_MCP_READ_ONLY` | When `true`, hides all write/mutation tools during registration. Only read, list, search, cache, analyze, review, diagnose, and plan tools are exposed. | `false` |
| `COALESCE_MCP_SKILLS_DIR` | Directory for customizable AI skill resources. When set, reads context resources from this directory and seeds defaults on first run. Users can augment or override any skill. | — |
`start_run`, `retry_run`, `run_and_wait`, `retry_and_wait`, and the warehouse-touching COA tools (`coa_create`, `coa_run`) need Snowflake credentials. These normally come from `~/.coa/config`. Override any field via env var:
| Variable | Required | Description |
|---|---|---|
| `SNOWFLAKE_ACCOUNT` | Yes | Snowflake account identifier (e.g., `abc12345.us-east-1`). Required by the local `coa` CLI and `coa doctor`; not used by the MCP's REST run path. |
| `SNOWFLAKE_USERNAME` | Yes | Snowflake account username |
| `SNOWFLAKE_KEY_PAIR_KEY` | No | Path to PEM-encoded private key (required if `SNOWFLAKE_PAT` not set) |
| `SNOWFLAKE_PAT` | No | Snowflake Programmatic Access Token (alternative to key pair) |
| `SNOWFLAKE_KEY_PAIR_PASS` | No | Passphrase for encrypted keys |
| `SNOWFLAKE_WAREHOUSE` | Yes | Snowflake compute warehouse |
| `SNOWFLAKE_ROLE` | Yes | Snowflake user role |
"Required" means one of env OR the matching ~/.coa/config field must supply the value. SNOWFLAKE_PAT is env-only - COA's config uses snowflakePassword for Basic auth (a different concept), which this server deliberately doesn't read.
```json
{
  "coalesce-transform": {
    "command": "npx",
    "args": ["coalesce-transform-mcp"],
    "env": {
      "COALESCE_PROFILE": "staging",
      "SNOWFLAKE_ROLE": "TRANSFORMER_ADMIN"
    }
  }
}
```
This config reads: "use the `[staging]` profile, but override its `snowflakeRole`."
If you work across several Coalesce environments (dev/staging/prod, or multiple orgs), register the package once per profile under distinct server names:
```json
{
  "mcpServers": {
    "coalesce-prod": {
      "command": "npx",
      "args": ["coalesce-transform-mcp"],
      "env": {
        "COALESCE_PROFILE": "prod",
        "COALESCE_MCP_READ_ONLY": "true"
      }
    },
    "coalesce-dev": {
      "command": "npx",
      "args": ["coalesce-transform-mcp"],
      "env": { "COALESCE_PROFILE": "dev" }
    }
  }
}
```
Why this pattern:

- Tool names are namespaced per server - `coalesce-prod__*` vs `coalesce-dev__*` - so an agent can't accidentally mutate the wrong environment.
- Set `COALESCE_MCP_READ_ONLY=true` on the prod entry to hide every write tool on that server while leaving dev fully writable.

Skip this pattern if you only use one environment - a single registration is simpler. For 2–3 environments it's worth the extra config; beyond that, each server is a separate Node process, so consider whether you actually need them all loaded at once.
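The namespacing that makes this safe is done by the MCP client: each tool is exposed as `<server>__<tool>`, so identically named tools from different registrations never collide. A minimal sketch of that client-side behavior (the `namespaced_tools` helper is illustrative, not any client's real API):

```python
def namespaced_tools(servers: dict[str, list[str]]) -> dict[str, str]:
    """Map '<server>__<tool>' names back to the server that owns each tool."""
    return {
        f"{srv}__{tool}": srv
        for srv, tools in servers.items()
        for tool in tools
    }
```

With `coalesce-prod` registered read-only, the prod registration simply never exposes a `start_run`-style write tool, so there is no prod name for an agent to call by mistake.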
Three layers prevent destructive surprises. See docs/safety-model.md for the full breakdown (tool annotations, read-only mode, explicit confirmation, COA preflight validation).
1. **Tool annotations** - every tool declares `readOnlyHint` / `destructiveHint` / `idempotentHint`. The ⚠️ marker in Tools marks `destructiveHint: true` tools.
2. **Read-only mode** - `COALESCE_MCP_READ_ONLY=true` hides all write/mutation tools at server startup. Use it for audits, agent sandboxes, or pairing with a prod profile.
3. **Explicit confirmation** - `delete_*`, `propagate_column_change`, `cancel_run`, `clear_data_cache`, `coa_create`, `coa_run`, `coa_deploy`, `coa_refresh` all require `confirmed: true`.

Additional notes:

- Install via `npx` at `@preview` for preview builds.
- Setup checks: the `diagnose_setup` probe and the `/coalesce-setup` MCP prompt.
- SQL overrides are controlled by `overrideSQLToggle`, and write helpers reject `overrideSQL` fields.
- Use `cache_workspace_nodes` and siblings when you want a reusable snapshot. Configure the threshold with `COALESCE_MCP_AUTO_CACHE_MAX_BYTES`.
- Set `COALESCE_REPO_PATH` (or add `repoPath=` to your `~/.coa/config` profile) to your local Coalesce repo root (containing `nodeTypes/`, `nodes/`, `packages/`), or pass `repoPath` on individual tool calls. The server does not clone repos or install packages.
- Every release of this MCP ships with a known-good COA build rather than tracking the `@next` tag.
- `coa_describe` content is cached under `~/.cache/coalesce-transform-mcp/coa-describe/<coa-version>/` after first access. The cache is version-keyed - upgrading the MCP automatically invalidates stale content.

| | Resource | |
|---|---|---|
| 📘 | Coalesce Docs | Product documentation |
| 🔌 | Coalesce API Docs | REST API reference |
| 🧰 | Coalesce CLI (`coa`) | Bundled CLI docs |
| 🛒 | Coalesce Marketplace | Node type packages |
| 🔗 | Model Context Protocol | MCP spec & ecosystem |
Issues and PRs welcome.
MIT © Coalesce - built on top of the open Model Context Protocol.