A proxy server that wraps existing MCP servers to significantly reduce token consumption by compressing tool descriptions into a two-step interface. It enables users to integrate extensive toolsets without exceeding context limits or incurring high API costs.
An MCP server wrapper for reducing tokens consumed by MCP tools, available in both Python and TypeScript libraries.
All backend MCP tools can now be registered as custom commands in a just-bash sandboxed shell. The agent gets a single `bash` tool that supports standard Unix utilities plus MCP tools converted to CLIs automatically — with pipes, composition, and all. Available in both Python and TypeScript.
`COMMAND_OR_URL` can now be a multi-server MCP config JSON string. Each configured server gets its own prefixed wrapper tools and, in CLI mode, its own generated CLI script. For example, a config with `weather` and `calendar` servers generates separate `weather` and `calendar` CLI commands.
Added a sibling TypeScript implementation with matching compression concepts, OAuth support, in-process runtime APIs, and TypeScript CLI mode.
`COMMAND_OR_URL` can now be an MCP config JSON string. The JSON key becomes the default `--server-name` unless one is passed explicitly.
`--cli-mode` — Converts any wrapped MCP server into a local CLI. Generates an executable shell script (Unix) or `.cmd` file (Windows) so agents and users can interact with the backend via familiar command-line conventions rather than structured tool calls.
`--toonify` — Automatically converts JSON responses from wrapped backend tools into TOON format, a compact human- and LLM-readable alternative to JSON.
MCP Compressor is a proxy server that wraps existing Model Context Protocol (MCP) servers and compresses their tool descriptions to significantly reduce token consumption. Instead of exposing all tools with full schemas directly to language models, it provides a small number of proxy tools or CLI commands.
MCP servers are exploding in popularity, but their tool descriptions consume significant tokens in every LLM request.
With 30k+ tokens just for tool descriptions, costs can reach 1-10 cents per request depending on prompt caching. MCP Compressor solves this by replacing dozens of tools with just 2 wrapper tools, achieving 70-97% token reduction while maintaining full functionality.
- Compression levels: `low`, `medium`, `high`, or `max`
- `--toonify` — converts JSON tool responses to TOON format
- `--cli-mode` — generates shell scripts that let you (or an AI agent) interact with backends via familiar command-line syntax. Supports both single and multi-server configs.
- `--just-bash` — the agent gets a single bash tool that supports standard Unix utilities and MCP tools with pipes and composition.

| Mode | Tools exposed | How the LLM invokes tools |
|---|---|---|
| Compressed (default) | `get_tool_schema` + `invoke_tool` | Via MCP tool calls |
| CLI | Per-server `_help` tools | Via bash CLI commands (bridge + generated scripts) |
| Bash | Per-server `_help` tools + `bash` tool | Via a sandboxed just-bash shell |
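As a rough sanity check on the cost figures above (the per-million-token price below is an illustrative assumption, not a quoted provider rate):

```python
# Rough cost estimate for tool-description overhead per request.
# The $/million-token price is an illustrative assumption, not a real rate.
def description_cost_usd(tokens: int, usd_per_million_tokens: float) -> float:
    return tokens / 1_000_000 * usd_per_million_tokens

full = description_cost_usd(30_000, 3.0)    # ~30k tokens of tool schemas
reduced = description_cost_usd(3_000, 3.0)  # same schemas after ~90% compression
print(f"${full:.3f} -> ${reduced:.4f} per request")  # $0.090 -> $0.0090 per request
```

Without prompt caching this overhead is paid on every request, which is why the savings compound quickly for chatty agents.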
| Capability | Python | TypeScript |
|---|---|---|
| Core compression proxy server | ✅ | ✅ |
| stdio / streamable HTTP / SSE backends | ✅ | ✅ |
| Single and multi-server MCP config JSON input | ✅ | ✅ |
| Persistent OAuth support | ✅ | ✅ |
| CLI mode (single and multi-server) | ✅ | ✅ |
| just-bash mode | ✅ | ✅ |
| In-process runtime API for app/agent embedding | ⚠️ not first-class | ✅ first-class |
| Prompt/resource passthrough parity | ✅ broader | ⚠️ narrower |
| Production maturity | ✅ primary implementation | ⚠️ newer implementation |
Use the Python implementation when you want the most mature feature set today. Use the TypeScript implementation when you want Node.js-native usage, in-process embedding, or tighter TypeScript ecosystem integration.
TypeScript package name: `@atlassian/mcp-compressor` on npm.
Install using pip or uv:
pip install mcp-compressor
# or
uv pip install mcp-compressor
Wrap any MCP server by providing its command or URL:
# Wrap a stdio MCP server
uvx mcp-compressor uvx mcp-server-fetch
# Wrap a remote HTTP MCP server
uvx mcp-compressor https://example.com/server/mcp
# Wrap a remote SSE MCP server
uvx mcp-compressor https://example.com/server/sse
See uvx mcp-compressor --help for detailed documentation on available arguments.
Control how much compression to apply with the --compression-level or -c flag:
# Low
mcp-compressor uvx mcp-server-fetch -c low
# Medium (default)
mcp-compressor uvx mcp-server-fetch -c medium
# High
mcp-compressor uvx mcp-server-fetch -c high
# Max
mcp-compressor uvx mcp-server-fetch -c max
If you want the wrapped backend to behave like a local command-line tool, start here:
mcp-compressor --cli-mode --server-name atlassian -- https://mcp.atlassian.com/v1/mcp
Then use the generated CLI script:
atlassian --help
Instead of exposing the wrapped backend as many MCP tools, --cli-mode turns the backend into a local CLI with a single help tool for discovery.
This is especially useful when you want an agent to work through a shell-style interface, or when a backend server already makes more sense as commands and flags than as direct MCP tool calls.
flowchart LR
Client["MCP Client / Agent"] -->|discovers| HelpTool["<server_name>_help"]
HelpTool -->|explains commands| GeneratedCLI["Generated local CLI script\n(e.g. atlassian)"]
User["User or Agent"] -->|runs CLI subcommands| GeneratedCLI
GeneratedCLI --> Bridge["Local HTTP bridge\n127.0.0.1:<port>"]
Bridge --> Compressor["mcp-compressor\n--cli-mode"]
Compressor --> Backend["Wrapped MCP server"]
Backend --> Compressor
Compressor --> Bridge
Bridge --> GeneratedCLI
- A single `<server_name>_help` tool instead of the wrapper toolset
- `--toonify` is automatically enabled in CLI mode for compact, readable output

# Wrap a remote MCP server as a local CLI
uvx mcp-compressor --cli-mode --server-name atlassian -- https://mcp.atlassian.com/v1/mcp
# Or pass a single MCP config JSON string
uvx mcp-compressor --cli-mode '{"mcpServers": {"atlassian": {"url": "https://mcp.atlassian.com/v1/mcp"}}}'
# Multi-server config — generates one CLI script per server
uvx mcp-compressor --cli-mode '{"mcpServers": {"weather": {"command": "uvx", "args": ["mcp-weather"]}, "calendar": {"command": "uvx", "args": ["mcp-calendar"]}}}'
When CLI mode starts, it:
- binds a local HTTP bridge to `127.0.0.1:<port>`
- writes the executable CLI script to `~/.local/bin/<name>` if available on PATH, otherwise to the current directory; on Windows it writes a `.cmd` launcher to a suitable directory on PATH
- registers one `<server_name>_help` MCP tool per server so the client can discover each generated CLI and its subcommands

Example usage after startup:
# Top-level help — lists all subcommands
atlassian --help
# Per-tool help — shows flags derived from the backend tool schema
atlassian get-confluence-page --help
# Invoke a tool using ordinary CLI flags
atlassian get-confluence-page --cloud-id abc123 --page-id 456
# Escape hatch for complex inputs
atlassian create-jira-issue --json '{"cloudId":"abc","projectKey":"PROJ","summary":"Bug"}'
CLI subcommand names are the snake_case → kebab-case conversion of backend tool names (for example getConfluencePage → get-confluence-page).
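That name mapping can be sketched as follows (`to_cli_name` is a hypothetical helper, not the library's actual implementation):

```python
import re

def to_cli_name(tool_name: str) -> str:
    """Convert snake_case or camelCase MCP tool names to kebab-case CLI subcommands."""
    # Insert a hyphen before interior capitals (camelCase -> camel-Case),
    # then normalize underscores to hyphens and lowercase everything.
    s = re.sub(r"(?<=[a-z0-9])([A-Z])", r"-\1", tool_name)
    return s.replace("_", "-").lower()

print(to_cli_name("getConfluencePage"))  # get-confluence-page
print(to_cli_name("search_issues"))      # search-issues
```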
The generated script only works while mcp-compressor --cli-mode is running.
Use --cli-port if you want to pin the local bridge to a specific port.
# Set working directory
mcp-compressor uvx mcp-server-fetch --cwd /path/to/dir
# Pass environment variables (supports environment variable expansion)
mcp-compressor uvx mcp-server-fetch \
-e API_KEY=${MY_API_KEY} \
-e DEBUG=true
# Add custom headers
mcp-compressor https://api.example.com/mcp \
-H "Authorization=Bearer ${TOKEN}" \
-H "X-Custom-Header=value"
# Set timeout (default: 10 seconds)
mcp-compressor https://api.example.com/mcp \
--timeout 30
When running multiple MCP servers through mcp-compressor, you can add custom prefixes to the wrapper tool names to avoid conflicts:
# Without server name - tools will be: get_tool_schema, invoke_tool
mcp-compressor uvx mcp-server-fetch
# With server name - tools will be: github_get_tool_schema, github_invoke_tool
mcp-compressor https://api.githubcopilot.com/mcp/ --server-name github
# Special characters are automatically sanitized
mcp-compressor uvx mcp-server-fetch --server-name "My Server!"
# Results in: my_server__get_tool_schema, my_server__invoke_tool
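A plausible sketch of that sanitization, reproducing the example above (the library's actual rules may differ):

```python
import re

def sanitize_server_name(name: str) -> str:
    """Replace characters outside MCP's allowed set (A-Z, a-z, 0-9, _, -, .) with underscores."""
    return re.sub(r"[^A-Za-z0-9_.-]", "_", name.lower())

print(sanitize_server_name("My Server!"))                       # my_server_
print(f'{sanitize_server_name("My Server!")}_get_tool_schema')  # my_server__get_tool_schema
```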
Use --toonify to automatically convert JSON backend tool results into TOON format.
# Convert JSON backend tool results to TOON
mcp-compressor https://api.example.com/mcp --toonify
When --toonify is enabled:
- JSON results returned by `invoke_tool(...)` are also toonified
- Responses from `get_tool_schema(...)` and `list_tools(...)` are never toonified
- Error output from `invoke_tool(...)` is never toonified

CLI mode is documented in the dedicated CLI Mode section above.
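To make the compaction concrete, here is a toy encoder for the special case of a uniform list of flat objects, written in the spirit of TOON's tabular layout. It is a simplified sketch, not a conforming implementation of the TOON spec:

```python
def toonify_table(key, rows):
    """Render a uniform list of flat dicts as a compact TOON-style table (toy sketch)."""
    fields = list(rows[0].keys())
    # Header carries the key, row count, and field names once ...
    header = f"{key}[{len(rows)}]{{{','.join(fields)}}}:"
    # ... so each row only repeats the values, not the keys.
    lines = ["  " + ",".join(str(r[f]) for f in fields) for r in rows]
    return "\n".join([header] + lines)

print(toonify_table("issues", [
    {"key": "PROJ-1", "status": "Open"},
    {"key": "PROJ-2", "status": "Done"},
]))
# issues[2]{key,status}:
#   PROJ-1,Open
#   PROJ-2,Done
```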
The short version: use --cli-mode, give the server a name, and interact with the generated local script while mcp-compressor is running.
mcp-compressor https://mcp.atlassian.com/v1/mcp --server-name atlassian --cli-mode --cli-port 8765
# Set log level
mcp-compressor uvx mcp-server-fetch --log-level debug
mcp-compressor uvx mcp-server-fetch -l warning
just-bash mode takes CLI mode one step further — instead of generating shell scripts and a local HTTP bridge, it registers all backend MCP tools as custom commands in a just-bash sandboxed shell environment and exposes a single bash MCP tool.
The agent can run standard Unix utilities (grep, cat, jq, sed, awk, etc.) and MCP tools in the same shell, including pipes and composition.
# Python
uvx mcp-compressor --just-bash -- '{"mcpServers":{"github":{"command":"uvx","args":["mcp-server-github"]},"fetch":{"command":"uvx","args":["mcp-server-fetch"]}}}'
# TypeScript (requires just-bash to be installed)
npx @atlassian/mcp-compressor --just-bash -- '{"mcpServers":{"atlassian":{"url":"https://mcp.atlassian.com/v1/mcp"},"github":{"command":"npx","args":["-y","@modelcontextprotocol/server-github"]}}}'
graph LR
LLM["LLM Agent"] -->|"bash(command)"| B["just-bash Sandbox"]
B -->|standard commands| Unix["grep, cat, jq, sed, ..."]
B -->|MCP commands| CMD["server-name subcommand --args"]
CMD -->|invoke| Runtime["CompressorRuntime"]
Runtime -->|MCP protocol| Backend["Backend MCP Server"]
The agent sees one tool: bash. MCP tools appear as parent commands (named after the server) with subcommands (named after the tools):
# Help for a server's available tools
atlassian --help
# Invoke a tool as a subcommand
atlassian search-issues --jql "project=PROJ AND status='In Progress'"
# Pipe MCP output through Unix tools
atlassian search-issues --jql "project=PROJ" | jq '.issues[].key'
# Subcommand help
atlassian search-issues --help
- The agent sees a single `bash` tool instead of `get_tool_schema` + `invoke_tool` or per-server help tools
- MCP tool output can be piped through `jq`, `grep`, `sed`, and other Unix utilities

from mcp_compressor.bash_commands import create_bash_command, build_bash_tool_description
from just_bash import Bash
# Assuming `compressed_tools` is a connected CompressedTools instance
# and `tools` is the list of backend tool definitions it exposes
cmd = create_bash_command("atlassian", "Atlassian tools", tools, compressed_tools.invoke_tool)
bash = Bash(commands={cmd.name: cmd})
# Execute commands
result = await bash.exec("atlassian search-issues --jql 'project=PROJ'")
See the TypeScript README for in-process library usage with @atlassian/mcp-compressor/bash.
The MCP Compressor acts as a transparent proxy between your LLM client and the underlying MCP server:
flowchart TB
subgraph github["GitHub MCP"]
g1["create_pr"]
g2["get_me"]
g3["list_repos"]
g4["get_issue"]
g5["..."]
g6["(+87 more tools)"]
end
subgraph proxy["MCP Compressor"]
t1["get_tool_schema"]
t2["invoke_tool"]
end
subgraph client["MCP Client"]
end
g1 <--> proxy
g2 <--> proxy
g3 <--> proxy
g4 <--> proxy
g6 <--> proxy
t1 <--> client
t2 <--> client
Instead of seeing all tools with full schemas (which are often thousands of tokens), the LLM sees just:
Available tools:
<tool>search_web(query, max_results): Search the web for information</tool>
<tool>get_weather(location, units): Get current weather for a location</tool>
<tool>send_email(to, subject, body): Send an email message</tool>
When the LLM needs to use a tool, it first calls get_tool_schema(tool_name) to retrieve the full schema, then invoke_tool(tool_name, tool_input) to execute it.
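A toy sketch of that two-step interface (hypothetical class, tool names, and handlers for illustration; this is not the package's actual API):

```python
class ToyCompressor:
    """Minimal sketch of the two-step wrapper interface."""

    def __init__(self, backend_tools):
        # backend_tools: name -> {"schema": ..., "handler": callable}
        self._tools = backend_tools

    def list_tools(self):
        # Only names reach the LLM up front, not full schemas.
        return sorted(self._tools)

    def get_tool_schema(self, name):
        # Step 1: fetch the full schema on demand.
        return self._tools[name]["schema"]

    def invoke_tool(self, name, tool_input):
        # Step 2: forward the call to the backend handler.
        return self._tools[name]["handler"](**tool_input)

proxy = ToyCompressor({
    "get_weather": {
        "schema": {"location": "string", "units": "string"},
        "handler": lambda location, units: f"20 degrees {units} in {location}",
    },
})
print(proxy.list_tools())                    # ['get_weather']
print(proxy.get_tool_schema("get_weather"))  # {'location': 'string', 'units': 'string'}
print(proxy.invoke_tool("get_weather", {"location": "Oslo", "units": "C"}))
```

The token savings come from deferring schema transfer: the full schema is fetched only for the tools actually used.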
If --toonify is enabled, successful backend tool results are converted from JSON to TOON before being returned to the client. The wrapper helper responses themselves are not reformatted.
In CLI mode (--cli-mode), the compressor exposes a single <server_name>_help tool instead of the usual wrappers. All actual tool interaction happens through the generated shell script via a local HTTP bridge.
sequenceDiagram
participant Client as MCP Client
participant Compressor as MCP Compressor
participant Server as GitHub MCP<br/>(91 tools)
Client->>Compressor: list_tools()
Compressor->>Server: list_tools()
Server-->>Compressor: create_pr, get_me, list_repos, ...
Compressor-->>Client: get_tool_schema, invoke_tool
Client->>Compressor: get_tool_schema("create_pr")
Compressor-->>Client: create_pr description & schema
Client->>Compressor: invoke_tool("create_pr", {...})
Compressor->>Server: create_pr({...})
Server-->>Compressor: result
Compressor-->>Client: result
| Level | Description | Use Case |
|---|---|---|
| `max` | Maximum compression - exposes only a `list_tools()` function | Maximum token savings. Good for (1) MCP servers you want to provide to your agent but expect tools to be used rarely and (2) servers with a very large number of tools |
| `high` | Only tool name and parameter names | Maximum token savings, best for large toolsets |
| `medium` (default) | First sentence of each description | Balanced approach, good for most cases |
| `low` | Complete tool descriptions | For tools that are unusual and not intuitive for the agent to understand and use. A lower level of compression provides more context to the LLM on the purpose of the tools and how they relate to each other |
The best choice of compression level will depend on a number of factors, including:
- How self-explanatory the tools are. Consider, for example, a `bash` tool with a single input argument `command`. Any modern LLM will understand exactly how to use it after seeing just the tool name and the name of the argument, so unless there is unexpected internal logic within the tool, aggressive compression can be used with little downside.

You can pass an MCP config JSON string directly as `COMMAND_OR_URL` on the CLI. This is especially useful for remote servers when you want the config itself to carry the URL, headers, transport, or stdio command details.
Single-server and multi-server configs are both supported. For multi-server configs, each server gets its own prefixed wrapper tools (e.g. weather_get_tool_schema, calendar_invoke_tool).
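The prefixing rule described above can be sketched as follows (`wrapper_tool_names` is a hypothetical helper mirroring the naming convention, not part of the package):

```python
import json

def wrapper_tool_names(config_json: str):
    """List the prefixed wrapper tools a multi-server config would expose."""
    servers = json.loads(config_json)["mcpServers"]
    # Each server name becomes a prefix on the two wrapper tools.
    return [f"{name}_{suffix}"
            for name in servers
            for suffix in ("get_tool_schema", "invoke_tool")]

cfg = '{"mcpServers": {"weather": {"command": "uvx", "args": ["mcp-weather"]}, "calendar": {"command": "uvx", "args": ["mcp-calendar"]}}}'
print(wrapper_tool_names(cfg))
# ['weather_get_tool_schema', 'weather_invoke_tool', 'calendar_get_tool_schema', 'calendar_invoke_tool']
```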
To configure mcp-compressor in an MCP JSON configuration file, use the following pattern:
{
"mcpServers": {
"compressed-github": {
"command": "mcp-compressor",
"args": [
"https://api.githubcopilot.com/mcp/",
"--header",
"Authorization=Bearer ${GH_PAT}",
"--server-name",
"github"
]
},
"compressed-fetch": {
"command": "mcp-compressor",
"args": [
"uvx",
"mcp-server-fetch",
"--server-name",
"fetch"
]
}
}
}
This configuration will create tools named github_get_tool_schema, github_invoke_tool, fetch_get_tool_schema, and fetch_invoke_tool, preventing naming conflicts when multiple compressed servers are used together.
With compression level:
{
"mcpServers": {
"compressed-fetch": {
"command": "mcp-compressor",
"args": [
"uvx",
"mcp-server-fetch",
"--compression-level", "high"
]
}
}
}
Usage: mcp-compressor [OPTIONS] COMMAND_OR_URL
Run the MCP Compressor proxy server.
This is the main entry point for the CLI application. It connects to an MCP
server (via stdio, HTTP, or SSE) and wraps it with a compressed tool
interface.
Arguments:
COMMAND_OR_URL The URL of the MCP server to connect to for streamable HTTP
or SSE servers, or the command and arguments to run for
stdio servers. Example: uvx mcp-server-fetch [required]
Options:
--cwd TEXT The working directory to use when running
stdio MCP servers.
-e, --env TEXT Environment variables to set when running
stdio MCP servers, in the form
VAR_NAME=VALUE. Can be used multiple times.
Supports environment variable expansion with
${VAR_NAME} syntax.
-H, --header TEXT Headers to use for remote (HTTP/SSE) MCP
server connections, in the form Header-
Name=Header-Value. Can be used multiple
times. Supports environment variable
expansion with ${VAR_NAME} syntax.
-t, --timeout FLOAT The timeout in seconds for connecting to the
MCP server and making requests. [default:
10.0]
-c, --compression-level [max|high|medium|low]
The level of compression to apply to the
tool descriptions of the wrapped MCP
server. [default: medium]
-n, --server-name TEXT Optional custom name to prefix the wrapper
tool names (get_tool_schema, invoke_tool,
list_tools). The name will be sanitized to
conform to MCP tool name specifications
(only A-Z, a-z, 0-9, _, -, .).
-l, --log-level [debug|info|warning|error|critical]
The logging level. Used for both the MCP
Compressor server and the underlying MCP
server if it is a stdio server. [default:
WARNING]
--toonify Convert JSON backend tool responses to TOON
format automatically.
--cli-mode Start in CLI mode: expose a single help MCP
tool, start a local HTTP bridge, and generate
a shell script for interacting with the
wrapped server via CLI. --toonify is
automatically enabled in this mode.
--cli-port INTEGER Port for the local CLI bridge HTTP server
(default: random free port).
--install-completion Install completion for the current shell.
--show-completion Show completion for the current shell, to
copy it or customize the installation.
--help Show this message and exit.
Add this to claude_desktop_config.json (substituting your server's command or URL) and restart Claude Desktop.
{
"mcpServers": {
"mcp-compressor": {
"command": "npx",
"args": []
}
}
}