A Model Context Protocol (MCP) server that connects AI assistants (Claude, ChatGPT, etc.) to a local Wolfram Engine, enabling symbolic math, numerical analysis, and data visualization through Wolfram Language. It provides secure expression filtering, client authentication, and both local stdio and HTTP transports.
Disclaimer: This is an unofficial, independent, personal project. It is not affiliated with, sponsored by, endorsed by, or certified by Wolfram Research, Inc. "Wolfram", "Wolfram Language", "Wolfram Engine", "Mathematica", and related marks are trademarks of Wolfram Research.
This software does not include any Wolfram Engine / Mathematica binaries, activation keys, license files, or other proprietary materials. Users must independently obtain and properly license their own copy of the Wolfram Engine or Mathematica in accordance with Wolfram's licensing terms.
The sole purpose of this project is to allow a licensed individual to invoke their own, locally-installed Wolfram kernel through AI assistants on their own machine, within the scope permitted by their license. Redistribution of Wolfram Engine access to third parties is not an intended use case and may violate Wolfram's licensing terms.
The server exposes all Wolfram Language capabilities through two universal tools: `evaluate` (text output) and `evaluate_image` (PNG output, experimental).

```bash
# Clone and install
git clone https://github.com/siqiliu-tsinghua/mma-mcp.git
cd mma-mcp
uv sync

# Graphics export dependencies (headless servers only — desktops already have these)
sudo apt-get install -y libfontconfig1 libgl1 libasound2t64 libxkbcommon0 libegl1

# Generate default config
uv run mma-mcp init

# Generate security group files (requires Wolfram kernel, ~1 min)
uv run mma-mcp setup

# Start server (stdio, for local MCP clients)
uv run mma-mcp serve
```
Add to your `.mcp.json`:

```json
{
  "mcpServers": {
    "mma-mcp": {
      "command": "uv",
      "args": ["--directory", "/path/to/mma-mcp", "run", "mma-mcp"]
    }
  }
}
```
Add to your `claude_desktop_config.json` (Settings -> Developer -> Edit Config):

```json
{
  "mcpServers": {
    "mma-mcp": {
      "command": "/path/to/mma-mcp/.venv/bin/mma-mcp"
    }
  }
}
```

On macOS the config is at `~/Library/Application Support/Claude/claude_desktop_config.json`; on Linux, at `~/.config/Claude/claude_desktop_config.json`.
To run over HTTP instead of stdio:

```bash
uv run mma-mcp serve --transport http --host 127.0.0.1 --port 8000
```

All settings live in `mma_mcp.toml` (or `pyproject.toml` under `[tool.mma-mcp]`).

```bash
uv run mma-mcp init   # generates mma_mcp.toml with comments
```

Key sections:
| Section | Description |
|---|---|
| `[kernel]` | Wolfram kernel path, timeout, output format |
| `[server]` | Transport mode, host, port |
| `[security]` | Blacklist/whitelist mode, capability groups |
| `[tools]` | Which MCP tools to expose |
| `[tls]` | Domain and DNS provider for HTTPS (Caddy) |
| `[auth]` | Client identity and role-based access control |
Expressions are filtered before reaching the Wolfram kernel. Symbols are extracted via regex and checked against the active policy.
Blacklist mode (default): blocks dangerous groups (system_exec, file I/O, networking, dynamic eval).
Whitelist mode: only allows symbols from explicitly enabled groups.
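The filtering idea described above can be sketched in a few lines of Python. The regex and function names here are illustrative assumptions, not the project's actual implementation: extract uppercase-initial candidate symbols, then reject the expression if any fall in a blocked group.

```python
import re

# Wolfram Language symbols start with an uppercase letter (e.g. Integrate, Run).
# This pattern and the check below are a simplified sketch, not mma-mcp's code.
SYMBOL_RE = re.compile(r"[A-Z][A-Za-z0-9]*")

def check_expression(expr: str, blocked: set[str]) -> list[str]:
    """Return the blocked symbols found in expr (empty list means allowed)."""
    return [s for s in SYMBOL_RE.findall(expr) if s in blocked]

blocked = {"Run", "DeleteFile", "URLFetch"}   # hypothetical dangerous-group members
safe = check_expression("Integrate[Sin[x], x]", blocked)    # → []
unsafe = check_expression('Run["rm -rf /"]', blocked)       # → ["Run"]
```

A whitelist policy would invert the check: reject any extracted symbol *not* present in the union of enabled groups.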
29 capability groups (22 safe + 7 dangerous) cover ~6000 Wolfram Language symbols. Regenerate from your local kernel:

```bash
uv run mma-mcp setup          # required after cloning (generates from your local kernel)
uv run mma-mcp setup --force  # force regeneration (e.g., after Wolfram Engine upgrade)
```
When using HTTP transport, you can configure per-client credentials and roles to isolate different AI clients (e.g., Claude and ChatGPT) connecting to the same kernel:
```bash
# Generate password hash
uv run mma-mcp hash-password

# Generate TOML snippet for a new client
uv run mma-mcp add-client alice --role admin
```
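For intuition, salted password hashing of the kind `hash-password` presumably performs can be done with the standard library alone. This is a generic PBKDF2 sketch, not necessarily the scheme mma-mcp actually uses:

```python
import binascii
import hashlib
import hmac
import os

# Generic salted-hash sketch (PBKDF2-HMAC-SHA256). The storage format
# "salthex$hashhex" is an assumption for illustration only.
def hash_password(password: str, iterations: int = 100_000) -> str:
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"{binascii.hexlify(salt).decode()}${binascii.hexlify(dk).decode()}"

def verify_password(password: str, stored: str, iterations: int = 100_000) -> bool:
    salt_hex, dk_hex = stored.split("$")
    salt = binascii.unhexlify(salt_hex)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(binascii.hexlify(dk).decode(), dk_hex)

stored = hash_password("s3cret")
ok_good = verify_password("s3cret", stored)   # → True
ok_bad = verify_password("wrong", stored)     # → False
```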
Each client is bound to a role that controls which tools it can access, which Wolfram symbols it can use, and resource limits (timeout, result size). Concurrent clients are isolated via a kernel worker pool — each tool call runs in an exclusive kernel process with a temporary WL context.
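The worker-pool pattern described above (exclusive kernel per call, returned to the pool afterward) can be sketched with a blocking queue. This is a minimal illustration under assumed names, not the project's actual implementation:

```python
import queue

# Sketch of the kernel worker pool: each tool call checks out one kernel,
# runs with exclusive access, and always returns it to the pool.
class KernelPool:
    def __init__(self, kernels):
        self._pool = queue.Queue()
        for k in kernels:
            self._pool.put(k)

    def run(self, fn):
        kernel = self._pool.get()      # blocks until a kernel is free
        try:
            return fn(kernel)          # exclusive use during the call
        finally:
            self._pool.put(kernel)     # release even if fn raised

pool = KernelPool(["kernel-1", "kernel-2"])
result = pool.run(lambda k: f"{k} evaluated 1+1 -> 2")
```

Because `get()` blocks, concurrent callers beyond the pool size simply wait for a free kernel rather than sharing one, which is what isolates clients from each other.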
See the [auth] section in mma_mcp.toml for configuration details.
```bash
# Run tests
uv run pytest tests/ -v

# Inspect MCP tools interactively
uv run mcp dev src/mma_mcp/server.py
```
| Command | Description |
|---|---|
| `mma-mcp serve` | Start the MCP server (default) |
| `mma-mcp init` | Generate default `mma_mcp.toml` |
| `mma-mcp setup` | Generate security group JSONs from local kernel |
| `mma-mcp caddyfile` | Generate Caddyfile for HTTPS |
| `mma-mcp hash-password` | Hash a password for config |
| `mma-mcp add-client` | Generate TOML snippet for a new AI client |
| Client | Long computations | Notes |
|---|---|---|
| Claude.ai | ✔ Supported | Sends progressToken; server heartbeat keeps connection alive |
| ChatGPT | ✘ May timeout | Does not send progressToken; has a hard timeout (~60s) independent of server heartbeat |
| Claude Desktop / Claude Code | Not tested | Local stdio transport |
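The heartbeat behavior in the table can be pictured as a wrapper that pings the client while a long evaluation runs. This is a generic asyncio sketch of the idea, with a placeholder `notify` callback standing in for an MCP progress notification; it is not the server's actual code:

```python
import asyncio

async def with_heartbeat(task_coro, notify, interval=1.0):
    """Run task_coro, calling notify() every `interval` seconds until it finishes."""
    task = asyncio.ensure_future(task_coro)
    while not task.done():
        await asyncio.wait([task], timeout=interval)
        if not task.done():
            notify()   # placeholder: send a progress/heartbeat message here
    return task.result()

pings = []

async def slow_job():
    await asyncio.sleep(0.3)   # stands in for a long Wolfram evaluation
    return "result"

result = asyncio.run(with_heartbeat(slow_job(), lambda: pings.append("ping"), interval=0.1))
```

This keeps a connection alive for clients that honor server-side keepalives; it cannot help a client like ChatGPT that enforces its own hard timeout regardless of traffic.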
MIT — applies only to the code in this repository. Use of Wolfram Engine / Mathematica is governed by Wolfram Research's own license terms.