An MCP server that enables cross-LLM communication and memory sharing, allowing different AI models to collaborate and share context across conversations.
Access multiple LLM APIs from one place. Call ChatGPT, Claude, DeepSeek, Gemini, Grok, Kimi, Perplexity, Mistral, and Hugging Face Inference Router with intelligent model selection, preferences, and prompt logging.
An MCP (Model Context Protocol) server that provides unified access to multiple Large Language Model APIs for AI coding environments like Cursor and Claude Desktop.
Ready to access multiple LLMs? Install in seconds:
Install in Cursor (Recommended):
Or install manually:
npm install -g cross-llm-mcp
# Or from source:
git clone https://github.com/JamesANZ/cross-llm-mcp.git
cd cross-llm-mcp && npm install && npm run build
- call-chatgpt – OpenAI's ChatGPT API
- call-claude – Anthropic's Claude API
- call-deepseek – DeepSeek API
- call-gemini – Google's Gemini API
- call-grok – xAI's Grok API
- call-kimi – Moonshot AI's Kimi API
- call-perplexity – Perplexity AI API
- call-mistral – Mistral AI API
- call-huggingface – Hugging Face Inference Router (OpenAI-compatible Hub models)
- call-all-llms – Call all LLMs with the same prompt
- call-llm – Call a specific provider by name
- get-user-preferences – Get current preferences
- set-user-preferences – Set default model, cost preference, and tag-based preferences
- get-models-by-tag – Find models by tag (coding, business, reasoning, math, creative, general)
- get-prompt-history – View prompt history with filters
- get-prompt-stats – Get statistics about prompt logs
- delete-prompt-entries – Delete log entries by criteria
- clear-prompt-history – Clear all prompt logs

Click the install link above or use:
cursor://anysphere.cursor-deeplink/mcp/install?name=cross-llm-mcp&config=eyJjcm9zcy1sbG0tbWNwIjp7ImNvbW1hbmQiOiJucHgiLCJhcmdzIjpbIi15IiwiY3Jvc3MtbGxtLW1jcCJdfX0=
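The `config` parameter in that deeplink is just base64-encoded JSON; decoding it shows exactly what Cursor will install:

```typescript
// Decode the deeplink's config parameter (Node.js).
const encoded =
  "eyJjcm9zcy1sbG0tbWNwIjp7ImNvbW1hbmQiOiJucHgiLCJhcmdzIjpbIi15IiwiY3Jvc3MtbGxtLW1jcCJdfX0=";
const decoded = JSON.parse(Buffer.from(encoded, "base64").toString("utf8"));
console.log(JSON.stringify(decoded));
// → {"cross-llm-mcp":{"command":"npx","args":["-y","cross-llm-mcp"]}}
```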
After installation, add your API keys in Cursor settings (see Configuration below).
Requirements: Node.js 18+ and npm
# Clone and build
git clone https://github.com/JamesANZ/cross-llm-mcp.git
cd cross-llm-mcp
npm install
npm run build
Add to claude_desktop_config.json:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "cross-llm-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/cross-llm-mcp/build/index.js"],
      "env": {
        "OPENAI_API_KEY": "your_openai_api_key_here",
        "ANTHROPIC_API_KEY": "your_anthropic_api_key_here",
        "DEEPSEEK_API_KEY": "your_deepseek_api_key_here",
        "GEMINI_API_KEY": "your_gemini_api_key_here",
        "XAI_API_KEY": "your_grok_api_key_here",
        "KIMI_API_KEY": "your_kimi_api_key_here",
        "PERPLEXITY_API_KEY": "your_perplexity_api_key_here",
        "MISTRAL_API_KEY": "your_mistral_api_key_here",
        "HF_TOKEN": "your_huggingface_token_here"
      }
    }
  }
}
Restart Claude Desktop after configuration.
Set environment variables for the LLM providers you want to use:
export OPENAI_API_KEY="your_openai_api_key"
export ANTHROPIC_API_KEY="your_anthropic_api_key"
export DEEPSEEK_API_KEY="your_deepseek_api_key"
export GEMINI_API_KEY="your_gemini_api_key"
export XAI_API_KEY="your_grok_api_key"
export KIMI_API_KEY="your_kimi_api_key"
export PERPLEXITY_API_KEY="your_perplexity_api_key"
export MISTRAL_API_KEY="your_mistral_api_key"
export HF_TOKEN="your_huggingface_token"
# Or: HUGGINGFACE_API_KEY (same as HF_TOKEN)
# Optional: DEFAULT_HUGGINGFACE_MODEL, HUGGINGFACE_INFERENCE_BASE_URL (default https://router.huggingface.co/v1)
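As a sketch of how the Hugging Face variables above combine (the helper name is illustrative, not the server's actual internals):

```typescript
// Illustrative only: resolve the Hugging Face settings documented above.
type HfConfig = { token?: string; model?: string; baseUrl: string };

function resolveHfConfig(env: Record<string, string | undefined>): HfConfig {
  return {
    // HF_TOKEN and HUGGINGFACE_API_KEY are interchangeable.
    token: env.HF_TOKEN ?? env.HUGGINGFACE_API_KEY,
    // Optional default-model override.
    model: env.DEFAULT_HUGGINGFACE_MODEL,
    // Documented default base URL for the Inference Router.
    baseUrl:
      env.HUGGINGFACE_INFERENCE_BASE_URL ?? "https://router.huggingface.co/v1",
  };
}
```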
This server calls Hugging Face’s hosted Inference Router; it does not download weights or run PyTorch/GGUF inside Node. To run models on your machine, use tools such as Ollama, llama.cpp, Text Generation Inference, or Hugging Face Inference Endpoints, then point other clients at those services if they expose an API.
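Because the router is OpenAI-compatible, the request call-huggingface ultimately issues looks roughly like a standard chat-completions call. This sketch only builds the request (no network); the helper name is illustrative:

```typescript
// Build an OpenAI-compatible chat-completions request for the router.
function buildRouterRequest(token: string, model: string, prompt: string) {
  return {
    url: "https://router.huggingface.co/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model, // Hub repo id, e.g. "Qwen/Qwen2.5-7B-Instruct"
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}
```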
Get a response from OpenAI:
{
  "tool": "call-chatgpt",
  "arguments": {
    "prompt": "Explain quantum computing in simple terms",
    "temperature": 0.7,
    "max_tokens": 500
  }
}
Get a response from a Hub model via the Inference Router (model is the Hub repo id, e.g. Qwen/Qwen2.5-7B-Instruct):
{
  "tool": "call-huggingface",
  "arguments": {
    "prompt": "Reply with exactly: ok",
    "model": "Qwen/Qwen2.5-7B-Instruct",
    "temperature": 0.3,
    "max_tokens": 32
  }
}
Get responses from all providers:
{
  "tool": "call-all-llms",
  "arguments": {
    "prompt": "Write a short poem about AI",
    "temperature": 0.8
  }
}
Automatically use the best model for each task type:
{
  "tool": "set-user-preferences",
  "arguments": {
    "defaultModel": "gpt-4o",
    "costPreference": "cheaper",
    "tagPreferences": {
      "coding": "deepseek-r1",
      "general": "gpt-4o",
      "business": "claude-3.5-sonnet-20241022",
      "reasoning": "deepseek-r1",
      "math": "deepseek-r1",
      "creative": "gpt-4o"
    }
  }
}
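The selection these preferences imply can be sketched as a simple lookup with a default fallback (a guess at the behavior, not the server's actual code):

```typescript
type Tag = "coding" | "business" | "reasoning" | "math" | "creative" | "general";

// Mirrors the tagPreferences example above.
const tagPreferences: Record<Tag, string> = {
  coding: "deepseek-r1",
  general: "gpt-4o",
  business: "claude-3.5-sonnet-20241022",
  reasoning: "deepseek-r1",
  math: "deepseek-r1",
  creative: "gpt-4o",
};

// Fall back to defaultModel when no tag preference applies.
function pickModel(tag: Tag, defaultModel = "gpt-4o"): string {
  return tagPreferences[tag] ?? defaultModel;
}
```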
View your prompt logs:
{
  "tool": "get-prompt-history",
  "arguments": {
    "provider": "chatgpt",
    "limit": 10
  }
}
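The filtering get-prompt-history performs can be sketched as below; the log-entry shape is an assumption for illustration, not the server's actual schema:

```typescript
// Assumed entry shape; the real prompts.json schema may differ.
type PromptEntry = { provider: string; prompt: string; timestamp: string };

function filterHistory(
  entries: PromptEntry[],
  provider?: string,
  limit = 10,
): PromptEntry[] {
  const matched = provider
    ? entries.filter((e) => e.provider === provider)
    : entries;
  return matched.slice(-limit); // keep the most recent `limit` entries
}
```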
Models are tagged by their strengths:
- coding: deepseek-r1, deepseek-coder, gpt-4o, claude-3.5-sonnet-20241022
- business: claude-3-opus-20240229, gpt-4o, gemini-1.5-pro
- reasoning: deepseek-r1, o1-preview, claude-3.5-sonnet-20241022
- math: deepseek-r1, o1-preview, o1-mini
- creative: gpt-4o, claude-3-opus-20240229, gemini-1.5-pro
- general: gpt-4o-mini, claude-3-haiku-20240307, gemini-1.5-flash

Built with: Node.js, TypeScript, MCP SDK
Dependencies: @modelcontextprotocol/sdk, superagent, zod
Platforms: macOS, Windows, Linux
Preference Storage:
macOS/Linux: ~/.cross-llm-mcp/preferences.json
Windows: %APPDATA%/cross-llm-mcp/preferences.json

Prompt Log Storage:
macOS/Linux: ~/.cross-llm-mcp/prompts.json
Windows: %APPDATA%/cross-llm-mcp/prompts.json

⭐ If this project helps you, please star it on GitHub! ⭐
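How those storage paths resolve per platform can be sketched as follows; the helper is illustrative, not the server's code:

```typescript
// Illustrative: resolve the storage file path for a given platform.
function storageFile(
  platform: string,
  home: string,
  file: "preferences.json" | "prompts.json",
  appData?: string,
): string {
  return platform === "win32"
    ? `${appData ?? home + "\\AppData\\Roaming"}\\cross-llm-mcp\\${file}`
    : `${home}/.cross-llm-mcp/${file}`;
}
```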
Contributions welcome! Please open an issue or submit a pull request.
MIT License – see LICENSE.md for details.
If you find this project useful, consider supporting it:
⚡ Lightning Network
lnbc1pjhhsqepp5mjgwnvg0z53shm22hfe9us289lnaqkwv8rn2s0rtekg5vvj56xnqdqqcqzzsxqyz5vqsp5gu6vh9hyp94c7t3tkpqrp2r059t4vrw7ps78a4n0a2u52678c7yq9qyyssq7zcferywka50wcy75skjfrdrk930cuyx24rg55cwfuzxs49rc9c53mpz6zug5y2544pt8y9jflnq0ltlha26ed846jh0y7n4gm8jd3qqaautqa
₿ Bitcoin: bc1ptzvr93pn959xq4et6sqzpfnkk2args22ewv5u2th4ps7hshfaqrshe0xtp
Ξ Ethereum/EVM: 0x42ea529282DDE0AA87B42d9E83316eb23FE62c3f
Add this to claude_desktop_config.json and restart Claude Desktop (the npx arguments here match the Cursor deeplink config above):

{
  "mcpServers": {
    "jamesanz-cross-llm-mcp": {
      "command": "npx",
      "args": ["-y", "cross-llm-mcp"]
    }
  }
}