A unified local API gateway providing caching, rate limiting, and full Model Context Protocol (MCP) compatibility for AI agent integration. It enables users to aggregate multiple API endpoints into a single gateway with built-in observability and customizable eviction strategies.
# Clone the repository
git clone https://github.com/bandageok/mcp-api-gateway.git
cd mcp-api-gateway
# Install dependencies
pip install -r requirements.txt
# Or install directly
pip install aiohttp pyyaml
python gateway.py --create-config
This creates a config.yaml with sample endpoints:
host: localhost
port: 8080

cache:
  enabled: true
  max_size: 1000
  ttl: 300
  strategy: lru

rate_limit:
  enabled: true
  requests_per_minute: 60

apis:
  - name: github-api
    url: https://api.github.com
    method: GET
# With config file
python gateway.py -c config.yaml
# Or with command line arguments
python gateway.py --host 0.0.0.0 --port 8080
# Call an API endpoint
curl http://localhost:8080/api/github-api/users/bandageok
# Check health
curl http://localhost:8080/health
# Get statistics
curl http://localhost:8080/stats
# Clear cache
curl -X DELETE http://localhost:8080/cache/clear
# Get configuration
curl http://localhost:8080/config
The gateway provides full MCP protocol support for AI agents:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}
Response:
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "github-api",
        "description": "Call GET https://api.github.com",
        "inputSchema": {
          "type": "object",
          "properties": {
            "params": {"type": "object"},
            "data": {"type": "object"}
          }
        }
      }
    ]
  }
}
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "github-api",
    "arguments": {
      "params": {"path": "/users/bandageok"}
    }
  }
}
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "resources/list",
  "params": {}
}
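The requests above all share the same JSON-RPC 2.0 envelope. As a sketch, a small helper (hypothetical, not part of the gateway itself) can build a `tools/call` payload before POSTing it to `/mcp`:

```python
import json

def build_tools_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 tools/call request for the /mcp endpoint."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

payload = build_tools_call(
    "github-api", {"params": {"path": "/users/bandageok"}}, request_id=2
)
print(json.dumps(payload, indent=2))
# POST this payload to http://localhost:8080/mcp with Content-Type: application/json
```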
| Option | Type | Default | Description |
|---|---|---|---|
| host | string | localhost | Host to bind to |
| port | int | 8080 | Port to bind to |
| debug | bool | false | Enable debug mode |
| log_level | string | INFO | Logging level |
| cache.enabled | bool | true | Enable caching |
| cache.max_size | int | 1000 | Maximum cache entries |
| cache.ttl | int | 300 | Cache TTL in seconds |
| cache.strategy | string | lru | Cache strategy (lru/lfu/fifo/ttl) |
| rate_limit.enabled | bool | true | Enable rate limiting |
| rate_limit.requests_per_minute | int | 60 | Rate limit threshold |
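The options above can be loaded with PyYAML, falling back to the documented defaults for anything omitted. This is a sketch of that merge logic, not the gateway's actual loader:

```python
import yaml  # PyYAML, already a project dependency

# Defaults taken from the configuration table above
DEFAULTS = {
    "host": "localhost",
    "port": 8080,
    "debug": False,
    "log_level": "INFO",
    "cache": {"enabled": True, "max_size": 1000, "ttl": 300, "strategy": "lru"},
    "rate_limit": {"enabled": True, "requests_per_minute": 60},
}

def load_config(text: str) -> dict:
    """Parse a YAML config string and merge it over the defaults."""
    user = yaml.safe_load(text) or {}
    merged = {}
    for key, default in DEFAULTS.items():
        if isinstance(default, dict):
            merged[key] = {**default, **user.get(key, {})}
        else:
            merged[key] = user.get(key, default)
    merged["apis"] = user.get("apis", [])
    return merged

config = load_config("port: 9090\ncache:\n  ttl: 60\n")
print(config["port"], config["cache"]["ttl"], config["cache"]["strategy"])
```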
| Endpoint | Method | Description |
|---|---|---|
| / | GET | Health check |
| /health | GET | Detailed health status |
| /stats | GET | Gateway statistics |
| /config | GET | Current configuration |
| /cache/clear | DELETE | Clear the cache |
| /api/{name} | * | Proxy to configured API |
| /mcp | POST | MCP protocol endpoint |
┌─────────────────────────────────────────────────────────────┐
│ MCP API Gateway │
├─────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌───────────────┐ │
│ │ Cache │ │Rate Limiter │ │ MCP Handler │ │
│ │ (LRU/LFU) │ │ (Token) │ │ │ │
│ └─────────────┘ └─────────────┘ └───────────────┘ │
├─────────────────────────────────────────────────────────────┤
│ API Client Pool │
├─────────────────────────────────────────────────────────────┤
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ GitHub │ │ Weather │ │ Stocks │ │ Custom │ │
│ │ API │ │ API │ │ API │ │ API │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
└─────────────────────────────────────────────────────────────┘
Connect AI agents to external APIs through MCP:
import requests

# Initialize MCP
response = requests.post("http://localhost:8080/mcp", json={
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {}
})

# List available tools
response = requests.post("http://localhost:8080/mcp", json={
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/list",
    "params": {}
})
Protect external APIs from being overwhelmed:
rate_limit:
  enabled: true
  requests_per_minute: 60  # Max 60 requests per minute
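The architecture diagram labels the limiter as token-based. A minimal standalone token bucket could look like this (a hypothetical sketch to illustrate the behaviour the config controls, not the gateway's actual implementation):

```python
import time

class TokenBucket:
    """Token-bucket limiter: refills at a steady rate up to a fixed capacity."""

    def __init__(self, requests_per_minute: int):
        self.capacity = requests_per_minute
        self.tokens = float(requests_per_minute)        # bucket starts full
        self.refill_rate = requests_per_minute / 60.0   # tokens per second
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(requests_per_minute=60)
print(bucket.allow())  # True while tokens remain
```

Requests beyond the configured rate simply see `allow()` return `False` until enough time has passed for the bucket to refill.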
Cache expensive API responses:
cache:
  enabled: true
  max_size: 1000
  ttl: 300       # Cache for 5 minutes
  strategy: lru  # Evict least recently used
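The `lru` strategy with TTL expiry can be sketched with an `OrderedDict` (again a hypothetical illustration of the `max_size`/`ttl`/`strategy` settings, not the gateway's own cache class):

```python
import time
from collections import OrderedDict

class LRUCache:
    """LRU cache with per-entry TTL, mirroring max_size/ttl from the config."""

    def __init__(self, max_size: int = 1000, ttl: float = 300):
        self.max_size = max_size
        self.ttl = ttl
        self._store = OrderedDict()  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]          # expired: drop the stale entry
            return None
        self._store.move_to_end(key)      # mark as most recently used
        return value

    def set(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = (value, time.monotonic() + self.ttl)
        if len(self._store) > self.max_size:
            self._store.popitem(last=False)  # evict least recently used
```

The other strategies differ only in the eviction line: `fifo` pops the oldest insertion regardless of access order, and `lfu` tracks hit counts instead.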
import aiohttp
import asyncio

async def call_gateway():
    async with aiohttp.ClientSession() as session:
        # Call an API
        async with session.get("http://localhost:8080/api/github-api/users/bandageok") as resp:
            data = await resp.json()
            print(data)

        # Check stats
        async with session.get("http://localhost:8080/stats") as resp:
            stats = await resp.json()
            print(f"Cache hit rate: {stats['cache_hit_rate']}")

asyncio.run(call_gateway())
apis:
  - name: my-api
    url: https://api.example.com
    method: GET
    headers:
      Authorization: Bearer YOUR_TOKEN
    timeout: 30
    retry_count: 3
MIT License - See LICENSE for details.
Contributions are welcome! Please feel free to submit a Pull Request.
⭐ Star us on GitHub if you find this useful!
Add this to claude_desktop_config.json and restart Claude Desktop.
{
  "mcpServers": {
    "mcp-api-gateway": {
      "command": "npx",
      "args": []
    }
  }
}
}