A LangGraph-powered research agent that performs iterative web searches using Google Search and Gemini models to generate structured reports with citations. It integrates with MCP-compatible clients like Claude and Cursor to enable sophisticated, multi-step AI research workflows.
🔍 Intelligent AI Research Agent - A sophisticated LangGraph-powered research agent wrapped as a Model Context Protocol (MCP) server for seamless integration with AI assistants like Claude, Cursor, and other MCP-compatible clients.
```mermaid
graph TD
    A[Research Topic Input] --> B[Query Generation]
    B --> C[Web Research]
    C --> D[Content Analysis]
    D --> E[Reflection & Gap Analysis]
    E --> F{Research Complete?}
    F -->|No| G[Generate Follow-up Queries]
    G --> C
    F -->|Yes| H[Final Report Generation]
    H --> I[Structured Output with Citations]

    subgraph "AI Models Used"
        J[Gemini 2.0 Flash<br/>Query Generation]
        K[Gemini 2.0 Flash<br/>Web Research]
        L[Gemini 2.5 Flash<br/>Reflection]
        M[Gemini 2.5 Pro<br/>Final Report]
    end

    B -.-> J
    C -.-> K
    E -.-> L
    H -.-> M
```
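The loop in the diagram can be sketched in a few lines of Python. The function names below (`generate_queries`, `web_search`, `reflect`, `write_report`) are illustrative stand-ins, not the project's actual API; in the real agent each step is a LangGraph node backed by a Gemini model.

```python
# Minimal sketch of the iterative research loop shown above.
# All functions are placeholders standing in for LangGraph nodes.

def generate_queries(topic, n):
    # Real agent: Gemini 2.0 Flash proposes initial search queries.
    return [f"{topic} query {i}" for i in range(n)]

def web_search(queries):
    # Real agent: Google Search via Gemini summarizes web results.
    return [f"summary of {q}" for q in queries]

def reflect(findings):
    # Real agent: Gemini 2.5 Flash finds knowledge gaps and returns
    # follow-up queries; an empty list means research is complete.
    return [] if len(findings) >= 3 else ["follow-up query"]

def research(topic, max_loops=2, initial_queries=3):
    findings = []
    queries = generate_queries(topic, initial_queries)
    for _ in range(max_loops):
        findings.extend(web_search(queries))
        queries = reflect(findings)
        if not queries:  # research complete -> write the final report
            break
    return "\n".join(findings)  # stand-in for the cited final report

print(research("quantum computing"))
```

The real workflow adds state management, structured output schemas, and citation tracking on top of this skeleton.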
```mermaid
graph TB
    subgraph "Client Applications"
        A1[Claude Desktop]
        A2[Cursor IDE]
        A3[Custom MCP Client]
    end

    subgraph "MCP Server (FastMCP)"
        B1[HTTP Transport Layer]
        B2[Research Tool Handler]
        B3[Progress Streaming]
        B4[Health & Stats Endpoints]
    end

    subgraph "LangGraph Research Agent"
        C1[Query Generation Node]
        C2[Web Research Node]
        C3[Reflection Node]
        C4[Final Answer Node]
    end

    subgraph "External Services"
        D1[Google Search API]
        D2[Gemini AI Models]
    end

    A1 --> B1
    A2 --> B1
    A3 --> B1
    B1 --> B2
    B2 --> B3
    B2 --> C1
    C1 --> C2
    C2 --> C3
    C3 --> C4
    C2 --> D1
    C1 --> D2
    C3 --> D2
    C4 --> D2
```
```mermaid
graph TB
    subgraph "Development"
        A1[Local Development]
        A2[Docker Compose]
    end

    subgraph "Production Deployment"
        B1[Render Platform]
        B2[Docker Container]
        B3[Custom Cloud Deploy]
    end

    subgraph "MCP Server Container"
        C1[FastMCP HTTP Server]
        C2[LangGraph Agent]
        C3[Health Monitoring]
        C4[Environment Config]
    end

    A1 --> C1
    A2 --> C1
    B1 --> C1
    B2 --> C1
    B3 --> C1
```
Deploy to Render in 5 minutes:

1. Fork this repository to your GitHub account
2. Create a Render account at render.com
3. Deploy the service:
   - Name: `deep-research-mcp-server`
   - Runtime: Python 3
   - Build Command: `pip install -r requirements.txt`
   - Start Command: `python -m src.mcp_server.server`
4. Add environment variables:
   - `GEMINI_API_KEY` = your_gemini_api_key_here
   - `PORT` = `8000`
5. Deploy and get your server URL: `https://your-service-name.onrender.com`
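The repository also ships a `render.yaml` blueprint. A minimal sketch consistent with the settings above (not necessarily the file's exact contents) might look like:

```yaml
# render.yaml — illustrative sketch; see the repository's actual file
services:
  - type: web
    name: deep-research-mcp-server
    runtime: python
    buildCommand: pip install -r requirements.txt
    startCommand: python -m src.mcp_server.server
    envVars:
      - key: GEMINI_API_KEY
        sync: false   # set the value in the Render dashboard
      - key: PORT
        value: 8000
```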
```bash
# Clone repository
git clone https://github.com/your-username/deep-research-mcp.git
cd deep-research-mcp

# Install dependencies
pip install -r requirements.txt

# Set environment variables
export GEMINI_API_KEY=your_gemini_api_key_here

# Run MCP server
python -m src.mcp_server.server
```
```bash
# Build Docker image
docker build -t deep-research-mcp .

# Run container
docker run -p 8000:8000 \
  -e GEMINI_API_KEY=your_gemini_api_key \
  deep-research-mcp
```
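Docker Compose is also an option for local development. A minimal compose file consistent with the image above might look like this (an illustrative sketch, not necessarily the repository's file):

```yaml
# docker-compose.yml — illustrative sketch
services:
  deep-research-mcp:
    build: .
    ports:
      - "8000:8000"
    environment:
      - GEMINI_API_KEY=${GEMINI_API_KEY}
```

With this in place, `docker compose up` builds and starts the server on port 8000.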
| Variable | Description | Default | Required |
|---|---|---|---|
| `GEMINI_API_KEY` | Google Gemini API key | - | ✅ |
| `PORT` | Server port | `8000` | ❌ |
| `HOST` | Server host | `0.0.0.0` | ❌ |
| `LOG_LEVEL` | Logging level | `info` | ❌ |
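As a sketch of how a server might consume these variables, the snippet below reads each one with the documented default. This mirrors the table, but it is not the project's actual configuration code (which lives in `src/mcp_server/config.py`):

```python
import os

# Illustrative config loader; names and defaults follow the table above.
def load_config():
    api_key = os.environ.get("GEMINI_API_KEY")
    if not api_key:
        raise RuntimeError("GEMINI_API_KEY is required")
    return {
        "api_key": api_key,
        "port": int(os.environ.get("PORT", "8000")),
        "host": os.environ.get("HOST", "0.0.0.0"),
        "log_level": os.environ.get("LOG_LEVEL", "info"),
    }

os.environ.setdefault("GEMINI_API_KEY", "demo-key")  # demo value for this sketch
print(load_config())
```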
Configure research behavior through the MCP tool parameters:
```json
{
  "topic": "Your research question",
  "max_research_loops": 2,
  "initial_search_query_count": 3,
  "reasoning_model": "gemini-2.5-pro"
}
```
Add to your Claude Desktop configuration:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "deep-research": {
      "url": "https://your-service-name.onrender.com/mcp/"
    }
  }
}
```
Add to Cursor settings → MCP Servers:
```json
{
  "mcpServers": {
    "deep-research": {
      "url": "https://your-service-name.onrender.com/mcp/"
    }
  }
}
```
```python
from fastmcp import Client
import asyncio

async def research_example():
    client = Client("http://localhost:8000/mcp/")
    async with client:
        result = await client.call_tool("research", {
            "topic": "Latest developments in quantum computing",
            "max_research_loops": 3,
            "initial_search_query_count": 4
        })
        print("Research Report:")
        print(result["report"])
        print(f"\nSources: {len(result['sources'])}")
        print(f"Execution time: {result['metadata']['execution_time']:.2f}s")

asyncio.run(research_example())
```
```
deep-research-mcp/
├── src/
│   ├── agent/                    # LangGraph research agent
│   │   ├── app.py                # FastAPI app
│   │   ├── graph.py              # LangGraph workflow definition
│   │   ├── state.py              # State management
│   │   ├── prompts.py            # AI prompts
│   │   ├── tools_and_schemas.py  # Tools and data schemas
│   │   ├── configuration.py      # Agent configuration
│   │   └── utils.py              # Utility functions
│   └── mcp_server/               # MCP server implementation
│       ├── server.py             # FastMCP server
│       ├── agent_adapter.py      # Agent wrapper
│       ├── config.py             # Configuration management
│       └── utils.py              # Server utilities
├── ClinicalTrials-MCP-Server/    # Additional MCP server example
├── examples/                     # Usage examples
├── requirements.txt              # Python dependencies
├── pyproject.toml                # Project configuration
├── render.yaml                   # Render deployment config
└── README.md                     # This file
```
```bash
# Install development dependencies
pip install -r requirements.txt

# Run tests
python -m pytest tests/

# Start server in development mode
python -m src.mcp_server.server

# Test health endpoint
curl http://localhost:8000/health

# Test MCP endpoint
curl -X POST http://localhost:8000/mcp/ \
  -H "Content-Type: application/json" \
  -d '{"method": "tools/list", "params": {}}'
```
```bash
curl https://your-service-name.onrender.com/health
```

Response:

```json
{
  "status": "healthy",
  "service": "Deep Research MCP Server",
  "version": "1.0.0",
  "agent_status": "healthy"
}
```

```bash
curl https://your-service-name.onrender.com/stats
```
The server provides structured logging.
This project is licensed under the MIT License - see the LICENSE file for details.
Add this to `claude_desktop_config.json` and restart Claude Desktop:
```json
{
  "mcpServers": {
    "deep-research-agent-mcp-server": {
      "command": "npx",
      "args": []
    }
  }
}
```