Web search using free multi-engine search (NO API KEYS REQUIRED) — Supports Bing, Baidu, DuckDuckGo, Brave, Exa, and CSDN.
open-websearch provides an MCP server, CLI, and local daemon, and can also be paired with skill-guided agent workflows for live web search and content retrieval without API keys.
- MCP: add open-websearch to Claude Desktop, Cherry Studio, Cursor, or another MCP client.
- CLI: run one-shot commands from the command line.
- Local daemon: a long-lived local HTTP service exposing status, GET /health, and POST /search / POST /fetch-*. Start it explicitly with open-websearch serve and check it with open-websearch status.
- Skill: install the open-websearch skill for your agent first:
npx skills add https://github.com/Aas-ee/open-webSearch --skill open-websearch
On first use, the skill typically follows this path: detect whether a usable open-websearch path already exists, guide setup/enablement if it does not, validate that the capability is active, and only then continue with search or fetch through the smallest working path.
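As a rough illustration, that first-use path might reduce to a shell sequence like this (the subcommands are the documented ones; the exit-code and timing details are assumptions):

```bash
# Detect: is a usable open-websearch daemon already reachable?
if ! open-websearch status >/dev/null 2>&1; then
  # Setup/enablement: start the local daemon explicitly.
  open-websearch serve &
  sleep 2
fi

# Validate: confirm the capability is active.
open-websearch status --json

# Continue through the smallest working path: a one-shot search.
open-websearch search "open web search" --json
```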
If the current environment cannot complete setup or activation automatically, you can explicitly have the agent start the local daemon first:
open-websearch serve
open-websearch status
Keep installation proxy settings separate from runtime proxy settings:
- Installation proxy: applies when npm downloads open-websearch, playwright, or other npm packages:
  npm --proxy http://127.0.0.1:7890 --https-proxy http://127.0.0.1:7890 install -g open-websearch
- Runtime proxy: controls whether live search / fetch work. It routes open-websearch network traffic after serve starts, for example:
  USE_PROXY=true PROXY_URL=http://127.0.0.1:7890 open-websearch serve
If the agent can only get through the package-install step with npm proxy settings, but live search/fetch also needs a proxy after startup, those are two separate configuration steps and should be handled separately.
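Put concretely, a restricted-network setup often needs both steps, one after the other (the proxy address is only an example):

```bash
# Step 1: proxy the package installation itself.
npm --proxy http://127.0.0.1:7890 --https-proxy http://127.0.0.1:7890 install -g open-websearch

# Step 2: separately proxy the daemon's own search/fetch traffic at runtime.
USE_PROXY=true PROXY_URL=http://127.0.0.1:7890 open-websearch serve
```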
CLI is for one-shot execution. The local daemon is a long-lived local HTTP service for repeated calls with lower startup friction. Use open-websearch serve as the explicit daemon start command and open-websearch status as the explicit daemon status command.
Action commands such as search and fetch-web try the default local daemon first when it is available. If you pass --daemon-url, that daemon path becomes explicit and silent fallback to direct execution is disabled.
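For example (a sketch: the search subcommand and --daemon-url flag are documented above, while the exact argument order and default daemon URL are assumptions based on the default port):

```bash
# Tries the default local daemon first and quietly falls back to direct execution.
open-websearch search "open web search" --json

# Explicit daemon path: errors instead of silently falling back if the daemon is down.
open-websearch search "open web search" --json --daemon-url http://127.0.0.1:3000
```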
Build first:
npm run build
Start the local daemon:
npm run serve
# globally installed: open-websearch serve
Check status:
npm run status -- --json
# globally installed: open-websearch status --json
Run a one-shot local CLI search:
npm run search:cli -- "open web search" --json
Notes:
- Running open-websearch with no subcommand is the MCP server compatibility entrypoint, not the recommended daemon start command for agent automation.
- One-shot web fetching is available through fetch-web.
- For the local daemon HTTP API (serve, status, GET /health, POST /search, POST /fetch-*), see docs/http-api.md.
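For a quick smoke test of that HTTP API, a curl sketch (assuming the default port 3000 and that POST /search accepts the same fields as the MCP search tool; docs/http-api.md is authoritative):

```bash
# Health check against the running daemon.
curl -s http://127.0.0.1:3000/health

# One search request; body fields mirror the MCP search tool arguments.
curl -s -X POST http://127.0.0.1:3000/search \
  -H "Content-Type: application/json" \
  -d '{"query": "open web search", "limit": 3, "engines": ["duckduckgo"]}'
```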
If you are using open-websearch as an MCP server, continue with the MCP-oriented setup below.
The fastest way to get started:
# Basic usage
npx open-websearch@latest
# With environment variables (Linux/macOS)
DEFAULT_SEARCH_ENGINE=duckduckgo ENABLE_CORS=true npx open-websearch@latest
# Windows PowerShell
$env:DEFAULT_SEARCH_ENGINE="duckduckgo"; $env:ENABLE_CORS="true"; npx open-websearch@latest
# Windows CMD
set MODE=stdio && set DEFAULT_SEARCH_ENGINE=duckduckgo && npx open-websearch@latest
# Cross-platform (requires cross-env; used for local development)
npm install -g open-websearch
npx cross-env DEFAULT_SEARCH_ENGINE=duckduckgo ENABLE_CORS=true open-websearch
Environment Variables:
| Variable | Default | Options | Description |
|---|---|---|---|
| `ENABLE_CORS` | `false` | `true`, `false` | Enable CORS |
| `CORS_ORIGIN` | `*` | Any valid origin | CORS origin configuration |
| `DEFAULT_SEARCH_ENGINE` | `bing` | `bing`, `duckduckgo`, `exa`, `brave`, `baidu`, `csdn`, `juejin`, `startpage` | Default search engine |
| `USE_PROXY` | `false` | `true`, `false` | Enable HTTP proxy |
| `PROXY_URL` | `http://127.0.0.1:7890` | Any valid URL | Proxy server URL |
| `FETCH_WEB_INSECURE_TLS` | `false` | `true`, `false` | Disable TLS certificate verification for fetchWebContent only; use only when a target site has a broken certificate chain |
| `MODE` | `both` | `both`, `http`, `stdio` | Server mode: both HTTP+STDIO, HTTP only, or STDIO only |
| `PORT` | `3000` | 1-65535 | Server port |
| `ALLOWED_SEARCH_ENGINES` | empty (all available) | Comma-separated engine names | Limit which search engines can be used; if the default engine is not in this list, the first allowed engine becomes the default |
| `SEARCH_MODE` | `auto` | `request`, `auto`, `playwright` | Search strategy (currently only affects Bing): request only, request then Playwright fallback, or force Playwright |
| `PLAYWRIGHT_PACKAGE` | `auto` | `auto`, `playwright`, `playwright-core` | Which Playwright client package to resolve when browser mode is enabled |
| `PLAYWRIGHT_MODULE_PATH` | empty | Absolute path or project-relative path | Reuse an existing Playwright client package outside this project |
| `PLAYWRIGHT_EXECUTABLE_PATH` | empty | Any valid browser binary path | Launch an existing Chromium/Chrome executable without installing bundled browsers |
| `PLAYWRIGHT_WS_ENDPOINT` | empty | Valid Playwright `ws://` / `wss://` endpoint | Connect to an existing remote Playwright browser server |
| `PLAYWRIGHT_CDP_ENDPOINT` | empty | Valid Chromium CDP endpoint | Connect to an existing Chromium instance over CDP |
| `PLAYWRIGHT_HEADLESS` | `true` | `true`, `false` | Whether Playwright Chromium runs in headless mode |
| `PLAYWRIGHT_NAVIGATION_TIMEOUT_MS` | `20000` | Positive integer | Timeout for Playwright navigation and Bing result waits |
| `MCP_TOOL_SEARCH_NAME` | `search` | Valid MCP tool name | Custom name for the search tool |
| `MCP_TOOL_FETCH_LINUXDO_NAME` | `fetchLinuxDoArticle` | Valid MCP tool name | Custom name for the Linux.do article fetch tool |
| `MCP_TOOL_FETCH_CSDN_NAME` | `fetchCsdnArticle` | Valid MCP tool name | Custom name for the CSDN article fetch tool |
| `MCP_TOOL_FETCH_GITHUB_NAME` | `fetchGithubReadme` | Valid MCP tool name | Custom name for the GitHub README fetch tool |
| `MCP_TOOL_FETCH_JUEJIN_NAME` | `fetchJuejinArticle` | Valid MCP tool name | Custom name for the Juejin article fetch tool |
| `MCP_TOOL_FETCH_WEB_NAME` | `fetchWebContent` | Valid MCP tool name | Custom name for the generic web/Markdown fetch tool |
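One behavior from the table worth calling out: if `ALLOWED_SEARCH_ENGINES` does not include the default engine, the first allowed engine silently becomes the default. For example:

```bash
# bing is the default engine but is not allowed here,
# so exa (the first allowed engine) becomes the effective default.
ALLOWED_SEARCH_ENGINES=exa,brave npx open-websearch@latest
```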
Common configurations:
# Enable proxy for restricted regions
USE_PROXY=true PROXY_URL=http://127.0.0.1:7890 npx open-websearch@latest
# Only if a target website has a broken certificate chain
FETCH_WEB_INSECURE_TLS=true npx open-websearch@latest
# Request first, then fallback to Playwright if available
SEARCH_MODE=auto npx open-websearch@latest
# Force request-only Bing search
SEARCH_MODE=request npx open-websearch@latest
# Full configuration
DEFAULT_SEARCH_ENGINE=duckduckgo ENABLE_CORS=true USE_PROXY=true PROXY_URL=http://127.0.0.1:7890 PORT=8080 npx open-websearch@latest
Browser-enhanced Bing fallback is opt-in: the published package no longer bundles Playwright. Enable it manually with one of these setups:
# Option 1: install the full Playwright package with bundled Chromium
npm install playwright
npx playwright install chromium
SEARCH_MODE=auto npx open-websearch@latest

# Option 2: playwright-core plus an existing browser executable
npm install playwright-core
PLAYWRIGHT_PACKAGE=playwright-core PLAYWRIGHT_EXECUTABLE_PATH=/path/to/chromium SEARCH_MODE=auto npx open-websearch@latest

# Option 3: reuse a Playwright client installed outside this project
PLAYWRIGHT_MODULE_PATH=/absolute/path/to/node_modules/playwright SEARCH_MODE=playwright npx open-websearch@latest

# Option 4: connect to a remote Playwright browser server
npm install playwright-core
PLAYWRIGHT_PACKAGE=playwright-core PLAYWRIGHT_WS_ENDPOINT=ws://127.0.0.1:3000/ SEARCH_MODE=auto npx open-websearch@latest

# Option 5: connect to an existing Chromium instance over CDP
npm install playwright-core
# Start Chrome/Chromium with a debugging port first
chrome --remote-debugging-port=9222 --user-data-dir=/tmp/open-websearch-chrome
# Then connect through CDP
PLAYWRIGHT_PACKAGE=playwright-core PLAYWRIGHT_CDP_ENDPOINT=http://127.0.0.1:9222 SEARCH_MODE=auto npx open-websearch@latest
This is the most practical setup when you want to reuse your own logged-in or previously verified browser session.
Windows PowerShell example:
npm install playwright-core
& "$env:LOCALAPPDATA\Google\Chrome\Application\chrome.exe" `
--remote-debugging-port=9222 `
--user-data-dir="$env:TEMP\open-websearch-chrome"
$env:PLAYWRIGHT_PACKAGE="playwright-core"
$env:PLAYWRIGHT_CDP_ENDPOINT="http://127.0.0.1:9222"
$env:SEARCH_MODE="auto"
npx open-websearch@latest
Mode behavior:
- request: only uses request-based Bing scraping.
- auto: tries request first, and falls back to Playwright only when the request fails and a manually installed Playwright client and browser are available.
- playwright: forces Playwright and errors if the configured Playwright client or browser target is unavailable.

Notes:
- PLAYWRIGHT_MODULE_PATH takes precedence over PLAYWRIGHT_PACKAGE.
- PLAYWRIGHT_WS_ENDPOINT takes precedence over PLAYWRIGHT_CDP_ENDPOINT, PLAYWRIGHT_EXECUTABLE_PATH, and local proxy launch flags.
- fetchWebContent stays on the request-only path. Public pages can still work, but pages that require browser cookies or browser-rendered HTML may fail.

Install dependencies:

npm install

This installs the core MCP server only. Browser fallback remains optional until you install or connect a Playwright client yourself.

Build the server:
npm run build
Cherry Studio:
{
"mcpServers": {
"web-search": {
"name": "Web Search MCP",
"type": "streamableHttp",
"description": "Multi-engine web search with article fetching",
"isActive": true,
"baseUrl": "http://localhost:3000/mcp"
}
}
}
VSCode (Claude Dev Extension):
{
"mcpServers": {
"web-search": {
"transport": {
"type": "streamableHttp",
"url": "http://localhost:3000/mcp"
}
},
"web-search-sse": {
"transport": {
"type": "sse",
"url": "http://localhost:3000/sse"
}
}
}
}
Claude Desktop:
{
"mcpServers": {
"web-search": {
"type": "http",
"url": "http://localhost:3000/mcp"
},
"web-search-sse": {
"type": "sse",
"url": "http://localhost:3000/sse"
}
}
}
NPX Command Line Configuration:
{
"mcpServers": {
"web-search": {
"args": [
"open-websearch@latest"
],
"command": "npx",
"env": {
"MODE": "stdio",
"DEFAULT_SEARCH_ENGINE": "duckduckgo",
"ALLOWED_SEARCH_ENGINES": "duckduckgo,bing,exa"
}
}
}
}
Windows NPX configuration:
{
"mcpServers": {
"web-search": {
"command": "cmd",
"args": [
"/c",
"npx",
"-y",
"open-websearch@latest"
],
"env": {
"MODE": "stdio",
"DEFAULT_SEARCH_ENGINE": "duckduckgo",
"SYSTEMROOT": "C:/Windows"
}
}
}
}
Proxy and TLS notes:
- All proxy behavior is centralized in the USE_PROXY + PROXY_URL path.
- With USE_PROXY=true, all Axios-based network requests follow the configured PROXY_URL path instead of mixing direct requests with environment-proxy behavior.
- If PROXY_URL points to a local rule-based proxy client, that client can still decide which destinations go DIRECT and which ones are proxied.
- If PROXY_URL points to a fixed upstream proxy or overseas egress, region-sensitive sites such as Baidu, CSDN, Juejin, Linux.do, or GitHub may behave differently than before.
- If you previously relied on HTTP_PROXY or HTTPS_PROXY, they will no longer override the server's internal request behavior.
- Prefer NODE_EXTRA_CA_CERTS on Windows when a site has a missing intermediate CA.
- Use FETCH_WEB_INSECURE_TLS=true only as a last resort for fetchWebContent, since it weakens TLS verification.
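For the missing-intermediate-CA case, prefer trusting the extra CA over disabling verification. A minimal sketch (the certificate path is a placeholder):

```bash
# Trust an additional CA bundle (standard Node.js mechanism) instead of
# weakening TLS verification.
NODE_EXTRA_CA_CERTS=C:/certs/corp-root-ca.pem npx open-websearch@latest

# Last resort only: disables TLS verification for fetchWebContent.
FETCH_WEB_INSECURE_TLS=true npx open-websearch@latest
```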
Local STDIO Configuration for Cherry Studio (Windows):
{
"mcpServers": {
"open-websearch-local": {
"command": "node",
"args": ["C:/path/to/your/project/build/index.js"],
"env": {
"MODE": "stdio",
"DEFAULT_SEARCH_ENGINE": "duckduckgo",
"ALLOWED_SEARCH_ENGINES": "duckduckgo,bing,exa"
}
}
}
}
Quick deployment using Docker Compose:
docker-compose up -d
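The repository's compose file handles this; if you need to write your own, a minimal sketch (assuming the published image and the defaults documented below) could look like:

```yaml
# Hypothetical docker-compose.yml for open-websearch
services:
  web-search:
    image: ghcr.io/aas-ee/open-web-search:latest
    container_name: web-search
    ports:
      - "3000:3000"
    environment:
      - ENABLE_CORS=true
      - CORS_ORIGIN=*
    restart: unless-stopped
```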
Or use Docker directly:
docker run -d --name web-search -p 3000:3000 -e ENABLE_CORS=true -e CORS_ORIGIN=* ghcr.io/aas-ee/open-web-search:latest
Environment variable configuration:
| Variable | Default | Options | Description |
|---|---|---|---|
| `ENABLE_CORS` | `false` | `true`, `false` | Enable CORS |
| `CORS_ORIGIN` | `*` | Any valid origin | CORS origin configuration |
| `DEFAULT_SEARCH_ENGINE` | `bing` | `bing`, `duckduckgo`, `exa`, `brave` | Default search engine |
| `USE_PROXY` | `false` | `true`, `false` | Enable HTTP proxy |
| `PROXY_URL` | `http://127.0.0.1:7890` | Any valid URL | Proxy server URL |
| `PORT` | `3000` | 1-65535 | Server port |
Then configure in your MCP client:
{
"mcpServers": {
"web-search": {
"name": "Web Search MCP",
"type": "streamableHttp",
"description": "Multi-engine web search with article fetching",
"isActive": true,
"baseUrl": "http://localhost:3000/mcp"
},
"web-search-sse": {
"transport": {
"name": "Web Search MCP",
"type": "sse",
"description": "Multi-engine web search with article fetching",
"isActive": true,
"url": "http://localhost:3000/sse"
}
}
}
}
The server provides six tools: search, fetchLinuxDoArticle, fetchCsdnArticle, fetchGithubReadme, fetchJuejinArticle, and fetchWebContent.
For the local daemon HTTP API (serve, status, GET /health, POST /search, POST /fetch-*), see docs/http-api.md.
The search tool accepts:
{
  "query": string,      // Search query
  "limit": number,      // Optional: number of results to return (default: 10)
  "engines": string[],  // Optional: engines to use (bing, baidu, linuxdo, csdn, duckduckgo, exa, brave, juejin, startpage); defaults to the runtime-configured engine
  "searchMode": string  // Optional: "request", "auto", or "playwright" (currently only affects Bing)
}
Usage example:
use_mcp_tool({
server_name: "web-search",
tool_name: "search",
arguments: {
query: "search content",
limit: 3, // Optional parameter
engines: ["bing", "csdn", "duckduckgo", "exa", "brave", "juejin"] // Optional parameter, supports multi-engine combined search
}
})
Response example:
[
{
"title": "Example Search Result",
"url": "https://example.com",
"description": "Description text of the search result...",
"source": "Source",
"engine": "Engine used"
}
]
Used to fetch complete content of CSDN blog articles.
{
"url": string // URL from CSDN search results using the search tool
}
Usage example:
use_mcp_tool({
server_name: "web-search",
tool_name: "fetchCsdnArticle",
arguments: {
url: "https://blog.csdn.net/xxx/article/details/xxx"
}
})
Response example:
[
{
"content": "Example search result"
}
]
Used to fetch complete content of Linux.do forum articles.
{
"url": string // URL from linuxdo search results using the search tool
}
Usage example:
use_mcp_tool({
server_name: "web-search",
tool_name: "fetchLinuxDoArticle",
arguments: {
url: "https://xxxx.json"
}
})
Response example:
[
{
"content": "Example search result"
}
]
Used to fetch README content from GitHub repositories.
{
"url": string // GitHub repository URL (supports HTTPS, SSH formats)
}
Usage example:
use_mcp_tool({
server_name: "web-search",
tool_name: "fetchGithubReadme",
arguments: {
url: "https://github.com/Aas-ee/open-webSearch"
}
})
Supported URL formats:
- https://github.com/owner/repo
- https://github.com/owner/repo.git
- git@github.com:owner/repo.git
- https://github.com/owner/repo?tab=readme

Response example:
[
{
"content": "<div align=\"center\">\n\n# Open-WebSearch MCP Server..."
}
]
Fetch content directly from public HTTP(S) links, including Markdown files (.md) and ordinary web pages.
{
"url": string, // Public HTTP(S) URL
"maxChars": number // Optional: max returned content length (1000-200000, default 30000)
}
Usage example:
use_mcp_tool({
server_name: "web-search",
tool_name: "fetchWebContent",
arguments: {
url: "https://raw.githubusercontent.com/Aas-ee/open-webSearch/main/README.md",
maxChars: 12000
}
})
Response example:
{
"url": "https://raw.githubusercontent.com/Aas-ee/open-webSearch/main/README.md",
"finalUrl": "https://raw.githubusercontent.com/Aas-ee/open-webSearch/main/README.md",
"contentType": "text/plain; charset=utf-8",
"title": "",
"truncated": false,
"content": "# Open-WebSearch MCP Server ..."
}
Used to fetch complete content of Juejin articles.
{
"url": string // Juejin article URL from search results
}
Usage example:
use_mcp_tool({
server_name: "web-search",
tool_name: "fetchJuejinArticle",
arguments: {
url: "https://juejin.cn/post/7520959840199360563"
}
})
Supported URL format:
- https://juejin.cn/post/{article_id}

Response example:
[
{
"content": "🚀 开源 AI 联网搜索工具:Open-WebSearch MCP 全新升级,支持多引擎 + 流式响应..."
}
]
Since this tool works by scraping multi-engine search results, please note the following important limitations:
Rate Limiting:
- Frequent searches in a short period may be rate-limited or blocked by the underlying search engines.

Result Accuracy:
- Results come from scraping engine HTML, so they depend on each engine's page structure and may break when that structure changes.

Legal Terms:
- Review and comply with the terms of service of the search engines you enable.
Search Engine Configuration:
- Set the default engine with the DEFAULT_SEARCH_ENGINE environment variable.

Proxy Configuration:
- Enable the proxy with USE_PROXY=true and point PROXY_URL at your proxy server.

Welcome to submit issue reports and feature improvement suggestions!
If you want to fork this repository and publish your own Docker image, you need to make the following configurations:
To enable automatic Docker image building and publishing, please add the following secrets in your GitHub repository settings (Settings → Secrets and variables → Actions):
Required Secrets:
- GITHUB_TOKEN: automatically provided by GitHub (no setup needed)

Optional Secrets (for Alibaba Cloud ACR):
- ACR_REGISTRY: your Alibaba Cloud Container Registry URL (e.g., registry.cn-hangzhou.aliyuncs.com)
- ACR_USERNAME: your Alibaba Cloud ACR username
- ACR_PASSWORD: your Alibaba Cloud ACR password
- ACR_IMAGE_NAME: your image name in ACR (e.g., your-namespace/open-web-search)

The repository includes a GitHub Actions workflow (.github/workflows/docker.yml) that automatically builds and publishes Docker images.
Trigger Conditions:
- Push to the main branch
- Push version tags (v*)

Build and Push to:
- GitHub Container Registry (ghcr.io)
- Alibaba Cloud ACR (if configured)
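The trigger portion of such a workflow typically looks like the following sketch (check .github/workflows/docker.yml in your fork for the actual definition):

```yaml
# Sketch of the trigger conditions described above.
on:
  push:
    branches: [main]
    tags: ['v*']
```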
Image Tags:
- ghcr.io/your-username/open-web-search:latest
- your-acr-address/your-image-name:latest (if ACR is configured)

After you push to the main branch or create version tags, users can pull and run your image:

docker run -d --name web-search -p 3000:3000 -e ENABLE_CORS=true -e CORS_ORIGIN=* ghcr.io/your-username/open-web-search:latest
If you find this project helpful, please consider giving it a ⭐ Star!
Add this to claude_desktop_config.json and restart Claude Desktop.
{
"mcpServers": {
"aas-ee-open-websearch": {
"command": "npx",
"args": []
}
}
}