Anti-bot Google search MCP that replaces the usual search MCP + URL fetcher combo with one server. No API key, no proxies, includes graceful CAPTCHA recovery.
✨Anti-Bot Search MCP: No API Key✨
English | 한국어

Demo only. Actual searches run headless by default (no visible browser). Set
SURF_HEADLESS=false to make Chrome visible like in the clip above.
Google search MCP. No API key. Just works.
Four tools: search / search_parallel / extract / search_extract. extract blocks localhost, private IPs, and AWS metadata by default. Plug it into any MCP client and you get Google search as a tool.
No CAPTCHA solver. When CAPTCHA fires on any tool, a Chrome window opens for a human to solve. Each solve preserves the profile's reputation with Google. Built for sustainable, ethical use.
One-time install needs a ~1s profile warm-up (see Install).
Designed for local use. Not suitable for stateless / serverless deployment.
| mode | result |
|---|---|
| sequential | ~1.5s/query (first call ~4s, includes setup) |
| parallel x4 | ~1.5s wall (first call ~9s, includes pool warm) |
| parallel x10 | ~4.5s wall |
| search_extract x5 | ~5s wall (search + 5 parallel extracts) |
Measured on a workstation with a 1Gb/s connection.
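The parallel numbers fall out of the pool size: with a pool of 4, ten queries run in ceil(10/4) = 3 waves of ~1.5s each, so ~4.5s of wall time. A rough model (illustrative only; it assumes every query takes about the same ~1.5s, which real pages won't):

```typescript
// Illustrative wall-time model for a fixed-size browser-context pool.
// perQuerySeconds (~1.5s) is taken from the table above; real timings vary.
function estimatedWallSeconds(
  queries: number,
  poolSize: number,
  perQuerySeconds = 1.5
): number {
  const waves = Math.ceil(queries / poolSize); // queries run in waves of poolSize
  return waves * perQuerySeconds;
}

console.log(estimatedWallSeconds(4, 4));  // 1.5 — matches "parallel x4"
console.log(estimatedWallSeconds(10, 4)); // 4.5 — matches "parallel x10"
```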
Built on playwright-extra with the stealth plugin. Requires Node 18+ and Google Chrome (or Chromium) on the system.
npx google-surf-mcp # actual MCP - register in client config
Or local clone:
git clone https://github.com/HarimxChoi/google-surf-mcp
cd google-surf-mcp
npm install
npm run bootstrap
bootstrap opens a Chrome window. Run one Google search in it. Close. Profile is now warm.
Override paths if needed:
CHROME_PATH=/path/to/chrome SURF_TZ=America/New_York npm run bootstrap
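CHROME_PATH only matters when auto-detection misses. A minimal sketch of how detection typically works — the candidate paths and fallback order here are common defaults, not necessarily the ones this server probes:

```typescript
import { existsSync } from "node:fs";

// Common Chrome install locations (illustrative; the server's probe list may differ).
const CANDIDATES = [
  "/usr/bin/google-chrome",
  "/usr/bin/chromium",
  "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
  "C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe",
];

// Honor an explicit CHROME_PATH first, then fall back to the first candidate on disk.
function detectChrome(
  env: Record<string, string | undefined> = process.env,
  exists: (p: string) => boolean = existsSync
): string | undefined {
  if (env.CHROME_PATH) return env.CHROME_PATH;
  return CANDIDATES.find((p) => exists(p));
}
```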
Paste this into your ~/.claude.json:
{
"mcpServers": {
"google-surf": {
"command": "npx",
"args": ["-y", "google-surf-mcp"]
}
}
}
Restart Claude Code. Done. search, search_parallel, extract, search_extract are now available.
For other MCP clients, use the same JSON shape in their config file.
Local clone variant:
{
"mcpServers": {
"google-surf": {
"command": "node",
"args": ["/abs/path/to/google-surf-mcp/build/index.js"]
}
}
}
- search(query, limit?) - single query, ~1.5s. Returns title / url / snippet. Sponsored ads filtered out.
- search_parallel(queries[], limit?) - pool of 4, max 10 queries per call.
- extract(url, max_chars?) - fetch a URL, return article markdown (Readability with text fallback). Failures return { error }, never throw.
- search_extract(query, limit?, max_chars?) - search + parallel extract in one call. Returns SERP results enriched with full article content. Per-page failures are isolated.

search_extract is the killer one: SERP + full article content in a single call. It replaces the usual "search MCP + URL fetcher MCP" combo most agents stitch together.
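Under the hood, any MCP client invokes these tools with a JSON-RPC 2.0 `tools/call` request (the method name is from the MCP spec; the argument values below are illustrative):

```typescript
// Sketch of the JSON-RPC 2.0 message an MCP client sends to call a tool.
// "tools/call" and the params shape come from the MCP specification;
// the query/limit values are just examples.
function buildToolCall(id: number, name: string, args: Record<string, unknown>) {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

console.log(JSON.stringify(buildToolCall(1, "search_extract", { query: "mcp servers", limit: 3 })));
```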
| var | default | notes |
|---|---|---|
| CHROME_PATH | auto-detected | absolute path to Chrome binary |
| SURF_PROFILE_ROOT | ~/.google-surf-mcp | where the warm profile lives |
| SURF_LOCALE | en-US | browser locale |
| SURF_TZ | system tz | e.g. America/New_York |
| SURF_HEADLESS | true | set false to run Chrome visibly (demos / debugging). CAPTCHA auto-recovery always runs visible regardless. |
| SURF_IDLE_CLOSE_MS | 30000 | idle ms before closing the sequential ctx and pool. 0 disables idle auto-close. Lower = faster cleanup, higher = warmer cache for spaced-out calls. |
| SURF_ALLOW_PRIVATE | false | set true to allow extract to fetch private/loopback addresses (localhost, 127.0.0.1, 10.x, 192.168.x, 169.254.x, etc). Default blocks them as an SSRF guard. |
If Chrome isn't auto-detected, set CHROME_PATH. See CHANGELOG.md.
MIT
Add this to claude_desktop_config.json and restart Claude Desktop.
{
"mcpServers": {
"google-surf-mcp": {
"command": "npx",
"args": []
}
}
}