Shared research cache across AI agents. Hit → instant answer from verified sources. Miss → your research saves the next dev's tokens. npx wellread, free.
wellread MCP server · License: AGPL-3.0
Your agent's next research task was probably already solved. Wellread finds it before your agent burns tokens rediscovering it - and when it can't, it makes sure the next dev doesn't pay that cost either.
Semantic caching studies show 60–68% of agent research queries overlap with prior ones (source). And AI-driven live web searches grew 15x in 2025 (Cloudflare). Wellread is the cache that layer has been missing.
| | Without wellread | With wellread |
|---|---|---|
| Turn 1 (fresh session) | 200K tokens · 10 turns · 67s | 647 tokens · 1 turn · 28s |
| Turn 30 (~40K context) | 1.2M tokens | 647 tokens |
| Turn 100 (~150K context) | 3.5M tokens | 647 tokens |
| Turn 250 (~480K context) | 11M tokens | 647 tokens |
The deeper your session, the more expensive research gets - and the more wellread saves.
Before your agent hits the web, wellread checks what other devs already found.
Your agent doesn't just spend fewer tokens. It's more accurate - every answer is a real source, verified, not a guess from stale training data.
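The cache-first flow above can be sketched in a few lines. This is a hypothetical illustration, not wellread's actual API: the names (`Entry`, `lookupOrResearch`) are made up, and simple string normalization stands in for the semantic matching a real cache would do.

```typescript
// Illustrative sketch only - not wellread's real interface.
type Entry = { query: string; answer: string; sources: string[] };

const cache = new Map<string, Entry>();

// Cheap normalization stands in for real semantic matching.
function normalize(q: string): string {
  return q.trim().toLowerCase();
}

function lookupOrResearch(query: string, research: (q: string) => Entry): Entry {
  const key = normalize(query);
  const hit = cache.get(key);
  if (hit) return hit;            // hit: instant, verified answer
  const entry = research(query);  // miss: pay the research cost once...
  cache.set(key, entry);          // ...and save it for the next dev
  return entry;
}
```

The point of the sketch: the second caller never pays the research cost, regardless of how deep their session context already is.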
npx wellread
Restart your editor. That's it.
Update: npx wellread@latest - Uninstall: npx wellread uninstall
You don't need a crowd for wellread to pay off.
Singleplayer - your own research comes back to you. No repeat searches across sessions, no hallucinations from stale training data.
Multiplayer - when another dev has already cracked that Auth.js migration, or that weird Bun + Drizzle interaction, you skip straight to the answer. One person researches, everyone benefits.
Early users build the network. Their contributions are credited - permanently.
Each entry knows how fast its topic changes:
| Type | Fresh | Re-check | Re-research |
|---|---|---|---|
| Timeless (TCP, SQL basics) | 1 year | - | after |
| Stable (React, PostgreSQL) | 6 months | 1 year | after |
| Evolving (Next.js, Bun) | 30 days | 90 days | after |
| Volatile (betas, pre-release) | 7 days | 30 days | after |
When an agent re-verifies, the clock resets for everyone.
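The tiers in the table reduce to a simple age check. A minimal sketch, assuming the day counts from the table; the tier names and the `freshness` function are illustrative, not wellread's internals:

```typescript
// Illustrative sketch of the freshness tiers - not wellread's internals.
type Tier = "timeless" | "stable" | "evolving" | "volatile";
type Status = "fresh" | "re-check" | "re-research";

// Day counts taken from the table above.
const FRESH_DAYS: Record<Tier, number> = {
  timeless: 365, stable: 180, evolving: 30, volatile: 7,
};
const RECHECK_DAYS: Record<Tier, number | null> = {
  timeless: null, // timeless entries skip re-check
  stable: 365, evolving: 90, volatile: 30,
};

function freshness(tier: Tier, ageDays: number): Status {
  if (ageDays <= FRESH_DAYS[tier]) return "fresh";
  const recheck = RECHECK_DAYS[tier];
  if (recheck !== null && ageDays <= recheck) return "re-check";
  return "re-research";
}

// "The clock resets for everyone": re-verifying stamps a new timestamp,
// so ageDays drops to zero for every user of the entry.
function reverify(entry: { verifiedAt: number }): void {
  entry.verifiedAt = Date.now();
}
```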
Six layers between your private context and the shared network:
Sources must start with https:// or http://. File paths, library identifiers, and internal URLs are rejected - the contribution is not saved. Contributions are also scanned for absolute paths (/Users/..., /home/..., file://, C:\...); if one is found, the contribution is rejected. For something private to actually reach another user, the agent would have to sneak it past its own instructions, past the URL gate, past the path regex, into a generic summary - and then someone would need to search something similar enough to surface it.
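The URL gate and path scan can be sketched as two checks over a contribution. The patterns below are assumptions based on the examples in this section, not wellread's actual filters:

```typescript
// Assumed patterns, modeled on the examples above - not wellread's real filters.
const ABSOLUTE_PATH = /(^|[\s"'(])(\/Users\/|\/home\/|file:\/\/|[A-Za-z]:\\)/;

function isPublicSource(url: string): boolean {
  // URL gate: only web sources pass; file paths and internal schemes fail.
  return /^https?:\/\//.test(url);
}

function passesPathScan(text: string): boolean {
  // Path regex: reject any summary that leaks an absolute local path.
  return !ABSOLUTE_PATH.test(text);
}

function acceptContribution(sources: string[], summary: string): boolean {
  return sources.every(isPublicSource) && passesPathScan(summary);
}
```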
Ask your agent:
"show me my wellread stats"
See your token savings, your top contributions, and how many devs used research you saved.
Works with any MCP client. Best experience with Claude Code. Also supports Cursor, Windsurf, Gemini CLI, VS Code, OpenCode.
Add this to claude_desktop_config.json and restart Claude Desktop.
{
"mcpServers": {
"mnlt-wellread": {
"command": "npx",
      "args": ["wellread"]
}
}
}