An MCP assistant that provides asynchronous and parallel Q&A capabilities for AI clients using OpenAI-compatible APIs. It supports streaming, chain-of-thought detection, and local session persistence, with a dedicated management panel for configuration.
MCP Q&A Assistant / MCP 问答助手
Async Q&A capabilities for AI clients via MCP protocol, powered by any OpenAI-compatible API.
🤖 100% Vibe Coding — Built entirely through AI-assisted development.
⭐ If you like this project, please give it a star!
```shell
# First time: launch the management panel to configure your provider
npx @sweatent/askr -m

# Start the MCP stdio server
npx @sweatent/askr
```
Add the following to `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "askr": {
      "command": "npx",
      "args": ["@sweatent/askr"]
    }
  }
}
```
Add via CLI or global config:
```shell
claude mcp add askr -- npx @sweatent/askr
```
Or manually edit `.claude/settings.json`:
```json
{
  "mcpServers": {
    "askr": {
      "command": "npx",
      "args": ["@sweatent/askr"]
    }
  }
}
```
Configure a stdio-type MCP server with the command `npx @sweatent/askr`, per your client's documentation.
```shell
npx @sweatent/askr -m
# or
npx @sweatent/askr --manage
```
The management panel menu supports both Chinese and English interfaces.
The config file is stored in the system data directory:
| Platform | Path |
|---|---|
| Windows | `%AppData%\askr\config.json` |
| macOS | `~/Library/Application Support/askr/config.json` |
| Linux | `~/.config/askr/config.json` |
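The per-platform paths in the table above can be resolved with a small sketch like the following (the function name is illustrative, not askr's actual code; the env-var and directory names are standard):

```typescript
// Hypothetical sketch: resolve the askr config path per platform,
// mirroring the table above.
import * as os from "node:os";
import * as path from "node:path";

function configPath(): string {
  switch (process.platform) {
    case "win32":
      // %AppData%\askr\config.json
      return path.join(process.env.APPDATA ?? "", "askr", "config.json");
    case "darwin":
      // ~/Library/Application Support/askr/config.json
      return path.join(os.homedir(), "Library", "Application Support", "askr", "config.json");
    default:
      // Linux and other Unix-likes: ~/.config/askr/config.json
      return path.join(os.homedir(), ".config", "askr", "config.json");
  }
}
```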
```json
{
  "language": "en",
  "provider": {
    "baseUrl": "https://api.example.com",
    "apiKey": "sk-xxx",
    "model": "gpt-4",
    "stream": true
  },
  "settings": {
    "maxConcurrent": 5,
    "timeout": 150,
    "foldChars": 1000,
    "systemPrompt": "You are a search assistant..."
  }
}
```
`baseUrl` handling:

- `https://api.example.com` — automatically appends `/v1/chat/completions`
- `https://api.example.com/custom/path#` — a trailing `#` means use the URL as-is (the `#` is stripped)

Example request:

```json
{ "content": "What is REST and its core principles?" }
```
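The `baseUrl` resolution rule described above can be sketched as follows (the function name is illustrative, not askr's actual implementation):

```typescript
// Hypothetical sketch of the baseUrl rule: a trailing "#" means
// "use as-is" (minus the marker); otherwise append the standard
// OpenAI-compatible chat completions path.
function resolveEndpoint(baseUrl: string): string {
  if (baseUrl.endsWith("#")) {
    return baseUrl.slice(0, -1);
  }
  return baseUrl.replace(/\/$/, "") + "/v1/chat/completions";
}
```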
Asks a single question. Answers exceeding `foldChars` are truncated; use `check` for the full content. Returns a session ID on timeout.
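The truncation behavior can be sketched like this (the function name and the hint message are illustrative, not askr's actual output format):

```typescript
// Hypothetical sketch: answers longer than foldChars are cut short,
// with a hint pointing at the check tool for the full content.
function fold(answer: string, foldChars: number, sessionId: string): string {
  if (answer.length <= foldChars) return answer;
  return (
    answer.slice(0, foldChars) +
    `\n[truncated; call check with id "${sessionId}" and showFull: true for the full answer]`
  );
}
```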
{ "questions": ["What is Docker?", "What is Kubernetes?"] }
Asks questions in parallel, up to `maxConcurrent` at a time. Each question runs in an isolated context and returns its own result or session ID.
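One way a `maxConcurrent` bound like this can be enforced is a small worker pool, sketched below (names are illustrative; this is not askr's actual code):

```typescript
// Hypothetical sketch: run questions in parallel, but never more
// than maxConcurrent at once. Each worker pulls the next unanswered
// question until none remain.
async function askAll(
  questions: string[],
  ask: (q: string) => Promise<string>,
  maxConcurrent: number,
): Promise<string[]> {
  const results: string[] = new Array(questions.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < questions.length) {
      const i = next++; // safe: no await between read and increment
      results[i] = await ask(questions[i]);
    }
  }
  const workers = Math.min(maxConcurrent, questions.length);
  await Promise.all(Array.from({ length: workers }, worker));
  return results;
}
```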
{ "count": 5 }
Returns the most recent N session summaries (ID, timestamp, status, question preview).
{ "id": "a3f8k2", "showFull": true, "showThinking": true }
Retrieves a session result. Blocks if the session is still running. Parameters:
- `showFull` — return the full answer without truncation
- `showThinking` — include chain-of-thought content (`<think>` tags, `reasoning_content`, and `thinking` fields)

Add the following to your project's `CLAUDE.md` or other AI client system prompt for best results:
## askr Usage Guidelines
You can use askr MCP tools to search and ask an external AI to assist with your tasks.
### When to Use
- **Pre-implementation research**: Before using unfamiliar or uncertain public APIs / library functions, query their signatures, parameters, usage, and caveats to avoid writing incorrect code from memory
- **Unfamiliar concepts**: Look up terms, protocols, or design patterns you don't fully understand
- **Latest information**: Query recently released versions, changelogs, news, or announcements that may not be covered in your training data
- **Troubleshooting**: When facing hard-to-solve errors or unexpected behavior, search for solutions including GitHub Issues, Stack Overflow discussions, etc.
- **Solution comparison**: When deciding between multiple technical approaches, query their trade-offs and community recommendations
- **Configuration & compatibility**: Look up platform-specific configurations or known compatibility issues for particular environments or versions
### Principles
1. Prefer using askr to search first; only rely on your own knowledge when you are fully confident it is accurate and up-to-date
2. Be specific in your questions and include the necessary context (language, framework, version, etc.); avoid vague queries
3. Combine related sub-questions into one question; use agentq for unrelated parallel questions
4. Use check to retrieve timed-out results; re-ask if check times out twice
5. Use check(id, showFull: true) when you need the full untruncated answer
Thanks to the LinuxDo community for their support!
MIT