An MCP (Model Context Protocol) server that gives AI assistants access to the AsyncAPI specification. Search, explore, and retrieve any version of the spec directly from your coding tool.
Try it in your browser on Glama — no installation required. To use the hosted server from your own editor, add the following to your MCP client configuration:
```json
{
  "mcpServers": {
    "asyncapi": {
      "url": "https://glama.ai/mcp/servers/Souvikns/asyncapi-mcp"
    }
  }
}
```
See the Configuration section below for client-specific instructions.
To run the server yourself, install dependencies and build:

```bash
npm install
npm run build
```
Streamable HTTP (for local development):

```bash
npm run dev
```

The server starts on `http://localhost:3000/mcp` by default. Set the `PORT` environment variable to use a different port:

```bash
PORT=8080 npm run dev
```
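To check that the endpoint is up, you can send an MCP `initialize` request by hand. This is a minimal sketch using curl, assuming the default port and the standard Streamable HTTP transport (which expects an `Accept` header listing both JSON and SSE):

```bash
curl -s -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
      "protocolVersion": "2025-03-26",
      "capabilities": {},
      "clientInfo": { "name": "curl-check", "version": "0.0.0" }
    }
  }'
```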
Stdio (for deployment):

```bash
npm start
```
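With stdio, the client launches the server process itself. A hypothetical `mcpServers` entry for a stdio client might look like the following — the `dist/index.js` path is an assumption (the build emits to `dist/`; check `package.json` for the actual entry point):

```json
{
  "mcpServers": {
    "asyncapi": {
      "command": "node",
      "args": ["/absolute/path/to/asyncapi-mcp/dist/index.js"]
    }
  }
}
```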
| Tool | Description | Parameters |
|---|---|---|
| `list_asyncapi_spec_versions` | List stable AsyncAPI spec versions available as GitHub tags | None |
| `get_asyncapi_spec_metadata` | Return source, version, cache, and size metadata for a spec | `version` (optional) |
| `search_asyncapi_spec` | Search the spec and return matching snippets | `query` (required), `version` (optional), `limit` (default: 10, max: 20) |
| `get_asyncapi_spec_section` | Return a section by heading text or slug | `heading` (required), `version` (optional) |
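If you want to exercise a tool without an AI client, you can speak JSON-RPC to the local HTTP endpoint directly. A sketch of a `tools/call` request for `search_asyncapi_spec` follows; the query value is illustrative, and depending on the server's session handling you may first need to `initialize` and echo back the returned `Mcp-Session-Id` header:

```bash
curl -s -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
      "name": "search_asyncapi_spec",
      "arguments": { "query": "message bindings", "limit": 5 }
    }
  }'
```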
| Resource | URI | Description |
|---|---|---|
| Latest AsyncAPI Spec | `asyncapi://spec/latest` | The latest AsyncAPI markdown specification from the master branch |
| AsyncAPI Spec by Version | `asyncapi://spec/{version}` | A specific version of the spec fetched from the matching GitHub release tag |
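Resources can be fetched the same way, via the standard `resources/read` method. For example, to pull the latest spec (same session caveat as above):

```bash
curl -s -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 3,
    "method": "resources/read",
    "params": { "uri": "asyncapi://spec/latest" }
  }'
```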
Use these configs to connect to the Glama-hosted server. No local setup required.
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
```json
{
  "mcpServers": {
    "asyncapi": {
      "url": "https://glama.ai/mcp/servers/Souvikns/asyncapi-mcp"
    }
  }
}
```
Add to .cursor/mcp.json in your project root:
```json
{
  "mcpServers": {
    "asyncapi": {
      "url": "https://glama.ai/mcp/servers/Souvikns/asyncapi-mcp"
    }
  }
}
```
Add to .vscode/mcp.json in your project root:
```json
{
  "servers": {
    "asyncapi": {
      "url": "https://glama.ai/mcp/servers/Souvikns/asyncapi-mcp",
      "type": "http"
    }
  }
}
```
Add to your Windsurf MCP settings:
```json
{
  "mcpServers": {
    "asyncapi": {
      "url": "https://glama.ai/mcp/servers/Souvikns/asyncapi-mcp"
    }
  }
}
```
In Cline's MCP settings, add:
```json
{
  "mcpServers": {
    "asyncapi": {
      "url": "https://glama.ai/mcp/servers/Souvikns/asyncapi-mcp"
    }
  }
}
```
Add to your OpenCode configuration:
```json
{
  "mcp": {
    "servers": {
      "asyncapi": {
        "url": "https://glama.ai/mcp/servers/Souvikns/asyncapi-mcp"
      }
    }
  }
}
```
Add to your Zed settings.json:
```json
{
  "context_servers": {
    "asyncapi": {
      "url": "https://glama.ai/mcp/servers/Souvikns/asyncapi-mcp"
    }
  }
}
```
Use these configs when running the server locally with npm run dev. Make sure the server is running before connecting.
For clients that use the `mcpServers` format (Claude Desktop, Cursor, Windsurf, Cline):

```json
{
  "mcpServers": {
    "asyncapi": {
      "url": "http://localhost:3000/mcp"
    }
  }
}
```
For VS Code (`.vscode/mcp.json`):

```json
{
  "servers": {
    "asyncapi": {
      "url": "http://localhost:3000/mcp",
      "type": "http"
    }
  }
}
```
For any other client, replace the Glama URL in the hosted configs above with `http://localhost:3000/mcp`.
This server is deployed on Glama.ai. See glama.ai/mcp/servers/Souvikns/asyncapi-mcp for the hosted instance.
To deploy your own instance, build and run with stdio transport:
```bash
npm run build
npm start
```
A Dockerfile is included for containerized deployments:
```bash
docker build -t asyncapi-mcp .
docker run -p 3000:3000 asyncapi-mcp
```
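If you need a different port inside the container, the same `PORT` variable should apply, assuming the image runs the HTTP entry point and honors it the way the dev server does:

```bash
docker run -e PORT=8080 -p 8080:8080 asyncapi-mcp
```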
Once configured, you can ask your AI assistant questions like:

- "What AsyncAPI spec versions are available?"
- "Search the AsyncAPI spec for message bindings."
- "Show me the Servers section of the 2.6.0 spec."
For development:

```bash
# Install dependencies
npm install

# Build TypeScript to dist/
npm run build

# Run the HTTP server (local development)
npm run dev

# Run the stdio server (for deployment)
npm start

# Type-check without emitting
npx tsc --noEmit
```