A local Retrieval-Augmented Generation (RAG) system implemented as an MCP (Model Context Protocol) server. It lets you ingest markdown files into a FAISS-powered vector knowledge base and perform semantic search to retrieve relevant context for LLM queries, without external dependencies.
## Installation

Install dependencies:

```bash
uv sync
```

This installs:

- `sentence-transformers`: For creating text embeddings
- `faiss-cpu`: For efficient vector similarity search
- `numpy`: For numerical operations
- `mcp[cli]`: For the MCP server framework

## Available Tools

### search_doc_for_rag_context(query: str)

Searches the knowledge base for relevant context based on a user query.
Parameters:

- `query` (str): The search query

Returns:
### ingest_markdown_file(local_file_path: str)

Ingests a markdown file into the knowledge base.
Parameters:

- `local_file_path` (str): Path to the markdown file to ingest

Returns:
### list_indexed_documents()

Lists all documents currently in the knowledge base.

Returns:
### clear_knowledge_base()

Clears all documents from the knowledge base.

Returns:
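For orientation, here is a minimal sketch of how the dependencies listed above typically fit together: text chunks are embedded with `sentence-transformers` and stored in a FAISS index, and a query is answered by embedding it and retrieving the nearest chunks. This is an illustrative pattern only, not the actual `main.py` implementation; the server's chunking, persistence (`rag_index.faiss` / `rag_documents.pkl`), and scoring details may differ.

```python
# Illustrative sketch of the embed-and-index pattern (not the server's code).
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings

# Index a few text chunks (in the real server these come from ingested .md files)
chunks = [
    "FAISS is a library for efficient vector similarity search.",
    "MCP servers expose tools that LLM clients can call.",
]
embeddings = model.encode(chunks, normalize_embeddings=True).astype(np.float32)

index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product == cosine on normalized vectors
index.add(embeddings)

# Semantic search: embed the query and look up the nearest chunks
query_vec = model.encode(["What is FAISS?"], normalize_embeddings=True).astype(np.float32)
scores, ids = index.search(query_vec, k=2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {chunks[i]}")
```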
## Usage

1. Start the server:

   ```bash
   python main.py
   ```

2. Ingest markdown files: use the `ingest_markdown_file` tool to add your .md files to the knowledge base.

3. Search for context: use the `search_doc_for_rag_context` tool to find relevant information for your queries (see the client sketch below).
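If you want to exercise the tools programmatically rather than from an LLM client, the MCP Python SDK's stdio client can launch the server and call them. The snippet below is a sketch: it assumes `main.py` is a standard stdio MCP server, and `notes.md` is a placeholder file name you should replace with a real path.

```python
# Sketch: calling the server's tools over stdio with the MCP Python SDK.
# Assumes main.py runs as a stdio MCP server; "notes.md" is a placeholder.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    server = StdioServerParameters(command="python", args=["main.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Add a markdown file to the knowledge base
            await session.call_tool(
                "ingest_markdown_file",
                arguments={"local_file_path": "notes.md"},
            )

            # Retrieve context for a question
            result = await session.call_tool(
                "search_doc_for_rag_context",
                arguments={"query": "How does document indexing work?"},
            )
            print(result.content)


asyncio.run(main())
```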
## Project Structure

- `main.py`: Main server implementation with RAG functionality
- `pyproject.toml`: Project dependencies and configuration
- `rag_index.faiss`: FAISS vector index (created automatically)
- `rag_documents.pkl`: Serialized documents and metadata (created automatically)

The RAG system uses the all-MiniLM-L6-v2 sentence transformer model by default. This model provides a good balance between speed and quality for semantic search tasks.
## Example Workflow

1. Use `ingest_markdown_file` to add each file to the knowledge base.
2. Use `search_doc_for_rag_context` to find relevant context for your questions.

## Claude Desktop Configuration

Add this to claude_desktop_config.json and restart Claude Desktop:
```json
{
  "mcpServers": {
    "eyelevel-rag-mcp-server": {
      "command": "npx",
      "args": []
    }
  }
}
```
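Note that the entry above launches `npx` with no arguments, which will not start this Python server as written; it appears to be a placeholder. A configuration along the following lines is a more typical shape for a local Python MCP server; the path is an assumption and must be replaced with the actual location of your checkout:

```json
{
  "mcpServers": {
    "eyelevel-rag-mcp-server": {
      "command": "python",
      "args": ["/path/to/this/repo/main.py"]
    }
  }
}
```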