A local MCP server that guides users through Linux From Scratch (LFS) documentation step by step, tracking progress in a SQLite database and providing search, sequential navigation, and optional AI chat assistance.
This project is a local MCP server for following Linux From Scratch (LFS) documentation step by step. It imports HTML documentation into a local SQLite database, indexes it with SQLite FTS5, and exposes MCP tools that keep users on the earliest incomplete checklist item unless they explicitly ask to look ahead.
The server is documentation-only. It never executes shell commands, never runs build steps, and does not perform destructive operations such as chroot, mount, partitioning, package compilation, or filesystem modification. Commands from the LFS book are returned as text only.
One command after install handles config, database initialization, and the web dashboard:
python3 -m venv .venv
source .venv/bin/activate
python3 -m pip install -e ".[dev]"
lfs-mcp start
`lfs-mcp start` will create `~/.config/lfs-mcp/config.toml` if one does not exist, initialize the configured database, and start the web dashboard.

If the `lfs-mcp` console script is not on PATH, the equivalent module form works:
python3 -m lfs_mcp start
A separate one-shot initializer is available too:
lfs-mcp init # create config + DB + import fixture docs
lfs-mcp init --no-import # create config + DB only
lfs-mcp config-path # print the resolved config file path
lfs-mcp config-show # print the merged config (API keys redacted)
The legacy explicit form continues to work and overrides the config:
python3 -m lfs_mcp --db "$PWD/demo-lfs.db" web
The config file is a small TOML document. Resolution order, highest priority first:
1. `--config /path/to/config.toml`
2. `LFS_MCP_CONFIG=/path/to/config.toml`
3. `./lfs-mcp.toml` (project-local)
4. `~/.config/lfs-mcp/config.toml` (user-wide, auto-created on first run)

Default content:
[server]
host = "127.0.0.1"
port = 8787
open_browser = false
[database]
path = "~/.local/share/lfs-mcp/lfs_docs.db"
[import]
default_lfs_url = "https://www.linuxfromscratch.org/lfs/view/stable-systemd/"
auto_set_active = true
allow_remote_import = true
allow_file_import = true
[ai]
provider = "mock"
model = ""
base_url = ""
# Name of the environment variable that holds the API key.
# Raw API keys must NOT be stored in this file.
api_key_env = "LFS_MCP_API_KEY"
[ui]
theme = "light"
[security]
allow_hosts = ["www.linuxfromscratch.org", "linuxfromscratch.org"]
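The `allow_hosts` list is the kind of allowlist a remote importer can check before fetching anything. A minimal sketch of such a check, assuming hostname-based matching (the function name is hypothetical, not the project's actual code):

```python
from urllib.parse import urlparse

# Default allowlist mirroring the [security] section above.
ALLOW_HOSTS = ["www.linuxfromscratch.org", "linuxfromscratch.org"]

def is_allowed_import_url(url: str, allow_hosts=ALLOW_HOSTS) -> bool:
    """Accept only http(s) URLs whose hostname is on the allowlist."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    return parsed.hostname in allow_hosts

print(is_allowed_import_url("https://www.linuxfromscratch.org/lfs/view/12.3/"))  # allowed
print(is_allowed_import_url("https://example.com/lfs/"))                         # rejected
```

Local fixture imports (`allow_file_import`) would bypass this remote check and go through path handling instead.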
Override priority for runtime settings:
| setting | sources, highest priority first |
|---|---|
| db path | --db > LFS_MCP_DB > database.path > built-in default |
| host | --host > LFS_MCP_HOST > server.host |
| port | --port > LFS_MCP_PORT > server.port |
| API key | env var named by ai.api_key_env, never written to config |
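The precedence in the table can be sketched in a few lines. `resolve_db_path` is a hypothetical helper for illustration, not the project's API:

```python
import os

def resolve_db_path(cli_db=None, config_db=None,
                    default="~/.local/share/lfs-mcp/lfs_docs.db"):
    """Apply the documented precedence: --db > LFS_MCP_DB > database.path > default."""
    value = cli_db or os.environ.get("LFS_MCP_DB") or config_db or default
    return os.path.expanduser(value)
```

The same shape applies to host and port with their respective flags and environment variables.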
API keys are read from the environment, never from the config file. Provide them like:
export LFS_MCP_API_KEY="..."
Commands shown in the dashboard are documentation only and are never executed by the app.
The project is local and single-user by default. Progress is stored in the selected SQLite database file, so multiple users should use separate database files unless profile or user isolation is added later.
MCP (Model Context Protocol) lets an AI client call local tools exposed by this server. In this project, the tools provide safe access to imported LFS documentation: list versions, select a version, get the current step, mark documentation sections completed, and search the local docs.
The pipeline is intentionally simple for a college assignment and local demo: fetch the HTML docs, parse sections, store them in SQLite, index them with FTS5, and expose MCP tools over stdio.
SQLite FTS5 is used instead of a vector database in the first version because it is local, deterministic, easy to test offline, and has no hosted service dependency. The schema keeps metadata_json columns and stable (version_id, section_id) identities so embeddings or vector search can be added later without replacing the storage model.
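The FTS5 approach can be demonstrated locally with nothing but the standard library. The table and column names below are illustrative, not the project's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A contentful FTS5 virtual table: section id, title, and body are all searchable.
conn.execute("CREATE VIRTUAL TABLE sections_fts USING fts5(section_id, title, content)")
conn.executemany(
    "INSERT INTO sections_fts VALUES (?, ?, ?)",
    [
        ("binutils-pass1", "5.2 Binutils - Pass 1", "Build the cross binutils first."),
        ("gcc-pass1", "5.3 GCC - Pass 1", "Build the first GCC cross compiler pass."),
    ],
)
# MATCH runs the full-text query; snippet() highlights hits in the content column.
rows = conn.execute(
    "SELECT section_id, snippet(sections_fts, 2, '[', ']', '...', 8) "
    "FROM sections_fts WHERE sections_fts MATCH ? ORDER BY rank",
    ("gcc",),
).fetchall()
print(rows)
```

Queries are deterministic and fully offline, which is exactly what the test suite relies on.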
python3 -m venv .venv
source .venv/bin/activate
python3 -m pip install -e ".[dev]"
On systems where python points to Python 3, you may use python; otherwise use python3.
The default database path is:
~/.local/share/lfs-mcp/lfs_docs.db
You can override it with --db /path/to/lfs.db or the LFS_MCP_DB environment variable.
python3 -m lfs_mcp import \
--url https://www.linuxfromscratch.org/lfs/view/13.0-systemd-rc1/ \
--version-id 13.0-systemd-rc1 \
--display-name "Linux From Scratch 13.0 systemd rc1"
The importer also supports local fixture directories for offline tests and demos:
python3 -m lfs_mcp import \
--url tests/fixtures/lfs_sample_v1 \
--version-id sample-v1-systemd \
--display-name "Sample LFS v1 systemd"
Use --force only when you intentionally want to overwrite an already imported version.
The importer uses a clear local importer User-Agent for HTTP requests, applies a request timeout, and ignores table-of-contents links that point outside the provided documentation base URL or fixture directory.
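The out-of-base link filter can be sketched as a prefix check on the resolved URL. This is an assumption about the mechanism, not the project's actual code:

```python
from urllib.parse import urljoin, urlparse

def is_within_base(base_url: str, href: str) -> bool:
    """Reject ToC links that resolve outside the documentation base URL."""
    resolved = urljoin(base_url, href)
    base, target = urlparse(base_url), urlparse(resolved)
    return (target.scheme == base.scheme
            and target.netloc == base.netloc
            and target.path.startswith(base.path))

base = "https://www.linuxfromscratch.org/lfs/view/stable-systemd/"
print(is_within_base(base, "chapter05/gcc-pass1.html"))  # relative link: stays inside
print(is_within_base(base, "https://example.com/evil"))  # external link: rejected
```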
The init command creates or opens the configured SQLite database, verifies SQLite and FTS5 support, imports fixture docs by default when no source URL is provided, sets the imported version active, and prints the next useful commands.
python3 -m lfs_mcp --db "$PWD/demo-lfs.db" init
Defaults:
fixture: tests/fixtures/lfs_sample_v1
version id: sample-v1-systemd
display name: Sample LFS v1 systemd
You can initialize from online LFS docs instead:
python3 -m lfs_mcp --db "$PWD/lfs_docs.db" init \
--source-url https://www.linuxfromscratch.org/lfs/view/13.0-systemd-rc1/ \
--version-id 13.0-systemd-rc1 \
--display-name "Linux From Scratch 13.0 systemd rc1"
Use --force to overwrite an existing imported version intentionally.
The doctor command prints human-readable diagnostics for Python, SQLite, FTS5, the resolved database path, required tables, imported versions, active version, current step loading, and MCP server importability.
python3 -m lfs_mcp --db "$PWD/demo-lfs.db" doctor
python3 -m lfs_mcp server
With a custom database:
python3 -m lfs_mcp --db ./demo-lfs.db server
The web command starts a local FastAPI server with a static HTML/CSS/JavaScript dashboard. It uses the same SQLite database and service logic as the CLI and MCP tools.
python3 -m lfs_mcp --db "$PWD/demo-lfs.db" web
Default local URL:
http://127.0.0.1:8787
You can choose a different host or port:
python3 -m lfs_mcp --db "$PWD/demo-lfs.db" web --host 127.0.0.1 --port 8787
The web dashboard includes:
- `http`, `https`, `file`, and `mailto` link schemes are rendered as anchors

Phase 3 AI settings include:

- an Ollama base URL default of `http://127.0.0.1:11434`
- a Gemini base URL default of `https://generativelanguage.googleapis.com/v1beta` and default model `gemini-2.0-flash-lite`
- a Vercel AI Gateway base URL default of `https://ai-gateway.vercel.sh/v1` and default model `openai/gpt-5.4`
- a `local-mock` provider

Phase 4 chat includes:

- a `/api/chat` endpoint
- `x-goog-api-key` header-based Gemini authentication
- `Authorization: Bearer <key>` Vercel AI Gateway authentication from the backend only
- `referenced_sections` metadata so the UI can show local documentation without parsing assistant prose

Phase 5 search-to-chat context includes:

- `POST /api/chat` support for `section_ids`, with selected sections loaded from the active local LFS version
- `referenced_sections` entries labeled as `selected_section` so the Referenced documentation panel shows the exact local source used by the answer

Phase 6 local section notes include:

- a `section_notes` SQLite table keyed by `(version_id, section_id)`
- note information surfaced through `referenced_sections`

Phase 7 web imports include:

- `POST /api/import` and a dashboard import form for LFS book URLs
- `POST /api/import-jobs` progress tracking for the web UI while real LFS books fetch and parse many HTML pages
- `version_id` inference for URLs such as `https://www.linuxfromscratch.org/lfs/view/stable-systemd/` and `https://www.linuxfromscratch.org/lfs/view/12.3/`, with optional user override
The web dashboard remains local-first and single-user by default. Progress is stored in the selected SQLite database file. Multiple users should use separate database files unless profile or user isolation is added later.
Section notes are local-only personal data stored in the selected SQLite database. Notes are attached to a documentation section by version_id and section_id; they do not modify imported documentation content. Resetting progress does not remove notes, and re-importing the same version with the same section ids preserves them.
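Keying notes by `(version_id, section_id)` is what lets them survive progress resets and stable re-imports. A schema sketch with assumed column names (not the project's exact schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE section_notes (
        version_id TEXT NOT NULL,
        section_id TEXT NOT NULL,
        note_text  TEXT NOT NULL DEFAULT '',
        bookmarked INTEGER NOT NULL DEFAULT 0,
        PRIMARY KEY (version_id, section_id)
    )
""")

def upsert_note(version_id, section_id, note_text, bookmarked=0):
    # The composite primary key makes the second save an update, not a duplicate.
    conn.execute(
        """INSERT INTO section_notes VALUES (?, ?, ?, ?)
           ON CONFLICT(version_id, section_id)
           DO UPDATE SET note_text = excluded.note_text,
                         bookmarked = excluded.bookmarked""",
        (version_id, section_id, note_text, bookmarked),
    )

upsert_note("sample-v1-systemd", "gcc-pass1", "watch the sanity check")
upsert_note("sample-v1-systemd", "gcc-pass1", "sanity check passed", bookmarked=1)
```

Because notes live in their own table, wiping the progress table leaves them untouched.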
Progress changes are tracked in the local progress table only and never run LFS commands.
Mark complete only ever applies to the current step shown in the Current Step card. Undo reverts the most recently completed section to pending, then refreshes the Current Step and checklist. Notes, bookmarks, needs-review, and blocked flags are preserved. The button is hidden again after the undo or after a page refresh. The checklist itself is read-only display: it shows order, title, chapter, and status, with no per-row Undo. This keeps the flow guided and sequential; users cannot jump forward to future sections or arbitrarily revert older completed sections from the UI.
The same revert behavior is available for advanced/automation use via POST /api/progress/{section_id}/incomplete, lfs-mcp incomplete <section_id>, and the MCP tool mark_step_incomplete(section_id, version_id=None).
If you imported an LFS book before the parser title fix and see incorrect titles like Note or a subsection name, re-import the version (same version_id) to refresh titles. Re-import preserves matching progress and notes when section ids remain stable.
Web imports fetch and parse HTML only. The importer does not run JavaScript from fetched pages, does not execute LFS shell commands, and does not send scraped documentation to AI providers during import.
Real LFS imports can take time because the importer fetches the index and then each ordered section page. The web UI starts a local in-process import job and shows discovered section count, current section URL/title, fetched-section progress, elapsed time, and completion/failure status. The synchronous POST /api/import endpoint remains available for curl, scripts, and tests.
AI API keys are user-provided and handled by the local backend. OpenAI, OpenRouter, Groq, Google Gemini, and Vercel AI Gateway require an API key. Gemini keys are sent to the lightweight model-list validation endpoint with the x-goog-api-key HTTP header rather than a query string. Vercel AI Gateway keys are sent from the backend with an Authorization: Bearer header. Raw keys are never returned by API responses, never printed by doctor, and must not be hardcoded into frontend code.
Key storage prefers Python keyring when it is available and usable. If keyring is unavailable, the app falls back to an explicitly marked local plaintext value in the selected SQLite database settings table. The web settings panel and doctor command show a warning when this plaintext fallback is in use. Protect the database file accordingly.
Gemini model listing and Gemini text generation are separate provider capabilities. GET https://generativelanguage.googleapis.com/v1beta/models can succeed while generateContent still fails. If /api/chat returns HTTP 429 with RESOURCE_EXHAUSTED or a quota limit of 0, the configured key/model/endpoint may be valid but the Google project/account has no usable generateContent quota. Check Google AI Studio quota/billing, wait for quota to reset, choose another available model, or use the Local mock provider for development.
The Local mock provider is for local development and UI testing only. It requires no API key, makes no network calls, and returns deterministic local responses built from the same current-step context that /api/chat sends to external providers. It echoes the user message safely, includes active version/current step/next step metadata when available, and clearly labels itself as a local mock response. It is not a real AI answer.
Vercel AI Gateway uses the OpenAI-compatible endpoint at https://ai-gateway.vercel.sh/v1. The default model is openai/gpt-5.4, and user-entered model names from the Vercel AI Gateway model list are preserved. See https://vercel.com/ai-gateway/models for available model identifiers. Connection testing uses the lightweight /models endpoint when available; chat uses {base_url}/chat/completions.
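The two documented auth styles differ only in where the key travels. A sketch building the requests without sending them; the endpoints and header names come from the text above, while the payload shape is illustrative:

```python
import json
from urllib.request import Request

def gemini_models_request(api_key: str) -> Request:
    # Gemini: key goes in the x-goog-api-key header, never the query string.
    return Request(
        "https://generativelanguage.googleapis.com/v1beta/models",
        headers={"x-goog-api-key": api_key},
    )

def gateway_chat_request(api_key: str, model: str, message: str) -> Request:
    # Vercel AI Gateway: OpenAI-compatible, Bearer token added by the backend only.
    body = json.dumps({"model": model,
                       "messages": [{"role": "user", "content": message}]})
    return Request(
        "https://ai-gateway.vercel.sh/v1/chat/completions",
        data=body.encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
```

Keeping the key out of the URL means it never lands in server access logs or browser history.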
To open the settings UI:
1. Run `python3 -m lfs_mcp --db "$PWD/demo-lfs.db" web`.
2. Open `http://127.0.0.1:8787`.

To import an LFS book from the web dashboard:

1. Run `python3 -m lfs_mcp --db "$PWD/lfs_docs.db" web`.
2. In the import form, submit a book URL such as `https://www.linuxfromscratch.org/lfs/view/stable-systemd/` or `https://www.linuxfromscratch.org/lfs/view/12.3/`.

Inferred version IDs use the convention `lfs-<book-slug>-systemd` for systemd books and `lfs-<book-slug>-sysv` otherwise. Examples:
| URL | Inferred version ID |
|---|---|
| https://www.linuxfromscratch.org/lfs/view/stable/ | lfs-stable-sysv |
| https://www.linuxfromscratch.org/lfs/view/stable-systemd/ | lfs-stable-systemd |
| https://www.linuxfromscratch.org/lfs/view/development/ | lfs-development-sysv |
| https://www.linuxfromscratch.org/lfs/view/development-systemd/ | lfs-development-systemd |
| https://www.linuxfromscratch.org/lfs/view/12.3/ | lfs-12.3-sysv |
| https://www.linuxfromscratch.org/lfs/view/12.3-systemd/ | lfs-12.3-systemd |
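The inference convention above can be sketched as a slug check on the last path component. This reproduces the table, but is an illustration, not the project's importer code:

```python
from urllib.parse import urlparse

def infer_version_id(url: str) -> str:
    """Apply the documented convention: lfs-<book-slug>-systemd or lfs-<book-slug>-sysv."""
    slug = urlparse(url).path.rstrip("/").split("/")[-1]
    if slug.endswith("-systemd"):
        return f"lfs-{slug[:-len('-systemd')]}-systemd"
    return f"lfs-{slug}-sysv"

print(infer_version_id("https://www.linuxfromscratch.org/lfs/view/stable-systemd/"))
# lfs-stable-systemd
print(infer_version_id("https://www.linuxfromscratch.org/lfs/view/12.3/"))
# lfs-12.3-sysv
```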
When version_id is omitted, current-step, checklist, search, chat, and MCP tools read the active version from SQLite on each request. A web import that sets a version active is therefore visible to MCP calls without restarting the MCP server, as long as the MCP process uses the same database file.
To use the chat UI:
1. Run `python3 -m lfs_mcp --db "$PWD/demo-lfs.db" web`.
2. Open `http://127.0.0.1:8787`.

Chat sends only a small documentation context package to external providers: active version, current step metadata/content excerpt, selected search-result section excerpts, command blocks as documentation text, and next-step preview when current-step context is requested. It does not send the full SQLite database, all imported sections, or local personal notes. The app never executes LFS commands.
After a chat response, the right-side Referenced documentation panel shows the local source context. This panel is populated from backend-provided referenced_sections metadata, not by parsing the assistant's natural-language response. It displays the local imported LFS section used as current-step or selected-section context and may show local note flags for the section. If a section payload is truncated for the UI, the dashboard can fetch the full local section through GET /api/docs/sections/{section_id}?version_id=....
The current test behavior is intentionally limited: Ollama checks the local /api/tags endpoint when reachable; Gemini and Vercel AI Gateway check configured model-list endpoints; Local mock returns success immediately without network; other API-key providers validate local key presence/format only. Automated provider and chat tests use mocked HTTP behavior and do not require real Gemini or Vercel keys or real network access.
The local dashboard exposes these HTTP endpoints:
| Method | Path | Purpose |
|---|---|---|
| GET | /api/health | Report SQLite, FTS5, database path, active version, and safety flags. |
| POST | /api/import | Import an LFS book URL or local fixture into SQLite, optionally infer version_id, and set it active. |
| POST | /api/import-jobs | Start a local background import job for the web UI and return a job_id. |
| GET | /api/import-jobs/{job_id} | Return local in-process import progress, result, or safe failure message. |
| GET | /api/versions | List imported LFS documentation versions with active flag, section count, progress summary, and notes summary. |
| POST | /api/versions/active | Set an already-imported version active using {"version_id": "..."}. |
| GET | /api/current-step | Return the first incomplete section for the active version. |
| GET | /api/checklist | Return the active version checklist with progress status. |
| GET | /api/search?q=binutils | Search the active version with SQLite FTS5. |
| GET | /api/docs/sections/{section_id} | Fetch one local documentation section, optionally scoped with ?version_id=.... |
| GET | /api/section-notes/{section_id} | Fetch the local note and flags for one section, optionally scoped with ?version_id=.... |
| PATCH | /api/section-notes/{section_id} | Upsert a local section note using note_text, bookmarked, needs_review, blocked, and optional version_id. |
| GET | /api/section-notes | List saved local section notes for the active version, with optional bookmarked, needs_review, and blocked filters. |
| POST | /api/complete-current-step | Mark the current step completed. |
| POST | /api/complete-step | Mark a requested step completed using {"section_id": "gcc-pass1", "force": false}. |
| POST | /api/reset-progress | Reset progress for the active version. |
| GET | /api/settings | Return local settings and future AI configuration status. |
| GET | /api/ai-settings | Return provider, model, base URL, key presence, storage status, and warnings without returning raw keys. |
| POST | /api/ai-settings | Save provider/model/base URL and optionally save a raw API key through the local backend. |
| POST | /api/ai-settings/test | Run lightweight local validation for the saved AI settings. |
| POST | /api/chat | Send a minimal contextual chat request to Gemini, Vercel AI Gateway, or Local mock. Accepts {"message": "...", "include_current_step": true, "section_ids": ["..."]}. |
For an MCP-compatible client that accepts a JSON server configuration, use a stdio command similar to:
{
"mcpServers": {
"lfs-docs": {
"command": "python3",
"args": ["-m", "lfs_mcp", "server"],
"env": {
"LFS_MCP_DB": "/absolute/path/to/lfs_docs.db"
}
}
}
}
Claude Desktop is not required for this project, and Claude Desktop is not currently the practical Linux path. The options below work with Linux-friendly MCP clients or with the built-in CLI fallback. Before using any MCP client, install the project and import at least one documentation version:
python3 -m pip install -e ".[dev]"
python3 -m lfs_mcp --db "$PWD/demo-lfs.db" import \
--url tests/fixtures/lfs_sample_v1 \
--version-id sample-v1-systemd \
--display-name "Sample LFS v1 systemd" \
--force
For real LFS docs, replace the fixture import with:
python3 -m lfs_mcp --db "$PWD/lfs_docs.db" import \
--url https://www.linuxfromscratch.org/lfs/view/13.0-systemd-rc1/ \
--version-id 13.0-systemd-rc1 \
--display-name "Linux From Scratch 13.0 systemd rc1"
Use an absolute database path in GUI client configs because those clients may start from a different working directory.
MCP Inspector is the easiest Linux-friendly way to verify that this server exposes tools correctly. It is a local developer UI; it starts this server as a child process over stdio.
Run directly from the repository:
npx @modelcontextprotocol/inspector \
-e LFS_MCP_DB="$PWD/demo-lfs.db" \
-- python3 -m lfs_mcp server
Alternative config-file form, saved anywhere such as ./mcp-inspector-lfs.json:
{
"mcpServers": {
"lfs-docs": {
"command": "python3",
"args": ["-m", "lfs_mcp", "server"],
"env": {
"LFS_MCP_DB": "/absolute/path/to/lfs-mcp/demo-lfs.db"
}
}
}
}
Start it with:
npx @modelcontextprotocol/inspector --config ./mcp-inspector-lfs.json --server lfs-docs
Open the printed Inspector URL, such as http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=generated-token, including the generated token if shown. To verify tools are visible, connect to the server and open the Tools view. You should see tools such as get_current_step, search_lfs_docs, and mark_step_completed.
Example tool call in Inspector:
{
"tool": "get_current_step",
"arguments": {}
}
Cursor supports local MCP servers through mcp.json.
Config locations on Linux:
- `<project-root>/.cursor/mcp.json`
- `~/.cursor/mcp.json`

Example config:
{
"mcpServers": {
"lfs-docs": {
"command": "python3",
"args": ["-m", "lfs_mcp", "server"],
"env": {
"LFS_MCP_DB": "/absolute/path/to/lfs-mcp/demo-lfs.db"
}
}
}
}
Cursor starts the stdio server from this config when the MCP server is enabled. To verify tools are visible, open Cursor settings or the MCP section, confirm lfs-docs is enabled, and check the available tool list in Agent/chat mode.
Example prompt:
Use the lfs-docs MCP server and call get_current_step. What is my current Linux From Scratch step?
VS Code with GitHub Copilot Chat supports MCP servers in Agent mode. This requires a Copilot-enabled VS Code setup with MCP support enabled by your account or organization policy.
Config locations on Linux:
- `<project-root>/.vscode/mcp.json`

Example `.vscode/mcp.json`:
{
"servers": {
"lfs-docs": {
"type": "stdio",
"command": "python3",
"args": ["-m", "lfs_mcp", "server"],
"env": {
"LFS_MCP_DB": "/absolute/path/to/lfs-mcp/demo-lfs.db"
}
}
}
}
Start the server by opening .vscode/mcp.json and selecting the Start CodeLens above the server, or run MCP: List Servers from the Command Palette and start lfs-docs. To verify tools are visible, open Copilot Chat, switch to Agent mode, select the tools icon or Configure Tools, and confirm the LFS tools are listed.
Example prompt:
Use the lfs-docs MCP tool get_current_step and tell me only the current LFS step.
The retired GitHub CLI extension `gh copilot` has been replaced by the newer `copilot` CLI. If your GitHub Copilot CLI version supports MCP, configure it like this.
Interactive setup:
copilot
Then run:
/mcp add
Choose STDIO or Local, use lfs-docs as the server name, and enter this command:
python3 -m lfs_mcp server
Set environment variables to:
{
"LFS_MCP_DB": "/absolute/path/to/lfs-mcp/demo-lfs.db"
}
Manual config location:
~/.copilot/mcp-config.json
Example config:
{
"mcpServers": {
"lfs-docs": {
"type": "local",
"command": "python3",
"args": ["-m", "lfs_mcp", "server"],
"env": {
"LFS_MCP_DB": "/absolute/path/to/lfs-mcp/demo-lfs.db"
},
"tools": ["*"]
}
}
}
To verify tools are visible inside Copilot CLI:
/mcp show
/mcp show lfs-docs
Example prompt:
Use the lfs-docs MCP server and its get_current_step tool. What should I do now in Linux From Scratch?
If no AI MCP client is available, you can still demo the same behavior through the local CLI. This does not use MCP, but it exercises the same service logic and SQLite database.
Import fixture docs offline:
python3 -m lfs_mcp --db "$PWD/demo-lfs.db" import \
--url tests/fixtures/lfs_sample_v1 \
--version-id sample-v1-systemd \
--display-name "Sample LFS v1 systemd" \
--force
Run the CLI equivalent of get_current_step:
python3 -m lfs_mcp --db "$PWD/demo-lfs.db" current
Verify the JSON output contains a current_step object with the first incomplete section and a next_step_preview.
Example query:
python3 -m lfs_mcp --db "$PWD/demo-lfs.db" current
List imported versions:
python3 -m lfs_mcp list-versions
Example output:
[
{
"version_id": "13.0-systemd-rc1",
"display_name": "Linux From Scratch 13.0 systemd rc1",
"source_url": "https://www.linuxfromscratch.org/lfs/view/13.0-systemd-rc1/",
"variant": "systemd",
"imported_at": "2026-04-26T23:00:00+00:00",
"progress_exists": 0
}
]
Set the active version:
python3 -m lfs_mcp set-active --version-id 13.0-systemd-rc1
Get the current step:
python3 -m lfs_mcp current
Mark a step completed:
python3 -m lfs_mcp complete introduction
Search the active version:
python3 -m lfs_mcp search "binutils pass 1"
Search across all imported versions:
python3 -m lfs_mcp search "systemd" --version-id all
Reset progress for the active version:
python3 -m lfs_mcp reset-progress
Reset progress for all versions:
python3 -m lfs_mcp reset-progress --all-versions
The server exposes these tools:
| Tool | Purpose |
|---|---|
| list_lfs_versions() | Return imported versions with active flag, section count, progress summary, and notes summary. |
| get_active_lfs_version() | Return the selected/default version or a clear setup message. |
| set_active_lfs_version(version_id) | Validate and select an imported version. |
| import_lfs_docs(source_url, version_id=None, display_name=None, force=False) | Fetch, parse, import, and index docs; version_id can be inferred for supported LFS URLs and local fixture paths. |
| get_build_checklist(version_id=None) | Return the ordered checklist for one version. |
| get_current_step(version_id=None) | Return only the earliest incomplete step plus a small next-step preview. |
| mark_step_completed(section_id, version_id=None, force=False) | Complete a step, preventing accidental jumps unless forced. |
| mark_step_incomplete(section_id, version_id=None) | Revert a completed section back to pending; the current step becomes the earliest pending section. |
| get_step(section_id, version_id=None) | Return a section and warn if it is ahead of progress. |
| search_lfs_docs(query, version_id=None) | Search with SQLite FTS5; use version_id="all" for all versions. |
| get_package_steps(package_name, version_id=None) | Return ordered package-related sections and warn on multiple passes. |
| reset_progress(version_id=None, all_versions=False) | Reset selected-version progress by default or all progress explicitly. |
get_current_step():
{
"version_id": "sample-v1-systemd",
"current_step": {
"order": 1,
"section_id": "introduction",
"title": "1.1 Introduction",
"chapter": "Chapter 1",
"source_url": "file:///home/user/lfs-mcp/tests/fixtures/lfs_sample_v1/chapter01/introduction.html",
"summary": "Start at the beginning. Read the LFS systemd book overview before preparing tools.",
"content": "# 1.1 Introduction\n\nStart at the beginning. Read the LFS systemd book overview before preparing tools.",
"command_blocks": []
},
"next_step_preview": {
"order": 2,
"section_id": "prepare",
"chapter": "Chapter 2",
"title": "2.1 Preparing the Host",
"status": "pending",
"preview": "Check host requirements and create a safe learning plan."
}
}
mark_step_completed("gcc-pass1") before earlier steps are complete:
{
"completed": false,
"warning": "This section is ahead of earlier pending checklist items. It was not marked completed.",
"earlier_pending_sections": [
{"section_id": "introduction", "order": 1, "status": "pending"},
{"section_id": "prepare", "order": 2, "status": "pending"}
]
}
search_lfs_docs("gcc"):
{
"query": "gcc",
"scope": "sample-v1-systemd",
"results": [
{
"version_id": "sample-v1-systemd",
"display_name": "Sample LFS v1 systemd",
"section_id": "gcc-pass1",
"title": "5.3 GCC-15.2.0 - Pass 1",
"chapter": "Chapter 5",
"source_url": "file:///home/user/lfs-mcp/tests/fixtures/lfs_sample_v1/chapter05/gcc-pass1.html",
"snippet": "Build the first [GCC] cross compiler pass after Binutils.",
"relation_to_current_step": "ahead_of_current_step"
}
]
}
The ordered checklist is the source of truth. The current step is always the first incomplete section for the selected version. Normal flow does not recommend future steps. Explicit lookup and search are allowed, but future results are labeled ahead_of_current_step.
Progress is isolated per LFS version. The same section_id may exist in multiple versions, and completion in one version does not affect another because the logical identity is (version_id, section_id).
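Both invariants, "current step is the first incomplete section" and "progress is keyed by `(version_id, section_id)`", can be sketched with plain dicts. The real project stores this in SQLite; the names here are illustrative:

```python
# Ordered checklist for the book; progress keyed by (version_id, section_id).
checklist = ["introduction", "prepare", "binutils-pass1", "gcc-pass1"]
progress = {}

def current_step(version_id):
    """First section in book order not yet completed for this version."""
    for section_id in checklist:
        if progress.get((version_id, section_id)) != "completed":
            return section_id
    return None  # every section completed

progress[("sample-v1-systemd", "introduction")] = "completed"
print(current_step("sample-v1-systemd"))  # prepare
print(current_step("13.0-systemd-rc1"))   # introduction (other version untouched)
```

Completing a section in one version never moves another version's current step, because the composite key keeps their progress rows disjoint.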
Tests use the bundled fixture docs and do not download the full LFS book:
python3 -m pytest