Query Langfuse traces, debug exceptions, analyze sessions, and manage prompts. Full observability toolkit for LLM applications.
PyPI Downloads Python 3.10–3.13 License: MIT
Model Context Protocol server for Langfuse observability. Query traces, debug errors, analyze sessions, manage prompts.
Comparison with official Langfuse MCP (as of Jan 2026):
| Feature | langfuse-mcp | Official |
|---|---|---|
| Traces & Observations | Yes | No |
| Sessions & Users | Yes | No |
| Exception Tracking | Yes | No |
| Prompt Management | Yes | Yes |
| Dataset Management | Yes | No |
| Annotation Queues | Yes | No |
| Scores (v2) | Yes | No |
| Selective Tool Loading | Yes | No |
This project provides a full observability toolkit — traces, observations, sessions, exceptions, prompts, datasets, annotation queues, and scores — while the official MCP focuses on prompt management.
Requires uv (for uvx).
Get credentials from Langfuse Cloud → Settings → API Keys. If self-hosted, use your instance URL for LANGFUSE_HOST.
```bash
# Claude Code (project-scoped, shared via .mcp.json)
claude mcp add \
  -e LANGFUSE_PUBLIC_KEY=pk-... \
  -e LANGFUSE_SECRET_KEY=sk-... \
  -e LANGFUSE_HOST=https://cloud.langfuse.com \
  --scope project \
  langfuse -- uvx --python 3.11 langfuse-mcp
```
```bash
# Codex CLI (user-scoped, stored in ~/.codex/config.toml)
codex mcp add langfuse \
  --env LANGFUSE_PUBLIC_KEY=pk-... \
  --env LANGFUSE_SECRET_KEY=sk-... \
  --env LANGFUSE_HOST=https://cloud.langfuse.com \
  -- uvx --python 3.11 langfuse-mcp
```
Restart your CLI, then verify with /mcp (Claude Code) or codex mcp list (Codex).
| Category | Tools |
|---|---|
| Traces | fetch_traces, fetch_trace |
| Observations | fetch_observations, fetch_observation |
| Sessions | fetch_sessions, get_session_details, get_user_sessions |
| Exceptions | find_exceptions, find_exceptions_in_file, get_exception_details, get_error_count |
| Prompts | list_prompts, get_prompt, get_prompt_unresolved, create_text_prompt, create_chat_prompt, update_prompt_labels |
| Datasets | list_datasets, get_dataset, list_dataset_items, get_dataset_item, create_dataset, create_dataset_item, delete_dataset_item |
| Annotation Queues | list_annotation_queues, create_annotation_queue, get_annotation_queue, list_annotation_queue_items, get_annotation_queue_item, create_annotation_queue_item, update_annotation_queue_item, delete_annotation_queue_item, create_annotation_queue_assignment, delete_annotation_queue_assignment |
| Scores | list_scores_v2, get_score_v2 |
| Schema | get_data_schema |
Langfuse uses upsert for dataset items. To edit an existing item, call create_dataset_item with item_id. If the ID exists, it updates; otherwise it creates a new item.
```python
create_dataset_item(
    dataset_name="qa-test-cases",
    item_id="item_123",  # existing ID → update; new ID → create
    input={"question": "What is 2+2?"},
    expected_output={"answer": "4"}
)
```
This project includes a skill with debugging playbooks.
Via skills (recommended):
```bash
npx skills add avivsinai/langfuse-mcp -g -y
```
Via skild:
```bash
npx skild install @avivsinai/langfuse -t claude -y
```
Manual install:
```bash
cp -r skills/langfuse ~/.claude/skills/  # Claude Code
cp -r skills/langfuse ~/.codex/skills/   # Codex CLI
```
Try asking: "help me debug langfuse traces"
See skills/langfuse/SKILL.md for full documentation.
Load only the tool groups you need to reduce token overhead:
```bash
langfuse-mcp --tools traces,prompts
```
Available groups: traces, observations, sessions, exceptions, prompts, datasets, annotation_queues, scores, schema
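In a client config, the same flag can be appended to the server's launch arguments. A sketch of the relevant fragment, following the `uvx` launch shape used elsewhere in this README (the surrounding `mcpServers` structure is omitted):

```json
{
  "command": "uvx",
  "args": ["--python", "3.11", "langfuse-mcp", "--tools", "traces,prompts"]
}
```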
Disable all write operations for safer read-only access:
```bash
langfuse-mcp --read-only

# Or via environment variable
LANGFUSE_MCP_READ_ONLY=true langfuse-mcp
```
This disables: create_text_prompt, create_chat_prompt, update_prompt_labels, create_dataset, create_dataset_item, delete_dataset_item, create_annotation_queue, create_annotation_queue_item, update_annotation_queue_item, delete_annotation_queue_item, create_annotation_queue_assignment, delete_annotation_queue_assignment
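For clients configured via JSON, the environment-variable form slots into the server's `env` block. An illustrative fragment (the surrounding config structure is omitted):

```json
"env": {
  "LANGFUSE_MCP_READ_ONLY": "true"
}
```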
Set the MCP-exposed default output_mode so clients that omit the parameter automatically use your preferred mode:
```bash
langfuse-mcp --default-output-mode full_json_file

# Or via environment variable
LANGFUSE_MCP_DEFAULT_OUTPUT_MODE=full_json_file langfuse-mcp
```
Supported values: compact, full_json_string, full_json_file
This updates the default shown in MCP tool schemas. Clients can still override it per call by passing output_mode explicitly.
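For example, a client could force compact output on a single call regardless of the configured default. An illustrative tool invocation, in the same style as the dataset example above (the limit parameter is a placeholder):

```python
fetch_traces(
    limit=10,
    output_mode="compact"  # overrides the server-wide default for this call only
)
```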
Create .cursor/mcp.json in your project (or ~/.cursor/mcp.json for global):
```json
{
  "mcpServers": {
    "langfuse": {
      "command": "uvx",
      "args": ["--python", "3.11", "langfuse-mcp"],
      "env": {
        "LANGFUSE_PUBLIC_KEY": "pk-...",
        "LANGFUSE_SECRET_KEY": "sk-...",
        "LANGFUSE_HOST": "https://cloud.langfuse.com",
        "LANGFUSE_MCP_DEFAULT_OUTPUT_MODE": "full_json_file"
      }
    }
  }
}
```
```bash
docker run --rm -i \
  -e LANGFUSE_PUBLIC_KEY=pk-... \
  -e LANGFUSE_SECRET_KEY=sk-... \
  -e LANGFUSE_HOST=https://cloud.langfuse.com \
  ghcr.io/avivsinai/langfuse-mcp:latest
```
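The image can also back an MCP client entry by having the client launch the container itself. A sketch that mirrors the Cursor config shape in this README, with credentials passed as `-e` flags in `args` (adjust keys for your client):

```json
{
  "mcpServers": {
    "langfuse": {
      "command": "docker",
      "args": [
        "run", "--rm", "-i",
        "-e", "LANGFUSE_PUBLIC_KEY=pk-...",
        "-e", "LANGFUSE_SECRET_KEY=sk-...",
        "-e", "LANGFUSE_HOST=https://cloud.langfuse.com",
        "ghcr.io/avivsinai/langfuse-mcp:latest"
      ]
    }
  }
}
```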
```bash
uv venv --python 3.11 .venv && source .venv/bin/activate
uv pip install -e ".[dev]"
pytest
```
MIT
Add this to claude_desktop_config.json and restart Claude Desktop.
```json
{
  "mcpServers": {
    "langfuse": {
      "command": "uvx",
      "args": ["--python", "3.11", "langfuse-mcp"],
      "env": {
        "LANGFUSE_PUBLIC_KEY": "pk-...",
        "LANGFUSE_SECRET_KEY": "sk-...",
        "LANGFUSE_HOST": "https://cloud.langfuse.com"
      }
    }
  }
}
```