An MCP server that emits OpenTelemetry traces, metrics, and logs to one or more OTLP endpoints at the same time, in any of the three OTLP wire formats:
- `grpc` — OTLP/gRPC (protobuf over HTTP/2)
- `http/protobuf` — OTLP/HTTP binary protobuf
- `http/json` — OTLP/HTTP proto3-JSON (spec-compliant)

It also ships tool-mimicry profiles that produce realistic signal bundles shaped like well-known tools (nginx, postgres, redis, kafka, aws-lambda, a Kubernetes pod, a generic gRPC service), so you can populate a collector or backend with traffic that looks like a real environment.
```shell
uv venv
uv pip install -e .
otel-mcp  # stdio transport — wire into any MCP client
```
Or via an MCP client config (e.g. Claude Desktop, Claude Code):
```json
{
  "mcpServers": {
    "otel": {
      "command": "otel-mcp"
    }
  }
}
```
If `OTEL_EXPORTER_OTLP_ENDPOINT` is set on launch, an endpoint named `default` is registered using the standard OTEL env vars:

- `OTEL_EXPORTER_OTLP_ENDPOINT`
- `OTEL_EXPORTER_OTLP_PROTOCOL` (`grpc` | `http/protobuf` | `http/json`)
- `OTEL_EXPORTER_OTLP_HEADERS` (comma-separated `k=v` pairs)
- `OTEL_EXPORTER_OTLP_INSECURE` (gRPC TLS toggle)

| Tool | Purpose |
|---|---|
| `add_endpoint` | Register a named OTLP destination (url, protocol, signals, headers, …). |
| `remove_endpoint` | Drop one endpoint by name. |
| `clear_endpoints` | Drop every endpoint. |
| `list_endpoints` | Enumerate current endpoints. |
| `status` | Endpoints plus available mimic profiles. |
Endpoints are selected per call: every signal-emitting tool accepts an `endpoints: [names]` arg. Omit it to fan out to every endpoint that accepts that signal type.
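The selection rule above can be sketched in Python. This is a hypothetical helper, not the server's actual code; the `Endpoint` fields and function name are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    """Stand-in for a registered OTLP destination (illustrative only)."""
    name: str
    url: str
    signals: set = field(default_factory=lambda: {"traces", "metrics", "logs"})

def select_endpoints(registry, signal, names=None):
    """Pick endpoints for one emit call.

    If `names` is given, use exactly those endpoints; otherwise fan out
    to every endpoint that accepts the signal type.
    """
    if names:
        return [registry[n] for n in names]
    return [ep for ep in registry.values() if signal in ep.signals]

registry = {
    "collector": Endpoint("collector", "http://localhost:4318"),
    "traces-only": Endpoint("traces-only", "http://localhost:4319", {"traces"}),
}

# Omitting `endpoints` fans metrics out only to endpoints that accept metrics.
assert [ep.name for ep in select_endpoints(registry, "metrics")] == ["collector"]
```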
| Tool | Shape of input |
|---|---|
| `send_trace` | `{service_name, spans[]}` — each span has name, kind, attributes, duration_ms, status, events[], optional parent_name for nesting. |
| `send_metric` | `{service_name, metrics[]}` — each metric has name, kind (counter/up_down_counter/gauge/histogram), unit, description, points[]. Histograms accept a list of samples per point. |
| `send_log` | `{service_name, records[]}` — each record has body, severity, severity_text, attributes, timestamp_ns. |
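To make the shapes above concrete, here is what a `send_metric` call body might look like. The top-level field names come from the table; the per-point key names (`samples`, `value`) are guesses at the server's schema, not confirmed.

```python
# Hypothetical send_metric arguments; point-level keys are assumptions.
send_metric_args = {
    "service_name": "checkout",
    "metrics": [
        {
            "name": "http.server.request.duration",
            "kind": "histogram",
            "unit": "ms",
            "description": "Request latency",
            # Histograms accept a list of samples per point.
            "points": [{"samples": [12.5, 48.0, 7.3]}],
        },
        {
            "name": "http.server.requests",
            "kind": "counter",
            "unit": "{request}",
            "points": [{"value": 3}],
        },
    ],
}
```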
| Tool | Purpose |
|---|---|
| `list_mimic_profiles` | Show every profile with its parameters. |
| `mimic_tool` | Run one profile and send its bundle once. |
| `generate_load` | Run a profile in a loop to simulate sustained traffic. |
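The loop behavior of `generate_load` can be sketched as follows. This is a hypothetical re-implementation of the described semantics, not the server's code.

```python
import time

def generate_load(run_profile, iterations, interval_seconds):
    """Run a profile repeatedly, sleeping between iterations."""
    for i in range(iterations):
        run_profile()  # build and send one mimic bundle, as mimic_tool does once
        if i < iterations - 1:
            time.sleep(interval_seconds)

sent = []
generate_load(lambda: sent.append("bundle"), iterations=3, interval_seconds=0)
assert len(sent) == 3
```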
Built-in profiles:
| Profile | What it looks like |
|---|---|
| `nginx` / `http-server` | Server spans with HTTP semconv, access logs, request counters & latency histograms. |
| `postgres` | DB client spans with `db.system=postgresql` + connection pool metrics. |
| `redis` | DB client spans with `db.system=redis`. |
| `kafka` | Producer/consumer spans with messaging semconv. |
| `aws-lambda` | Server spans with `faas.*` + `cloud.*` resource attrs, Lambda access logs, invocation metrics. |
| `k8s-pod` | Resource = full `k8s.*` attributes, container cpu/memory/network metrics, "Started" event log. |
| `grpc` | Server spans with `rpc.system=grpc`. |
```
> add_endpoint name="otel-collector" url="http://localhost:4318" protocol="http/protobuf"
> add_endpoint name="jaeger-json" url="http://localhost:4318" protocol="http/json" signals=["traces"]

> mimic_tool profile="nginx" options={"count": 50, "error_rate": 0.1}
> mimic_tool profile="postgres"
> generate_load profile="kafka" iterations=10 interval_seconds=2

> send_trace service_name="checkout" spans=[
    {"name": "POST /checkout", "kind": "server", "duration_ms": 42, "attributes": {"http.response.status_code": 200}},
    {"name": "charge_card", "kind": "client", "parent_name": "POST /checkout", "duration_ms": 18,
     "attributes": {"peer.service": "stripe"}}
  ]
```
To add a new profile:

- Build a `MimicBundle` in `src/otel_mcp/mimics.py`.
- Register it in the `PROFILES` dict at the bottom of that file.
- Call `list_mimic_profiles` to confirm it picked up the parameters.

Profiles stay declarative: each returns span/metric/log specs that flow through the same generator pipeline, so they inherit correct resource merging, wire format support, and fan-out automatically.
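A new profile might look like the sketch below. The real `MimicBundle` lives in `src/otel_mcp/mimics.py` and its fields may differ; the stand-in class, the `cron` profile name, and the spec dicts here are all assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class MimicBundle:
    """Stand-in for the real MimicBundle; actual fields may differ."""
    spans: list = field(default_factory=list)
    metrics: list = field(default_factory=list)
    logs: list = field(default_factory=list)

def cron_profile(count: int = 5) -> MimicBundle:
    """Hypothetical 'cron' profile: one internal span + one log per tick."""
    bundle = MimicBundle()
    for i in range(count):
        bundle.spans.append({"name": "cron.tick", "kind": "internal",
                             "duration_ms": 3})
        bundle.logs.append({"body": f"tick {i}", "severity": "info"})
    return bundle

b = cron_profile(count=2)
assert len(b.spans) == 2 and len(b.logs) == 2
```

Because the profile only returns declarative specs, the shared generator pipeline would handle resource merging, wire formats, and endpoint fan-out.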
Add this to `claude_desktop_config.json` and restart Claude Desktop.
```json
{
  "mcpServers": {
    "otel-mcp": {
      "command": "otel-mcp"
    }
  }
}
```