Standalone MCP server for code structure analysis using tree-sitter. Directory trees, symbol definitions, and call graphs without reading raw source files. Supports Rust, Python, Go, Java, TypeScript, Fortran, JavaScript, C/C++, and C#. Benchmarked up to 68% fewer tokens vs native tools.
OpenSSF Silver certified: fewer than 1% of open source projects reach this level.
> [!NOTE]
> Native agent tools (regex search, path matching, file reading) handle targeted lookups well. aptu-coder handles the mechanical, non-AI work: mapping directory structure, extracting symbols, and tracing call graphs. Offloading this to a dedicated tool cuts token usage, speeds up tasks, and improves accuracy.
Auth migration task run with Claude Code against the Django (Python) source tree. Full methodology.
| Mode | Sonnet 4.6 | Haiku 4.5 |
|---|---|---|
| MCP | 112k tokens, $0.39 | 406k tokens, $0.42 |
| Native | 276k tokens, $0.95 | 473k tokens, $0.53 |
| Savings | 59% fewer tokens, 59% cheaper | 14% fewer tokens, 21% cheaper |
AeroDyn integration audit task run with Claude Code against the OpenFAST (Fortran) source tree. Full methodology.
| Mode | Sonnet 4.6 | Haiku 4.5 |
|---|---|---|
| MCP | 472k tokens, $1.65 | 687k tokens, $0.72 |
| Native | 877k tokens, $2.85 | 2162k tokens, $2.21 |
| Savings | 46% fewer tokens, 42% cheaper | 68% fewer tokens, 68% cheaper |
aptu-coder is a Model Context Protocol server that gives AI agents precise structural context about a codebase: directory trees, symbol definitions, and call graphs, without reading raw files. It supports Rust, Python, Go, Java, TypeScript, TSX, Fortran, JavaScript, C/C++, and C#, and integrates with any MCP-compatible orchestrator.
All languages are enabled by default. Disable individual languages at compile time via Cargo feature flags.
| Language | Extensions | Feature flag |
|---|---|---|
| Rust | `.rs` | `lang-rust` |
| Python | `.py` | `lang-python` |
| TypeScript | `.ts` | `lang-typescript` |
| TSX | `.tsx` | `lang-tsx` |
| Go | `.go` | `lang-go` |
| Java | `.java` | `lang-java` |
| Fortran | `.f`, `.f77`, `.f90`, `.f95`, `.f03`, `.f08`, `.for`, `.ftn` | `lang-fortran` |
| JavaScript | `.js`, `.mjs`, `.cjs` | `lang-javascript` |
| C | `.c` | `lang-cpp` |
| C++ | `.cc`, `.cpp`, `.cxx`, `.h`, `.hpp`, `.hxx` | `lang-cpp` |
| C# | `.cs` | `lang-csharp` |
To build with a subset of languages, disable default features and opt in:
```toml
[dependencies]
aptu-coder-core = { version = "*", default-features = false, features = ["lang-rust", "lang-python"] }
```

The current version is published on crates.io. Replace `"*"` with the latest version string if you prefer a pinned dependency.
```sh
brew install clouatre-labs/tap/aptu-coder
```

Update with `brew upgrade aptu-coder`.

Or install with cargo:

```sh
cargo binstall aptu-coder   # prebuilt binary, if cargo-binstall is available
cargo install aptu-coder    # build from crates.io
```

To build from a source checkout:

```sh
cargo build --release
```

The binary is at `target/release/aptu-coder`.
After installation via brew or cargo, register with the Claude Code CLI:

```sh
claude mcp add --transport stdio aptu-coder -- aptu-coder
```

If you built from source, use the binary path directly:

```sh
claude mcp add --transport stdio aptu-coder -- /path/to/repo/target/release/aptu-coder
```
stdio is intentional: this server runs locally and processes files directly on disk. The low-latency, zero-network-overhead transport matches the use case. Streamable HTTP adds a network hop with no benefit for a local tool.
Or add it manually to `.mcp.json` at your project root (shared with your team via version control):

```json
{
  "mcpServers": {
    "aptu-coder": {
      "command": "aptu-coder",
      "args": []
    }
  }
}
```
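Under the hood, Claude Code talks to the server as JSON-RPC 2.0 over stdin/stdout. A minimal Python sketch of the message shape involved; the `tools/call` method comes from the MCP specification, and the tool name and arguments mirror the examples in this README (this is an illustration, not aptu-coder's documented wire format):

```python
import json

# JSON-RPC 2.0 envelope helper for MCP messages sent over stdio.
def mcp_request(req_id, method, params):
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# MCP tool invocation: the client wraps the tool name and its arguments
# in a "tools/call" request.
call = mcp_request(1, "tools/call", {
    "name": "analyze_directory",
    "arguments": {"path": "/path/to/project", "max_depth": 2},
})

# Each request is serialized as a single JSON message on the server's stdin.
wire = json.dumps(call)
print(wire)
```

Registering via `claude mcp add` or `.mcp.json` means you never construct these messages yourself; the client does it for every tool call.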
All optional parameters may be omitted. The following optional parameters are shared by `analyze_directory`, `analyze_file`, and `analyze_symbol` (`analyze_module` does not support them):
| Parameter | Type | Default | Description |
|---|---|---|---|
| `summary` | boolean | auto | Compact output; auto-triggers above 50K chars |
| `cursor` | string | -- | Pagination cursor from a previous response's `next_cursor` |
| `page_size` | integer | 100 | Items per page |
| `force` | boolean | false | Bypass output size warning |
| `verbose` | boolean | false | `true` = full output with section headers and imports (Markdown-style headers in `analyze_directory`; adds `I:` section in `analyze_file`); `false` = compact format |
`summary=true` and `cursor` are mutually exclusive. Passing both returns an error.
`analyze_directory` -- Walks a directory tree and counts lines of code, functions, and classes per file. Respects `.gitignore` rules. Default output is a flat paginated list. Pass `verbose=true` for FILES / TEST FILES section headers, or `summary=true` for a compact STRUCTURE tree with aggregate counts.
Required: `path` (string) -- directory to analyze

Additional optional: `max_depth` (integer, default unlimited) -- recursion limit; use 2-3 for large monorepos
```
analyze_directory path: /path/to/project
analyze_directory path: /path/to/project max_depth: 2
analyze_directory path: /path/to/project summary: true
analyze_directory path: /path/to/project verbose: true
```
`analyze_file` -- Extracts functions, classes, and imports from a single file.
Required: `path` (string) -- file to analyze

Additional optional:

- `ast_recursion_limit` (integer, optional) -- tree-sitter AST traversal depth cap; leave unset for unlimited depth. Minimum value is 1; 0 is treated as unset.
- `fields` (array of strings, optional) -- limit output to specific sections. Valid values: `"functions"`, `"classes"`, `"imports"`. Omit to return all sections. The FILE header (path, line count, section counts) is always emitted regardless. Ignored when `summary=true`. When `"imports"` is listed explicitly, the `I:` section is rendered regardless of the `verbose` flag.

```
analyze_file path: /path/to/file.rs
analyze_file path: /path/to/file.rs page_size: 50
analyze_file path: /path/to/file.rs cursor: eyJvZmZzZXQiOjUwfQ==
```
`analyze_module` -- Extracts a minimal function/import index from a single file, with output roughly 75% smaller than `analyze_file`. Use it when you need function names and line numbers, or the import list, without signatures, types, or call graphs. Returns an actionable error if called on a directory path, steering you to `analyze_directory`.
Required: `path` (string) -- file to analyze

```
analyze_module path: /path/to/file.rs
```
`analyze_symbol` -- Builds a call graph for a named symbol across all files in a directory. Uses the sentinel values `<module>` (top-level calls) and `<reference>` (type references). Functions called more than 3 times show `(•N)` notation.
Required:

- `path` (string) -- directory to search
- `symbol` (string) -- symbol name, case-sensitive exact match

Additional optional:

- `follow_depth` (integer, default 1) -- call graph traversal depth
- `max_depth` (integer, default unlimited) -- directory recursion limit
- `ast_recursion_limit` (integer, optional) -- tree-sitter AST traversal depth cap; leave unset for unlimited depth. Minimum value is 1; 0 is treated as unset.
- `impl_only` (boolean, optional) -- when true, restrict callers to only those originating from an `impl Trait for Type` block (Rust only). Returns INVALID_PARAMS if the path contains no `.rs` files. Emits a `FILTER:` header showing how many callers were retained out of total.
- `match_mode` (string, default `exact`) -- symbol lookup strategy:
  - `exact`: case-sensitive exact match (default)
  - `insensitive`: case-insensitive exact match
  - `prefix`: case-insensitive prefix match; returns an error listing candidates when multiple symbols match
  - `contains`: case-insensitive substring match; returns an error listing candidates when multiple symbols match
All non-exact modes return an error with candidate names when the match is ambiguous; use the listed candidates to refine to a unique match.

The tool also returns `structuredContent` with typed arrays for programmatic consumption: `callers` (production callers), `test_callers` (callers from test files), and `callees` (direct callees), each as `Option<Vec<CallChainEntry>>`. A `CallChainEntry` has three fields: `symbol` (string), `file` (string), and `line` (JSON integer; `usize` in the Rust API). These arrays represent depth-1 relationships only; `follow_depth` does not affect them.
Example output:

```
FOCUS: format_structure_paginated (1 defs, 1 callers, 3 callees)
CALLERS (1-1 of 1):
format_structure_paginated <- analyze_directory
  <- format_structure_paginated
CALLEES: 3 (use cursor for callee pagination)
```
```
analyze_symbol path: /path/to/project symbol: my_function
analyze_symbol path: /path/to/project symbol: my_function follow_depth: 3
analyze_symbol path: /path/to/project symbol: my_function max_depth: 3 follow_depth: 2
```
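The `structuredContent` arrays lend themselves to scripting. A sketch that groups production callers by file; the record shape (`symbol`, `file`, `line`) follows the description above, but the sample data is invented for illustration:

```python
from collections import defaultdict

# Invented sample mirroring the documented CallChainEntry shape.
structured = {
    "callers": [
        {"symbol": "analyze_directory", "file": "src/tools.rs", "line": 42},
        {"symbol": "analyze_file", "file": "src/tools.rs", "line": 98},
        {"symbol": "main", "file": "src/main.rs", "line": 10},
    ],
    "test_callers": None,  # Option<Vec<...>> serializes as null when absent
    "callees": [],
}

# Group depth-1 production callers by file for a quick overview.
by_file = defaultdict(list)
for entry in structured["callers"] or []:
    by_file[entry["file"]].append((entry["line"], entry["symbol"]))

for path, hits in sorted(by_file.items()):
    print(path, hits)
```

Remember these arrays are depth-1 only; walking deeper requires repeated `analyze_symbol` calls or a higher `follow_depth` on the text output.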
For large codebases, two mechanisms prevent context overflow:
Pagination
`analyze_file` and `analyze_symbol` append a `NEXT_CURSOR:` line when output is truncated. Pass the token back as `cursor` to fetch the next page. `summary=true` and `cursor` are mutually exclusive; passing both returns an error.
```
# Response ends with:
NEXT_CURSOR: eyJvZmZzZXQiOjUwfQ==

# Fetch next page:
analyze_symbol path: /my/project symbol: my_function cursor: eyJvZmZzZXQiOjUwfQ==
```
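The cursor token should be treated as opaque and passed back verbatim. For intuition only, the example token above happens to be base64-encoded JSON carrying a pagination offset; a short sketch, with the caveat that this encoding is an implementation detail and may change:

```python
import base64
import json

cursor = "eyJvZmZzZXQiOjUwfQ=="

# Decode purely to illustrate what the server round-trips. Never construct
# a cursor yourself; always reuse the NEXT_CURSOR token from the response.
state = json.loads(base64.b64decode(cursor))
print(state)
```

The decoded state for this example is `{"offset": 50}`, i.e. "resume after the first 50 items".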
Summary Mode
When output exceeds 50K chars, the server auto-compacts results using aggregate statistics. Override with `summary: true` (force compact) or `summary: false` (disable).
```
# Force summary for large project
analyze_directory path: /huge/codebase summary: true

# Disable summary (get full details, may be large)
analyze_directory path: /project summary: false
```
In single-pass subagent sessions, prompt caches are written but never reused. Benchmarks showed MCP responses writing ~2x more to cache than native-only workflows, adding cost with no quality gain. Set `DISABLE_PROMPT_CACHING=1` (or `DISABLE_PROMPT_CACHING_HAIKU=1` for Haiku-specific pipelines) to avoid this overhead.
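One way to apply this is in the server entry itself: MCP client configs such as `.mcp.json` support a per-server `env` block. A sketch, with the variable value taken from the environment variables documented in this section:

```json
{
  "mcpServers": {
    "aptu-coder": {
      "command": "aptu-coder",
      "args": [],
      "env": {
        "DISABLE_PROMPT_CACHING": "1"
      }
    }
  }
}
```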
The server's own instructions expose a 4-step recommended workflow for unknown repositories: survey the repo root with analyze_directory at max_depth=2, drill into the source package, run analyze_module on key files for a function/import index (or analyze_file when signatures and types are needed), then use analyze_symbol to trace call graphs. MCP clients that surface server instructions will present this workflow automatically to the agent.
| Variable | Default | Description |
|---|---|---|
| `CODE_ANALYZE_FILE_CACHE_CAPACITY` | 100 | Maximum number of file-analysis results held in the in-process LRU cache. Increase for large repos where many files are queried repeatedly. |
| `CODE_ANALYZE_DIR_CACHE_CAPACITY` | 20 | Maximum number of directory-analysis results held in the in-process LRU cache. |
| `DISABLE_PROMPT_CACHING` | unset | Set to `1` to disable prompt caching (recommended for single-pass subagent sessions). |
| `DISABLE_PROMPT_CACHING_HAIKU` | unset | Set to `1` to disable prompt caching for Haiku-specific pipelines only. |
All four tools emit metrics to daily-rotated JSONL files at `$XDG_DATA_HOME/aptu-coder/` (fallback: `~/.local/share/aptu-coder/`). Each record captures tool name, duration, output size, and result status. Files are retained for 30 days. See `docs/OBSERVABILITY.md` for the full schema.
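The JSONL format makes ad-hoc analysis straightforward. A sketch that aggregates call counts and durations per tool; the field names used here (`tool`, `duration_ms`, `status`) are illustrative assumptions, so check `docs/OBSERVABILITY.md` for the actual schema:

```python
import io
import json
from collections import defaultdict

# Illustrative records; real field names are defined in docs/OBSERVABILITY.md.
log = io.StringIO(
    '{"tool": "analyze_file", "duration_ms": 12, "status": "ok"}\n'
    '{"tool": "analyze_file", "duration_ms": 8, "status": "ok"}\n'
    '{"tool": "analyze_symbol", "duration_ms": 40, "status": "error"}\n'
)

# JSONL: one JSON object per line, so aggregation is a simple loop.
totals = defaultdict(lambda: {"calls": 0, "ms": 0})
for line in log:
    rec = json.loads(line)
    totals[rec["tool"]]["calls"] += 1
    totals[rec["tool"]]["ms"] += rec["duration_ms"]

for tool, agg in sorted(totals.items()):
    print(tool, agg)
```

Point the same loop at the daily files under the metrics directory to track latency or error trends over the 30-day retention window.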
Apache-2.0. See LICENSE for details.
Add this to `claude_desktop_config.json` and restart Claude Desktop.
```json
{
  "mcpServers": {
    "code-analyze-mcp": {
      "command": "aptu-coder",
      "args": []
    }
  }
}
```