Dependency intelligence for AI agents. CVE scanning, health checks, upgrade planning.
4DA reads the internet for developers — privately, locally — and gets sharper every day.
It scans your codebase — Cargo.toml, package.json, go.mod, Git history — and scores every article, advisory, and release from 20+ sources against what you actually build. An item needs 2+ independent signals to survive. Everything else is rejected.
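As a sketch of what that manifest scan involves, here is a minimal, hypothetical Cargo.toml dependency extractor. The `dependency_names` function and its line-based parsing are illustrative only; a real scanner would use a proper TOML parser and cover all the manifest formats listed above.

```rust
// Illustrative sketch: pull direct dependency names out of a Cargo.toml.
// Not 4DA's actual scanner; uses only std for the sake of the example.
fn dependency_names(cargo_toml: &str) -> Vec<String> {
    let mut in_deps = false;
    let mut names = Vec::new();
    for line in cargo_toml.lines() {
        let line = line.trim();
        if line.starts_with('[') {
            // Only the plain [dependencies] table counts in this sketch.
            in_deps = line == "[dependencies]";
            continue;
        }
        if in_deps && !line.is_empty() && !line.starts_with('#') {
            if let Some(name) = line.split('=').next() {
                names.push(name.trim().to_string());
            }
        }
    }
    names
}
```

The extracted names are what the scoring engine would match incoming advisories and releases against.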
Tested across 9 developer personas: 92% of content is filtered as noise, 98% of actual noise is correctly rejected. Your real rejection rate — computed from your own data — is shown in the Evidence tab.
It learns from how you engage with what it shows you. Save something — topics boost, source reputation rises, your taste embedding sharpens. Dismiss something — anti-patterns form, future noise drops. Yesterday's noise becomes tomorrow's signal.
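A minimal sketch of that feedback loop, assuming a simple per-topic weight map; the constants and names here are illustrative, not 4DA's actual learning model.

```rust
use std::collections::HashMap;

// Illustrative constants: how much a save boosts, or a dismiss suppresses,
// a topic's learned weight. Not 4DA's real tuning.
const BOOST: f64 = 0.1;
const SUPPRESS: f64 = 0.1;

/// Apply save (boost) or dismiss (suppress) feedback to each topic weight,
/// keeping weights in [-1.0, 1.0].
fn apply_feedback(weights: &mut HashMap<String, f64>, topics: &[&str], saved: bool) {
    for t in topics {
        let w = weights.entry(t.to_string()).or_insert(0.0);
        *w += if saved { BOOST } else { -SUPPRESS };
        *w = w.clamp(-1.0, 1.0);
    }
}
```

Over time, positive weights boost future items on those topics and negative weights suppress them, which is how yesterday's noise can become tomorrow's signal.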
Already using Claude Code, Cursor, or Windsurf? One command:
```
npx @4da/mcp-server
```
This scans your project, detects your stack, and gives your AI assistant live vulnerability scanning, dependency health, upgrade planning, and ecosystem intelligence. No API keys. No accounts. Works standalone — no desktop app required. Full MCP documentation.
5 independent signal axes. An item must pass 2 or more to surface. Single-axis matches are hard-capped at 28% — no matter how strong one signal is, it cannot pass alone.
| Axis | What it measures |
|---|---|
| Context | Semantic similarity to your active codebase |
| Interest | Alignment with your declared and learned topics |
| ACE | Real-time signals from your Git commits and file edits |
| Dependency | Direct matches against your installed packages |
| Learned | Save/dismiss feedback boosts or suppresses future scores |
What passes the gate goes through 12 quality multipliers, including content depth, novelty detection, competing-tech penalties, title-body coherence, and intent scoring from recent work. Every constant is calibrated across 9 simulated developer personas with 215 labeled test items.
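The confirmation gate can be sketched as follows. Only the 2-of-5 rule and the 28% single-axis cap come from this README; the 0.5 axis threshold and the simple averaging are assumptions made for the example.

```rust
/// Per-axis scores, each in [0.0, 1.0].
struct AxisScores {
    context: f64,
    interest: f64,
    ace: f64,
    dependency: f64,
    learned: f64,
}

const SINGLE_AXIS_CAP: f64 = 0.28; // hard cap when fewer than 2 axes confirm
const AXIS_THRESHOLD: f64 = 0.5;   // assumption: when an axis "counts"

/// Sketch of the 2-of-5 confirmation gate: an item needs at least two
/// confirming axes to pass; otherwise its score is capped at 28%.
fn gated_score(s: &AxisScores) -> f64 {
    let axes = [s.context, s.interest, s.ace, s.dependency, s.learned];
    let confirming = axes.iter().filter(|&&a| a >= AXIS_THRESHOLD).count();
    let raw: f64 = axes.iter().sum::<f64>() / axes.len() as f64;
    if confirming >= 2 {
        raw // passed the gate; quality multipliers apply downstream
    } else {
        raw.min(SINGLE_AXIS_CAP) // a single axis can never pass alone
    }
}
```

The point of the cap: a perfect score on one axis (say, a keyword-stuffed title matching your interests) still cannot beat a modest score confirmed by two independent axes.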
After keyword scoring, an LLM layer verifies the top items against your full developer context — stack, dependencies, recent commits, anti-technologies, and engagement history — grading each on a strict 1-5 rubric.
This is where the gold surfaces — articles the keyword pipeline misses because there's no keyword overlap, but the LLM understands the conceptual relevance to your specific project.
You own the compute. Use Ollama for free local inference (fully private), or bring your own Anthropic/OpenAI key. 4DA never pays for your compute, never stores your keys remotely, never makes API calls you didn't configure.
Content creators who learn the scoring algorithm still can't game it:
No algorithm can be gamed when the scoring signal comes from your local filesystem. Your Cargo.lock doesn't lie.
4DA is local-first and direct-to-provider. There is no 4DA-operated server, no analytics, and no user account system. Your indexed content, scores, and intelligence live in a SQLite database on your machine.
The only outbound traffic:
| Category | Where | Why |
|---|---|---|
| Source adapters | HN, GitHub, Reddit, arXiv, etc. | Fetching public content you configured |
| LLM providers | Anthropic / OpenAI / localhost Ollama | Only if YOU set up BYOK keys |
| License validation | Keygen | Only if you activated a paid license |
| Updater | GitHub Releases | Signed via minisign, once per session |
| Crash reports | Sentry | Off by default. Opt-in only. |
That's the whole list. There is no 4DA telemetry endpoint because there is no 4DA cloud.
Don't take our word for it:
| Document | What it covers |
|---|---|
| Network Transparency | Every outbound connection, with source code references |
| Trust Architecture | Why local-first means you don't need to trust us |
| Privacy (Plain Language) | One-page, no-legalese privacy summary |
| Security Audit Guide | Map of trust-critical code paths for auditors |
| Build from Source | Compile it yourself and verify the binary |
Pre-built binaries — no Rust toolchain required.
| Platform | Download | Auto-updates |
|---|---|---|
| Windows | .exe installer | Yes |
| macOS | .dmg (Apple Silicon & Intel) | Yes |
| Linux | .AppImage / .deb | Yes |
Every release publishes SHASUMS256.txt and per-file .sha256 sidecars. Verification instructions.
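A hedged example of the checksum verification flow, assuming GNU coreutils `sha256sum` (on macOS, substitute `shasum -a 256`). The demo file here stands in for a downloaded release asset.

```shell
# Demo: create a file and a checksum list, then verify it.
echo "demo" > artifact.bin
sha256sum artifact.bin > SHASUMS256.txt

# The same command verifies a real release: run it in your download
# directory; --ignore-missing skips assets you didn't download.
sha256sum --check --ignore-missing SHASUMS256.txt
```

A single file can also be checked against its `.sha256` sidecar with `sha256sum --check <file>.sha256`.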
Windows users: SmartScreen will prompt on first launch (new application, building reputation). Click More info → Run anyway. Full details.
Or install the MCP server for Claude Code / Cursor / Windsurf:
```
npx @4da/mcp-server
```
```
git clone https://github.com/runyourempire/4DA.git
cd 4DA
pnpm install
pnpm tauri dev   # First build: 5-15 min. Dev server: localhost:4444.
```
Prerequisites: Rust (1.93.1 via rust-toolchain.toml), Node.js 20, pnpm 9.15. Platform-specific: Windows needs VS Build Tools 2022 with C++ workload. Full build guide.
First-run setup (API keys, context dirs, sources): Getting Started.
```
 Your Codebase             External Sources
       |                          |
       v                          v
 +-----------+            +--------------+
 | ACE       |            | 20+ Source   |
 | Scanner + |            | Adapters     |
 | Git Watch |            | (background) |
 +-----+-----+            +------+-------+
       |                         |
       v                         v
 +------------------------------------------+
 |          5-Axis Scoring Engine           |
 |                                          |
 |  context --+                             |
 |  interest --+- confirmation gate (2+/5)  |
 |  ace -------+                            |
 |  dependency-+   x quality x novelty      |
 |  learned ---+   x domain x intent        |
 +------------------+-----------------------+
                    |
                    v
           +-----------------+
           |  What survived  |
           +-----------------+
```
| Layer | Technology |
|---|---|
| App Shell | Tauri 2.0 (Rust backend + WebView) |
| Frontend | React 19 + TypeScript + Tailwind CSS v4 |
| Database | SQLite 3.45+ with sqlite-vec (vector search) |
| Scoring | Custom pipeline → build-time Rust codegen |
| Embeddings | OpenAI text-embedding-3-small / Ollama |
| LLM | Anthropic Claude / OpenAI / Ollama (BYOK) |
Free — $0 forever. No credit card. No account. No expiration.
Signal — $12/month or $99/year (14-day free trial).
Free is not a demo. It's the full scoring engine, all sources, behavior learning, and MCP integration.
Plug your intelligence system directly into Claude Code, Cursor, Windsurf, VS Code (Copilot), or any MCP-compatible tool.
```
npx @4da/mcp-server
```
9 tools work standalone with zero setup (vulnerability scanning, dependency health, upgrade planning, ecosystem news, pre-task briefings, decision memory, agent memory). 5 more activate with the desktop app (scored content feed, actionable signals, knowledge gaps, feedback learning, developer DNA). Every tool reliably returns useful data. Full tool reference.
Reads from the same database as the desktop app. No extra setup.
```
4da briefing             # Latest AI briefing
4da signals              # All classified signals
4da signals --critical   # Critical/high priority only
4da gaps                 # Knowledge gaps in your dependencies
4da health               # Project dependency health
4da status               # Database stats
```
Brief — today's top picks and live signal stream scored against your stack
Preemption — forward-looking intelligence: CVEs, breaking changes, dependency risks
Blind Spots — coverage gaps and high-relevance items you never saw
Signal — the items that earn their place, confirmed through 2+ independent axes
4DA is built by a solo engineer with AI-assisted development (Claude Code). All code is human-reviewed. The test suite (3,400+ tests across Rust and TypeScript) and CI pipeline verify correctness on every commit. The scoring algorithm is hand-designed and benchmarked against 9 developer personas with labeled test data.
```
pnpm tauri dev     # Dev server (localhost:4444)
cargo test         # Rust tests (from src-tauri/)
pnpm test          # Frontend tests
pnpm validate:all  # Full validation (lint + types + tests + build)
```
The scoring claims in this README are tested, not asserted. The benchmark suite runs the full PASIFA pipeline against 9 simulated developer personas (Rust systems, Python ML, fullstack TypeScript, DevOps/SRE, mobile, bootstrap/first-run, power user, stack switcher, niche specialist) with labeled test items scored as relevant or noise.
```
cd src-tauri
cargo test scoring::benchmark -- --nocapture    # Full benchmark with output
cargo test scoring::simulation -- --nocapture   # Persona simulation suite
```
Source: src-tauri/src/scoring/benchmark.rs (1,335 lines, 27 tests) and src-tauri/src/scoring/simulation/ (persona definitions, domain embeddings, enrichment data).
FSL-1.1-Apache-2.0 — source available. Free to use, inspect, and modify for any purpose except building a competing product. Every release converts to Apache 2.0 two years after publication — after that, no restrictions at all.
4DA — 4 Dimensional Autonomy
All signal. No feed.
"4DA" and the 4DA logo are trademarks of 4DA Systems Pty Ltd (ACN 696 078 841). The FSL-1.1-Apache-2.0 license does not grant rights to use these trademarks.
Add this to `claude_desktop_config.json` and restart Claude Desktop.
```json
{
  "mcpServers": {
    "4da-mcp-server": {
      "command": "npx",
      "args": ["@4da/mcp-server"]
    }
  }
}
```