Signed, content-addressed, cite-able memory of every place on Earth. Eight read primitives, ed25519 receipts, no keys for reads. Apache-2.0, pure Rust, open data only.
Live at emem.dev — GET /health, POST /v1/recall,
or paste https://emem.dev/mcp into any MCP client.
| | | |
|---|---|---|
| GET /health | POST /v1/recall | POST /v1/find_similar |
| GET /v1/agent_card | POST /v1/compare | POST /v1/diff |
| GET /openapi.json | POST /v1/query_region | POST /v1/trajectory |
| GET /.well-known/emem.json | POST /v1/verify | POST /v1/intent |
| GET /v1/demos | POST /v1/attest | GET /mcp (discover) |
| GET /v1/grid_info | POST /v1/recall_many | POST /mcp (jsonrpc) |
| GET /v1/bands | POST /v1/recall_polygon | POST /v1/locate |
| | POST /v1/verify_receipt | GET /v1/facts/:cid |
A protocol, not a service. Every fact about every place is a tuple
(cell, band, tslot); the canonical CBOR of that tuple hashes to a stable
CID. Every read is signed with the responder's ed25519 key, so any client
can verify offline against the pubkey at /.well-known/emem.json.
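As a rough illustration of the content-addressing idea only (the normative canonical-CBOR layout and the blake3 digest are specified in docs/SPEC.md; stdlib blake2b stands in for blake3 here, and the field values are made up):

```python
import base64
import hashlib
import struct

def cbor_encode(value):
    """Tiny canonical-CBOR encoder for the subset this sketch needs:
    unsigned ints, text strings, and arrays, with shortest-form heads."""
    def head(major, n):
        if n < 24:
            return bytes([(major << 5) | n])
        if n < 256:
            return bytes([(major << 5) | 24, n])
        if n < 65536:
            return bytes([(major << 5) | 25]) + struct.pack(">H", n)
        return bytes([(major << 5) | 26]) + struct.pack(">I", n)
    if isinstance(value, int):
        return head(0, value)
    if isinstance(value, str):
        b = value.encode()
        return head(3, len(b)) + b
    if isinstance(value, (list, tuple)):
        return head(4, len(value)) + b"".join(cbor_encode(v) for v in value)
    raise TypeError(type(value))

def fact_cid(cell, band, tslot):
    # Hash the canonical encoding of the (cell, band, tslot) tuple, then
    # render as base32-nopad-lowercase. blake2b is a stdlib stand-in for
    # blake3; the real digest and tuple layout are docs/SPEC.md details.
    digest = hashlib.blake2b(cbor_encode((cell, band, tslot)),
                             digest_size=32).digest()
    return base64.b32encode(digest).decode().rstrip("=").lower()

cid = fact_cid("damO.zb000.xUti.zde78", "elevation", 12345)
```

Same tuple in, same CID out, on any machine: that determinism is what makes the facts cite-able.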
LLMs confabulate spatial facts. emem gives them something to cite: a stable CID and a signed receipt for every answer.
Agents talk to it over plain REST, MCP Streamable HTTP, or OpenAPI 3.1. All three are the same wire — pick whichever your host already speaks.
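For example, the MCP envelope below requests the same fact as a REST POST /v1/recall with body `{"cell": "…"}` (the tool name `recall` is an assumption here; list the actual tool names with GET /mcp):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": { "cell": "damO.zb000.xUti.zde78" }
  }
}
```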
The canonical image lives at ghcr.io/vortx-ai/emem:latest (multi-arch
linux/amd64 + linux/arm64, anonymously pullable, ~30 MB compressed).
Tags: latest, <short-sha>, <vX.Y.Z> on releases.
```sh
# Pull (verifies image exists + caches it).
docker pull ghcr.io/vortx-ai/emem:latest

# Run with a persistent volume so attestations survive restarts.
docker run --rm -p 5051:5051 -v emem-data:/var/emem \
  ghcr.io/vortx-ai/emem:latest

# Smoke-check from another shell.
curl -s http://localhost:5051/health | jq .
curl -s http://localhost:5051/v1/agent_card | jq '.serverInfo, .runtime'
```
Or pin to a specific release:
```sh
docker pull ghcr.io/vortx-ai/emem:v0.0.3
```
The image is built by .github/workflows/publish.yml on every push to
main. Provenance and SBOM are attached — verify with
cosign verify-attestation (see docs/PUBLISHING.md).
A hosted instance lives at
huggingface.co/spaces/vortx-ai/emem.
Hit ${SPACE_URL}/mcp from any MCP client to talk to it.
```sh
# 1) Build the workspace.
cargo build --release --workspace

# 2) Run the server (defaults: 0.0.0.0:5051, persistent storage at ./var/emem).
EMEM_BIND=0.0.0.0:5051 EMEM_DATA=./var/emem ./target/release/emem-server

# 3) Hit it.
curl -s http://localhost:5051/health
curl -s -X POST http://localhost:5051/v1/recall \
  -H 'content-type: application/json' \
  -d '{"cell":"damO.zb000.xUti.zde78"}'   # Mt Fuji
```
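The same recall from Python, using only the standard library (a sketch against a locally running emem-server; the response schema is whatever the server returns):

```python
import json
import urllib.request

EMEM = "http://localhost:5051"  # assumes a local emem-server on the default port

def recall_request(cell, base=EMEM):
    """Build the POST /v1/recall request without sending it."""
    return urllib.request.Request(
        f"{base}/v1/recall",
        data=json.dumps({"cell": cell}).encode(),
        headers={"content-type": "application/json"},
        method="POST",
    )

def recall(cell, base=EMEM):
    """Send the request and parse the JSON reply."""
    with urllib.request.urlopen(recall_request(cell, base)) as resp:
        return json.load(resp)
```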
The hosted endpoint is https://emem.dev/mcp — Streamable HTTP, no auth,
28 tools. Paste-ready configs live under examples/:
| platform | file |
|---|---|
| Claude Desktop | examples/claude-desktop.json |
| Claude Code | examples/claude-code.mcp.json |
| Cursor | examples/cursor.mcp.json |
| Cline (VS Code) | examples/cline.mcp.json |
| OpenAI GPT | examples/openai-gpt-action.json |
| LangChain | examples/langchain.py |
| LlamaIndex | examples/llamaindex.py |
Per-client wiring + smoke tests in docs/CLIENTS.md; agent loop guide in docs/AGENTS.md.
Two CLI binaries exercise every primitive end-to-end and dump per-step
request + response + receipt files to var/demos/<UTC>/:
```sh
./target/release/emem-livedemo   # synthetic data, every primitive
./target/release/emem-realdemo   # real Copernicus DEM 30m S3 tiles
```
Trace artifacts surface at GET /v1/demos. Two trial reports against
the live endpoint live at docs/AGENT_TRIAL.md
(single-agent loop) and docs/GLOBAL_TRIAL.md
(43 fixtures across nine place-types; both run with scripts/global_trial.py).
```
              ┌──────────────┐          ┌────────────────────┐
user ──────►  │   AI agent   │  ──────► │   emem responder   │
              │  (Claude /   │  /v1/    │  ┌──────────────┐  │
              │   Cursor /   │  /mcp    │  │ ed25519 key  │  │
              │   GPT / etc) │          │  └──────────────┘  │
              └──────┬───────┘          │  ┌──────────────┐  │
                     │                  │  │  sled cache  │  │
                     │ signed receipt   │  └──────────────┘  │
                     ▼                  │  ┌──────────────┐  │
              ┌──────────────┐          │  │  merkle log  │  │
              │  user reply  │          │  └──────────────┘  │
              │  + cid       │          │  ┌──────────────┐  │
              └──────────────┘          │  │ vsicurl COG  │ ──► open data
                                        │  └──────────────┘  │  (Cop-DEM, JRC,
                                        └────────────────────┘   Hansen, ESA…)
```
Address algebra (token cost)
| field | bits | wire form | tokens |
|---|---|---|---|
| cell | 64 | 4 BPE bigrams | ≤ 4 |
| tslot | 64 | base32 short | ≤ 2 |
| vec | 1792-D fp16 | 12-byte prefix | ≤ 3 |
| cid | 32 B | 8-byte prefix | ≤ 3 |
Crypto: blake3 hashing, ed25519 signatures, base32-nopad-lowercase CIDs.
Receipts are signed over blake3(request_id || served_at || primitive || cells || fact_cids), so any client can verify them offline against the responder pubkey in /.well-known/emem.json.
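A minimal sketch of building that preimage (field order taken from the line above; the exact byte framing, e.g. length prefixes versus raw concatenation, is a docs/SPEC.md detail, and stdlib blake2b stands in for blake3):

```python
import hashlib

def receipt_preimage(request_id, served_at, primitive, cells, fact_cids):
    # Feed the receipt fields into the hash in the documented order.
    # blake2b is a stdlib stand-in for blake3; see docs/SPEC.md for
    # the normative framing.
    h = hashlib.blake2b(digest_size=32)
    for part in (request_id, served_at, primitive, *cells, *fact_cids):
        h.update(part.encode())
    return h.hexdigest()

digest = receipt_preimage("req-1", "2025-01-01T00:00:00Z", "recall",
                          ["damO.zb000.xUti.zde78"], ["cid-prefix"])
```

A client recomputes this digest from the response it received and checks the receipt's ed25519 signature over it against the pubkey fetched from /.well-known/emem.json (e.g. with ed25519-dalek in Rust or PyNaCl in Python).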
Full math + architecture in docs/WHITEPAPER.md. Wire-format spec in docs/SPEC.md.
emem ships with only open-source dependencies and reads only from open-data providers in its default build. No API keys, no operator credentials, no SaaS lock-in.
| concern | how it's handled |
|---|---|
| code license | Apache-2.0 (this repo) |
| crate licenses | All deps are MIT / Apache-2.0 / BSD / ISC — see NOTICE |
| data licenses | Copernicus DEM (open), JRC GSW (CC-BY 4.0), Hansen GFC (open), ESA WorldCover (CC-BY 4.0), GHSL / WorldPop (CC-BY 4.0), OSM (ODbL) — see NOTICE |
| auth | none for L0/L1 reads; ed25519 attester key for L2 writes |
| transport | HTTPS via in-process rustls + Let's Encrypt ACME (no Cloudflare, no proxies) |
emem/
├── Cargo.toml # workspace root
├── crates/
│ ├── emem-core/ # types, manifests, errors
│ ├── emem-codec/ # cell64, cid64, vec64, hilbert
│ ├── emem-fact/ # canonical CBOR + facts + receipts
│ ├── emem-claim/ # structured claims, verify outcomes
│ ├── emem-cache/ # sled hot cache (cell64 → cid64 → fact)
│ ├── emem-fetch/ # vsicurl Range reads, source connectors
│ ├── emem-storage/ # Storage trait, append-only merkle log
│ ├── emem-cubes/ # 1792-D voxel cube loader (legacy AgriSynth bootstrap)
│ ├── emem-primitives/ # recall, compare, find_similar, …
│ ├── emem-attest/ # merkle root, batch verify
│ ├── emem-intent/ # intent → plan
│ ├── emem-mcp/ # MCP tool surface
│ ├── emem-api-rest/ # axum router + OpenAPI + content nego
│ └── emem-cli/ # emem-server, emem-livedemo, emem-realdemo
├── docs/ # SPEC, WHITEPAPER, AGENTS, DEPLOY
├── examples/ # paste-ready MCP configs
└── web/ # landing surface (HTML, JSON, llms.txt)
For a full multi-channel rollout (GitHub public, GHCR, Docker Hub mirror, HuggingFace Space, MCP Server Registry, awesome-mcp-servers PR), follow docs/GO_LIVE.md.
See docs/DEPLOY.md for the full deploy story for a
self-hosted bare-metal emem.dev-style instance.
TL;DR for emem.dev:
```sh
EMEM_TLS_DOMAINS=emem.dev,www.emem.dev EMEM_TLS_CONTACT=mailto:[email protected] \
  ./target/release/emem-server
```

Open :443 in your cloud security list, grant the binary the low-port capability with setcap 'cap_net_bind_service=+ep' ./target/release/emem-server, and point emem.dev's A record at the host's public IP. Done. The server does its own TLS + Let's Encrypt ACME via rustls-acme /
TLS-ALPN-01 (only :443 is needed; no :80, no Cloudflare, no Caddy).
Issues and PRs welcome — see CONTRIBUTING.md for the dev loop, CODE_OF_CONDUCT.md, and SECURITY.md for vulnerability disclosure.
Add this to claude_desktop_config.json and restart Claude Desktop (mcp-remote bridges Claude Desktop's stdio transport to the Streamable HTTP endpoint):

```json
{
  "mcpServers": {
    "emem": {
      "command": "npx",
      "args": ["mcp-remote", "https://emem.dev/mcp"]
    }
  }
}
```