87+ specialized tools for German and European energy data. Direct AI access to Marktstammdatenregister (MaStR), ENTSO-E, Redispatch 2.0, and Grid Operations for utilities and datacenters.
MicroService Agent System for Energy Markets
A modular, scalable microservices platform built with Moleculer for developing energy market applications with AI integration (Google Gemini) and MCP (Model Context Protocol) support.
Key capabilities:

- Research Web App (`/app`) for interactive, browser-based testing of the AI agent — no separate tooling required
- Live CSV endpoints (`/api/agent/session/:id/csv?param=value`) for zero-config integration with automation tools such as Microsoft Power Automate, Excel Power Query, or cron jobs
- Named datapoints via `/api/datapoints`. See the health overview for a dashboard of all registered datapoints.
- 📸 Snapshots — Seal a group of datapoints as a consistent unit with SHA-256 provenance hashing. Create, validate (drift detection), list, and remove snapshots via `/api/datapoints/snapshot*` (v0.13)
- Open Energy Platform integration via `/api/oep/*` (v0.12)
- Netzanschluss validation (`POST /api/grid-connection/validate`): inventory → delta → capacity → EWK benchmark → Go/No-Go decision → audit trail. No LLM — identical inputs, identical findings. Reports sealed with PouchDB snapshots for EU AI Act Art. 12 compliance (v0.14)
- Energy Sharing validation (`POST /api/energy-sharing/validate`): generator/consumer eligibility, MaLo validation, share-sum check, DV validation. Regulatory deadline: 01.06.2026 (v0.15)
- MaStR quality audit (`POST /api/mastr-quality/audit`): registration completeness, capacity plausibility, NAP/MeLo connectivity, duplicate detection, geo spot-check. Weighted 0–100 score across 5 dimensions (v0.17)
- Redispatch Ex-Post audit (`POST /api/redispatch/audit`): portfolio assembly (Weg A/B), NAP/MeLo/DV checks, curtailment data, financial risk scoring (v0.18)
- Dashboards (`GET /api/dashboard/*`): VNB overview, market snapshot, quality summary, finding-codes reference. All upstream calls parallel via Promise.allSettled, graceful degradation, 5–15 min cache (v0.19)
- API cookbook (`GET /api/cookbook`) — discover common query patterns for the AI agent and REST API (v0.20.5)
- Company data (`POST /api/company/companies`) (v0.20.3)
- Object store (`GET/PUT/DELETE /api/object-store/:namespace/:key`) (v0.20.4)
- ZNP projects (`POST /api/znp/projects`) (v0.20.4)
- NOVA SSE decision feed (`GET /api/nova/stream`) (v0.24)
- CYA narrative agent (`POST /api/cya/generate`) (v0.26)
- MaStR Monitor watches (`POST /api/mastr-monitor/watches`) (v0.27)
- Virtual EDM endpoints (`POST /api/edm/virtual/*`) (v0.29)
- Full OpenAPI documentation at `/api/docs`

CI & releases:

- Pushes to `main` run automated quality checks (lint, build, unit coverage gates, integration discovery sanity, OpenAPI audit, security audits).
- Tags (`v*`) trigger a release pipeline (`release:check` + build + GitHub Release).
- `llm.txt` is validated in release checks and regenerated from source-of-truth files via `npm run generate:llm`.
- `llm.txt` sync is checked strictly when `CHANGELOG.md` changes.
- Pull requests target `main` and require Maintenance CI + CodeQL checks before merge.

# Clone the repository
git clone https://github.com/energychain/cernion-energy-tools.git
cd cernion-energy-tools
# Install dependencies
npm install
# Copy environment variables
cp .env.example .env
# Edit .env and add your API keys (see Configuration section)
nano .env
# Start all services
npm start
# Or use development mode with hot reload
npm run dev
The API Gateway will start on http://localhost:3000 by default.
| URL | Description |
|---|---|
| http://localhost:3000/app | Research Web App — AI agent UI for interactive testing |
| http://localhost:3000/api/docs | Swagger UI — full OpenAPI documentation |
| http://localhost:3000/api/openapi.json | Raw OpenAPI spec |
# Call a microservice action
npm run cli -- skeleton.hello --name=John
# Health check
npm run cli -- skeleton.health
# Get help
npm run cli -- --help
The built-in web application at /app lets you explore all microservices using plain-text natural language — no curl, no Swagger form, no coding required.
1. **Describe your question** — type in plain English or German, e.g. "Alle PV-Anlagen im Netz der Enercity in Hannover"
2. **Review the plan** — the AI decomposes the question into a numbered sequence of microservice calls and shows you exactly which services will be called and with which parameters.
3. **Adjust parameters** — concrete values extracted from your query (dates, postal codes, MeLo IDs, operator names, …) appear as pre-filled, editable form fields. Change any value without re-generating the plan.
4. **Run & explore** — results appear in a sortable, filterable table. The raw JSON from every step is available for debugging.
5. **Share or automate** — a shareable URL and a Live CSV link are generated automatically (see below).
Every completed analysis exposes a parameterised CSV endpoint:
GET /api/agent/session/<id>/csv?param1=value1&param2=value2
Power Automate / Excel Power Query example:
http://10.0.0.8:3900/api/agent/session/2a70e478-90ce-4fa5-b996-6f98efdba7cf/csv?startDate=2026-03-01
Point an HTTP → Get file action or a Power Query Web data source at this URL. Change the `startDate` parameter to fetch a different reporting period — no re-analysis needed.
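The same pattern works from any language with an HTTP client. A small JavaScript sketch — the `liveCsvUrl` helper is hypothetical, written here just to show how the parameterised URL is assembled:

```javascript
// Build a Live CSV URL for a saved agent session (hypothetical helper).
function liveCsvUrl(baseUrl, sessionId, params) {
  const url = new URL(`/api/agent/session/${sessionId}/csv`, baseUrl);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}

// Monthly reporting: only the startDate changes between runs.
const url = liveCsvUrl(
  "http://localhost:3000",
  "2a70e478-90ce-4fa5-b996-6f98efdba7cf",
  { startDate: "2026-03-01" }
);
// Node 18+: const csv = await (await fetch(url)).text();
```

Swapping the query parameters produces a new reporting period without touching the saved analysis.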
Other automation patterns:
- `read_csv(url)` in a Jupyter notebook

# Create a new service interactively
npm run create
# Or specify a name directly
npm run create -- my-service
This creates a new service in custom-services/ from the skeleton template and generates a matching test in custom-tests/.
Custom services are local-only and ignored by git. Core services shipped with the project live in services/.
Copy the skeleton template:
cp templates/skeleton.service.js custom-services/my-service.service.js
Edit the service — change the name property, add actions, events, and methods.
Restart services:
npm start
- Custom services live in `custom-services/` and are loaded at startup.
- Custom tests live in `custom-tests/` and are excluded from release coverage.
- Run a single custom test: `npm run test:custom -- my-service.service.test.js`
cernion-energy-tools/
├── services/ # Core microservices (shipped with release)
│ ├── api.service.js # API Gateway + Swagger UI
│ ├── agent.service.js # AI agent — plan/execute/export
│ ├── assets.service.js # MaStR installation assets
│ ├── datapoint.service.js # Named datapoints + snapshots (v0.11–v0.13)
│ ├── osm-geo.service.js # OSM geo layer (v0.10)
│ ├── oep.service.js # Open Energy Platform (v0.12)
│ ├── datasource-registry.service.js
│ ├── datasource-connector.service.js
│ ├── datasource-cache.service.js
│ ├── datasource-discovery.service.js
│ ├── forecast.service.js
│ ├── gas-storage.service.js
│ ├── german-grid.service.js
│ ├── grid-operations.service.js
│ ├── cya.service.js # CYA narrative agent (v0.26)
│ ├── mastr-monitor.service.js # MaStR Monitor + subscriptions (v0.27)
│ ├── nova.service.js # NOVA SSE decision feed (v0.24)
│ ├── znp.service.js # Zählpunkt-Netzbetreiber-Prüfung (v0.20.4)
│ └── ... # 45 services total — see services/ for full list
├── src/
│ ├── app.html # Research Web App (single-page)
│ ├── connectors/ # Built-in datasource connector plugins
│ ├── mcp-client.js # Centralised MCP tool caller
│ ├── async-job-poller.js # Async job polling
│ ├── prompt-scrubber.js # PII masking for LLM prompts
│ ├── oeo-mappings.js # OEO class mappings (~150 entries)
│ ├── validation-findings.js # Finding codes + FINDING_CODE_METADATA (92 codes, v0.19)
│ ├── mastr-monitor-diff.js # Field-level delta computation (v0.27)
│ ├── mastr-monitor-notify.js # SMTP email notifications (v0.27)
│ ├── mastr-monitor-scheduler.js # Cron preset scheduler (v0.27)
│ ├── cya-agent-personas.js # CYA multi-stakeholder personas (v0.26)
│ └── oemetadata-builder.js # OEMetadata v2.0 builder
├── custom-services/ # Local/custom services (git-ignored)
├── custom-connectors/ # Local/custom datasource plugins (git-ignored)
├── custom-tests/ # Local/custom tests (git-ignored)
├── templates/
│ └── skeleton.service.js
├── tests/ # Core test suite
├── scripts/ # Build / audit scripts
├── index.js # Main entry point
├── cli.js # CLI tool
├── create-service.js # Interactive service creator
├── moleculer.config.js # Moleculer configuration
├── .env.example # Environment variables template
└── package.json
Copy .env.example to .env and edit:
| Variable | Default | Description |
|---|---|---|
| `PORT` | 3000 | API Gateway port |
| `LOG_LEVEL` | info | Logging level (info, debug, warn, error) |
| `GEMINI_API_KEY` | — | Google Gemini API key (required for AI agent) |
| `GEMINI_MODEL` | gemini-3-pro-preview | Gemini model name |
| `MCP_SERVER_URL` | — | MCP server URL |
| `CERNION_TOKEN` | — | Cernion MCP token (request here or email [email protected]) |
| `NAMESPACE` | — | Moleculer namespace for service isolation |
| `TRANSPORTER` | — | Message transporter (NATS, Redis, MQTT, …) |
| `REQUEST_TIMEOUT_MS` | 900000 | Broker request timeout in ms |
| `RETRY_POLICY_ENABLED` | false | Enable broker-level retries for retryable errors |
| `CIRCUIT_BREAKER_ENABLED` | false | Enable circuit breaker protection |
| `BULKHEAD_ENABLED` | false | Enable bulkhead concurrency protection |
| `METRICS_ENABLED` | false | Enable Moleculer metrics collection |
| `TRACING_ENABLED` | false | Enable Moleculer tracing |
| `ASYNC_POLLER_DEBUG` | false | Enable verbose async job poller debug logging |
| `ASYNC_POLLER_LOG_MAX_CHARS` | 400 | Max chars for poller debug payload snippets |
| `DATASOURCE_MONGO_COLLECTION_REGISTRY` | datasource_registry | Collection name for datasource definitions |
| `DATASOURCE_MONGO_COLLECTION_CACHE` | datasource_cache | Collection name for cached datasource rows |
| `DATASOURCE_MONGO_COLLECTION_AUDIT` | datasource_audit | Collection name for privacy/audit records |
| `DATASOURCE_CONNECTOR_PLUGINS_DIR` | src/connectors | Built-in datasource connector directory |
| `DATASOURCE_CUSTOM_PLUGINS_DIR` | custom-connectors | Custom datasource connector directory |
| `DATASOURCE_MAX_INFER_SAMPLE_ROWS` | 200 | Max sample rows used for schema inference |
| `DATASOURCE_SCRAPER_TIMEOUT_MS` | 30000 | Timeout for scraper connector page loads |
| `DATASOURCE_DEFAULT_PRIVACY_CONTEXT` | ai-agent | Default privacy mode for datasource reads |
| `GRID_CONNECTION_DB_PATH` | ./.grid-connections | PouchDB path for Netzanschluss validation reports (v0.14) |
| `ENERGY_SHARING_DB_PATH` | ./data/energy-sharing | PouchDB path for Energy Sharing audit trail (v0.15) |
| `MASTR_QUALITY_DB_PATH` | ./data/mastr-quality | PouchDB path for MaStR quality audits (v0.17) |
| `REDISPATCH_DB_PATH` | ./data/redispatch-expost | PouchDB path for Redispatch Ex-Post audits (v0.18) |
| `OBJECT_STORE_DB_PATH` | ./data/object-store | PouchDB path for generic object store (v0.20.4) |
| `ZNP_DB_PATH` | ./data/znp | PouchDB path for ZNP projects (v0.20.4) |
| `COOKBOOK_DB_PATH` | ./data/cookbook | PouchDB path for API cookbook (v0.20.5) |
| `COOKBOOK_SEED_FILE` | — | Optional JSON seed file for cookbook recipes |
| `GEMINI_EMBEDDING_MODEL` | text-embedding-004 | Gemini embedding model for semantic cookbook search |
| `SMTP_HOST` | — | SMTP server hostname for MaStR Monitor email notifications (v0.27) |
| `SMTP_PORT` | 587 | SMTP server port |
| `SMTP_USER` | — | SMTP authentication username |
| `SMTP_PASS` | — | SMTP authentication password |
| `SMTP_FROM` | — | Sender address for notification emails |
| `MASTR_MONITOR_BASE_URL` | http://localhost:3000 | Base URL embedded in subscription confirmation links |
For complete operational options (retry backoff, circuit-breaker thresholds, bulkhead queue limits), see .env.example.
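As a starting point, a minimal `.env` might look like this — the values are examples only, not recommendations:

```shell
PORT=3000
LOG_LEVEL=info
GEMINI_API_KEY=your-gemini-api-key
GEMINI_MODEL=gemini-3-pro-preview
REQUEST_TIMEOUT_MS=900000
# Enable resilience features only after validating them in staging:
CIRCUIT_BREAKER_ENABLED=false
BULKHEAD_ENABLED=false
```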
The v0.9 datasource layer adds a second data plane next to MCP-backed public energy tools: internal utility and grid-operator data.
The layer consists of four services:

- `datasource-registry` — CRUD for source definitions, cache policy, Data Dictionary, dictionary version history, and schema inference drafts
- `datasource-connector` — plugin runtime for reading heterogeneous sources through built-in or custom connectors
- `datasource-cache` — privacy-aware cached row access, status inspection, refresh, invalidation, and DSGVO audit trail
- `datasource-discovery` — AI-ready inhouse source descriptors for the agent and future Logic Builder integrations

Built-in connector plugins:

- `csv` — delimited files from disk, including `.gz`
- `rest` — JSON/CSV HTTP endpoints
- `geojson` — feature flattening with centroid coordinates
- `xlsx` — spreadsheet row extraction via SheetJS
- `docx` — Word extraction scaffold (optional mammoth dependency)
- `scraper` — HTML/table extraction scaffold via cheerio or puppeteer

REST endpoints:

- `POST /api/datasources`
- `GET /api/datasources`
- `GET /api/datasources/:id`
- `PUT /api/datasources/:id`
- `DELETE /api/datasources/:id`
- `GET /api/datasources/:id/dictionary`
- `PUT /api/datasources/:id/dictionary`
- `GET /api/datasources/:id/dictionary/history`
- `GET /api/datasources/:id/dictionary/:version`
- `POST /api/datasources/:id/infer`
- `POST /api/datasources/:id/refresh`
- `GET /api/datasource-cache/:sourceId`
- `GET /api/datasource-cache/:sourceId/status`
- `GET /api/datasource-cache/:sourceId/audit`
- `POST /api/datasource-cache/:sourceId/refresh`
- `DELETE /api/datasource-cache/:sourceId`
- `GET /api/datasource-discovery`
- `GET /api/datasource-discovery/search?q=...`
- `GET /api/datasource-discovery/:sourceId/descriptor`

The `docx` and `scraper` connectors are scaffolds that require optional dependencies.

Edit `moleculer.config.js` to customise logger settings, transporter, cacher, circuit breaker, metrics, and tracing.
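Registering an inhouse source is a single POST against the registry. In the sketch below the body fields (`name`, `connector`, `config`, `cachePolicy`) are assumptions for illustration — consult `/api/docs` for the authoritative request schema:

```javascript
// Describe an inhouse CSV file as a datasource (all fields hypothetical).
const source = {
  name: "trafo-inventory",                // hypothetical source name
  connector: "csv",                       // one of: csv, rest, geojson, xlsx, docx, scraper
  config: { path: "./data/trafos.csv" },  // hypothetical connector config
  cachePolicy: { ttlMinutes: 60 },        // hypothetical cache policy
};

// POST it to the registry endpoint (Node 18+ global fetch).
async function registerSource(baseUrl) {
  const res = await fetch(`${baseUrl}/api/datasources`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(source),
  });
  return res.json();
}
```

After registration the source shows up in `/api/datasource-discovery` and its rows become readable through the cache endpoints.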
Since v0.11.4 Cernion is annotated with machine-readable mappings to the Open Energy Ontology (v2.11.0).
| Layer | What it does |
|---|---|
| `src/oeo-mappings.js` | Static lookup (~150 entries): installation types, grid concepts, voltage levels, market types, ENTSO-E PSR codes, units. Includes German labels. |
| `x-oeo-class` in OpenAPI | Every REST endpoint carries `x-oeo-class` arrays linking to OEO class IRIs. |
| `semanticHints.oeoClasses` | Datasource discovery descriptors expose domain-level OEO annotations. |
| Classifier keyword boost | German OEO labels (e.g. "Solaranlage", "Stromnetz") enrich the heuristic scorer for German-language uploads. |
| `GET /api/datapoints/oeo-context` | JSON-LD `@context` document mapping datapoint fields to OEO IRIs. |
| `scripts/sync-oeo.js` | Validates mappings against upstream OEO releases. Run: `npm run sync:oeo`. |
The ontology is maintained by @OpenEnergyPlatform/ontology.
All inline references are tagged with // @OpenEnergyPlatform/ontology — OEO_XXXXX label
so that GitHub search surfaces our dependency to upstream maintainers.
| Script | Description |
|---|---|
| `npm start` | Start all services |
| `npm run dev` | Start with hot reload and REPL |
| `npm run cli` | Run CLI tool |
| `npm run create` | Create new service from template |
| `npm run lint` | Run ESLint |
| `npm run lint:fix` | Auto-fix ESLint issues |
| `npm run format` | Format code with Prettier |
| `npm test` | Run full test suite with coverage |
| `npm run test:unit` | Run unit/service tests with coverage thresholds |
| `npm run test:unit:ci` | CI-safe unit run (`--runInBand --forceExit`) |
| `npm run test:integration` | Run integration tests (`*.integration.test.js`) |
| `npm run test:e2e` | Run live end-to-end integration test (`assets.integration.test.js`) |
| `npm run test:custom` | Run custom tests (no coverage threshold) |
| `npm run test:watch` | Watch mode |
| `npm run audit:openapi` | Audit OpenAPI request/parameter quality |
| `npm run audit:security` | Run blocking dependency audit (critical severity) |
| `npm run audit:security:advisory` | Run advisory dependency audit (high+) |
| `npm run export:openapi` | Generate `openapi-export.json` with `x-ui-page` annotations |
| `npm run release:check` | Run core release gates (unit coverage, OpenAPI, critical security audit) |
| `npm run sync:oeo` | Validate/update OEO mappings from upstream release |
| `npm run sync:oemetadata` | Validate/update OEMetadata schema from upstream |
| `npm run build` | No-op passthrough for CI compatibility |
- Resilience features are disabled by default (`RETRY_POLICY_ENABLED=false`, `CIRCUIT_BREAKER_ENABLED=false`, `BULKHEAD_ENABLED=false`).
- Enable `CIRCUIT_BREAKER_ENABLED=true` and `BULKHEAD_ENABLED=true` after validation in staging.
- For debugging, enable `ASYNC_POLLER_DEBUG=true` with a conservative `ASYNC_POLLER_LOG_MAX_CHARS`.

Each service follows this structure:
module.exports = {
name: 'service-name',
settings: { /* service-specific settings */ },
actions: {
myAction: {
rest: 'GET /my-action',
params: { param1: { type: 'string' } },
openapi: { summary: '…', tags: ['MyService'] },
async handler(ctx) { /* … */ }
}
},
events: { /* event handlers */ },
methods: { /* internal methods */ },
created() {}, async started() {}, async stopped() {}
};
The agent service (services/agent.service.js) exposes four REST actions used by the Research Web App:
| Endpoint | Description |
|---|---|
| `POST /api/agent/analyze` | Generate a multi-step execution plan from a free-text query |
| `POST /api/agent/execute` | Run the plan and return results + an AI-generated summary |
| `GET /api/agent/session/:id` | Retrieve a saved session (shareable URL) |
| `GET /api/agent/session/:id/csv?…` | Re-run the plan and download results as CSV |
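The analyze → execute round-trip behind the Research Web App can be sketched as follows. The endpoint paths are from the table above, but the body and response field names (`query`, `inputs`, spreading the analyze result back into execute) are assumptions — see `/api/docs` for the exact schemas:

```javascript
// Sketch of the two-step agent flow (field names assumed, Node 18+ fetch).
async function runQuery(baseUrl, query, inputs = {}) {
  const analyze = await fetch(`${baseUrl}/api/agent/analyze`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  }).then((r) => r.json());

  // Override extracted parameters, then execute the generated plan.
  return fetch(`${baseUrl}/api/agent/execute`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ...analyze, inputs }),
  }).then((r) => r.json());
}
```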
Every concrete value from the user's message (dates, postal codes, IDs, operator names, …) is automatically surfaced as an editable form field with the extracted value pre-filled. Structural parameters (format, limit, type, …) remain hardcoded. This makes every generated query a reusable template that can be adjusted without re-analysis.
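The overlay behaviour — extracted defaults seeded first, user-supplied values winning — can be sketched in a few lines. Field names mirror the `requiredInputs` description in this README; the real logic lives in `services/agent.service.js` and may differ:

```javascript
// Seed inputs from plan-declared defaults, then overlay user values.
function effectiveInputs(requiredInputs, userInputs) {
  const seeded = {};
  for (const input of requiredInputs) {
    if (input.default !== undefined) seeded[input.name] = input.default;
  }
  return { ...seeded, ...userInputs }; // user values override defaults
}
```

This is what makes a generated query a reusable template: only the overlay changes between runs, never the plan.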
Plan-execution internals:

- `normalizePlan()` — normalises varying key names from the LLM (`useTool`/`args`/`label` → `action`/`params`/`description`)
- `resolveChainedRef()` — resolves `__step_N.fieldPath` references between steps, strips `{{…}}` wrappers
- `effectiveInputs` — seeds from `requiredInputs[].default`, overlaid by user-supplied values; overrides hardcoded step params for any declared `requiredInput` name
- Plan repair (`repairAttempt` flag)

The API Gateway (`services/api.service.js`) provides:
- REST API at http://localhost:3000/api
- Research Web App at http://localhost:3000/app
- Swagger UI at http://localhost:3000/api/docs
- OpenAPI spec at http://localhost:3000/api/openapi.json

Release checklist:

- Bump the version in `package.json` and the OpenAPI version in `services/api.service.js`
- Update `CHANGELOG.md`
- Run `npm test` (must pass with coverage thresholds)
- Run `npm run lint`
- Run `npm run audit:openapi`
- Run `npm run audit:security`
- Verify that `custom-services/`, `custom-tests/`, `.sessions/`, and `.env` are not committed

Contributions are welcome! Please read CONTRIBUTING.md before submitting a pull request.
- Create a feature branch (`git checkout -b feat/my-feature`)
- Make sure `npm test` and `npm run lint` pass before submitting

This project follows Semantic Versioning. See CHANGELOG.md for the full release history.
Please report security issues privately. See SECURITY.md for the responsible disclosure policy.
Please follow our community guidelines in CODE_OF_CONDUCT.md.
GPL-3.0 — see LICENSE for details.
Add this to claude_desktop_config.json and restart Claude Desktop.
{
"mcpServers": {
"cernion-grid-intelligence": {
"command": "npx",
"args": []
}
}
}