Stop guessing which AI model to use. This MCP server builds a dataset of your preferences each time you choose between draft responses. It learns from your actual choices, not from general benchmarks.
Live URL: https://log-mcp.casey-digennaro.workers.dev License: MIT • Runtime: Cloudflare Workers • Dependencies: 0
Public model rankings often don't reflect your specific needs. This server learns your preferences directly from the choices you make while working, helping it route future prompts to the model you'd likely choose.
Fork this repository first to create your own instance.
For local development:
git clone https://github.com/your-username/log-mcp
cd log-mcp
cp .env.example .env
# Add your API keys to the .env file
npm run dev
When you submit a prompt, LOG-mcp generates draft responses from each configured model. You select the best one. Each choice trains your private preference profile. Over time, it begins routing prompts directly to the model you would have selected. All choice data remains within your Worker.
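The learning loop described above can be sketched as a pure function over a win-count profile. This is a minimal illustration, not LOG-mcp's actual implementation; the names recordChoice and routeFor are hypothetical.

```typescript
// Hypothetical sketch of LOG-mcp's preference learning:
// each selection increments a win count for the chosen model,
// and routing picks the model with the most wins so far.
type WinCounts = Record<string, number>;

// Record one user choice between drafts.
function recordChoice(counts: WinCounts, chosenModel: string): WinCounts {
  return { ...counts, [chosenModel]: (counts[chosenModel] ?? 0) + 1 };
}

// Route to the current favorite, or null when there is no
// choice data yet (in which case all drafts are shown).
function routeFor(counts: WinCounts): string | null {
  const entries = Object.entries(counts);
  if (entries.length === 0) return null;
  entries.sort((a, b) => b[1] - a[1]);
  return entries[0][0];
}
```

For example, after two selections of one model and one of another, routeFor returns the twice-chosen model.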
The system requires explicit choice data to learn. If you rarely select between drafts, it cannot build an effective routing profile and will continue to show all model outputs.
LOG-mcp runs statelessly on Cloudflare Workers. All preference data is stored in a Cloudflare KV namespace. There are no external databases or background processes to manage.
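Since Workers are stateless, the preference profile has to round-trip through KV on each request. The sketch below assumes a simple JSON-serialized profile under a per-user key; the KVLike interface mirrors the get/put subset of Cloudflare's KVNamespace, and saveProfile/loadProfile are illustrative names, not LOG-mcp's real API.

```typescript
// Minimal KV-shaped interface (subset of Cloudflare's KVNamespace),
// so the sketch stays self-contained outside the Workers runtime.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

// Persist a user's model win counts as JSON under a namespaced key.
async function saveProfile(
  kv: KVLike,
  userId: string,
  counts: Record<string, number>,
): Promise<void> {
  await kv.put(`profile:${userId}`, JSON.stringify(counts));
}

// Load the profile, defaulting to an empty one for new users.
async function loadProfile(
  kv: KVLike,
  userId: string,
): Promise<Record<string, number>> {
  const raw = await kv.get(`profile:${userId}`);
  return raw ? JSON.parse(raw) : {};
}
```

In a real Worker, the env binding's KV namespace would be passed where KVLike appears here.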
Add this to claude_desktop_config.json and restart Claude Desktop.
{
  "mcpServers": {
    "l-o-g-latent-orchestration-gateway": {
      "command": "npx",
      "args": []
    }
  }
}