A Model Context Protocol server that lets Claude (and any MCP client) cite actual Canadian financial-services regulations: OSFI, PIPEDA, FINTRAC, Quebec Law 25.
A regulatory backstop for Canadian fintech architects: validate designs, vendor selections, and incident responses against OSFI, PIPEDA, FINTRAC, and Quebec Law 25 without leaving your editor.
Architecture review at design time. Regulatory text at query time. Built for senior architects working with LLMs at Canadian financial institutions.
CI · License: MIT · Python 3.13+ · Status: Early Development
🚧 Early development. Project scaffolding and v0.1 roadmap landed in the initial commit. First working release (v0.1.0) targeted for roughly 6 weeks out, built incrementally. Watch the commit history for weekly progress.
| Component | State |
|---|---|
| Project scaffolding, CI, license, dependencies, MCP server stub | ✅ Shipped |
| OSFI Guideline E-21 ingestion working | 🚧 Up next |
| compliance_lookup MCP tool returning real cited passages | ⬜ Planned |
| End-to-end smoke test against an MCP client (Claude Desktop, Cursor, Cline) | ⬜ Planned |
| PIPEDA full text | ⬜ Planned |
| FINTRAC AML/ATF guidance | ⬜ Planned |
| Quebec Law 25 | ⬜ Planned |
| Demo recording + first public release (v0.1.0 tag) | ⬜ Planned |
If you want this for your Canadian fintech AI tooling, watch or star the repo. Substantive feedback on the roadmap is welcome via Issues.
Demo lands with the v0.1.0 release: a 90-second screen recording showing an MCP client (Claude Desktop in the demo) calling compliance_lookup and answering a regulatory question with a citation back to the source document.
I have spent 20 years architecting platforms in Canadian financial services. The regulatory review of new designs has always been a slow, expensive, late-stage step. By the time legal or compliance surfaces an issue, the architecture is locked, the build is in progress, and the rework is painful.
Bay Street MCP puts the regulatory backstop where it belongs: at design time, in the architect's editor, alongside your LLM of choice. The server exposes Canadian financial regulation as queryable context. Your LLM reads your architecture, queries the relevant provisions, and surfaces the implications before they become rework.
Bay Street MCP shines at the four moments where architects need regulatory context without leaving their editor:
Pre-design check. "I'm building a real-time fraud detection system that uses customer transaction data. What regulatory considerations should shape the design?" Your LLM queries Bay Street MCP and returns OSFI E-21 (operational risk) plus PIPEDA (data handling) considerations grounded in the source text.
Architecture review augmentation. Paste your design (Mermaid diagram, ADR, RFC, system diagram). Your LLM reads it, queries the relevant provisions, and surfaces a structured regulatory review: "Your design includes [X]. OSFI [section Y] requires [Z]. Recommendation: [W]."
Vendor and third-party evaluation. "We're considering [SaaS vendor]. They process PII for our customers. What PIPEDA and Quebec Law 25 considerations apply to this contract?" Get cited passages on consent, retention, cross-border transfer, and breach notification.
Incident response. "We had a 6-hour outage of our funds-transfer service. What are our regulatory reporting obligations?" Get the specific E-21 incident reporting requirements with citations.
The MCP server is the knowledge backstop. Your LLM is the reasoning engine. You stay the human-in-the-loop deciding what to ship.
The instructions below describe how the server will work once v0.1.0 ships. They do not work against the current commit. Track progress in the Status table above.
The example uses Claude Desktop because it is the most widely deployed MCP client. Bay Street MCP works with any MCP client (Cursor, Cline, Claude Code, Continue, Goose, etc.); the install step varies by client but the underlying server invocation is the same.
Clone and install:
```shell
git clone https://github.com/ziamalik/bay-street-mcp.git
cd bay-street-mcp
uv sync
```
Download a regulation PDF. For the v0.1 example, grab OSFI Guideline E-21 (Operational Risk Management and Resilience) from https://www.osfi-bsif.gc.ca/.
Ingest it:
```shell
uv run bay-street-ingest data/osfi-e21.pdf \
  --regulation "OSFI Guideline E-21" \
  --jurisdiction CA \
  --source-url "https://www.osfi-bsif.gc.ca/en/guidance/guidance-library/operational-risk-management-resilience"
```
Add to your claude_desktop_config.json (typically at ~/Library/Application Support/Claude/claude_desktop_config.json on macOS or %APPDATA%\Claude\claude_desktop_config.json on Windows):
```json
{
  "mcpServers": {
    "bay-street": {
      "command": "uv",
      "args": ["--directory", "/absolute/path/to/bay-street-mcp", "run", "bay-street-mcp"]
    }
  }
}
```
See claude_desktop_config.example.json for an alternative invocation if you have installed the package globally.
Restart Claude Desktop (or your MCP client of choice). Ask:
What does OSFI E-21 say about AI risk management?
The LLM will call compliance_lookup and answer with citations.
v0.1 ships one tool: `compliance_lookup(query, top_k)`, returning passages with `{regulation, jurisdiction, page, source_url}` citation metadata. Subsequent versions add PIPEDA, FINTRAC, and Quebec Law 25, then expand to OSFI E-23 (model risk) and B-13 (technology and cyber risk). See Roadmap below.
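As an illustrative sketch of that tool contract (not the actual implementation — everything here except the tool name and the four metadata fields is hypothetical, and naive keyword scoring stands in for the real vector search):

```python
from dataclasses import dataclass


@dataclass
class Passage:
    """One retrieved chunk plus the citation metadata stored at ingest time."""
    text: str
    regulation: str    # e.g. "OSFI Guideline E-21"
    jurisdiction: str  # e.g. "CA"
    page: int
    source_url: str


def compliance_lookup(query: str, top_k: int = 5) -> list[dict]:
    """Sketch only: in the real server the corpus lives in a Chroma store
    and ranking is done by embedding similarity, not keyword overlap."""
    corpus = [
        Passage("Institutions should manage operational risk and resilience...",
                "OSFI Guideline E-21", "CA", 4, "https://www.osfi-bsif.gc.ca/"),
    ]
    terms = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(terms & set(p.text.lower().split())),
                    reverse=True)
    return [vars(p) for p in scored[:top_k]]
```

The key point is the return shape: every passage carries its own citation metadata, so the LLM never has to invent a source.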
v0.1 (in progress, ETA ~6 weeks):

- `compliance_lookup` MCP tool returning real cited passages

v0.2 and beyond:

- PIPEDA, FINTRAC, and Quebec Law 25 ingestion
- OSFI E-23 (model risk) and B-13 (technology and cyber risk)
User question → LLM → MCP tool call → Chroma similarity search
→ top-k passages with metadata → LLM synthesizes answer with citations
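The similarity-search step can be sketched in miniature — this is a toy illustration only, with bag-of-words cosine similarity standing in for Chroma and a real embedding model, and the sample store contents invented for the example:

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy stand-in for an embedding model: bag-of-words term counts."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def top_k(query: str, store: list[dict], k: int = 3) -> list[dict]:
    """Rank stored chunks by similarity, carrying citation metadata through."""
    q = embed(query)
    ranked = sorted(store, key=lambda c: cosine(q, embed(c["text"])), reverse=True)
    return ranked[:k]


store = [
    {"text": "incident reporting obligations for material outages",
     "regulation": "OSFI Guideline E-21", "page": 12},
    {"text": "consent requirements for collection of personal information",
     "regulation": "PIPEDA", "page": 3},
]
hits = top_k("outage incident reporting", store, k=1)
```

Because metadata travels with each chunk, the top-k result is already citable without a second lookup.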
The ingestion script chunks each regulation by ~800 words with 100-word overlap, stores in Chroma with metadata {regulation, jurisdiction, page, source_url}. The MCP tool returns passages with full citation metadata, so the LLM can cite page numbers and source URLs in its response.
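A minimal sketch of that chunking scheme (function and parameter names are hypothetical; the actual ingestion script may differ):

```python
def chunk_words(words: list[str], size: int = 800, overlap: int = 100) -> list[list[str]]:
    """Split a word list into ~`size`-word chunks, each sharing `overlap`
    words with its predecessor so a provision that straddles a chunk
    boundary still appears whole in at least one chunk."""
    assert size > overlap, "overlap must be smaller than chunk size"
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(words[start:start + size])
        if start + size >= len(words):
            break
    return chunks
```

A 2,000-word document yields three chunks (0–799, 700–1499, 1400–1999); the 100-word overlap is what keeps boundary-spanning requirements retrievable.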
MCP (Model Context Protocol) is becoming the standard interface for connecting LLMs to external context. Exposing this as an MCP server means the same compliance knowledge is usable from any MCP-compatible client (Claude Desktop, Cursor, Cline, Claude Code, Continue, Goose, and others) and any underlying model the client supports, without building a custom integration each time.
Because MCP separates the knowledge layer from the model layer, the same Bay Street MCP install works with any model the client supports: Claude, Mistral, OpenAI (including their open-weight gpt-oss models), Llama, or any local model via Ollama or vLLM. Useful for on-prem deployments, data-residency-sensitive workflows where cloud LLMs are not allowed, and cost-sensitive batch use cases. The knowledge layer (Canadian regulatory text) and the reasoning layer (whichever LLM you choose) are deliberately decoupled.
```shell
uv sync --all-extras
uv run pytest
uv run ruff check .
```
MIT. Use it, fork it, ship it.
Built by Zia Malik — 20 years Canadian financial services, currently building AppVet (AI-powered web app security audits) and writing about fintech-grade AI engineering.
If you are at a Canadian fintech and want this extended for your specific regulatory surface, open an issue or reach out.
If you have installed the package globally, point Claude Desktop at the console script directly in claude_desktop_config.json, then restart Claude Desktop.

```json
{
  "mcpServers": {
    "bay-street-mcp": {
      "command": "bay-street-mcp",
      "args": []
    }
  }
}
```