An interface for the Nipoppy neuroimaging framework that enables AI agents to list files and manage clinical datasets. It facilitates interaction with neuroimaging data organized according to the Brain Imaging Data Structure (BIDS) standard.
A Model Context Protocol (MCP) interface for the Nipoppy neuroimaging dataset framework. This server exposes tools through MCP that allow AI agents to interact with Nipoppy studies in a more deliberate way.
Nipoppy is a lightweight framework for standardized organization and processing of neuroimaging-clinical datasets. It follows the Brain Imaging Data Structure (BIDS) standard and provides tools for managing datasets and processing pipelines.
The Model Context Protocol (MCP) is a standardized protocol that allows AI applications (LLMs) to access external tools and resources through a consistent interface. This server exposes tools for summarizing the current processing status of a Nipoppy study.
This MCP server provides comprehensive access to Nipoppy neuroimaging datasets through both tools and resources:
Context information automatically available to AI agents without function calls:
- nipoppy://config - Global dataset configuration and metadata
- nipoppy://manifest - Dataset structure manifest (participants/sessions/datatypes)
- nipoppy://status/curation - Data availability at different curation stages
- nipoppy://status/processing - Pipeline completion status across participants/sessions
- nipoppy://pipelines/{pipeline_name}/{version}/config - Individual pipeline configuration
- nipoppy://pipelines/{pipeline_name}/{version}/descriptor - Boutiques pipeline descriptor
- nipoppy://demographics - De-identified participant demographic information
- nipoppy://bids/description - BIDS dataset description and metadata

Tools available through explicit function calls:

- get_participants_sessions - Unified participant/session query with filtering by data stage
- get_dataset_info - Enhanced dataset overview with configurable detail levels
- navigate_dataset - File path and configuration access with smart path resolution

Deprecated tools (kept for backward compatibility):

- list_manifest_participants_sessions - Use get_participants_sessions(data_stage="all") instead
- list_manifest_imaging_participants_sessions - Use get_participants_sessions(data_stage="imaging") instead
- get_pre_reorg_participants_sessions - Use get_participants_sessions(data_stage="downloaded") instead
- get_post_reorg_participants_sessions - Use get_participants_sessions(data_stage="organized") instead
- get_bids_participants_sessions - Use get_participants_sessions(data_stage="bidsified") instead
- list_processed_participants_sessions - Use get_participants_sessions(data_stage="processed", ...) instead
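The pipeline-scoped resources embed the pipeline name and version in the URI. As a rough sketch of how a client might pick such a URI apart (`parse_pipeline_uri` is a hypothetical helper, not part of the server package):

```python
def parse_pipeline_uri(uri: str) -> tuple[str, str, str]:
    """Split a nipoppy://pipelines/{name}/{version}/{kind} URI into parts.

    Hypothetical client-side helper; not part of the server package.
    """
    prefix = "nipoppy://pipelines/"
    if not uri.startswith(prefix):
        raise ValueError(f"not a pipeline resource URI: {uri!r}")
    parts = uri[len(prefix):].split("/")
    # Only the two resource kinds listed above are expected.
    if len(parts) != 3 or parts[2] not in ("config", "descriptor"):
        raise ValueError(f"malformed pipeline resource URI: {uri!r}")
    name, version, kind = parts
    return name, version, kind
```

For example, `parse_pipeline_uri("nipoppy://pipelines/fmriprep/23.1.3/config")` yields the pipeline name, version, and resource kind as a tuple.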
# Clone the repository
git clone https://github.com/nipoppy/mcp.git
cd mcp
# Install dependencies
pip install -e .
You can also use the pre-built Docker container from GitHub Container Registry:
# Pull the latest version
docker pull ghcr.io/bcmcpher/nipoppy-mcp:latest
# Pull a specific version
docker pull ghcr.io/bcmcpher/nipoppy-mcp:v0.1.0
The server can be run in different modes depending on your use case:
# Set the dataset root (optional, defaults to current directory)
export NIPOPPY_DATASET_ROOT=/path/to/your/nipoppy/dataset
# Run the server
python -m nipoppy_mcp.server
# Run with local dataset mounted
docker run -v /path/to/your/nipoppy/dataset:/data ghcr.io/bcmcpher/nipoppy-mcp:latest
# Run with specific version and custom dataset path
docker run \
-v /path/to/dataset:/data \
-e NIPOPPY_DATASET_ROOT=/data \
ghcr.io/bcmcpher/nipoppy-mcp:v0.1.0
Add to your Claude Desktop configuration file (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{
"mcpServers": {
"nipoppy": {
"command": "python",
"args": ["-m", "nipoppy_mcp.server"]
}
}
}
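If you run the server from the Docker container instead, an entry along these lines should work. This is an illustrative sketch, not a tested recipe: `-i` keeps stdin attached, which MCP's stdio transport requires, and the mounted dataset path is an assumption you must adapt to your setup:

```json
{
  "mcpServers": {
    "nipoppy": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-v", "/path/to/dataset:/data",
        "-e", "NIPOPPY_DATASET_ROOT=/data",
        "ghcr.io/bcmcpher/nipoppy-mcp:latest"
      ]
    }
  }
}
```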
Once connected to an MCP-compatible client, you can access Nipoppy dataset information through both automatic context and explicit tool calls.
Setting the Dataset Root:
The server requires the NIPOPPY_DATASET_ROOT environment variable to be set for resources to work:
export NIPOPPY_DATASET_ROOT=/path/to/your/nipoppy/dataset
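The server reads this variable at startup. A minimal sketch of how such a lookup can work (`resolve_dataset_root` is a hypothetical helper mirroring the documented fallback, not the server's actual code):

```python
import os
from pathlib import Path

def resolve_dataset_root() -> Path:
    # Hypothetical helper: fall back to the current directory when
    # NIPOPPY_DATASET_ROOT is unset, mirroring the documented default.
    root = os.environ.get("NIPOPPY_DATASET_ROOT", ".")
    return Path(root).expanduser().resolve()
```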
Example queries: ask which participants have reached a given curation stage, which sessions have completed a specific pipeline, or where a pipeline's configuration file lives.
# Install development dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Or run basic import tests
python -c "from nipoppy_mcp.server import mcp; print('✅ MCP server imports successfully')"
The refactored implementation includes comprehensive error handling and validation. Test with:
# Test basic functionality
python -c "
from nipoppy_mcp.server import get_participants_sessions, get_dataset_info, navigate_dataset
print('✅ Refactored tools imported successfully')
# Test validation (each call should raise a validation error)
for call in (
    lambda: get_participants_sessions('/fake/path', data_stage='invalid_stage'),
    lambda: navigate_dataset('/fake/path', path_type='invalid_type'),
):
    try:
        call()
    except Exception as exc:
        print(f'✅ Raised as expected: {exc}')
"
# Test resource functions
python -c "
from nipoppy_mcp.server import get_dataset_config, get_dataset_manifest
print('✅ Resource functions imported successfully')
"
nipoppy-mcp/
├── nipoppy_mcp/
│ ├── __init__.py
│ └── server.py # Main MCP server implementation
│ # - 8 MCP resources (automatic context)
│ # - 3 refactored tools (unified interface)
│ # - 7 deprecated tools (backward compatibility)
├── tests/ # Test files
├── pyproject.toml # Project configuration
└── README.md
The refactored server consolidates the seven deprecated per-stage tools into three unified tools, backed by eight resources that give agents context without any function calls.
For existing users migrating from the old tools:
| Old Tool | New Tool Call |
|---|---|
| list_manifest_participants_sessions() | get_participants_sessions(data_stage="all") |
| list_manifest_imaging_participants_sessions() | get_participants_sessions(data_stage="imaging") |
| get_pre_reorg_participants_sessions() | get_participants_sessions(data_stage="downloaded") |
| get_post_reorg_participants_sessions() | get_participants_sessions(data_stage="organized") |
| get_bids_participants_sessions() | get_participants_sessions(data_stage="bidsified") |
| list_processed_participants_sessions(name, ver, step) | get_participants_sessions(data_stage="processed", pipeline_name=name, pipeline_version=ver, pipeline_step=step) |
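The migration is mechanical, so it can be expressed as a lookup table. The sketch below is a hypothetical illustration (not part of the package) of the keyword arguments each deprecated tool reduces to:

```python
# Keyword arguments that reproduce each deprecated tool's behavior
# via the unified get_participants_sessions tool (illustrative only).
MIGRATION_MAP = {
    "list_manifest_participants_sessions": {"data_stage": "all"},
    "list_manifest_imaging_participants_sessions": {"data_stage": "imaging"},
    "get_pre_reorg_participants_sessions": {"data_stage": "downloaded"},
    "get_post_reorg_participants_sessions": {"data_stage": "organized"},
    "get_bids_participants_sessions": {"data_stage": "bidsified"},
}

def migrate_call(old_tool: str, **kwargs) -> dict:
    """Return the unified-tool kwargs for a deprecated tool name."""
    if old_tool == "list_processed_participants_sessions":
        # The processed-stage tool also forwards its pipeline arguments.
        return {"data_stage": "processed", **kwargs}
    return dict(MIGRATION_MAP[old_tool])
```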
The repository includes example_usage.py to demonstrate the refactored functionality:
# Set your dataset path
export NIPOPPY_DATASET_ROOT=/path/to/your/nipoppy/dataset
# Run the example
python example_usage.py
This script demonstrates the refactored tools and resource functions against a real dataset.
For immediate access to dataset information (requires NIPOPPY_DATASET_ROOT):
import os
os.environ['NIPOPPY_DATASET_ROOT'] = '/path/to/dataset'
from nipoppy_mcp.server import get_dataset_config
# Resources are available as direct function calls
config = get_dataset_config() # Auto-loads from environment variable
print(f"Dataset has {len(config['installed_pipelines'])} pipelines")
Contributions are welcome! This is a Brainhack 2026 project. Please feel free to submit issues and pull requests.
MIT License - see LICENSE file for details.
The Docker container is automatically built and published to GitHub Container Registry (GHCR) when a new release is tagged:
Images are published under ghcr.io/bcmcpher/nipoppy-mcp with the following tags:

- latest - Points to the most recent release
- v0.1.0 - Full semantic version
- v0.1 - Minor version
- v0 - Major version

# Build the Docker image locally
docker build -t nipoppy-mcp .
# Run the locally built image
docker run -v /path/to/dataset:/data nipoppy-mcp
Add this to claude_desktop_config.json and restart Claude Desktop.
{
"mcpServers": {
"nipoppy-mcp-server": {
"command": "npx",
"args": []
}
}
}