Provides AI-powered child safety tools to detect bullying, grooming, and unsafe content within digital conversations. It enables AI assistants to perform emotional analysis and generate age-appropriate safety action plans or incident reports.
MCP server for Tuteliq - AI-powered child safety tools for Claude
API Docs • Dashboard • Trust • Discord
Tuteliq MCP Server brings AI-powered child safety tools directly into Claude, Cursor, and other MCP-compatible AI assistants. Ask Claude to check messages for bullying, detect grooming patterns, or generate safety action plans.
| Tool | Description |
|---|---|
| `detect_bullying` | Analyze text for bullying, harassment, or harmful language |
| `detect_grooming` | Detect grooming patterns and predatory behavior in conversations |
| `detect_unsafe` | Identify unsafe content (self-harm, violence, explicit material) |
| `analyze` | Quick comprehensive safety check (bullying + unsafe) |
| `analyse_multi` | Run multiple detection endpoints on a single piece of text in one call |
| `analyze_emotions` | Analyze emotional content and mental state indicators |
| `get_action_plan` | Generate age-appropriate guidance for safety situations |
| `generate_report` | Create incident reports from conversations |
| Tool | Description |
|---|---|
| `detect_social_engineering` | Detect social engineering tactics (pretexting, urgency fabrication, authority impersonation) |
| `detect_app_fraud` | Detect app-based fraud (fake investment platforms, phishing apps, subscription traps) |
| `detect_romance_scam` | Detect romance scam patterns (love-bombing, financial requests, identity deception) |
| `detect_mule_recruitment` | Detect money mule recruitment tactics (easy-money offers, bank account sharing) |
| `detect_gambling_harm` | Detect gambling-related harm indicators (chasing losses, concealment, distress) |
| `detect_coercive_control` | Detect coercive control patterns (isolation, financial control, monitoring, threats) |
| `detect_vulnerability_exploitation` | Detect exploitation of vulnerable individuals (elderly, disabled, financially distressed) |
| `detect_radicalisation` | Detect radicalisation indicators (extremist rhetoric, us-vs-them framing, ideological grooming) |
| Tool | Description |
|---|---|
| `analyze_voice` | Transcribe audio and run safety analysis on the transcript |
| `analyze_image` | Analyze images for visual safety + OCR text extraction |
| `analyze_video` | Analyze video files for safety concerns via key frame extraction (supports mp4, mov, avi, webm, mkv) |
| `analyze_document` | Analyze PDF documents for safety concerns — per-page multi-endpoint detection with chain-of-custody hashing (max 50MB, 100 pages) |
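The chain-of-custody hashing mentioned for `analyze_document` is, in essence, a content digest recorded alongside the analysis so the analyzed file can later be proven unmodified. A minimal sketch of that idea (the API's actual hash algorithm and record format are not documented here; SHA-256 and the record fields below are assumptions):

```python
import hashlib

def chain_of_custody_record(data: bytes, source: str) -> dict:
    """Digest the exact bytes that were analyzed; any later
    modification of the file changes the hash."""
    return {
        "source": source,                               # hypothetical field name
        "sha256": hashlib.sha256(data).hexdigest(),
        "size_bytes": len(data),
    }

record = chain_of_custody_record(b"%PDF-1.7 ...", "report.pdf")
```

Re-hashing the stored file at any point and comparing against `record["sha256"]` verifies the evidence is intact.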
| Tool | Description |
|---|---|
| `detect_synthetic_text` | Detect AI-generated text across 10 child-safety categories (synthetic CSAM, deepfake scripts, AI grooming) |
| `detect_synthetic_image` | 6-signal forensic pipeline: vision AI, EXIF metadata, pixel stats, C2PA Content Credentials, watermarks, pHash |
| `detect_synthetic_audio` | Dual-signal forensics: transcript + mel spectrogram vision + quantitative audio statistics |
| `detect_synthetic_video` | 5-track analysis: per-frame vision, temporal face consistency, lip-sync correlation, spectral audio, transcript |
| `get_synthetic_profile` | Account-level 30-day rolling window with trend detection and category distribution |
| Tool | Description |
|---|---|
| `create_verification_session` | Create a session for age or identity verification — returns a URL for the user to complete the flow |
| `get_verification_session` | Poll session status — returns full document intelligence (MRZ, barcode, authenticity, face match, liveness) |
| `cancel_verification_session` | Cancel an active session (no credits consumed) |
| Tool | Description |
|---|---|
| `list_webhooks` | List all configured webhooks |
| `create_webhook` | Create a new webhook endpoint |
| `update_webhook` | Update webhook configuration |
| `delete_webhook` | Delete a webhook |
| `test_webhook` | Send a test payload to verify webhook delivery |
| `regenerate_webhook_secret` | Regenerate the webhook signing secret |
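The webhook tools above expose a signing secret, which implies payloads can be verified on receipt. If Tuteliq signs the raw request body the common way — an HMAC-SHA256 hex digest carried in a header (an assumption here; check the API docs for the exact header name and scheme) — receiver-side verification looks like:

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature: str, secret: str) -> bool:
    """Recompute the HMAC-SHA256 digest of the raw body and compare it
    to the received signature in constant time (prevents timing attacks)."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Demo with a locally generated signature; "whsec_test" is a made-up secret.
body = b'{"event": "incident.critical"}'
sig = hmac.new(b"whsec_test", body, hashlib.sha256).hexdigest()
```

Always verify against the raw bytes of the request body, not a re-serialized JSON object, since re-serialization can change key order or whitespace and break the digest.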
| Tool | Description |
|---|---|
| `get_pricing` | Get available pricing plans |
| `get_pricing_details` | Get detailed pricing with features and limits |
| Tool | Description |
|---|---|
| `get_usage_history` | Get daily usage history |
| `get_usage_by_tool` | Get usage by tool/endpoint |
| `get_usage_monthly` | Get monthly usage with billing info |
| Tool | Description |
|---|---|
| `delete_account_data` | Delete all account data (Right to Erasure) |
| `export_account_data` | Export all account data as JSON (Data Portability) |
| `record_consent` | Record user consent for data processing |
| `get_consent_status` | Get current consent status |
| `withdraw_consent` | Withdraw a previously granted consent |
| `rectify_data` | Correct user data (Right to Rectification) |
| `get_audit_logs` | Get audit trail of all data operations |
| Tool | Description |
|---|---|
| `log_breach` | Log a new data breach (starts the 72-hour notification clock) |
| `list_breaches` | List all data breaches, optionally filtered by status |
| `get_breach` | Get details of a specific data breach |
| `update_breach_status` | Update breach status and notification progress |
All detection tools accept an optional context object. These fields influence severity scoring and classification:
| Field | Type | Description |
|---|---|---|
| `language` | string | ISO 639-1 code (e.g., "en", "sv"). Auto-detected if omitted. |
| `ageGroup` | string | Age group (e.g., "10-12", "13-15", "under 18"). Triggers age-calibrated scoring. |
| `platform` | string | Platform name (e.g., "Discord", "Roblox"). Adjusts detection for platform norms. |
| `relationship` | string | Relationship context (e.g., "classmates", "stranger"). |
| `sender_trust` | string | Sender verification status: "verified", "trusted", or "unknown". |
| `sender_name` | string | Name of the sender (used with sender_trust). |
### sender_trust Behavior

When sender_trust is set to "verified" or "trusted", severity scoring and classification are adjusted to reflect the sender's verification status.
### support_threshold

Controls when crisis support resources (helplines, text lines, web resources) are included in the response:
| Value | Behavior |
|---|---|
| `low` | Include support for Low severity and above |
| `medium` | Include support for Medium severity and above |
| `high` | (Default) Include support for High severity and above |
| `critical` | Include support only for Critical severity |
Note: Critical severity always includes support resources regardless of the threshold setting.
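Putting the pieces together, a detection request combining the context fields and support threshold described above might look like this sketch (the field names come from the tables above; the surrounding request shape is illustrative, not a verbatim API schema):

```python
# Illustrative payload for a detection tool such as detect_bullying.
request = {
    "text": "Nobody likes you, just go away",
    "context": {
        "language": "en",          # ISO 639-1; auto-detected if omitted
        "ageGroup": "10-12",       # triggers age-calibrated scoring
        "platform": "Discord",     # adjusts detection for platform norms
        "relationship": "classmates",
        "sender_trust": "unknown",
    },
    # Include helplines only for High severity and above (the default).
    "support_threshold": "high",
}
```

Every context field is optional; omitting the whole `context` object still yields a valid request with default scoring.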
### analyse_multi Endpoint Values

The analyse_multi tool accepts up to 10 endpoints per call. Valid endpoint values:
| Endpoint ID | Description |
|---|---|
| `bullying` | Bullying and harassment detection |
| `grooming` | Grooming pattern detection |
| `unsafe` | Unsafe content detection (self-harm, violence, explicit material) |
| `social-engineering` | Social engineering and pretexting |
| `app-fraud` | App-based fraud patterns |
| `romance-scam` | Romance scam patterns |
| `mule-recruitment` | Money mule recruitment |
| `gambling-harm` | Gambling-related harm |
| `coercive-control` | Coercive control patterns |
| `vulnerability-exploitation` | Exploitation of vulnerable individuals |
| `radicalisation` | Radicalisation indicators |
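A small client-side guard can catch typos and the 10-endpoint limit before a call ever reaches the server. This helper is an illustrative sketch, not part of any SDK; the endpoint IDs and limit are taken from the table above:

```python
# Valid analyse_multi endpoint IDs, per the table above.
VALID_ENDPOINTS = {
    "bullying", "grooming", "unsafe", "social-engineering", "app-fraud",
    "romance-scam", "mule-recruitment", "gambling-harm", "coercive-control",
    "vulnerability-exploitation", "radicalisation",
}

def build_multi_request(text: str, endpoints: list[str]) -> dict:
    """Validate endpoint IDs and enforce the 10-per-call limit
    before building the (illustrative) request payload."""
    unknown = set(endpoints) - VALID_ENDPOINTS
    if unknown:
        raise ValueError(f"unknown endpoint(s): {sorted(unknown)}")
    if len(endpoints) > 10:
        raise ValueError("analyse_multi accepts at most 10 endpoints per call")
    return {"text": text, "endpoints": endpoints}
```

Failing fast locally avoids spending a round trip (and potentially credits) on a request the server would reject anyway.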
Point your MCP client at the hosted server URL:

`https://api.tuteliq.ai/mcp`

That's it — Tuteliq tools will be available in your next conversation.
Add to your Cursor MCP settings:
```json
{
  "mcpServers": {
    "tuteliq": {
      "url": "https://api.tuteliq.ai/mcp",
      "headers": {
        "Authorization": "Bearer your-api-key"
      }
    }
  }
}
```
For clients that support stdio transport:
```json
{
  "mcpServers": {
    "tuteliq": {
      "command": "npx",
      "args": ["-y", "@tuteliq/mcp"],
      "env": {
        "TUTELIQ_API_KEY": "your-api-key"
      }
    }
  }
}
```
Once configured, you can ask Claude:
"Check if this message is bullying: 'Nobody likes you, just go away'"
Response:
## ⚠️ Bullying Detected
**Severity:** 🟠 Medium
**Confidence:** 92%
**Risk Score:** 75%
**Types:** exclusion, verbal_abuse
### Rationale
The message contains direct exclusionary language...
### Recommended Action
`flag_for_moderator`
"Analyze this conversation for grooming patterns..."
"Is this message safe? 'I don't want to be here anymore'"
"Analyze the emotions in: 'I'm so stressed about school and nobody understands'"
"Give me an action plan for a 12-year-old being cyberbullied"
"Generate an incident report from these messages..."
"Analyze this audio file for safety: /path/to/recording.mp3"
"Check this screenshot for harmful content: /path/to/screenshot.png"
"List my webhooks"
"Create a webhook for critical incidents at https://example.com/webhook"
"Show my monthly usage"
"Is this image AI-generated? /path/to/suspect-image.jpg"
"Check if this audio is a voice clone: /path/to/voice.mp3"
"Analyze this video for deepfake indicators: /path/to/video.mp4"
"Is this text AI-generated? 'The generated text to analyze...'"
"Show me the synthetic content profile for customer cust_xyz789"
"Create an age verification session"
"Create an identity verification session with passport as preferred document"
"Check the status of verification session abc123"
"Cancel verification session abc123"
"Check this message for social engineering: 'Your account will be suspended unless you verify now'"
"Is this a romance scam? 'I know we just met online but I need help with a medical bill'"
Language is auto-detected when not specified. Beta languages offer good accuracy but may mishandle edge cases that English handles reliably.
| Language | Code | Status |
|---|---|---|
| English | en | Stable |
| Spanish | es | Beta |
| Portuguese | pt | Beta |
| French | fr | Beta |
| German | de | Beta |
| Italian | it | Beta |
| Dutch | nl | Beta |
| Polish | pl | Beta |
| Romanian | ro | Beta |
| Turkish | tr | Beta |
| Greek | el | Beta |
| Czech | cs | Beta |
| Hungarian | hu | Beta |
| Bulgarian | bg | Beta |
| Croatian | hr | Beta |
| Slovak | sk | Beta |
| Slovenian | sl | Beta |
| Lithuanian | lt | Beta |
| Latvian | lv | Beta |
| Estonian | et | Beta |
| Maltese | mt | Beta |
| Irish | ga | Beta |
| Swedish | sv | Beta |
| Norwegian | no | Beta |
| Danish | da | Beta |
| Finnish | fi | Beta |
| Ukrainian | uk | Beta |
The bullying and unsafe content tools analyze a single text field per request. If you're analyzing a conversation, concatenate a sliding window of recent messages into one string rather than sending each message individually. Single words or short fragments lack context for accurate detection and can be exploited to bypass safety filters.
The grooming tool already accepts a messages[] array and analyzes the full conversation in context.
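The sliding-window concatenation described above can be sketched in a few lines. The helper and the speaker-label format are illustrative conventions, not part of the API:

```python
def sliding_window_text(messages: list[str], window: int = 10) -> str:
    """Join the most recent `window` messages into one text block so
    single-text tools (bullying, unsafe) see conversational context
    instead of isolated fragments."""
    return "\n".join(messages[-window:])

# Hypothetical chat history with speaker labels baked into each message.
chat = [f"friend: message {i}" for i in range(25)]
payload_text = sliding_window_text(chat, window=5)
```

The window size is a trade-off: too small loses the context that makes detection accurate, too large dilutes the most recent (and usually most relevant) messages.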
Enable PII_REDACTION_ENABLED=true on your Tuteliq API to automatically strip emails, phone numbers, URLs, social handles, IPs, and other PII from detection summaries and webhook payloads. The original text is still analyzed in full — only stored outputs are scrubbed.
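As a rough illustration of the kind of patterns such redaction targets (the actual server-side scrubbing is more thorough, and its rules are not specified here), a client-side sketch might look like:

```python
import re

# Illustrative only: crude stand-ins for the PII categories the docs list.
# The real pipeline runs server-side with far more robust patterns.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "[PHONE]": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "[URL]": re.compile(r"https?://\S+"),
    "[HANDLE]": re.compile(r"@\w{2,}"),
}

def redact(text: str) -> str:
    """Replace each PII match with its placeholder token.
    Email runs before handle so addresses aren't half-matched."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

print(redact("Contact me at kid123@mail.com or @kid123"))
# Contact me at [EMAIL] or [HANDLE]
```

Note the ordering: because the email pattern runs first, `kid123@mail.com` becomes `[EMAIL]` before the handle pattern can grab the `@mail` fragment.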
Tuteliq supports 27 languages with automatic detection — no configuration required.
English (stable) and 26 beta languages: Spanish, Portuguese, Ukrainian, Swedish, Norwegian, Danish, Finnish, German, French, Dutch, Polish, Italian, Turkish, Romanian, Greek, Czech, Hungarian, Bulgarian, Croatian, Slovak, Lithuanian, Latvian, Estonian, Slovenian, Maltese, and Irish.
All 24 EU official languages + Ukrainian, Norwegian, and Turkish. Each language includes culture-specific safety guidelines covering local slang, grooming patterns, self-harm coded vocabulary, and filter evasion techniques.
See the Language Support docs for details.
MIT License - see LICENSE for details.
Tuteliq offers a free certification program for anyone who wants to deepen their understanding of online child safety. Complete a track, pass the quiz, and earn your official Tuteliq certificate — verified and shareable.
Three tracks available:
| Track | Who it's for | Duration |
|---|---|---|
| Parents & Caregivers | Parents, guardians, grandparents, teachers, coaches | ~90 min |
| Young People (10–16) | Young people who want to learn to spot manipulation | ~60 min |
| Companies & Platforms | Product managers, trust & safety teams, CTOs, compliance officers | ~120 min |
Start here → tuteliq.ai/certify
Before you decide to contribute or sponsor, read these numbers. They are not projections. They are not estimates from a pitch deck. They are verified statistics from the University of Edinburgh, UNICEF, NCMEC, and Interpol.
End-to-end encryption is making platforms blind. In 2024, platforms reported 7 million fewer incidents than the year before — not because abuse stopped, but because they can no longer see it. The tools that catch known images are failing. The systems that rely on human moderators are overwhelmed. The technology to detect behavior — grooming patterns, escalation, manipulation — in real-time text conversations exists right now. It is running at api.tuteliq.ai.
The question is not whether this technology is possible. The question is whether we build the company to put it everywhere it needs to be.
Every second we wait, another child is harmed.
We have the technology. We need the support.
If this mission matters to you, consider sponsoring our open-source work so we can keep building the tools that protect children — and keep them free and accessible for everyone.
Built with care for child safety by the Tuteliq team