MobileAI React Native SDK for AI-native UI traversal, tool calling, support escalation, and structured reasoning.
Add an in-app AI support agent that understands your React Native UI, answers user questions, navigates screens, fills forms, resolves tasks, and hands off to a human when needed.
MobileAI is the SDK and cloud dashboard behind it. The SDK runs inside your app; MobileAI Cloud handles projects, analytics, hosted proxy configuration, and support escalation.
npm install @mobileai/react-native
⭐ If this helped you, star this repo — it helps others find it!
Intercom, Zendesk, and every chat widget all do the same thing: send the user instructions in a chat bubble.
"To cancel your order, go to Orders, tap the order, then tap Cancel."
That's not support. That's documentation delivery with a chat UI.
This SDK takes a different approach. Instead of telling users where to go, it — with the user's permission — goes there for them.
Every other support tool needs you to build API connectors: endpoints, webhooks, action definitions in their dashboard. Months of backend work before the AI can do anything useful.
This SDK reads your app's live UI natively — every button, label, input, and screen — in real time. There's nothing to integrate. The UI is already the integration. The app already knows how to cancel orders, update addresses, apply promo codes — it has buttons for all of it. The AI just uses them.
No OCR. No image pipelines. No selectors. No annotations. No backend connectors.
The most important insight: UI control is only uncomfortable when it's unexpected. In a support conversation, the user has already asked for help — they're in a "please help me" mindset:
| Context | User reaction to AI controlling UI |
|---|---|
| Unprompted (out of nowhere) | 😨 "What is happening?" |
| In a support chat — user asked for help | 😊 "Yes please, do it for me" |
| User is frustrated and types "how do I..." | 😮💨 "Thank God, yes" |
The SDK handles every tier of support automatically — from a simple FAQ answer to live human chat:
┌──────────────────────────────────────────────────────┐
│ Level 1: Knowledge Answer │
│ Answers from knowledge base — instant, zero UI │
│ "What's your return policy?" → answered directly │
├──────────────────────────────────────────────────────┤
│ Level 2: Show & Guide │
│ AI navigates to exact screen, user acts last │
│ "Settings → Notifications. It's right here. ☘️" │
├──────────────────────────────────────────────────────┤
│ Level 3: Do & Confirm (Copilot — default) │
│ AI fills forms, user confirms the final action │
│ "I've typed your new address. Tap Save to confirm." │
├──────────────────────────────────────────────────────┤
│ Level 4: Full Resolution (Autopilot) │
│ AI completes entire flow with one permission gate │
│ "Done! Order #4521 cancelled. Refund in 3-5 days." │
├──────────────────────────────────────────────────────┤
│ Level 5: Human Escalation │
│ Live agent via WebSocket — seamless handoff │
│ Billing dispute, legal issues, edge cases │
└──────────────────────────────────────────────────────┘
User: "What's your return policy?"
AI: "You can return items within 30 days. Full refund in 5-7 business days."
[from knowledge base — no UI needed]
User: "How do I turn on price alerts?"
AI: "That's in Settings → Notifications. Want me to take you there?"
User: "Yes"
AI: [navigates to Settings → scrolls to Notifications section]
AI: "Here it is. The Price Alerts toggle is right here. ☘️"
[user taps the toggle themselves]
User: "I need to change my shipping address"
AI: "Sure! I'll navigate to your address settings."
[navigates to Profile → Address]
AI: "What's the new address?"
User: "15 Omar St, Nasr City, Cairo"
AI: [types the address into the fields]
AI: "I've filled in '15 Omar St, Nasr City, Cairo'. Tap Save to confirm."
[user taps Save themselves]
User: "Cancel my last order"
AI: "Order #4521 — Margherita pizza, $12.99. Want me to cancel it?"
User: "Yes"
AI: [navigates to Orders → finds #4521 → taps Cancel → taps Confirm]
AI: "Done! Order #4521 is cancelled. Refund arrives in 3-5 days."
User: "I was charged twice for my order"
AI: "I can see your order history, but billing disputes need a human agent."
[triggers escalate → live agent chat via WebSocket]
| | Intercom Fin | Zendesk AI | This SDK |
|---|---|---|---|
| Answer questions | ✅ | ✅ | ✅ Knowledge base |
| Navigate user to right screen | ❌ | ❌ | ✅ App-aware navigation |
| Fill forms for the user | ❌ | ❌ | ✅ Types directly into fields |
| Execute in-app actions | Via API connectors (must build) | Via API connectors | ✅ Via UI — zero backend work |
| Voice support | ❌ | ❌ | ✅ Gemini Live |
| Human escalation | ✅ | ✅ | ✅ WebSocket live chat |
| Mobile-native | ❌ WebView overlay | ❌ WebView | ✅ React Native component |
| Setup time | Days–weeks (build connectors) | Days–weeks | Minutes (<AIAgent> wrapper) |
| Price per resolution | $0.99 + subscription | $1.50–2.00 | You decide |
No competitor can do Levels 2–4. Intercom and Zendesk answer questions (Level 1) and escalate to humans (Level 5). The middle — app-aware navigation, form assistance, and full in-app resolution — is uniquely possible because this SDK reads the React Native Fiber tree. That can't be added with a plugin or API connector.
The AI answers questions, guides users to the right screen, fills forms on their behalf, or completes full task flows — with voice support and human escalation built in. All in the existing app UI. Zero backend integration.
`<AIAgent>`, done. No annotations, no selectors, no API connectors.

Full bidirectional voice AI powered by the Gemini Live API. Users speak their support request; the agent responds with voice AND navigates, fills forms, and resolves issues simultaneously.
💡 Speech-to-text in text mode: Install `expo-speech-recognition` for a mic button in the chat bar — letting users dictate instead of typing. Separate from voice mode.
Every useAction you register automatically becomes a Siri shortcut and Spotlight action. One config plugin added at build time — no Swift required — and users can say:
"Hey Siri, track my order in MyApp" "Hey Siri, checkout in MyApp" "Hey Siri, cancel my last order in MyApp"
// app.json
{
"expo": {
"plugins": [
["@mobileai/react-native/withAppIntents", {
"scanDirectory": "src",
"appScheme": "myapp"
}]
]
}
}
After npx expo prebuild, every registered useAction is available in Siri and Spotlight automatically.
Or generate manually:
# Scan useAction calls → intent-manifest.json
npx @mobileai/react-native generate-intents src
# Generate Swift AppIntents code
npx @mobileai/react-native generate-swift intent-manifest.json myapp
⚠️ iOS 16+ only. Android equivalent (Google Assistant App Actions) is on the roadmap.
Your app becomes MCP-compatible with one prop. Connect any AI — Antigravity, Claude Desktop, CI/CD pipelines — to remotely read and control the running app. Find bugs without writing a single test.
MCP-only mode — just want testing? No chat popup needed:
<AIAgent
showChatBar={false}
mcpServerUrl="ws://localhost:3101"
analyticsKey="mobileai_pub_xxx"
navRef={navRef}
>
<App />
</AIAgent>
The most powerful use case: test your app without writing test code. Connect your AI (Antigravity, Claude Desktop, or any MCP client) to the emulator and describe what to check — in English. No selectors to maintain, no flaky tests, self-healing by design.
Skip the test framework. Just ask:
Ad-hoc — ask your AI anything about the running app:
"Is the Laptop Stand price consistent between the home screen and the product detail page?"
YAML Test Plans — commit reusable checks to your repo:
# tests/smoke.yaml
checks:
- id: price-sync
check: "Read the Laptop Stand price on home, tap it, compare with detail page"
- id: profile-email
check: "Go to Profile tab. Is the email displayed under the user's name?"
Then tell your AI: "Read tests/smoke.yaml and run each check on the emulator"
Real Results — 5 bugs found autonomously:
| # | What was checked | Bug found | AI steps |
|---|---|---|---|
| 1 | Price consistency (list → detail) | Laptop Stand: $45.99 vs $49.99 | 2 |
| 2 | Profile completeness | Email missing — only name shown | 2 |
| 3 | Settings navigation | Help Center missing from Support section | 2 |
| 4 | Description vs specifications | "breathable mesh" vs "Leather Upper" | 3 |
| 5 | Cross-screen price sync | Yoga Mat: $39.99 vs $34.99 | 4 |
Install the public React Native SDK:
npm install @mobileai/react-native
Requires React Native >=0.83.0 <0.84.0 and works with Expo managed workflow through a development build or prebuild. The base package includes native modules for screenshot capture and the elevated overlay, so Expo Go is not supported after installing it.
npx expo prebuild
npx expo run:ios
npx expo run:android
For React Native CLI apps, run the normal native install/build step after installing the package, such as cd ios && pod install.
react-native-view-shot is a required native dependency for screenshot capture and is included with
@mobileai/react-native, so you do not need to add it separately. Rebuild the native app after install so it can be autolinked.
npx expo install expo-speech-recognition
Automatically detected. No extra config needed — a mic icon appears in the text chat bar, letting users speak their message instead of typing. This is separate from voice mode.
npm install react-native-audio-api
Expo Managed — add to app.json:
{
"expo": {
"android": { "permissions": ["RECORD_AUDIO", "MODIFY_AUDIO_SETTINGS"] },
"ios": { "infoPlist": { "NSMicrophoneUsageDescription": "Required for voice chat with AI assistant" } }
}
}
Then rebuild: npx expo prebuild && npx expo run:android (or run:ios)
Expo Bare / React Native CLI — add RECORD_AUDIO + MODIFY_AUDIO_SETTINGS to AndroidManifest.xml and NSMicrophoneUsageDescription to Info.plist, then rebuild.
Hardware echo cancellation (AEC) is automatically enabled — no extra setup.
npx expo install @react-native-async-storage/async-storage
Optional but recommended when using support escalation tickets or the one-time discovery tooltip — both persist state with AsyncStorage.

Without it, both features gracefully degrade: tickets are only visible during the current session, and the tooltip shows on every launch instead of once.
Add one line to your metro.config.js — the AI gets a map of every screen in your app, auto-generated on each dev start:
// metro.config.js
require('@mobileai/react-native/generate-map').autoGenerate(__dirname);
Or generate it manually anytime:
npx @mobileai/react-native generate-map
Without this, the AI can only see the currently mounted screen — it has no idea what other screens exist or how to reach them. Example: "Write a review for the Laptop Stand" — the AI sees the Home screen but doesn't know a `WriteReview` screen exists 3 levels deep. With a map, it sees every screen in your app and knows exactly how to get there: Home → Products → Detail → Reviews → WriteReview.
If you use a MobileAI publishable key, the SDK now defaults to the hosted MobileAI text and voice proxies automatically. You only need to pass proxyUrl and voiceProxyUrl when you want to override them with your own backend.
import { AIAgent } from '@mobileai/react-native';
import { NavigationContainer, useNavigationContainerRef } from '@react-navigation/native';
import screenMap from './ai-screen-map.json'; // auto-generated by step 1
export default function App() {
const navRef = useNavigationContainerRef();
return (
<AIAgent
// Your MobileAI Dashboard ID
// This now auto-configures the hosted MobileAI text + voice proxies too.
analyticsKey="mobileai_pub_xxxxxxxx"
navRef={navRef}
screenMap={screenMap} // optional but recommended
>
<NavigationContainer ref={navRef}>
{/* Your existing screens — zero changes needed */}
</NavigationContainer>
</AIAgent>
);
}
In your root layout (app/_layout.tsx):
import { AIAgent } from '@mobileai/react-native';
import { Slot, useNavigationContainerRef } from 'expo-router';
import screenMap from './ai-screen-map.json'; // auto-generated by step 1
export default function RootLayout() {
const navRef = useNavigationContainerRef();
return (
<AIAgent
// Hosted MobileAI proxies are inferred automatically from analyticsKey
analyticsKey="mobileai_pub_xxxxxxxx"
navRef={navRef}
screenMap={screenMap}
>
<Slot />
</AIAgent>
);
}
The examples above use Gemini (default). To use OpenAI for text mode, add the provider prop. Voice mode is not supported with OpenAI.
<AIAgent
provider="openai"
apiKey="YOUR_OPENAI_API_KEY"
// model="gpt-4.1-mini" ← default, or use any OpenAI model
navRef={navRef}
>
{/* Same app, different brain */}
</AIAgent>
A floating chat bar appears automatically. Ask the AI to navigate, tap buttons, fill forms, answer questions.
For the standard MobileAI Cloud setup, this is enough:
<AIAgent analyticsKey="mobileai_pub_xxxxxxxx" navRef={navRef} />
Only pass explicit proxy props (`proxyUrl`, `voiceProxyUrl`) when you want to route LLM traffic through your own backend instead of the hosted MobileAI proxies.
Set enableUIControl={false} for a lightweight FAQ / support assistant. Single LLM call, ~70% fewer tokens:
<AIAgent enableUIControl={false} knowledgeBase={KNOWLEDGE} />
| | Full Agent (default) | Knowledge-Only |
|---|---|---|
| UI analysis | ✅ Full structure read | ❌ Skipped |
| Tokens per request | ~500-2000 | ~200 |
| Agent loop | Up to 25 steps | Single call |
| Tools available | 7 | 2 (done, query_knowledge) |
The agent operates in copilot mode by default. It navigates, scrolls, types, and fills forms silently — then pauses once before the final irreversible action (place order, delete account, submit payment) to ask the user for confirmation.
// Default — copilot mode, zero extra config:
<AIAgent analyticsKey="mobileai_pub_xxx" navRef={navRef}>
<App />
</AIAgent>
What the AI does silently: navigating between screens, scrolling, typing into fields, and filling multi-step forms.

What the AI pauses on (asks the user first): irreversible actions — placing an order, deleting an account, submitting a payment.
<AIAgent interactionMode="autopilot" />
Use autopilot for power users, accessibility tools, or repeat-task automation where confirmations are unwanted.
In copilot mode, the prompt handles ~95% of cases automatically. For extra safety on your most sensitive buttons, add aiConfirm={true} — this adds a code-level block that cannot be bypassed even if the LLM ignores the prompt:
// These elements will ALWAYS require confirmation before the AI touches them
<Pressable aiConfirm onPress={deleteAccount}>
<Text>Delete Account</Text>
</Pressable>
<Pressable aiConfirm onPress={placeOrder}>
<Text>Place Order</Text>
</Pressable>
<TextInput aiConfirm placeholder="Credit card number" />
aiConfirm works on any interactive element: Pressable, TextInput, Slider, Picker, Switch, DatePicker.
💡 Dev tip: In `__DEV__` mode, the SDK logs a reminder to add `aiConfirm` to critical elements after each copilot task.
| Layer | Mechanism | Developer effort |
|---|---|---|
| Prompt (primary) | AI uses `ask_user` before irreversible commits | Zero |
| `aiConfirm` prop (optional safety net) | Code blocks specific elements | Add prop to 2–3 critical buttons |
| Dev warning (preventive) | Logs tip in `__DEV__` mode | Zero |
Transform the AI agent into a production-grade support system. The AI resolves issues directly inside your app UI — no backend API integrations required. When it can't help, it escalates to a live human agent.
import { buildSupportPrompt, createEscalateTool } from '@mobileai/react-native';
<AIAgent
analyticsKey="mobileai_pub_xxx" // required for MobileAI escalation
instructions={{
system: buildSupportPrompt({
enabled: true,
greeting: {
message: "Hi! 👋 How can I help you today?",
agentName: "Support",
},
quickReplies: [
{ label: "Track my order", icon: "📦" },
{ label: "Cancel order", icon: "❌" },
{ label: "Talk to a human", icon: "👤" },
],
escalation: { provider: 'mobileai' },
csat: { enabled: true },
}),
}}
customTools={{ escalate: createEscalateTool({ provider: 'mobileai' }) }}
userContext={{
userId: user.id,
name: user.name,
email: user.email,
plan: 'pro',
}}
>
<App />
</AIAgent>
| Provider | What happens |
|---|---|
| `'mobileai'` | Ticket → MobileAI Dashboard inbox + WebSocket live chat |
| `'custom'` | Calls your `onEscalate` callback — wire to Intercom, Zendesk, etc. |
// Custom provider — bring your own live chat:
createEscalateTool({
provider: 'custom',
onEscalate: (context) => {
Intercom.presentNewConversation();
// context includes: userId, message, screenName, chatHistory
},
})
Pass user identity to the escalation ticket for agent visibility in the dashboard:
<AIAgent
userContext={{
userId: 'usr_123',
name: 'Ahmed Hassan',
email: '[email protected]',
plan: 'pro',
custom: { region: 'cairo', language: 'ar' },
}}
pushToken={expoPushToken} // for offline support reply notifications
pushTokenType="expo" // 'fcm' | 'expo' | 'apns'
/>
By default, the AI navigates by reading what's on screen and tapping visible elements. Screen mapping gives the AI a complete map of every screen and how they connect — via static analysis of your source code (AST). No API key needed, runs in ~2 seconds.
Add to your metro.config.js — the screen map auto-generates every time Metro starts:
// metro.config.js
require('@mobileai/react-native/generate-map').autoGenerate(__dirname);
// ... rest of your Metro config
Then pass the generated map to <AIAgent>:
import screenMap from './ai-screen-map.json';
<AIAgent screenMap={screenMap} navRef={navRef}>
<App />
</AIAgent>
That's it. Works with both Expo Router and React Navigation — auto-detected.
| Without Screen Map | With Screen Map |
|---|---|
| AI sees only the current screen | AI knows every screen in your app |
| Must explore to find features | Plans the full navigation path upfront |
| Deep screens may be unreachable | Knows each screen's navigatesTo links |
| No knowledge of dynamic routes | Understands item/[id], category/[id] patterns |
To disable the screen map temporarily without removing the prop:

<AIAgent screenMap={screenMap} useScreenMap={false} />
Manual generation:
npx @mobileai/react-native generate-map
Watch mode — auto-regenerates on file changes:
npx @mobileai/react-native generate-map --watch
npm scripts — auto-run before start/build:
{
"scripts": {
"generate-map": "npx @mobileai/react-native generate-map",
"prestart": "npm run generate-map",
"prebuild": "npm run generate-map"
}
}
| Flag | Description |
|---|---|
| `--watch`, `-w` | Watch for file changes and auto-regenerate |
| `--dir=./path` | Custom project directory |
💡 The generated `ai-screen-map.json` is committed to your repo — no runtime cost.
Give the AI domain knowledge it can query on demand — policies, FAQs, product details. Uses a query_knowledge tool to fetch only relevant entries (no token waste).
import type { KnowledgeEntry } from '@mobileai/react-native';
const KNOWLEDGE: KnowledgeEntry[] = [
{
id: 'shipping',
title: 'Shipping Policy',
content: 'Free shipping on orders over $75. Standard: 5-7 days. Express: 2-3 days.',
tags: ['shipping', 'delivery'],
},
{
id: 'returns',
title: 'Return Policy',
content: '30-day returns on all items. Refunds in 5-7 business days.',
tags: ['return', 'refund'],
screens: ['product/[id]', 'order-history'], // only surface on these screens
},
];
<AIAgent knowledgeBase={KNOWLEDGE} />
<AIAgent
knowledgeBase={{
retrieve: async (query: string, screenName?: string) => {
const results = await fetch(`/api/knowledge?q=${query}&screen=${screenName}`);
return results.json();
},
}}
/>
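The `{ retrieve }` form also works with purely local logic — no backend needed. A minimal sketch of a local retriever: the keyword-overlap scoring here is an illustration of one way to rank entries, not SDK behavior:

```typescript
// Hypothetical local retriever over a KnowledgeEntry[] array:
// filter by screen, then rank by naive keyword overlap with the query.
interface KnowledgeEntry {
  id: string;
  title: string;
  content: string;
  tags: string[];
  screens?: string[];
}

function retrieveLocal(
  entries: KnowledgeEntry[],
  query: string,
  screenName?: string
): KnowledgeEntry[] {
  const terms = query.toLowerCase().split(/\s+/);
  return entries
    // Respect per-screen scoping when both sides declare it
    .filter((e) => !e.screens || !screenName || e.screens.includes(screenName))
    .map((e) => ({
      entry: e,
      score: terms.filter(
        (t) =>
          e.tags.includes(t) ||
          e.title.toLowerCase().includes(t) ||
          e.content.toLowerCase().includes(t)
      ).length,
    }))
    .filter((r) => r.score > 0)
    .sort((a, b) => b.score - a.score)
    .map((r) => r.entry);
}
```

Hook it up as `knowledgeBase={{ retrieve: async (q, s) => retrieveLocal(KNOWLEDGE, q, s) }}` — the `query_knowledge` tool then fetches only the entries that actually match.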
┌──────────────────┐ ┌──────────────────┐ WebSocket ┌──────────────────┐
│ Antigravity │ Streamable HTTP │ │ │ │
│ Claude Desktop │ ◄──────────────► │ @mobileai/ │ ◄─────────────► │ Your React │
│ or any MCP │ (port 3100) │ mcp-server │ (port 3101) │ Native App │
│ compatible AI │ + Legacy SSE │ │ │ │
└──────────────────┘ └──────────────────┘ └──────────────────┘
1. Start the MCP bridge — no install needed:
npx @mobileai/mcp-server
2. Connect your React Native app:
<AIAgent
apiKey="YOUR_API_KEY"
mcpServerUrl="ws://localhost:3101"
/>
3. Connect your AI:
Add to ~/.gemini/antigravity/mcp_config.json:
{
"mcpServers": {
"mobile-app": {
"command": "npx",
"args": ["@mobileai/mcp-server"]
}
}
}
Click Refresh in MCP Store. You'll see mobile-app with 2 tools: execute_task and get_app_status.
Add to ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"mobile-app": {
"url": "http://localhost:3100/mcp/sse"
}
}
}
The bridge exposes two endpoints:

- Streamable HTTP: `http://localhost:3100/mcp`
- Legacy SSE: `http://localhost:3100/mcp/sse`

| Tool | Description |
|---|---|
| `execute_task(command)` | Send a natural language command to the app |
| `get_app_status()` | Check if the React Native app is connected |
| Variable | Default | Description |
|---|---|---|
| `MCP_PORT` | `3100` | HTTP port for MCP clients |
| `WS_PORT` | `3101` | WebSocket port for the React Native app |
`<AIAgent>` Props

| Prop | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | — | API key for your provider (prototyping only — use `proxyUrl` in production). |
| `provider` | `'gemini' \| 'openai'` | `'gemini'` | LLM provider for text mode. |
| `proxyUrl` | `string` | Hosted MobileAI text proxy when `analyticsKey` is set | Backend proxy URL (production). Routes all LLM traffic through your server. |
| `proxyHeaders` | `Record<string, string>` | — | Auth headers for the proxy (e.g., `Authorization: Bearer ${token}`). |
| `voiceProxyUrl` | `string` | Hosted MobileAI voice proxy when `analyticsKey` is set; otherwise falls back to `proxyUrl` | Dedicated proxy for Voice Mode WebSockets. |
| `voiceProxyHeaders` | `Record<string, string>` | — | Auth headers for the voice proxy. |
| `model` | `string` | Provider default | Model name (e.g. `gemini-2.5-flash`, `gpt-4.1-mini`). |
| `navRef` | `NavigationContainerRef` | — | Navigation ref for auto-navigation. |
| `children` | `ReactNode` | — | Your app — zero changes needed inside. |
| Prop | Type | Default | Description |
|---|---|---|---|
| `interactionMode` | `'copilot' \| 'autopilot'` | `'copilot'` | Copilot (default): AI pauses before irreversible actions. Autopilot: full autonomy, no confirmations. |
| `showDiscoveryTooltip` | `boolean` | `true` | Show a one-time animated tooltip on the FAB explaining AI capabilities. Dismissed after 6s or first tap. |
| `maxSteps` | `number` | `25` | Max agent steps per task. |
| `maxTokenBudget` | `number` | — | Max total tokens before auto-stopping the agent loop. |
| `maxCostUSD` | `number` | — | Max estimated cost (USD) before auto-stopping. |
| `stepDelay` | `number` | — | Delay between agent steps in ms. |
| `enableUIControl` | `boolean` | `true` | When `false`, the AI becomes knowledge-only (faster, fewer tokens). |
| `enableVoice` | `boolean` | `false` | Show the voice mode tab. |
| `showChatBar` | `boolean` | `true` | Show the floating chat bar. |
| Prop | Type | Default | Description |
|---|---|---|---|
| `screenMap` | `ScreenMap` | — | Pre-generated screen map from the `generate-map` CLI. |
| `useScreenMap` | `boolean` | `true` | Set `false` to disable the screen map without removing the prop. |
| `router` | `{ push, replace, back }` | — | Expo Router instance (from `useRouter()`). |
| `pathname` | `string` | — | Current pathname (from `usePathname()` — Expo Router). |
| Prop | Type | Default | Description |
|---|---|---|---|
| `instructions` | `{ system?, getScreenInstructions? }` | — | Custom system prompt + per-screen instructions. |
| `customTools` | `Record<string, ToolDefinition \| null>` | — | Add custom tools or remove built-in ones (set to `null`). |
| `knowledgeBase` | `KnowledgeEntry[] \| { retrieve }` | — | Domain knowledge the AI can query via `query_knowledge`. |
| `knowledgeMaxTokens` | `number` | `2000` | Max tokens for knowledge results. |
| `transformScreenContent` | `(content: string) => string` | — | Transform/mask screen content before the LLM sees it. |
| `blocks` | `Array<BlockDefinition \| React.ComponentType<any>>` | — | Register built-in/custom rich blocks for chat and screen injection. |
| `richUITheme` | `Partial<RichUITheme>` | — | Global rich UI theme overrides. |
| `richUISurfaceThemes` | `{ chat?: Partial<RichUITheme>, zone?: Partial<RichUITheme>, support?: Partial<RichUITheme> }` | — | Optional per-surface theme overrides. |
| `blockActionHandlers` | `Record<string, (payload: Record<string, unknown>) => void>` | — | Register block action handlers for button/toggle/chip interactions. |
| Prop | Type | Default | Description |
|---|---|---|---|
| `interactiveBlacklist` | `React.RefObject<any>[]` | — | Refs of elements the AI must NOT interact with. |
| `interactiveWhitelist` | `React.RefObject<any>[]` | — | If set, the AI can ONLY interact with these elements. |
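A sketch of wiring `interactiveBlacklist`: create a ref near the root, attach it to the sensitive element, and pass it to `<AIAgent>`. The component layout and `analyticsKey` value here are placeholders, not your actual app structure:

```typescript
import React, { useRef } from 'react';
import { Pressable, Text } from 'react-native';
import { AIAgent } from '@mobileai/react-native';

export default function Root() {
  // The AI will never interact with any element referenced here,
  // regardless of what the prompt asks it to do.
  const deleteButtonRef = useRef(null);

  return (
    <AIAgent interactiveBlacklist={[deleteButtonRef]} analyticsKey="mobileai_pub_xxx">
      {/* ...rest of your app... */}
      <Pressable ref={deleteButtonRef} onPress={() => { /* destructive action */ }}>
        <Text>Delete Account</Text>
      </Pressable>
    </AIAgent>
  );
}
```

Compare with `aiConfirm`: the blacklist removes an element from the AI's reach entirely, while `aiConfirm` still allows the action after user confirmation.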
| Prop | Type | Default | Description |
|---|---|---|---|
| `userContext` | `{ userId?, name?, email?, plan?, custom? }` | — | Logged-in user identity — attached to escalation tickets. |
| `pushToken` | `string` | — | Push token for offline support reply notifications. |
| `pushTokenType` | `'fcm' \| 'expo' \| 'apns'` | — | Type of the push token. |
| Prop | Type | Default | Description |
|---|---|---|---|
| `proactiveHelp` | `ProactiveHelpConfig` | — | Detects user hesitation and shows a contextual help nudge. |
<AIAgent
proactiveHelp={{
enabled: true,
pulseAfterMinutes: 2, // subtle FAB pulse to catch attention
badgeAfterMinutes: 4, // badge: "Need help with this screen?"
badgeText: "Need help?",
dismissForSession: true, // once dismissed, won't show again this session
generateSuggestion: (screen) => {
if (screen === 'Checkout') return 'Having trouble with checkout?';
return undefined;
},
}}
/>
| Prop | Type | Default | Description |
|---|---|---|---|
| `analyticsKey` | `string` | — | Publishable key (`mobileai_pub_xxx`) — enables auto-analytics and, by default, the hosted MobileAI text/voice proxies. |
| `analyticsProxyUrl` | `string` | — | Enterprise: route events through your backend. |
| `analyticsProxyHeaders` | `Record<string, string>` | — | Auth headers for the analytics proxy. |
| Prop | Type | Default | Description |
|---|---|---|---|
| `mcpServerUrl` | `string` | — | WebSocket URL for the MCP bridge (e.g. `ws://localhost:3101`). |
| Prop | Type | Default | Description |
|---|---|---|---|
| `onResult` | `(result) => void` | — | Called when the agent finishes a task. |
| `onBeforeTask` | `() => void` | — | Called before task execution starts. |
| `onAfterTask` | `(result) => void` | — | Called after the task completes. |
| `onBeforeStep` | `(stepCount) => void` | — | Called before each agent step. |
| `onAfterStep` | `(history) => void` | — | Called after each step (with the full step history). |
| `onTokenUsage` | `(usage) => void` | — | Token usage data per step. |
| `onAskUser` | `(question) => Promise<string>` | — | Custom handler for `ask_user` — the agent blocks until resolved. |
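These callbacks compose well with the budget props. A sketch that accumulates usage across steps — note that the `promptTokens`/`completionTokens` field names are an assumption about the shape of the `usage` object, not a documented contract:

```typescript
// Hypothetical per-task usage tracker for the onTokenUsage callback.
// Field names (promptTokens/completionTokens) are assumed, not documented.
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
}

function createUsageTracker() {
  let total = 0;
  return {
    // Pass this as onTokenUsage={tracker.onTokenUsage}
    onTokenUsage(usage: TokenUsage) {
      total += usage.promptTokens + usage.completionTokens;
    },
    // Read the running total, e.g. inside onResult
    total: () => total,
  };
}
```

Wire it up as `onTokenUsage={tracker.onTokenUsage}` and read `tracker.total()` in `onResult` to log the real per-task spend alongside `maxTokenBudget`.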
| Prop | Type | Default | Description |
|---|---|---|---|
| `accentColor` | `string` | — | Quick accent color for the FAB, send button, and active states. |
| `theme` | `ChatBarTheme` | — | Full chat bar theme override. |
| `debug` | `boolean` | `false` | Enable SDK debug logging. |
// Quick — one color:
<AIAgent accentColor="#6C5CE7" />
// Full theme:
<AIAgent
accentColor="#6C5CE7"
theme={{
backgroundColor: 'rgba(44, 30, 104, 0.95)',
inputBackgroundColor: 'rgba(255, 255, 255, 0.12)',
textColor: '#ffffff',
successColor: 'rgba(40, 167, 69, 0.3)',
errorColor: 'rgba(220, 53, 69, 0.3)',
}}
/>
`useAction` — Custom AI-Callable Business Logic

Register isolated, headless logic for the AI to call (e.g., API requests, checkouts).

The handler is automatically kept fresh internally, so you never get stuck with a stale closure. The optional deps array re-registers the action so the AI sees an updated description.
import { useAction } from '@mobileai/react-native';
function CartScreen() {
const { cart, clearCart, getTotal } = useCart();
// Passing [cart.length] ensures the AI receives the live item count in its context!
useAction(
'checkout',
`Place the order and checkout (${cart.length} items for $${getTotal()})`,
{},
async () => {
if (cart.length === 0) return { success: false, message: 'Cart is empty' };
// Human-in-the-loop: AI pauses until user taps Confirm
return new Promise((resolve) => {
Alert.alert('Confirm Order', `Place order for $${getTotal()}?`, [
{ text: 'Cancel', onPress: () => resolve({ success: false, message: 'User denied.' }) },
{ text: 'Confirm', onPress: () => { clearCart(); resolve({ success: true, message: `Order placed!` }); } },
]);
});
},
[cart.length, getTotal]
);
}
`useAI` — Headless / Custom Chat UI

import { useAI } from '@mobileai/react-native';
function CustomChat() {
const { send, isLoading, status, messages } = useAI();
const summary = (msg) =>
msg.content
.map((node) => (node.type === 'text' ? node.content : `[${node.blockType}]`))
.join('\n');
return (
<View style={{ flex: 1 }}>
<FlatList data={messages} renderItem={({ item }) => <Text>{summary(item)}</Text>} />
{isLoading && <Text>{status}</Text>}
<TextInput onSubmitEditing={(e) => send(e.nativeEvent.text)} placeholder="Ask the AI..." />
</View>
);
}
Chat history persists across navigation. Override settings per-screen:
const { send } = useAI({
enableUIControl: false,
onResult: (result) => router.push('/(tabs)/chat'),
});
The SDK can answer in three surfaces: the chat transcript, on-screen zones (`AIZone`), and the support surface.

As the app developer, your job is to define the UI surfaces the AI is allowed to use. You do that by registering block components on `AIAgent`, optionally adding themed screen zones with `AIZone`, and rendering rich message content correctly if you build your own chat UI.
Use chat UI when the answer should remain part of the conversation and be easy to scan again later.
Typical use cases:
Built-in chat block interfaces:
- `ProductCard` for a concrete entity such as a dish, product, offer, listing, or plan
- `ComparisonCard` for side-by-side choices and tradeoffs
- `FactCard` for support, FAQ, status, and policy information
- `ActionCard` for guided next steps and recommended actions
- `FormCard` for lightweight choices, confirmations, and inline inputs

Use screen UI only when placement matters more than transcript history.
Typical use cases:
For most apps, chat UI should be the default rich surface. Add screen UI only where in-place help is clearly more useful than a chat response.
Registering Blocks on `AIAgent`

Register the blocks you want the AI to use across chat and screen surfaces:
import {
AIAgent,
ProductCard,
ComparisonCard,
FactCard,
ActionCard,
FormCard,
} from '@mobileai/react-native';
import {
NavigationContainer,
useNavigationContainerRef,
} from '@react-navigation/native';
export default function App() {
const navRef = useNavigationContainerRef();
return (
<AIAgent
analyticsKey="mobileai_pub_xxxxxxxx"
navRef={navRef}
blocks={[ProductCard, ComparisonCard, FactCard, ActionCard, FormCard]}
richUITheme={{
colors: {
blockSurface: '#141327',
primaryText: '#161616',
inverseText: '#ffffff',
accent: '#ff6a6a',
},
}}
blockActionHandlers={{
choose_option: (payload) => {
console.log('User chose option', payload);
},
submit_preferences: (payload) => {
console.log('Submitted preferences', payload);
},
}}
>
<NavigationContainer ref={navRef}>
<AppNavigator />
</NavigationContainer>
</AIAgent>
);
}
What this enables:

- The AI can render the registered blocks in chat and inside eligible `AIZone`s
- Block button/toggle/chip interactions are routed to your `blockActionHandlers`

Use `AIZone` Only Where On-Screen Placement Makes Sense

If you want the AI to place UI inside the current screen, wrap the relevant area in an `AIZone` and whitelist the allowed blocks:
import { AIZone, FactCard, ProductCard } from '@mobileai/react-native';
function DishDetailScreen() {
return (
<AIZone
id="dish-detail-summary"
allowInjectBlock
interventionEligible
proactiveIntervention={false}
blocks={[FactCard, ProductCard]}
>
<DishDetailContent />
</AIZone>
);
}
Use AIZone for narrow, contextual placement. It is not required for chat cards.
If you build your own chat surface with useAI(), render message content with RichContentRenderer so text and blocks appear correctly:
import { FlatList } from 'react-native';
import { RichContentRenderer, useAI } from '@mobileai/react-native';
function CustomChatScreen() {
const { messages } = useAI();
return (
<FlatList
data={messages}
keyExtractor={(item, index) => `${item.timestamp}-${index}`}
renderItem={({ item }) => (
<RichContentRenderer
content={item.content}
surface="chat"
isUser={item.role === 'user'}
/>
)}
/>
);
}
This is the only extra setup needed for custom chat rendering. The built-in MobileAI chat bar already renders rich content automatically.
You can define your own block interfaces when the built-ins are not specific enough for your app.
Use custom blocks for domain-specific UI such as:
Custom blocks are plain React components registered as BlockDefinition objects:
import type { BlockDefinition } from '@mobileai/react-native';
import { Text, View } from 'react-native';
function RewardCard(props: {
title: string;
points: number;
description?: string;
}) {
return (
<View>
<Text>{props.title}</Text>
<Text>{props.points} points</Text>
{props.description ? <Text>{props.description}</Text> : null}
</View>
);
}
const RewardCardBlock: BlockDefinition = {
name: 'RewardCard',
component: RewardCard,
allowedPlacements: ['chat', 'zone'],
interventionEligible: true,
interventionType: 'decision_support',
propSchema: {
title: { type: 'string', required: true },
points: { type: 'number', required: true },
description: { type: 'string' },
},
};
<AIAgent blocks={[ProductCard, FactCard, RewardCardBlock]}>
<App />
</AIAgent>
The AI only sees the block name and prop schema, not your component source. Keep block props explicit and serializable.
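To make that concrete, here is a hypothetical sketch of the block metadata the model might receive for the RewardCard registration above. This is an illustration only; the SDK's actual serialization format is internal.

```typescript
// Hypothetical sketch: roughly what the model sees for a registered block --
// the name and prop schema only, never the component source.
// (Illustrative shape, not the SDK's real wire format.)
const rewardCardAsSeenByAI = {
  name: 'RewardCard',
  propSchema: {
    title: { type: 'string', required: true },
    points: { type: 'number', required: true },
    description: { type: 'string' },
  },
};

// Everything here must survive JSON serialization; functions, class
// instances, and React elements would be invisible from the model's side.
console.log(JSON.stringify(rewardCardAsSeenByAI));
```

This is why the guidance above says to keep block props explicit and serializable: anything that can't be described in the schema can't be requested by the AI.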
Just add analyticsKey — every button tap, screen navigation, and session is tracked automatically. It also enables the default hosted MobileAI AI proxies unless you override them. Zero code changes to your app components.
<AIAgent
analyticsKey="mobileai_pub_abc123" // ← enables full auto-capture
navRef={navRef}
>
<App />
</AIAgent>
What's captured automatically:
| Event | Data | How |
|---|---|---|
| user_interaction | Button label, screen, coordinates, actor: 'user' | Root touch interceptor |
| screen_view | Screen name, previous screen | Navigation ref listener |
| session_start | Device, OS, SDK version | On mount |
| session_end | Duration, event count | On background |
| agent_request | User query | On AI task start |
| agent_step | Tool name, args, result | On each AI action |
| agent_complete | Success, steps, cost | On AI task end |
When the AI agent taps a button on behalf of the user, those taps are not counted as user_interaction events — they're already captured as agent_step events with full context.
This means your funnels and retention charts always show real human behaviour, while the AI's actions are separately attributed for ROI analysis. No other analytics SDK can offer this because they don't own the app root.
| Event | Who | Dashboard use |
|---|---|---|
| user_interaction { actor: 'user' } | Human only | Funnels, retention, journeys |
| agent_step { tool: 'tap' } | AI only | Agent ROI, resolution rate |
Custom business events — track what matters to you:
import { MobileAI } from '@mobileai/react-native';
MobileAI.track('purchase_complete', { order_id: 'ord_1', total: 29.99 });
MobileAI.identify('user_123', { plan: 'pro' });
Enterprise: use analyticsProxyUrl to route events through your own backend — zero keys in the app bundle.
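A minimal sketch of that enterprise setup, assuming analyticsProxyUrl accepts any HTTPS endpoint you control (the endpoint URL below is hypothetical; your backend attaches the real key server-side):

```tsx
// Sketch: events flow to your own backend instead of MobileAI directly,
// so no analytics credentials ship in the app bundle.
// "https://api.myapp.com/mobileai/events" is a placeholder for your endpoint.
<AIAgent
  analyticsProxyUrl="https://api.myapp.com/mobileai/events"
  navRef={navRef}
>
  <App />
</AIAgent>
```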
Use this only when you want to override the default hosted MobileAI proxy behavior or route traffic through your own backend.
<AIAgent
proxyUrl="https://myapp.vercel.app/api/gemini"
proxyHeaders={{ Authorization: `Bearer ${userToken}` }}
voiceProxyUrl="https://voice-server.render.com" // only if text proxy is serverless
navRef={navRef}
>
voiceProxyUrl falls back to proxyUrl if not set. Only needed when your text API is on a serverless platform that can't hold WebSocket connections.
import { NextResponse } from 'next/server';
export async function POST(req: Request) {
const body = await req.json();
const response = await fetch('https://generativelanguage.googleapis.com/...', {
method: 'POST',
headers: { 'Content-Type': 'application/json', 'x-goog-api-key': process.env.GEMINI_API_KEY! },
body: JSON.stringify(body),
});
return NextResponse.json(await response.json());
}
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');
const app = express();
const geminiProxy = createProxyMiddleware({
target: 'https://generativelanguage.googleapis.com',
changeOrigin: true,
ws: true,
pathRewrite: (path) => `${path}${path.includes('?') ? '&' : '?'}key=${process.env.GEMINI_API_KEY}`,
});
app.use('/v1beta/models', geminiProxy);
const server = app.listen(3000);
server.on('upgrade', geminiProxy.upgrade);
// AI will never see or interact with this element:
<Pressable aiIgnore={true}><Text>Admin Panel</Text></Pressable>
// In copilot mode, AI must confirm before touching this element:
<Pressable aiConfirm={true} onPress={deleteAccount}>
<Text>Delete Account</Text>
</Pressable>
<AIAgent transformScreenContent={(c) => c.replace(/\b\d{13,16}\b/g, '****-****-****-****')} />
<AIAgent instructions={{
system: 'You are a food delivery assistant.',
getScreenInstructions: (screen) => screen === 'Cart' ? 'Confirm total before checkout.' : undefined,
}} />
| Hook | When |
|---|---|
| onBeforeTask | Before task execution starts |
| onBeforeStep | Before each agent step |
| onAfterStep | After each step (with full history) |
| onAfterTask | After task completes (success or failure) |
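The hook names come from the table above; the callback arguments in this sketch are illustrative assumptions, not the documented signatures.

```tsx
// Sketch only: lifecycle hook props from the table above,
// with assumed callback arguments (verify against the typings).
<AIAgent
  navRef={navRef}
  onBeforeTask={(task) => console.log('Task starting:', task)}
  onBeforeStep={(step) => console.log('About to run tool:', step)}
  onAfterStep={(step, history) => console.log('Steps so far:', history?.length)}
  onAfterTask={(result) => console.log('Task finished:', result)}
>
  <App />
</AIAgent>
```

These hooks are useful for logging, gating risky steps, or wiring agent activity into your own observability stack.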
AIZone marks specific sections of your UI so the AI can operate within them with special capabilities: simplify cluttered areas, render rich blocks, or highlight elements.
import { AIZone, FactCard, ProductCard } from '@mobileai/react-native';
// Allow AI to simplify this zone if it's too cluttered
<AIZone id="product-details" allowSimplify>
<View>
<Text aiPriority="high">Price: $29.99</Text>
<Text aiPriority="low">SKU: ABC-123</Text>
<Text aiPriority="low">Weight: 500g</Text>
</View>
</AIZone>
// Allow AI to inject contextual cards from a safe template whitelist
<AIZone
id="checkout-summary"
allowInjectBlock
allowHighlight
blocks={[FactCard, ProductCard]}
proactiveIntervention={false}
>
<CheckoutSummary />
</AIZone>
// Deprecated migration path (still supported):
<AIZone id="legacy" allowInjectCard templates={[InfoCard, ReviewSummary]}>
<LegacyPanel />
</AIZone>
The aiPriority Attribute

Tag any element with aiPriority to control AI visibility:

| Value | Effect |
|---|---|
| "high" | Always rendered — surfaced first in AI context |
| "low" | Hidden when AI calls simplify_zone() on the enclosing AIZone |
| Prop | Type | Description |
|---|---|---|
| id | string | Unique zone identifier the AI uses to target operations |
| allowSimplify | boolean | AI can call simplify_zone(id) to hide aiPriority="low" elements |
| allowHighlight | boolean | AI can visually highlight elements inside this zone |
| allowInjectHint | boolean | AI can inject a contextual text hint into this zone |
| allowInjectBlock | boolean | AI can render registered rich blocks into this zone |
| allowInjectCard | boolean | Deprecated alias for allowInjectBlock |
| blocks | BlockDefinition[] \| React.ComponentType<any>[] | Whitelist of blocks the zone may render; required when block rendering is enabled |
| templates | React.ComponentType<any>[] | Deprecated alias for blocks |
| interventionEligible | boolean | Enables strict screen-intervention mode checks for this zone |
| proactiveIntervention | boolean | Enables optional proactive render_block calls in this zone |
When using block rendering, always pass a whitelist via blocks (or templates for legacy apps).
The AI can only instantiate registered blocks and props; it never generates raw JSX.
Built-in block names: FactCard, ProductCard, ActionCard, ComparisonCard, FormCard.
Compatibility wrappers remain for migration:
- InfoCard (maps to FactCard)
- ReviewSummary (maps to ProductCard)

| Tool | What it does |
|---|---|
| tap(index) | Tap any interactive element — buttons, switches, checkboxes, custom components |
| long_press(index) | Long-press an element to trigger context menus |
| type(index, text) | Type into a text input |
| scroll(direction, amount?) | Scroll content — auto-detects edge, rejects PagerView |
| slider(index, value) | Drag a slider to a specific value |
| picker(index, value) | Select a value from a dropdown/picker |
| date_picker(index, date) | Set a date on a date picker |
| navigate(screen) | Navigate to any screen |
| wait(seconds) | Wait for loading states before acting |
| capture_screenshot(reason) | Capture the SDK root component as an image (requires react-native-view-shot) |
| render_block(zoneId, blockType, props) | Render a registered block into an AIZone as a contextual intervention |
| inject_card(zoneId, blockType, props) | Deprecated alias of render_block for migration |
| done(reply, previewText?, success?) | Return mixed chat content (text and block nodes) with a preview string |
| done(text, success) | Deprecated compatibility form for text-only responses |
| ask_user(question) | Ask the user for clarification |
| query_knowledge(question) | Search the knowledge base |
To smoke-test zone injection locally:

1. Build the SDK: cd /Users/mohamedsalah/mobileai-suite-copy/react-native-ai-agent && npm run build
2. Start the demo app: cd /Users/mohamedsalah/mobileai-suite-copy/feedyum-fullstack/feedyum && npx expo start -c
3. Ask the agent: "Can you quickly show me a summary of this dish in context."
4. Expect a ProductCard/FactCard rendered in the dish-detail-summary zone, driven by a render_block call (or its inject_card alias) and zone injection.

If you only get text, check these fast:

- The AIZone on this screen has allowInjectBlock, blocks, and interventionEligible={true}

Gemini is the default provider and powers all modes (text + voice). OpenAI is available as a text-mode alternative via provider="openai". Voice mode uses gemini-2.5-flash-native-audio-preview (Gemini only).
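Switching the text provider is a one-prop change. This sketch assumes your proxy URL points at an OpenAI-compatible endpoint; the URL is a placeholder.

```tsx
// Text mode via OpenAI instead of the default Gemini (sketch).
// Voice mode remains Gemini-only, so a voiceProxyUrl would still
// need to target Gemini if you use voice.
<AIAgent
  provider="openai"
  proxyUrl="https://myapp.vercel.app/api/openai" // hypothetical endpoint
  navRef={navRef}
>
  <App />
</AIAgent>
```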
MIT © Mohamed Salah
👋 Let's connect — LinkedIn
Add this to claude_desktop_config.json and restart Claude Desktop.
{
"mcpServers": {
"react-native-agentic-ai": {
"command": "npx",
"args": [
"-y",
"react-native-agentic-ai"
]
}
}
}