Add an in-app AI support agent to React Native apps that understands UI, navigates screens, fills forms, and escalates to humans.
A React Native SDK for adding an in-app AI assistant that can read your app UI, answer questions, guide users, perform approved actions, and hand off to human support.

MobileAI adds a screen-aware assistant to your React Native app. It can:

- Read the current screen and answer questions about what the user sees
- Navigate screens and fill forms with user approval
- Query app data and run safe app-owned operations you expose through `useData` and `useAction`
- Escalate to human support when configured

The default user-facing mode is copilot mode: the assistant can help with routine app steps after approval, while the runtime enforces confirmation, semantic action safety, masking, and blocked controls.
npm install @mobileai/react-native
Optional peers depend on the features you enable:
npm install @react-native-async-storage/async-storage react-native-screens  # persistence, navigation-heavy apps
npm install react-native-audio-api expo-speech-recognition                   # voice mode
For Expo apps, use a development build or prebuild workflow. Native modules used by the SDK are not available in Expo Go.
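A typical flow, using standard Expo CLI commands:

```sh
npx expo prebuild
npx expo run:ios   # or: npx expo run:android
```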
A screen map gives the assistant route names, screen descriptions, and navigation chains.
npx react-native-ai-agent generate-map
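The generated `ai-screen-map.json` describes each route and how screens connect. A rough sketch of the shape (hypothetical; the actual schema produced by the CLI may differ):

```json
{
  "screens": [
    {
      "route": "Orders",
      "description": "List of the customer's orders with delivery status",
      "reachableFrom": ["Home", "Profile"]
    }
  ]
}
```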
You can also auto-generate it during Metro startup:
// metro.config.js
const { getDefaultConfig } = require('expo/metro-config');
require('@mobileai/react-native/generate-map').autoGenerate(__dirname);
module.exports = getDefaultConfig(__dirname);
With React Navigation, wrap your app in `AIAgent` and pass it the container ref:

import { NavigationContainer, useNavigationContainerRef } from '@react-navigation/native';
import { AIAgent } from '@mobileai/react-native';
import screenMap from './ai-screen-map.json';
export default function App() {
const navRef = useNavigationContainerRef();
return (
<AIAgent
analyticsKey="mobileai_pub_xxxxxxxx"
navRef={navRef}
screenMap={screenMap}
>
<NavigationContainer ref={navRef}>{/* your screens */}</NavigationContainer>
</AIAgent>
);
}
With Expo Router, the setup is the same, except you render a `Slot`:

import { Slot, useNavigationContainerRef } from 'expo-router';
import { AIAgent } from '@mobileai/react-native';
import screenMap from '../ai-screen-map.json';
export default function RootLayout() {
const navRef = useNavigationContainerRef();
return (
<AIAgent
analyticsKey="mobileai_pub_xxxxxxxx"
navRef={navRef}
screenMap={screenMap}
>
<Slot />
</AIAgent>
);
}
Use analyticsKey when you want MobileAI Cloud to handle the hosted AI proxy, knowledge base, analytics, and support escalation.
For local prototyping only, you can pass a provider API key directly:
<AIAgent provider="gemini" apiKey="YOUR_DEV_ONLY_KEY" navRef={navRef}>
{children}
</AIAgent>
For production without MobileAI Cloud, route requests through your backend:
<AIAgent
provider="openai"
proxyUrl="https://api.example.com/mobileai/chat"
proxyHeaders={{ Authorization: `Bearer ${sessionToken}` }}
navRef={navRef}
>
{children}
</AIAgent>
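A minimal proxy sketch in TypeScript (Express), assuming the SDK POSTs a JSON chat payload that can be forwarded verbatim to the provider; the exact request shape and endpoint path are assumptions:

```ts
import express from 'express';

const app = express();
app.use(express.json());

app.post('/mobileai/chat', async (req, res) => {
  // Validate the session token sent via proxyHeaders before forwarding.
  // Assumption: the SDK body is compatible with the provider's chat API.
  const upstream = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(req.body),
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);
```

This keeps the provider key on your server and lets you attach per-user auth and rate limiting.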
| Mode | What it does |
|---|---|
| `companion` | Screen-aware guidance only. The assistant can explain what it sees and guide the user, but cannot control the UI. |
| `copilot` | Default. Performs approved app actions while the runtime enforces guardrails. |
| `autopilot` | Runs trusted automation flows with minimal interruption. Use only for low-risk workflows. |
Companion mode is useful when trust matters more than automation. The assistant can look at the current screen, answer questions, explain confusing states, suggest the safest next step, query knowledge or app data, and escalate to a human if configured. It cannot tap, type, scroll, navigate, submit forms, highlight elements, inject UI, or otherwise operate the app on the user's behalf.
<AIAgent interactionMode="companion" analyticsKey="mobileai_pub_xxxxxxxx">
{children}
</AIAgent>
The runtime can inspect the current React tree at execution time. The generated ai-screen-map.json adds app-wide navigation knowledge so the assistant can understand screens the user is not currently viewing.
<AIAgent screenMap={screenMap} useScreenMap>
{children}
</AIAgent>
Use `useData` for information the assistant should fetch directly instead of guessing from the UI.
import { useData } from '@mobileai/react-native';
function OrdersScreen() {
useData(
'orders',
'Read the signed-in customer orders and delivery status',
{
orderId: 'Order identifier',
status: 'Current delivery status',
eta: 'Estimated arrival time',
},
async ({ query }) => searchOrders(query)
);
return null;
}
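The `searchOrders` helper above is app code, not part of the SDK. A minimal sketch, assuming your backend exposes an order-search endpoint (URL and response shape are illustrative):

```tsx
// Hypothetical app-side helper; point it at your own backend.
async function searchOrders(query: string) {
  const res = await fetch(`https://api.example.com/orders?q=${encodeURIComponent(query)}`);
  if (!res.ok) throw new Error(`Order search failed: ${res.status}`);
  // Return only the fields declared to useData: orderId, status, eta.
  return (await res.json()) as { orderId: string; status: string; eta: string }[];
}
```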
Use `useAction` for safe app-owned operations that are better executed through code than by tapping through the UI.
import { useAction } from '@mobileai/react-native';
function CartScreen() {
useAction(
'apply_coupon',
'Apply a coupon code to the current cart',
{
code: { type: 'string', description: 'Coupon code', required: true },
},
async ({ code }) => applyCoupon(String(code))
);
return null;
}
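As with `searchOrders`, `applyCoupon` is your own app code. A hypothetical sketch:

```tsx
// Hypothetical cart API call; endpoint and payload are illustrative.
async function applyCoupon(code: string) {
  const res = await fetch('https://api.example.com/cart/coupon', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ code }),
  });
  if (!res.ok) throw new Error(`Coupon failed: ${res.status}`);
  return res.json(); // e.g. the updated cart totals
}
```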
Pass static entries, a retriever, or configure project knowledge in MobileAI Cloud.
<AIAgent
knowledgeBase={[
{
id: 'returns',
title: 'Return policy',
content: 'Customers can request returns within 30 days.',
},
]}
>
{children}
</AIAgent>
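For dynamic knowledge, you can fetch entries at query time instead of passing static ones. A sketch, assuming the SDK accepts an async retriever (the prop name `knowledgeRetriever` is hypothetical; check the API reference for the real one):

```tsx
<AIAgent
  // Hypothetical prop; the SDK may expose retrievers under a different name.
  knowledgeRetriever={async (query: string) => {
    const res = await fetch(`https://api.example.com/kb/search?q=${encodeURIComponent(query)}`);
    return res.json(); // expected shape: [{ id, title, content }]
  }}
>
  {children}
</AIAgent>
```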
Guardrails are enforced by the runtime, not by the assistant alone:

- Control-level policies decide whether an action is allowed outright, requires confirmation, or is blocked (`allow`, `ask`, or `block`).
- `aiConfirm` forces confirmation for specific controls.
- `aiIgnore` removes sensitive controls from the assistant's visible target list.
- `transformScreenContent` can mask sensitive text before provider calls.
- Every runtime decision can be observed through `actionSafety.onDecision`.

The SDK ships with default semantic guardrails in copilot mode. You can tune or replace them:
<AIAgent
interactionMode="copilot"
actionSafety={{
classifier: 'default',
unknownActionDecision: 'ask',
approvalReuse: 'risk-boundary',
onDecision: (decision) => {
console.log(decision.capability, decision.risk, decision.decision);
},
}}
>
{children}
</AIAgent>
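For masking, a sketch assuming `transformScreenContent` receives the extracted screen text and returns the redacted version (the callback signature is an assumption):

```tsx
<AIAgent
  // Assumption: the callback maps extracted screen text to its masked form.
  transformScreenContent={(text: string) =>
    text.replace(/\b\d{13,16}\b/g, '[card number hidden]')
  }
>
  {children}
</AIAgent>
```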
Read the full safety model in docs/guardrails.md.
To give the assistant a support persona and escalation rules, build the system prompt with `buildSupportPrompt`:

import { AIAgent, buildSupportPrompt } from '@mobileai/react-native';
<AIAgent
analyticsKey="mobileai_pub_xxxxxxxx"
instructions={{
system: buildSupportPrompt({
enabled: true,
persona: { agentName: 'Nora', preset: 'warm-concise' },
autoEscalateTopics: ['account deletion', 'legal request'],
}),
}}
userContext={{ userId: user.id, email: user.email }}
navRef={navRef}
>
{children}
</AIAgent>;
Use companion mode when you want help without UI control. The assistant stays beside the user: it can read the screen, reason about what is visible, use knowledge and app data, and tell the user what to do next in plain language.
For example, if the user says "my latest order is late", companion mode should not just say "go to Orders." It can explain what matters: check the latest order, look for ETA or driver status, and use the order-specific Help option if the ETA has passed or tracking is stale.
<AIAgent
interactionMode="companion"
analyticsKey="mobileai_pub_xxxxxxxx"
navRef={navRef}
>
{children}
</AIAgent>
With analyticsKey, the SDK can create MobileAI support tickets and stream human replies back into the assistant UI.
<AIAgent
analyticsKey="mobileai_pub_xxxxxxxx"
userContext={{
userId: user.id,
name: user.name,
email: user.email,
plan: user.plan,
}}
pushToken={expoPushToken}
pushTokenType="expo"
>
{children}
</AIAgent>
Hide the built-in chat bar and drive the assistant from your own UI.
import { AIAgent, useAI } from '@mobileai/react-native';
function CustomAssistantInput() {
const { send, isLoading, status, messages, cancel } = useAI();
return null;
}
<AIAgent showChatBar={false} analyticsKey="mobileai_pub_xxxxxxxx">
<CustomAssistantInput />
{children}
</AIAgent>;
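A fuller sketch of a custom input built on `useAI`, assuming `send` accepts the message text as a string (the UI details are illustrative):

```tsx
import { useState } from 'react';
import { Button, TextInput, View } from 'react-native';
import { useAI } from '@mobileai/react-native';

function CustomAssistantInput() {
  const { send, isLoading, cancel } = useAI();
  const [text, setText] = useState('');

  return (
    <View style={{ flexDirection: 'row', padding: 8 }}>
      <TextInput
        style={{ flex: 1 }}
        value={text}
        onChangeText={setText}
        placeholder="Ask the assistant…"
      />
      {isLoading ? (
        <Button title="Stop" onPress={cancel} />
      ) : (
        <Button title="Send" onPress={() => { send(text); setText(''); }} />
      )}
    </View>
  );
}
```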
For deeper UI customization, the SDK also exports `RichContentRenderer`, `AIZone`, themes, and block handlers.

Compatibility:

- React Native: `>=0.83.0 <0.84.0`
- `react-native-screens`: recommended for navigation-heavy apps
- `react-native-audio-api` and `expo-speech-recognition`: required for voice mode
- `@react-native-async-storage/async-storage`: required for persistence

Troubleshooting:

- **The assistant cannot navigate.** Pass `navRef`, generate a screen map, and make sure route names in the map match your navigation setup.
- **The assistant does not see a control.** Add an `accessibilityLabel`, make sure the element is mounted, and avoid wrapping important controls in components that hide host props.
- **A sensitive control is visible to the assistant.** Add `aiIgnore` or mask it with `transformScreenContent`.
- **The runtime asks more than expected.** Check the `actionSafety.onDecision` logs. Low confidence, unknown capability, changed scope, and high-impact risk boundaries intentionally ask instead of silently acting.
- **Voice mode does not start.** Confirm native permissions and install the optional voice dependencies in a native build.
See LICENSE.