@mobileai/react-native 0.9.12 → 0.9.13

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (38)
  1. package/README.md +480 -61
  2. package/lib/module/components/AIAgent.js +61 -3
  3. package/lib/module/components/AIAgent.js.map +1 -1
  4. package/lib/module/components/AgentChatBar.js +12 -2
  5. package/lib/module/components/AgentChatBar.js.map +1 -1
  6. package/lib/module/components/DiscoveryTooltip.js +128 -0
  7. package/lib/module/components/DiscoveryTooltip.js.map +1 -0
  8. package/lib/module/core/AgentRuntime.js +77 -1
  9. package/lib/module/core/AgentRuntime.js.map +1 -1
  10. package/lib/module/core/FiberTreeWalker.js +5 -1
  11. package/lib/module/core/FiberTreeWalker.js.map +1 -1
  12. package/lib/module/core/systemPrompt.js +41 -3
  13. package/lib/module/core/systemPrompt.js.map +1 -1
  14. package/lib/module/index.js.map +1 -1
  15. package/lib/typescript/src/components/AIAgent.d.ts +18 -2
  16. package/lib/typescript/src/components/AIAgent.d.ts.map +1 -1
  17. package/lib/typescript/src/components/AgentChatBar.d.ts +5 -1
  18. package/lib/typescript/src/components/AgentChatBar.d.ts.map +1 -1
  19. package/lib/typescript/src/components/DiscoveryTooltip.d.ts +15 -0
  20. package/lib/typescript/src/components/DiscoveryTooltip.d.ts.map +1 -0
  21. package/lib/typescript/src/core/AgentRuntime.d.ts +7 -0
  22. package/lib/typescript/src/core/AgentRuntime.d.ts.map +1 -1
  23. package/lib/typescript/src/core/FiberTreeWalker.d.ts.map +1 -1
  24. package/lib/typescript/src/core/systemPrompt.d.ts +1 -1
  25. package/lib/typescript/src/core/systemPrompt.d.ts.map +1 -1
  26. package/lib/typescript/src/core/types.d.ts +19 -0
  27. package/lib/typescript/src/core/types.d.ts.map +1 -1
  28. package/lib/typescript/src/index.d.ts +1 -1
  29. package/lib/typescript/src/index.d.ts.map +1 -1
  30. package/package.json +1 -1
  31. package/src/components/AIAgent.tsx +75 -1
  32. package/src/components/AgentChatBar.tsx +19 -1
  33. package/src/components/DiscoveryTooltip.tsx +148 -0
  34. package/src/core/AgentRuntime.ts +87 -1
  35. package/src/core/FiberTreeWalker.ts +5 -0
  36. package/src/core/systemPrompt.ts +41 -3
  37. package/src/core/types.ts +21 -0
  38. package/src/index.ts +1 -0
package/README.md CHANGED
@@ -1,6 +1,6 @@
1
- # Agentic AI for React Native
1
+ # AI Support That Resolves — Not Deflects
2
2
 
3
- > **Add an autonomous AI agent to any React Native app — no rewrite needed.** Wrap your app with `<AIAgent>` and get: natural language UI control, real-time voice conversations, and a built-in knowledge base. Fully customizable, production-grade security, performant, and lightweight. Plus: an MCP bridge that lets any AI connect to and test your app.
3
+ > **Drop in one React Native component and your app gets AI support that answers questions, navigates users to the right screen, fills forms, and resolves issues end-to-end with live human backup when needed. No backend integration required.**
4
4
 
5
5
  **Two names, one package — pick whichever you prefer:**
6
6
 
@@ -10,10 +10,10 @@ npm install @mobileai/react-native
10
10
  npm install react-native-agentic-ai
11
11
  ```
12
12
 
13
- ### 🤖 AI Agent — Autonomous UI Control
13
+ ### 🤖 AI Support Agent — Answers, Acts, and Resolves Inside Your App
14
14
 
15
15
  <p align="center">
16
- <img src="./assets/demo.gif" alt="AI Agent autonomously controlling a React Native app UI via natural language" width="350" />
16
+ <img src="./assets/demo.gif" alt="AI Support Agent navigating the app and resolving user issues end-to-end" width="350" />
17
17
  </p>
18
18
 
19
19
  ### 🧪 AI-Powered Testing — Test Your App in English, Not Code
@@ -40,63 +40,184 @@ npm install react-native-agentic-ai
40
40
 
41
41
  ---
42
42
 
43
- ## 🧠 How It Works — Structure-First Agentic AI
43
+ ## 💡 The Problem With Every Support Tool Today
44
44
 
45
- What if your AI could understand your app the way a real user does — not by looking at pixels, but by reading the actual UI structure?
45
+ Intercom, Zendesk, and every chat widget all do the same thing: send the user instructions in a chat bubble.
46
46
 
47
- That's what this SDK does. It reads your app's live UI natively — every button, label, input, and screen — in real time. The AI understands your app's structure, not a screenshot of it.
47
+ *"To cancel your order, go to Orders, tap the order, then tap Cancel."*
48
48
 
49
- **No OCR. No image pipelines. No selectors. No annotations. No view wrappers.**
49
+ That's not support. That's documentation delivery with a chat UI.
50
50
 
51
- The result: an AI that truly understands your app — and can act on it autonomously.
51
+ **This SDK takes a different approach.** Instead of telling users where to go, it — with the user's permission — goes there for them.
52
52
 
53
- | | This SDK | Screenshot-based AI | Build It Yourself |
54
- |---|---|---|---|
55
- | **Setup** | `<AIAgent>` — one wrapper | Vision model + custom pipeline | Months of custom code |
56
- | **How it reads UI** | Native structure — real time | Screenshot → OCR | Custom integration |
57
- | **AI agent loop** | Built-in multi-step | Build from scratch | Build from scratch |
58
- | **Voice mode** | ✅ Real-time bidirectional | ❌ | ❌ |
59
- | **Custom business logic** | `useAction` hook | Custom code | Custom code |
60
- | **MCP bridge (any AI connects)** | ✅ One command | ❌ | ❌ |
61
- | **Knowledge base** | Built-in retrieval | | |
53
+ ---
54
+
55
+ ## 🧠 How It Works — The App's UI Is the Integration Layer
56
+
57
+ Every other support tool needs you to build API connectors: endpoints, webhooks, action definitions in their dashboard. Months of backend work before the AI can do anything useful.
58
+
59
+ This SDK reads your app's live UI natively — every button, label, input, and screen — in real time. **There's nothing to integrate. The UI is already the integration.** The app already knows how to cancel orders, update addresses, apply promo codes — it has buttons for all of it. The AI just uses them.
60
+
61
+ **No OCR. No image pipelines. No selectors. No annotations. No backend connectors.**
62
+
63
+ ### Why This Matters in the Support Context
64
+
65
+ The most important insight: UI control is only uncomfortable when it's unexpected. In a support conversation, the user has already asked for help — they're in a *"please help me"* mindset:
66
+
67
+ | Context | User reaction to AI controlling UI |
68
+ |:---|:---|
69
+ | Unprompted (out of nowhere) | 😨 "What is happening?" |
70
+ | **In a support chat — user asked for help** | 😊 "Yes please, do it for me" |
71
+ | **User is frustrated and types "how do I..."** | 😮‍💨 "Thank God, yes" |
72
+
73
+ ---
74
+
75
+ ## 🎟️ The 5-Level Support Ladder
76
+
77
+ The SDK handles every tier of support automatically — from a simple FAQ answer to live human chat:
78
+
79
+ ```
80
+ ┌──────────────────────────────────────────────────────┐
81
+ │ Level 1: Knowledge Answer │
82
+ │ Answers from knowledge base — instant, zero UI │
83
+ │ "What's your return policy?" → answered directly │
84
+ ├──────────────────────────────────────────────────────┤
85
+ │ Level 2: Show & Guide │
86
+ │ AI navigates to exact screen, user acts last │
87
+ │ "Settings → Notifications. It's right here. ☘️" │
88
+ ├──────────────────────────────────────────────────────┤
89
+ │ Level 3: Do & Confirm (Copilot — default) │
90
+ │ AI fills forms, user confirms the final action │
91
+ │ "I've typed your new address. Tap Save to confirm." │
92
+ ├──────────────────────────────────────────────────────┤
93
+ │ Level 4: Full Resolution (Autopilot) │
94
+ │ AI completes entire flow with one permission gate │
95
+ │ "Done! Order #4521 cancelled. Refund in 3-5 days." │
96
+ ├──────────────────────────────────────────────────────┤
97
+ │ Level 5: Human Escalation │
98
+ │ Live agent via WebSocket — seamless handoff │
99
+ │ Billing dispute, legal issues, edge cases │
100
+ └──────────────────────────────────────────────────────┘
101
+ ```
102
+
103
+ ### Example conversations
104
+
105
+ <details>
106
+ <summary><b>Level 1 — Knowledge Answer</b></summary>
107
+
108
+ ```
109
+ User: "What's your return policy?"
110
+ AI: "You can return items within 30 days. Full refund in 5-7 business days."
111
+ [from knowledge base — no UI needed]
112
+ ```
113
+ </details>
114
+
115
+ <details>
116
+ <summary><b>Level 2 — Show & Guide</b></summary>
117
+
118
+ ```
119
+ User: "How do I turn on price alerts?"
120
+ AI: "That's in Settings → Notifications. Want me to take you there?"
121
+ User: "Yes"
122
+ AI: [navigates to Settings → scrolls to Notifications section]
123
+ AI: "Here it is. The Price Alerts toggle is right here. ☘️"
124
+ [user taps the toggle themselves]
125
+ ```
126
+ </details>
127
+
128
+ <details>
129
+ <summary><b>Level 3 — Do & Confirm (Copilot, default)</b></summary>
130
+
131
+ ```
132
+ User: "I need to change my shipping address"
133
+ AI: "Sure! I'll navigate to your address settings."
134
+ [navigates to Profile → Address]
135
+ AI: "What's the new address?"
136
+ User: "15 Omar St, Nasr City, Cairo"
137
+ AI: [types the address into the fields]
138
+ AI: "I've filled in '15 Omar St, Nasr City, Cairo'. Tap Save to confirm."
139
+ [user taps Save themselves]
140
+ ```
141
+ </details>
142
+
143
+ <details>
144
+ <summary><b>Level 4 — Full Resolution (Autopilot)</b></summary>
145
+
146
+ ```
147
+ User: "Cancel my last order"
148
+ AI: "Order #4521 — Margherita pizza, $12.99. Want me to cancel it?"
149
+ User: "Yes"
150
+ AI: [navigates to Orders → finds #4521 → taps Cancel → taps Confirm]
151
+ AI: "Done! Order #4521 is cancelled. Refund arrives in 3-5 days."
152
+ ```
153
+ </details>
154
+
155
+ <details>
156
+ <summary><b>Level 5 — Human Escalation</b></summary>
157
+
158
+ ```
159
+ User: "I was charged twice for my order"
160
+ AI: "I can see your order history, but billing disputes need a human agent."
161
+ [triggers escalate → live agent chat via WebSocket]
162
+ ```
163
+ </details>
164
+
165
+ ---
166
+
167
+ ## ⚙️ Why Not Intercom or Zendesk?
168
+
169
+ | | Intercom Fin | Zendesk AI | **This SDK** |
170
+ |:---|:---|:---|:---|
171
+ | **Answer questions** | ✅ | ✅ | ✅ Knowledge base |
172
+ | **Navigate user to right screen** | ❌ | ❌ | ✅ App-aware navigation |
173
+ | **Fill forms for the user** | ❌ | ❌ | ✅ Types directly into fields |
174
+ | **Execute in-app actions** | Via API connectors *(must build)* | Via API connectors | ✅ Via UI — zero backend work |
175
+ | **Voice support** | ❌ | ❌ | ✅ Gemini Live |
176
+ | **Human escalation** | ✅ | ✅ | ✅ WebSocket live chat |
177
+ | **Mobile-native** | ❌ WebView overlay | ❌ WebView | ✅ React Native component |
178
+ | **Setup time** | Days–weeks (build connectors) | Days–weeks | **Minutes** (`<AIAgent>` wrapper) |
179
+ | **Price per resolution** | $0.99 + subscription | $1.50–2.00 | You decide |
180
+
181
+ ### The moat
182
+
183
+ No competitor can do Levels 2–4. Intercom and Zendesk answer questions (Level 1) and escalate to humans (Level 5). The middle — **app-aware navigation, form assistance, and full in-app resolution** — is uniquely possible because this SDK reads the React Native Fiber tree. That can't be added with a plugin or API connector.
62
184
 
63
185
  ---
64
186
 
65
187
  ## ✨ What's Inside
66
188
 
67
- ### Ship to Production
189
+ ### Support Your Users
68
190
 
69
- #### 🤖 Autonomous AI Agent — Natural Language UI Automation
191
+ #### 🦹 AI Support Agent — Resolves at Every Level
70
192
 
71
- Your users describe what they want in natural language. The SDK reads the live screen, plans a sequence of actions, and executes them end-to-endtapping buttons, filling forms, navigating screens all autonomously. Powered by **Gemini**. **OpenAI** is also supported as a text mode alternative.
193
+ The AI answers questions, guides users to the right screen, fills forms on their behalf, or completes full task flows with voice support and human escalation built in. All in the existing app UI. Zero backend integration.
72
194
 
73
- - **Zero-config** — wrap your app with `<AIAgent>`, done. No annotations, no selectors
74
- - **Multi-step reasoning** — navigates across screens to complete complex tasks
75
- - **Custom actions** — expose any business logic (checkout, API calls, mutations) via `useAction`
76
- - **Knowledge base** — AI queries your FAQs, policies, product data on demand
77
- - **Human-in-the-loop** — native `Alert.alert` confirmation before critical actions
195
+ - **Zero-config** — wrap your app with `<AIAgent>`, done. No annotations, no selectors, no API connectors
196
+ - **5-level resolution** — knowledge answer guided navigation copilot → full resolution → human escalation
197
+ - **Copilot mode** (default) AI pauses once before irreversible actions (order, delete, submit). User always stays in control
198
+ - **Human escalation** — live chat via WebSocket, CSAT survey, ticket dashboard all built in
199
+ - **Knowledge base** — policies, FAQs, product data queried on demand — no token waste
78
200
 
79
- #### 🎤 Real-time Voice AI Agent — Bidirectional Audio with Gemini Live API
201
+ #### 🎤 Real-time Voice Support — Users Speak, AI Acts
80
202
 
81
- Full bidirectional voice AI powered by the Gemini Live API (Gemini only). Users speak naturally; the agent responds with voice AND controls your app simultaneously.
203
+ Full bidirectional voice AI powered by the Gemini Live API. Users speak their support request; the agent responds with voice AND navigates, fills forms, and resolves issues simultaneously.
82
204
 
83
205
  - **Sub-second latency** — real-time audio via WebSockets, not turn-based
84
- - **Full UI control** — same tap, type, navigate, custom actions as text mode — all by voice
85
- - **Screen-aware** — auto-detects screen changes and updates its context instantly
206
+ - **Full resolution** — same navigate, type, tap as text mode — all by voice
207
+ - **Screen-aware** — auto-detects screen changes and updates context instantly
86
208
 
87
- > 💡 **Speech-to-text in text mode:** Install `expo-speech-recognition` and a mic button appears in the chat bar — letting users dictate messages instead of typing. This is separate from voice mode.
209
+ > 💡 **Speech-to-text in text mode:** Install `expo-speech-recognition` for a mic button in the chat bar — letting users dictate instead of typing. Separate from voice mode.
88
210
 
89
211
  ---
90
212
 
91
213
  ### Supercharge Your Dev Workflow
92
214
 
93
- #### 🔌 MCP Bridge — Connect Any AI to Your App
215
+ #### 🔌 MCP Bridge — Test Your App in English, Not Code
94
216
 
95
- Your app becomes MCP-compatible with one prop. Any AI that speaks the Model Context Protocol — editors, autonomous agents, CI/CD pipelines, custom scripts — can remotely read and control your app.
217
+ Your app becomes MCP-compatible with one prop. Connect any AI — Antigravity, Claude Desktop, CI/CD pipelines — to remotely read and control the running app. Find bugs without writing a single test.
96
218
 
97
- The MCP bridge uses the **same `AgentRuntime`** that powers the in-app AI agent. If the agent can do it via chat, an external AI can do it via MCP.
219
+ **MCP-only mode** — just want testing? No chat popup needed:
98
220
 
99
- **MCP-only mode** — just want testing? No chat popup needed:
100
221
  ```tsx
101
222
  <AIAgent
102
223
  showChatBar={false}
@@ -199,13 +320,17 @@ Then rebuild: `npx expo prebuild && npx expo run:android` (or `run:ios`)
199
320
  </details>
200
321
 
201
322
  <details>
202
- <summary><b>💬 Human Support</b> — persist tickets and restore them across sessions</summary>
323
+ <summary><b>💬 Human Support &amp; Ticket Persistence</b> — persist tickets and discovery tooltip state across sessions</summary>
203
324
 
204
325
  ```bash
205
326
  npx expo install @react-native-async-storage/async-storage
206
327
  ```
207
328
 
208
- **Optional** but recommended when using human escalation support. Without it, support tickets are only visible during the current app session and won't be restored after the app restarts.
329
+ **Optional** but recommended when using:
330
+ - **Human escalation support** — tickets survive app restarts
331
+ - **Discovery tooltip** — remembers if the user has already seen it
332
+
333
+ Without it, both features gracefully degrade: tickets are only visible during the current session, and the tooltip shows every launch instead of once.
209
334
 
210
335
  </details>
211
336
 
@@ -320,6 +445,171 @@ Set `enableUIControl={false}` for a lightweight FAQ / support assistant. Single
320
445
 
321
446
  ---
322
447
 
448
+ ## 🛡️ Copilot Mode — Safe-by-Default UI Automation
449
+
450
+ The agent operates in **copilot mode** by default. It navigates, scrolls, types, and fills forms silently — then pauses **once** before the final irreversible action (place order, delete account, submit payment) to ask the user for confirmation.
451
+
452
+ ```tsx
453
+ // Default — copilot mode, zero extra config:
454
+ <AIAgent apiKey="..." navRef={navRef}>
455
+ <App />
456
+ </AIAgent>
457
+ ```
458
+
459
+ **What the AI does silently:**
460
+ - Navigating between screens and tabs
461
+ - Scrolling to find content
462
+ - Typing into form fields
463
+ - Selecting options and filters
464
+ - Adding items to cart
465
+
466
+ **What the AI pauses on** (asks the user first):
467
+ - Placing an order / completing a purchase
468
+ - Submitting a form that sends data to a server
469
+ - Deleting anything (account, item, message)
470
+ - Confirming a payment or transaction
471
+ - Saving account/profile changes
472
+
473
+ ### Opt-out to Full Autonomy
474
+
475
+ ```tsx
476
+ <AIAgent interactionMode="autopilot" />
477
+ ```
478
+
479
+ Use `autopilot` for power users, accessibility tools, or repeat-task automation where confirmations are unwanted.
480
+
481
+ ### Optional: Mark Specific Buttons as Critical (Safety Net)
482
+
483
+ In copilot mode, the prompt handles ~95% of cases automatically. For extra safety on your most sensitive buttons, add `aiConfirm={true}` — this adds a code-level block that cannot be bypassed even if the LLM ignores the prompt:
484
+
485
+ ```tsx
486
+ // These elements will ALWAYS require confirmation before the AI touches them
487
+ <Pressable aiConfirm onPress={deleteAccount}>
488
+ <Text>Delete Account</Text>
489
+ </Pressable>
490
+
491
+ <Pressable aiConfirm onPress={placeOrder}>
492
+ <Text>Place Order</Text>
493
+ </Pressable>
494
+
495
+ <TextInput aiConfirm placeholder="Credit card number" />
496
+ ```
497
+
498
+ `aiConfirm` works on any interactive element: `Pressable`, `TextInput`, `Slider`, `Picker`, `Switch`, `DatePicker`.
499
+
500
+ > 💡 **Dev tip**: In `__DEV__` mode, the SDK logs a reminder to add `aiConfirm` to critical elements after each copilot task.
501
+
502
+ ### Three-Layer Safety Model
503
+
504
+ | Layer | Mechanism | Developer effort |
505
+ |:---|:---|:---|
506
+ | **Prompt** (primary) | AI uses `ask_user` before irreversible commits | Zero |
507
+ | **`aiConfirm` prop** (optional safety net) | Code blocks specific elements | Add prop to 2–3 critical buttons |
508
+ | **Dev warning** (preventive) | Logs tip in `__DEV__` mode | Zero |
509
+
510
+ ---
511
+
512
+ ## 💬 Human Support Mode
513
+
514
+ Transform the AI agent into a production-grade support system. The AI resolves issues directly inside your app UI — no backend API integrations required. When it can't help, it escalates to a live human agent.
515
+
516
+ ```tsx
517
+ import { SupportGreeting, buildSupportPrompt, createEscalateTool } from '@mobileai/react-native';
518
+
519
+ <AIAgent
520
+ apiKey="..."
521
+ analyticsKey="mobileai_pub_xxx" // required for MobileAI escalation
522
+ instructions={{
523
+ system: buildSupportPrompt({
524
+ enabled: true,
525
+ greeting: {
526
+ message: "Hi! 👋 How can I help you today?",
527
+ agentName: "Support",
528
+ },
529
+ quickReplies: [
530
+ { label: "Track my order", icon: "📦" },
531
+ { label: "Cancel order", icon: "❌" },
532
+ { label: "Talk to a human", icon: "👤" },
533
+ ],
534
+ escalation: { provider: 'mobileai' },
535
+ csat: { enabled: true },
536
+ }),
537
+ }}
538
+ customTools={{ escalate: createEscalateTool({ provider: 'mobileai' }) }}
539
+ userContext={{
540
+ userId: user.id,
541
+ name: user.name,
542
+ email: user.email,
543
+ plan: 'pro',
544
+ }}
545
+ >
546
+ <App />
547
+ </AIAgent>
548
+ ```
549
+
550
+ ### What Happens on Escalation
551
+
552
+ 1. AI creates a ticket in the **MobileAI Dashboard** inbox
553
+ 2. User receives a real-time live chat thread (WebSocket)
554
+ 3. Support agent replies — user sees messages instantly
555
+ 4. Ticket is closed when resolved — a CSAT survey appears
556
+
557
+ ### Escalation Providers
558
+
559
+ | Provider | What happens |
560
+ |:---|:---|
561
+ | `'mobileai'` | Ticket → MobileAI Dashboard inbox + WebSocket live chat |
562
+ | `'custom'` | Calls your `onEscalate` callback — wire to Intercom, Zendesk, etc. |
563
+
564
+ ```tsx
565
+ // Custom provider — bring your own live chat:
566
+ createEscalateTool({
567
+ provider: 'custom',
568
+ onEscalate: (context) => {
569
+ Intercom.presentNewConversation();
570
+ // context includes: userId, message, screenName, chatHistory
571
+ },
572
+ })
573
+ ```
574
+
575
+ ### User Context
576
+
577
+ Pass user identity to the escalation ticket for agent visibility in the dashboard:
578
+
579
+ ```tsx
580
+ <AIAgent
581
+ userContext={{
582
+ userId: 'usr_123',
583
+ name: 'Ahmed Hassan',
584
+ email: 'ahmed@example.com',
585
+ plan: 'pro',
586
+ custom: { region: 'cairo', language: 'ar' },
587
+ }}
588
+ pushToken={expoPushToken} // for offline support reply notifications
589
+ pushTokenType="expo" // 'fcm' | 'expo' | 'apns'
590
+ />
591
+ ```
592
+
593
+ ### `SupportGreeting` — Standalone Greeting Component
594
+
595
+ Render the support greeting independently if you have a custom chat UI:
596
+
597
+ ```tsx
598
+ import { SupportGreeting } from '@mobileai/react-native';
599
+
600
+ <SupportGreeting
601
+ message="Hi! 👋 How can I help?"
602
+ agentName="Support"
603
+ quickReplies={[
604
+ { label: 'Track order', icon: '📦' },
605
+ { label: 'Talk to human', icon: '👤' },
606
+ ]}
607
+ onQuickReply={(text) => send(text)}
608
+ />
609
+ ```
610
+
611
+ ---
612
+
323
613
  ## 🗺️ Screen Mapping — Navigation Intelligence
324
614
 
325
615
  By default, the AI navigates by reading what's on screen and tapping visible elements. **Screen mapping** gives the AI a complete map of every screen and how they connect — via static analysis of your source code (AST). No API key needed, runs in ~2 seconds.
@@ -540,43 +830,123 @@ Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:
540
830
 
541
831
  ### `<AIAgent>` Props
542
832
 
833
+ #### Core
834
+
543
835
  | Prop | Type | Default | Description |
544
836
  |------|------|---------|-------------|
545
- | `apiKey` | `string` | — | API key for your provider (prototyping only). |
837
+ | `apiKey` | `string` | — | API key for your provider (prototyping only — use `proxyUrl` in production). |
546
838
  | `provider` | `'gemini' \| 'openai'` | `'gemini'` | LLM provider for text mode. |
547
- | `proxyUrl` | `string` | — | Backend proxy URL (production). |
548
- | `proxyHeaders` | `Record<string, string>` | — | Auth headers for proxy. |
549
- | `voiceProxyUrl` | `string` | — | Dedicated proxy for Voice Mode WebSockets. |
839
+ | `proxyUrl` | `string` | — | Backend proxy URL (production). Routes all LLM traffic through your server. |
840
+ | `proxyHeaders` | `Record<string, string>` | — | Auth headers for proxy (e.g., `Authorization: Bearer ${token}`). |
841
+ | `voiceProxyUrl` | `string` | — | Dedicated proxy for Voice Mode WebSockets. Falls back to `proxyUrl`. |
550
842
  | `voiceProxyHeaders` | `Record<string, string>` | — | Auth headers for voice proxy. |
551
843
  | `model` | `string` | Provider default | Model name (e.g. `gemini-2.5-flash`, `gpt-4.1-mini`). |
552
844
  | `navRef` | `NavigationContainerRef` | — | Navigation ref for auto-navigation. |
845
+ | `children` | `ReactNode` | — | Your app — zero changes needed inside. |
846
+
847
+ #### Behavior
848
+
849
+ | Prop | Type | Default | Description |
850
+ |------|------|---------|-------------|
851
+ | `interactionMode` | `'copilot' \| 'autopilot'` | `'copilot'` | **Copilot** (default): AI pauses before irreversible actions. **Autopilot**: full autonomy, no confirmation. |
852
+ | `showDiscoveryTooltip` | `boolean` | `true` | Show one-time animated tooltip on FAB explaining AI capabilities. Dismissed after 6s or first tap. |
553
853
  | `maxSteps` | `number` | `25` | Max agent steps per task. |
554
854
  | `maxTokenBudget` | `number` | — | Max total tokens before auto-stopping the agent loop. |
555
855
  | `maxCostUSD` | `number` | — | Max estimated cost (USD) before auto-stopping. |
856
+ | `stepDelay` | `number` | — | Delay between agent steps in ms. |
857
+ | `enableUIControl` | `boolean` | `true` | When `false`, AI becomes knowledge-only (faster, fewer tokens). |
858
+ | `enableVoice` | `boolean` | `false` | Show voice mode tab. |
556
859
  | `showChatBar` | `boolean` | `true` | Show the floating chat bar. |
557
- | `enableVoice` | `boolean` | `true` | Enable voice mode tab. |
558
- | `enableUIControl` | `boolean` | `true` | When `false`, AI becomes knowledge-only. |
860
+
861
+ #### Navigation
862
+
863
+ | Prop | Type | Default | Description |
864
+ |------|------|---------|-------------|
559
865
  | `screenMap` | `ScreenMap` | — | Pre-generated screen map from `generate-map` CLI. |
560
866
  | `useScreenMap` | `boolean` | `true` | Set `false` to disable screen map without removing the prop. |
867
+ | `router` | `{ push, replace, back }` | — | Expo Router instance (from `useRouter()`). |
868
+ | `pathname` | `string` | — | Current pathname (from `usePathname()` — Expo Router). |
869
+
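For Expo Router apps, the `router` and `pathname` props are typically wired straight from the hooks. A minimal sketch, assuming a standard root layout component:

```tsx
import React from 'react';
import { useRouter, usePathname } from 'expo-router';
import { AIAgent } from '@mobileai/react-native';

export default function Layout({ children }: { children: React.ReactNode }) {
  // Pass the live router and current pathname so the AI can navigate file-based routes
  return (
    <AIAgent apiKey="..." router={useRouter()} pathname={usePathname()}>
      {children}
    </AIAgent>
  );
}
```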
870
+ #### AI
871
+
872
+ | Prop | Type | Default | Description |
873
+ |------|------|---------|-------------|
561
874
  | `instructions` | `{ system?, getScreenInstructions? }` | — | Custom system prompt + per-screen instructions. |
562
- | `customTools` | `Record<string, ToolDefinition \| null>` | — | Override or remove built-in tools. |
563
- | `knowledgeBase` | `KnowledgeEntry[] \| KnowledgeRetriever` | — | Domain knowledge the AI can query. |
875
+ | `customTools` | `Record<string, ToolDefinition \| null>` | — | Add custom tools or remove built-in ones (set to `null`). |
876
+ | `knowledgeBase` | `KnowledgeEntry[] \| { retrieve }` | — | Domain knowledge the AI can query via `query_knowledge`. |
564
877
  | `knowledgeMaxTokens` | `number` | `2000` | Max tokens for knowledge results. |
565
- | `mcpServerUrl` | `string` | — | WebSocket URL for MCP bridge. |
566
- | `accentColor` | `string` | — | Accent color for the chat bar. |
567
- | `theme` | `ChatBarTheme` | — | Full chat bar color customization. |
568
- | `onResult` | `(result) => void` | — | Called when agent finishes. |
569
- | `onBeforeStep` | `(stepCount) => void` | — | Called before each step. |
570
- | `onAfterStep` | `(history) => void` | — | Called after each step. |
571
- | `onTokenUsage` | `(usage) => void` | — | Token usage per step. |
572
- | `onAskUser` | `(question) => Promise<string>` | — | Handle `ask_user` inline — agent waits for your response. |
573
- | `stepDelay` | `number` | — | Delay between steps (ms). |
574
- | `router` | `{ push, replace, back }` | — | Expo Router instance. |
575
- | `pathname` | `string` | — | Current pathname (Expo Router). |
878
+ | `transformScreenContent` | `(content: string) => string` | — | Transform/mask screen content before the LLM sees it. |
879
+
880
+ #### Security
881
+
882
+ | Prop | Type | Default | Description |
883
+ |------|------|---------|-------------|
884
+ | `interactiveBlacklist` | `React.RefObject<any>[]` | — | Refs of elements the AI must NOT interact with. |
885
+ | `interactiveWhitelist` | `React.RefObject<any>[]` | — | If set, AI can ONLY interact with these elements. |
886
+
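A minimal sketch of `interactiveBlacklist` usage; the component and ref names are illustrative:

```tsx
import React, { useRef } from 'react';
import { Pressable, Text } from 'react-native';
import { AIAgent } from '@mobileai/react-native';

export default function App() {
  const payoutRef = useRef(null); // hypothetical sensitive control

  return (
    // The AI can still read the screen, but will never interact with blacklisted elements
    <AIAgent apiKey="..." interactiveBlacklist={[payoutRef]}>
      <Pressable ref={payoutRef}>
        <Text>Withdraw Funds</Text>
      </Pressable>
    </AIAgent>
  );
}
```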
887
+ #### Support
888
+
889
+ | Prop | Type | Default | Description |
890
+ |------|------|---------|-------------|
891
+ | `userContext` | `{ userId?, name?, email?, plan?, custom? }` | — | Logged-in user identity — attached to escalation tickets. |
892
+ | `pushToken` | `string` | — | Push token for offline support reply notifications. |
893
+ | `pushTokenType` | `'fcm' \| 'expo' \| 'apns'` | — | Type of the push token. |
894
+
895
+ #### Proactive Help
896
+
897
+ | Prop | Type | Default | Description |
898
+ |------|------|---------|-------------|
899
+ | `proactiveHelp` | `ProactiveHelpConfig` | — | Detects user hesitation and shows a contextual help nudge. |
900
+
901
+ ```tsx
902
+ <AIAgent
903
+ proactiveHelp={{
904
+ enabled: true,
905
+ pulseAfterMinutes: 2, // subtle FAB pulse to catch attention
906
+ badgeAfterMinutes: 4, // badge: "Need help with this screen?"
907
+ badgeText: "Need help?",
908
+ dismissForSession: true, // once dismissed, won't show again this session
909
+ generateSuggestion: (screen) => {
910
+ if (screen === 'Checkout') return 'Having trouble with checkout?';
911
+ return undefined;
912
+ },
913
+ }}
914
+ />
915
+ ```
916
+
917
+ #### Analytics
918
+
919
+ | Prop | Type | Default | Description |
920
+ |------|------|---------|-------------|
921
+ | `analyticsKey` | `string` | — | Publishable key (`mobileai_pub_xxx`) — enables auto-analytics. |
922
+ | `analyticsProxyUrl` | `string` | — | Enterprise: route events through your backend. |
923
+ | `analyticsProxyHeaders` | `Record<string, string>` | — | Auth headers for analytics proxy. |
924
+
925
+ #### MCP
926
+
927
+ | Prop | Type | Default | Description |
928
+ |------|------|---------|-------------|
929
+ | `mcpServerUrl` | `string` | — | WebSocket URL for the MCP bridge (e.g. `ws://localhost:3101`). |
930
+
931
+ #### Lifecycle & Callbacks
932
+
933
+ | Prop | Type | Default | Description |
934
+ |------|------|---------|-------------|
935
+ | `onResult` | `(result) => void` | — | Called when agent finishes a task. |
936
+ | `onBeforeTask` | `() => void` | — | Called before task execution starts. |
937
+ | `onAfterTask` | `(result) => void` | — | Called after task completes. |
938
+ | `onBeforeStep` | `(stepCount) => void` | — | Called before each agent step. |
939
+ | `onAfterStep` | `(history) => void` | — | Called after each step (with full step history). |
940
+ | `onTokenUsage` | `(usage) => void` | — | Token usage data per step. |
941
+ | `onAskUser` | `(question) => Promise<string>` | — | Custom handler for `ask_user` — agent blocks until resolved. |
942
+
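The step callbacks pair naturally with the cost guardrails, for example by accumulating `onTokenUsage` totals. A sketch in which the `usage` field names are assumed, not taken from the SDK's types:

```typescript
// Accumulate token usage across agent steps (usage field names are assumed).
interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
}

function createUsageTracker() {
  let total = 0;
  return {
    onTokenUsage(usage: TokenUsage) {
      total += usage.inputTokens + usage.outputTokens;
    },
    getTotal() {
      return total;
    },
  };
}

// Usage sketch:
//   const tracker = createUsageTracker();
//   <AIAgent onTokenUsage={tracker.onTokenUsage} maxTokenBudget={50000} />
```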
943
+ #### Theming
944
+
945
+ | Prop | Type | Default | Description |
946
+ |------|------|---------|-------------|
947
+ | `accentColor` | `string` | — | Quick accent color for FAB, send button, active states. |
948
+ | `theme` | `ChatBarTheme` | — | Full chat bar theme override. |
576
949
  | `debug` | `boolean` | `false` | Enable SDK debug logging. |
577
- | `analyticsKey` | `string` | — | Publishable key (`mobileai_pub_xxx`) for zero-config analytics to MobileAI Cloud. |
578
- | `analyticsProxyUrl` | `string` | — | Enterprise: route analytics events through your backend proxy. |
579
- | `analyticsProxyHeaders` | `Record<string, string>` | — | Auth headers for analyticsProxyUrl. |
580
950
 
581
951
  ### 🎨 Customization
582
952
 
@@ -767,7 +1137,13 @@ server.on('upgrade', geminiProxy.upgrade);
767
1137
  ### Element Gating — Hide Elements from AI
768
1138
 
769
1139
  ```tsx
1140
+ // AI will never see or interact with this element:
770
1141
  <Pressable aiIgnore={true}><Text>Admin Panel</Text></Pressable>
1142
+
1143
+ // In copilot mode, AI must confirm before touching this element:
1144
+ <Pressable aiConfirm={true} onPress={deleteAccount}>
1145
+ <Text>Delete Account</Text>
1146
+ </Pressable>
771
1147
  ```
772
1148
 
773
1149
  ### Content Masking — Sanitize Before LLM Sees It
@@ -789,10 +1165,53 @@ server.on('upgrade', geminiProxy.upgrade);
789
1165
 
790
1166
  | Hook | When |
791
1167
  |------|------|
1168
+ | `onBeforeTask` | Before task execution starts |
792
1169
  | `onBeforeStep` | Before each agent step |
793
1170
  | `onAfterStep` | After each step (with full history) |
794
- | `onBeforeTask` | Before task execution |
795
- | `onAfterTask` | After task completes |
1171
+ | `onAfterTask` | After task completes (success or failure) |
1172
+
1173
+ ---
1174
+
1175
+ ## 🧩 AIZone — Contextual AI Regions
1176
+
1177
+ `AIZone` marks specific sections of your UI so the AI can operate within them with special capabilities: simplify cluttered areas, inject contextual cards, or highlight elements.
1178
+
1179
+ ```tsx
1180
+ import { AIZone } from '@mobileai/react-native';
1181
+
1182
+ // Allow AI to simplify this zone if it's too cluttered
1183
+ <AIZone id="product-details" allowSimplify>
1184
+ <View>
1185
+ <Text aiPriority="high">Price: $29.99</Text>
1186
+ <Text aiPriority="low">SKU: ABC-123</Text>
1187
+ <Text aiPriority="low">Weight: 500g</Text>
1188
+ </View>
1189
+ </AIZone>
1190
+
1191
+ // Allow AI to inject contextual cards (e.g. "Need help?" dialogs)
1192
+ <AIZone id="checkout-summary" allowInjectCard allowHighlight>
1193
+ <CheckoutSummary />
1194
+ </AIZone>
1195
+ ```
1196
+
1197
+ ### `aiPriority` Attribute
1198
+
1199
+ Tag any element with `aiPriority` to control AI visibility:
1200
+
1201
+ | Value | Effect |
1202
+ |:---|:---|
1203
+ | `"high"` | Always rendered — surfaced first in AI context |
1204
+ | `"low"` | Hidden when AI calls `simplify_zone()` on the enclosing `AIZone` |
1205
+
1206
+ ### AIZone Props
1207
+
1208
+ | Prop | Type | Description |
1209
+ |:---|:---|:---|
1210
+ | `id` | `string` | Unique zone identifier the AI uses to target operations |
1211
+ | `allowSimplify` | `boolean` | AI can call `simplify_zone(id)` to hide `aiPriority="low"` elements |
1212
+ | `allowHighlight` | `boolean` | AI can visually highlight elements inside this zone |
1213
+ | `allowInjectHint` | `boolean` | AI can inject a contextual text hint into this zone |
1214
+ | `allowInjectCard` | `boolean` | AI can inject a pre-built card template into this zone |
796
1215
 
797
1216
  ---
798
1217