@mobileai/react-native 0.9.12 → 0.9.14
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +480 -69
- package/lib/module/components/AIAgent.js +61 -3
- package/lib/module/components/AIAgent.js.map +1 -1
- package/lib/module/components/AgentChatBar.js +12 -2
- package/lib/module/components/AgentChatBar.js.map +1 -1
- package/lib/module/components/DiscoveryTooltip.js +128 -0
- package/lib/module/components/DiscoveryTooltip.js.map +1 -0
- package/lib/module/core/AgentRuntime.js +77 -1
- package/lib/module/core/AgentRuntime.js.map +1 -1
- package/lib/module/core/FiberTreeWalker.js +5 -1
- package/lib/module/core/FiberTreeWalker.js.map +1 -1
- package/lib/module/core/systemPrompt.js +41 -3
- package/lib/module/core/systemPrompt.js.map +1 -1
- package/lib/module/index.js.map +1 -1
- package/lib/typescript/src/components/AIAgent.d.ts +18 -2
- package/lib/typescript/src/components/AIAgent.d.ts.map +1 -1
- package/lib/typescript/src/components/AgentChatBar.d.ts +5 -1
- package/lib/typescript/src/components/AgentChatBar.d.ts.map +1 -1
- package/lib/typescript/src/components/DiscoveryTooltip.d.ts +15 -0
- package/lib/typescript/src/components/DiscoveryTooltip.d.ts.map +1 -0
- package/lib/typescript/src/core/AgentRuntime.d.ts +7 -0
- package/lib/typescript/src/core/AgentRuntime.d.ts.map +1 -1
- package/lib/typescript/src/core/FiberTreeWalker.d.ts.map +1 -1
- package/lib/typescript/src/core/systemPrompt.d.ts +1 -1
- package/lib/typescript/src/core/systemPrompt.d.ts.map +1 -1
- package/lib/typescript/src/core/types.d.ts +19 -0
- package/lib/typescript/src/core/types.d.ts.map +1 -1
- package/lib/typescript/src/index.d.ts +1 -1
- package/lib/typescript/src/index.d.ts.map +1 -1
- package/package.json +1 -1
- package/src/components/AIAgent.tsx +75 -1
- package/src/components/AgentChatBar.tsx +19 -1
- package/src/components/DiscoveryTooltip.tsx +148 -0
- package/src/core/AgentRuntime.ts +87 -1
- package/src/core/FiberTreeWalker.ts +5 -0
- package/src/core/systemPrompt.ts +41 -3
- package/src/core/types.ts +21 -0
- package/src/index.ts +1 -0
package/README.md
CHANGED
@@ -1,6 +1,6 @@
-# 
+# AI Support That Resolves — Not Deflects
 
-> **
+> **Drop in one React Native component and your app gets AI support that answers questions, navigates users to the right screen, fills forms, and resolves issues end-to-end — with live human backup when needed. No custom API connectors required — the app UI is already the integration.**
 
 **Two names, one package — pick whichever you prefer:**
 
@@ -10,20 +10,12 @@ npm install @mobileai/react-native
 npm install react-native-agentic-ai
 ```
 
-### 🤖 AI Agent — 
+### 🤖 AI Support Agent — Answers, Acts, and Resolves Inside Your App
 
 <p align="center">
-  <img src="./assets/demo.gif" alt="AI Agent
+  <img src="./assets/demo.gif" alt="AI Support Agent navigating the app and resolving user issues end-to-end" width="350" />
 </p>
 
-### 🧪 AI-Powered Testing — Test Your App in English, Not Code
-
-<p align="center">
-  <img src="./assets/mcp-testing.gif" alt="AI-Powered Testing via Model Context Protocol — finding bugs in React Native app without test code" width="700" />
-</p>
-
-> *Google Antigravity running 5 checks on the emulator and finding 5 real bugs — zero test code, zero selectors, just English.*
-
 ---
 
 [](https://www.npmjs.com/package/@mobileai/react-native)
@@ -40,63 +32,184 @@ npm install react-native-agentic-ai
 
 ---
 
-## 
+## 💡 The Problem With Every Support Tool Today
 
-
+Intercom, Zendesk, and every chat widget all do the same thing: send the user instructions in a chat bubble.
 
-
+*"To cancel your order, go to Orders, tap the order, then tap Cancel."*
 
-
+That's not support. That's documentation delivery with a chat UI.
 
-
+**This SDK takes a different approach.** Instead of telling users where to go, it — with the user's permission — goes there for them.
 
-
-
-
-
-
-
-
-
-
+---
+
+## 🧠 How It Works — The App's UI Is the Integration Layer
+
+Every other support tool needs you to build API connectors: endpoints, webhooks, action definitions in their dashboard. Months of backend work before the AI can do anything useful.
+
+This SDK reads your app's live UI natively — every button, label, input, and screen — in real time. **There's nothing to integrate. The UI is already the integration.** The app already knows how to cancel orders, update addresses, apply promo codes — it has buttons for all of it. The AI just uses them.
+
+**No OCR. No image pipelines. No selectors. No annotations. No backend connectors.**
+
+### Why This Matters in the Support Context
+
+The most important insight: UI control is only uncomfortable when it's unexpected. In a support conversation, the user has already asked for help — they're in a *"please help me"* mindset:
+
+| Context | User reaction to AI controlling UI |
+|:---|:---|
+| Unprompted (out of nowhere) | 😨 "What is happening?" |
+| **In a support chat — user asked for help** | 😊 "Yes please, do it for me" |
+| **User is frustrated and types "how do I..."** | 😮‍💨 "Thank God, yes" |
+
+---
+
+## 🎟️ The 5-Level Support Ladder
+
+The SDK handles every tier of support automatically — from a simple FAQ answer to live human chat:
+
+```
+┌──────────────────────────────────────────────────────┐
+│ Level 1: Knowledge Answer                            │
+│ Answers from knowledge base — instant, zero UI       │
+│ "What's your return policy?" → answered directly     │
+├──────────────────────────────────────────────────────┤
+│ Level 2: Show & Guide                                │
+│ AI navigates to exact screen, user acts last         │
+│ "Settings → Notifications. It's right here. ☘️"      │
+├──────────────────────────────────────────────────────┤
+│ Level 3: Do & Confirm (Copilot — default)            │
+│ AI fills forms, user confirms the final action       │
+│ "I've typed your new address. Tap Save to confirm."  │
+├──────────────────────────────────────────────────────┤
+│ Level 4: Full Resolution (Autopilot)                 │
+│ AI completes entire flow with one permission gate    │
+│ "Done! Order #4521 cancelled. Refund in 3-5 days."   │
+├──────────────────────────────────────────────────────┤
+│ Level 5: Human Escalation                            │
+│ Live agent via WebSocket — seamless handoff          │
+│ Billing dispute, legal issues, edge cases            │
+└──────────────────────────────────────────────────────┘
+```
+
+### Example conversations
+
+<details>
+<summary><b>Level 1 — Knowledge Answer</b></summary>
+
+```
+User: "What's your return policy?"
+AI: "You can return items within 30 days. Full refund in 5-7 business days."
+[from knowledge base — no UI needed]
+```
+</details>
+
+<details>
+<summary><b>Level 2 — Show & Guide</b></summary>
+
+```
+User: "How do I turn on price alerts?"
+AI: "That's in Settings → Notifications. Want me to take you there?"
+User: "Yes"
+AI: [navigates to Settings → scrolls to Notifications section]
+AI: "Here it is. The Price Alerts toggle is right here. ☘️"
+[user taps the toggle themselves]
+```
+</details>
+
+<details>
+<summary><b>Level 3 — Do & Confirm (Copilot, default)</b></summary>
+
+```
+User: "I need to change my shipping address"
+AI: "Sure! I'll navigate to your address settings."
+[navigates to Profile → Address]
+AI: "What's the new address?"
+User: "15 Omar St, Nasr City, Cairo"
+AI: [types the address into the fields]
+AI: "I've filled in '15 Omar St, Nasr City, Cairo'. Tap Save to confirm."
+[user taps Save themselves]
+```
+</details>
+
+<details>
+<summary><b>Level 4 — Full Resolution (Autopilot)</b></summary>
+
+```
+User: "Cancel my last order"
+AI: "Order #4521 — Margherita pizza, $12.99. Want me to cancel it?"
+User: "Yes"
+AI: [navigates to Orders → finds #4521 → taps Cancel → taps Confirm]
+AI: "Done! Order #4521 is cancelled. Refund arrives in 3-5 days."
+```
+</details>
+
+<details>
+<summary><b>Level 5 — Human Escalation</b></summary>
+
+```
+User: "I was charged twice for my order"
+AI: "I can see your order history, but billing disputes need a human agent."
+[triggers escalate → live agent chat via WebSocket]
+```
+</details>
+
+---
+
+## ⚙️ Why Not Intercom or Zendesk?
+
+| | Intercom Fin | Zendesk AI | **This SDK** |
+|:---|:---|:---|:---|
+| **Answer questions** | ✅ | ✅ | ✅ Knowledge base |
+| **Navigate user to right screen** | ❌ | ❌ | ✅ App-aware navigation |
+| **Fill forms for the user** | ❌ | ❌ | ✅ Types directly into fields |
+| **Execute in-app actions** | Via API connectors *(must build)* | Via API connectors | ✅ Via UI — zero backend work |
+| **Voice support** | ❌ | ❌ | ✅ Gemini Live |
+| **Human escalation** | ✅ | ✅ | ✅ WebSocket live chat |
+| **Mobile-native** | ❌ WebView overlay | ❌ WebView | ✅ React Native component |
+| **Setup time** | Days–weeks (build connectors) | Days–weeks | **Minutes** (`<AIAgent>` wrapper) |
+| **Price per resolution** | $0.99 + subscription | $1.50–2.00 | You decide |
+
+### The moat
+
+No competitor can do Levels 2–4. Intercom and Zendesk answer questions (Level 1) and escalate to humans (Level 5). The middle — **app-aware navigation, form assistance, and full in-app resolution** — is uniquely possible because this SDK reads the React Native Fiber tree. That can't be added with a plugin or API connector.
 
 ---
 
 ## ✨ What's Inside
 
-### 
+### Support Your Users
 
-#### 
+#### 🦹 AI Support Agent — Resolves at Every Level
 
-
+The AI answers questions, guides users to the right screen, fills forms on their behalf, or completes full task flows — with voice support and human escalation built in. All in the existing app UI. Zero backend integration.
 
-- **Zero-config** — wrap your app with `<AIAgent>`, done. No annotations, no selectors
-- **
-- **
-- **
-- **
+- **Zero-config** — wrap your app with `<AIAgent>`, done. No annotations, no selectors, no API connectors
+- **5-level resolution** — knowledge answer → guided navigation → copilot → full resolution → human escalation
+- **Copilot mode** (default) — AI pauses once before irreversible actions (order, delete, submit). User always stays in control
+- **Human escalation** — live chat via WebSocket, CSAT survey, ticket dashboard — all built in
+- **Knowledge base** — policies, FAQs, product data queried on demand — no token waste
 
-#### 🎤 Real-time Voice
+#### 🎤 Real-time Voice Support — Users Speak, AI Acts
 
-Full bidirectional voice AI powered by the Gemini Live API
+Full bidirectional voice AI powered by the Gemini Live API. Users speak their support request; the agent responds with voice AND navigates, fills forms, and resolves issues simultaneously.
 
 - **Sub-second latency** — real-time audio via WebSockets, not turn-based
-- **Full 
-- **Screen-aware** — auto-detects screen changes and updates
+- **Full resolution** — same navigate, type, tap as text mode — all by voice
+- **Screen-aware** — auto-detects screen changes and updates context instantly
 
-> 💡 **Speech-to-text in text mode:** Install `expo-speech-recognition`
+> 💡 **Speech-to-text in text mode:** Install `expo-speech-recognition` for a mic button in the chat bar — letting users dictate instead of typing. Separate from voice mode.
 
 ---
 
 ### Supercharge Your Dev Workflow
 
-#### 🔌 MCP Bridge — 
+#### 🔌 MCP Bridge — Test Your App in English, Not Code
 
-Your app becomes MCP-compatible with one prop.
+Your app becomes MCP-compatible with one prop. Connect any AI — Antigravity, Claude Desktop, CI/CD pipelines — to remotely read and control the running app. Find bugs without writing a single test.
 
-
+**MCP-only mode — just want testing? No chat popup needed:**
 
-**MCP-only mode** — just want testing? No chat popup needed:
 ```tsx
 <AIAgent
   showChatBar={false}
@@ -199,13 +312,17 @@ Then rebuild: `npx expo prebuild && npx expo run:android` (or `run:ios`)
 </details>
 
 <details>
-<summary><b>💬 Human Support</b> — persist tickets and 
+<summary><b>💬 Human Support & Ticket Persistence</b> — persist tickets and discovery tooltip state across sessions</summary>
 
 ```bash
 npx expo install @react-native-async-storage/async-storage
 ```
 
-**Optional** but recommended when using 
+**Optional** but recommended when using:
+- **Human escalation support** — tickets survive app restarts
+- **Discovery tooltip** — remembers if the user has already seen it
+
+Without it, both features gracefully degrade: tickets are only visible during the current session, and the tooltip shows every launch instead of once.
 
 </details>
 
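The graceful degradation described in the hunk above can be pictured as a small storage shim: persist through AsyncStorage when it is installed, otherwise fall back to an in-memory map. This is an illustrative sketch only; the `createStorage` helper and the `KV` shape are assumptions, not the SDK's actual internals.

```typescript
// Sketch of optional-storage fallback — hypothetical, not the SDK's real code.
type KV = {
  getItem(key: string): Promise<string | null>;
  setItem(key: string, value: string): Promise<void>;
};

function createStorage(asyncStorage?: KV): KV {
  // With @react-native-async-storage/async-storage installed, state persists
  // across app restarts (tickets survive, the tooltip shows only once).
  if (asyncStorage) return asyncStorage;
  // Without it: in-memory only — tickets last one session, tooltip repeats.
  const mem = new Map<string, string>();
  return {
    getItem: async (key) => mem.get(key) ?? null,
    setItem: async (key, value) => { mem.set(key, value); },
  };
}
```

The in-memory branch is what makes the degradation "graceful": the same read/write interface keeps working, only the lifetime of the data shrinks to the current session.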
@@ -320,6 +437,171 @@ Set `enableUIControl={false}` for a lightweight FAQ / support assistant. Single
 
 ---
 
+## 🛡️ Copilot Mode — Safe-by-Default UI Automation
+
+The agent operates in **copilot mode** by default. It navigates, scrolls, types, and fills forms silently — then pauses **once** before the final irreversible action (place order, delete account, submit payment) to ask the user for confirmation.
+
+```tsx
+// Default — copilot mode, zero extra config:
+<AIAgent apiKey="..." navRef={navRef}>
+  <App />
+</AIAgent>
+```
+
+**What the AI does silently:**
+- Navigating between screens and tabs
+- Scrolling to find content
+- Typing into form fields
+- Selecting options and filters
+- Adding items to cart
+
+**What the AI pauses on** (asks the user first):
+- Placing an order / completing a purchase
+- Submitting a form that sends data to a server
+- Deleting anything (account, item, message)
+- Confirming a payment or transaction
+- Saving account/profile changes
+
+### Opt-out to Full Autonomy
+
+```tsx
+<AIAgent interactionMode="autopilot" />
+```
+
+Use `autopilot` for power users, accessibility tools, or repeat-task automation where confirmations are unwanted.
+
+### Optional: Mark Specific Buttons as Critical (Safety Net)
+
+In copilot mode, the prompt handles ~95% of cases automatically. For extra safety on your most sensitive buttons, add `aiConfirm={true}` — this adds a code-level block that cannot be bypassed even if the LLM ignores the prompt:
+
+```tsx
+// These elements will ALWAYS require confirmation before the AI touches them
+<Pressable aiConfirm onPress={deleteAccount}>
+  <Text>Delete Account</Text>
+</Pressable>
+
+<Pressable aiConfirm onPress={placeOrder}>
+  <Text>Place Order</Text>
+</Pressable>
+
+<TextInput aiConfirm placeholder="Credit card number" />
+```
+
+`aiConfirm` works on any interactive element: `Pressable`, `TextInput`, `Slider`, `Picker`, `Switch`, `DatePicker`.
+
+> 💡 **Dev tip**: In `__DEV__` mode, the SDK logs a reminder to add `aiConfirm` to critical elements after each copilot task.
+
+### Three-Layer Safety Model
+
+| Layer | Mechanism | Developer effort |
+|:---|:---|:---|
+| **Prompt** (primary) | AI uses `ask_user` before irreversible commits | Zero |
+| **`aiConfirm` prop** (optional safety net) | Code blocks specific elements | Add prop to 2–3 critical buttons |
+| **Dev warning** (preventive) | Logs tip in `__DEV__` mode | Zero |
+
+---
+
+## 💬 Human Support Mode
+
+Transform the AI agent into a production-grade support system. The AI resolves issues directly inside your app UI — no backend API integrations required. When it can't help, it escalates to a live human agent.
+
+```tsx
+import { SupportGreeting, buildSupportPrompt, createEscalateTool } from '@mobileai/react-native';
+
+<AIAgent
+  apiKey="..."
+  analyticsKey="mobileai_pub_xxx" // required for MobileAI escalation
+  instructions={{
+    system: buildSupportPrompt({
+      enabled: true,
+      greeting: {
+        message: "Hi! 👋 How can I help you today?",
+        agentName: "Support",
+      },
+      quickReplies: [
+        { label: "Track my order", icon: "📦" },
+        { label: "Cancel order", icon: "❌" },
+        { label: "Talk to a human", icon: "👤" },
+      ],
+      escalation: { provider: 'mobileai' },
+      csat: { enabled: true },
+    }),
+  }}
+  customTools={{ escalate: createEscalateTool({ provider: 'mobileai' }) }}
+  userContext={{
+    userId: user.id,
+    name: user.name,
+    email: user.email,
+    plan: 'pro',
+  }}
+>
+  <App />
+</AIAgent>
+```
+
+### What Happens on Escalation
+
+1. AI creates a ticket in the **MobileAI Dashboard** inbox
+2. User receives a real-time live chat thread (WebSocket)
+3. Support agent replies — user sees messages instantly
+4. Ticket is closed when resolved — a CSAT survey appears
+
+### Escalation Providers
+
+| Provider | What happens |
+|:---|:---|
+| `'mobileai'` | Ticket → MobileAI Dashboard inbox + WebSocket live chat |
+| `'custom'` | Calls your `onEscalate` callback — wire to Intercom, Zendesk, etc. |
+
+```tsx
+// Custom provider — bring your own live chat:
+createEscalateTool({
+  provider: 'custom',
+  onEscalate: (context) => {
+    Intercom.presentNewConversation();
+    // context includes: userId, message, screenName, chatHistory
+  },
+})
+```
+
+### User Context
+
+Pass user identity to the escalation ticket for agent visibility in the dashboard:
+
+```tsx
+<AIAgent
+  userContext={{
+    userId: 'usr_123',
+    name: 'Ahmed Hassan',
+    email: 'ahmed@example.com',
+    plan: 'pro',
+    custom: { region: 'cairo', language: 'ar' },
+  }}
+  pushToken={expoPushToken} // for offline support reply notifications
+  pushTokenType="expo" // 'fcm' | 'expo' | 'apns'
+/>
+```
+
+### `SupportGreeting` — Standalone Greeting Component
+
+Render the support greeting independently if you have a custom chat UI:
+
+```tsx
+import { SupportGreeting } from '@mobileai/react-native';
+
+<SupportGreeting
+  message="Hi! 👋 How can I help?"
+  agentName="Support"
+  quickReplies={[
+    { label: 'Track order', icon: '📦' },
+    { label: 'Talk to human', icon: '👤' },
+  ]}
+  onQuickReply={(text) => send(text)}
+/>
+```
+
+---
+
 ## 🗺️ Screen Mapping — Navigation Intelligence
 
 By default, the AI navigates by reading what's on screen and tapping visible elements. **Screen mapping** gives the AI a complete map of every screen and how they connect — via static analysis of your source code (AST). No API key needed, runs in ~2 seconds.
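For the `'custom'` escalation provider in the hunk above, the context handed to `onEscalate` includes `userId`, `message`, `screenName`, and `chatHistory`. A hypothetical helper that flattens that context into a plain-text ticket body for a third-party helpdesk might look like this; the exact TypeScript shape of the context object is an assumption, only those four field names come from the diff.

```typescript
// Hypothetical escalation context shape — only the four field names are
// documented; the precise types here are assumptions.
interface EscalationContext {
  userId?: string;
  message: string;                               // what the user asked
  screenName: string;                            // screen they were on
  chatHistory: { role: 'user' | 'ai'; text: string }[];
}

// Build a plain-text summary a human agent can read in any helpdesk tool.
function summarizeEscalation(ctx: EscalationContext): string {
  const transcript = ctx.chatHistory
    .map((m) => `${m.role === 'user' ? 'User' : 'AI'}: ${m.text}`)
    .join('\n');
  return [
    `Escalated from screen: ${ctx.screenName}`,
    `Reporter: ${ctx.userId ?? 'anonymous'}`,
    `Request: ${ctx.message}`,
    '--- transcript ---',
    transcript,
  ].join('\n');
}
```

Inside `onEscalate` you could pass the resulting string to whatever ticket-creation call your helpdesk exposes, alongside presenting its chat UI.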
@@ -540,43 +822,123 @@ Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:
 
 ### `<AIAgent>` Props
 
+#### Core
+
 | Prop | Type | Default | Description |
 |------|------|---------|-------------|
-| `apiKey` | `string` | — | API key for your provider (prototyping only). |
+| `apiKey` | `string` | — | API key for your provider (prototyping only — use `proxyUrl` in production). |
 | `provider` | `'gemini' \| 'openai'` | `'gemini'` | LLM provider for text mode. |
-| `proxyUrl` | `string` | — | Backend proxy URL (production). |
-| `proxyHeaders` | `Record<string, string>` | — | Auth headers for proxy. |
-| `voiceProxyUrl` | `string` | — | Dedicated proxy for Voice Mode WebSockets. |
+| `proxyUrl` | `string` | — | Backend proxy URL (production). Routes all LLM traffic through your server. |
+| `proxyHeaders` | `Record<string, string>` | — | Auth headers for proxy (e.g., `Authorization: Bearer ${token}`). |
+| `voiceProxyUrl` | `string` | — | Dedicated proxy for Voice Mode WebSockets. Falls back to `proxyUrl`. |
 | `voiceProxyHeaders` | `Record<string, string>` | — | Auth headers for voice proxy. |
 | `model` | `string` | Provider default | Model name (e.g. `gemini-2.5-flash`, `gpt-4.1-mini`). |
 | `navRef` | `NavigationContainerRef` | — | Navigation ref for auto-navigation. |
+| `children` | `ReactNode` | — | Your app — zero changes needed inside. |
+
+#### Behavior
+
+| Prop | Type | Default | Description |
+|------|------|---------|-------------|
+| `interactionMode` | `'copilot' \| 'autopilot'` | `'copilot'` | **Copilot** (default): AI pauses before irreversible actions. **Autopilot**: full autonomy, no confirmation. |
+| `showDiscoveryTooltip` | `boolean` | `true` | Show one-time animated tooltip on FAB explaining AI capabilities. Dismissed after 6s or first tap. |
 | `maxSteps` | `number` | `25` | Max agent steps per task. |
 | `maxTokenBudget` | `number` | — | Max total tokens before auto-stopping the agent loop. |
 | `maxCostUSD` | `number` | — | Max estimated cost (USD) before auto-stopping. |
+| `stepDelay` | `number` | — | Delay between agent steps in ms. |
+| `enableUIControl` | `boolean` | `true` | When `false`, AI becomes knowledge-only (faster, fewer tokens). |
+| `enableVoice` | `boolean` | `false` | Show voice mode tab. |
 | `showChatBar` | `boolean` | `true` | Show the floating chat bar. |
-
-
+
+#### Navigation
+
+| Prop | Type | Default | Description |
+|------|------|---------|-------------|
 | `screenMap` | `ScreenMap` | — | Pre-generated screen map from `generate-map` CLI. |
 | `useScreenMap` | `boolean` | `true` | Set `false` to disable screen map without removing the prop. |
+| `router` | `{ push, replace, back }` | — | Expo Router instance (from `useRouter()`). |
+| `pathname` | `string` | — | Current pathname (from `usePathname()` — Expo Router). |
+
+#### AI
+
+| Prop | Type | Default | Description |
+|------|------|---------|-------------|
 | `instructions` | `{ system?, getScreenInstructions? }` | — | Custom system prompt + per-screen instructions. |
-| `customTools` | `Record<string, ToolDefinition \| null>` | — | 
-| `knowledgeBase` | `KnowledgeEntry[] \| 
+| `customTools` | `Record<string, ToolDefinition \| null>` | — | Add custom tools or remove built-in ones (set to `null`). |
+| `knowledgeBase` | `KnowledgeEntry[] \| { retrieve }` | — | Domain knowledge the AI can query via `query_knowledge`. |
 | `knowledgeMaxTokens` | `number` | `2000` | Max tokens for knowledge results. |
-| `
-
-
-
-|
-
-| `
-| `
-
-
-
+| `transformScreenContent` | `(content: string) => string` | — | Transform/mask screen content before the LLM sees it. |
+
+#### Security
+
+| Prop | Type | Default | Description |
+|------|------|---------|-------------|
+| `interactiveBlacklist` | `React.RefObject<any>[]` | — | Refs of elements the AI must NOT interact with. |
+| `interactiveWhitelist` | `React.RefObject<any>[]` | — | If set, AI can ONLY interact with these elements. |
+
+#### Support
+
+| Prop | Type | Default | Description |
+|------|------|---------|-------------|
+| `userContext` | `{ userId?, name?, email?, plan?, custom? }` | — | Logged-in user identity — attached to escalation tickets. |
+| `pushToken` | `string` | — | Push token for offline support reply notifications. |
+| `pushTokenType` | `'fcm' \| 'expo' \| 'apns'` | — | Type of the push token. |
+
+#### Proactive Help
+
+| Prop | Type | Default | Description |
+|------|------|---------|-------------|
+| `proactiveHelp` | `ProactiveHelpConfig` | — | Detects user hesitation and shows a contextual help nudge. |
+
+```tsx
+<AIAgent
+  proactiveHelp={{
+    enabled: true,
+    pulseAfterMinutes: 2, // subtle FAB pulse to catch attention
+    badgeAfterMinutes: 4, // badge: "Need help with this screen?"
+    badgeText: "Need help?",
+    dismissForSession: true, // once dismissed, won't show again this session
+    generateSuggestion: (screen) => {
+      if (screen === 'Checkout') return 'Having trouble with checkout?';
+      return undefined;
+    },
+  }}
+/>
+```
+
+#### Analytics
+
+| Prop | Type | Default | Description |
+|------|------|---------|-------------|
+| `analyticsKey` | `string` | — | Publishable key (`mobileai_pub_xxx`) — enables auto-analytics. |
+| `analyticsProxyUrl` | `string` | — | Enterprise: route events through your backend. |
+| `analyticsProxyHeaders` | `Record<string, string>` | — | Auth headers for analytics proxy. |
+
+#### MCP
+
+| Prop | Type | Default | Description |
+|------|------|---------|-------------|
+| `mcpServerUrl` | `string` | — | WebSocket URL for the MCP bridge (e.g. `ws://localhost:3101`). |
+
+#### Lifecycle & Callbacks
+
+| Prop | Type | Default | Description |
+|------|------|---------|-------------|
+| `onResult` | `(result) => void` | — | Called when agent finishes a task. |
+| `onBeforeTask` | `() => void` | — | Called before task execution starts. |
+| `onAfterTask` | `(result) => void` | — | Called after task completes. |
+| `onBeforeStep` | `(stepCount) => void` | — | Called before each agent step. |
+| `onAfterStep` | `(history) => void` | — | Called after each step (with full step history). |
+| `onTokenUsage` | `(usage) => void` | — | Token usage data per step. |
+| `onAskUser` | `(question) => Promise<string>` | — | Custom handler for `ask_user` — agent blocks until resolved. |
+
+#### Theming
+
+| Prop | Type | Default | Description |
+|------|------|---------|-------------|
+| `accentColor` | `string` | — | Quick accent color for FAB, send button, active states. |
+| `theme` | `ChatBarTheme` | — | Full chat bar theme override. |
 | `debug` | `boolean` | `false` | Enable SDK debug logging. |
-| `analyticsKey` | `string` | — | Publishable key (`mobileai_pub_xxx`) for zero-config analytics to MobileAI Cloud. |
-| `analyticsProxyUrl` | `string` | — | Enterprise: route analytics events through your backend proxy. |
-| `analyticsProxyHeaders` | `Record<string, string>` | — | Auth headers for analyticsProxyUrl. |
 
 ### 🎨 Customization
 
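The `knowledgeBase` prop in the hunk above accepts either a static `KnowledgeEntry[]` or an object exposing a `retrieve` function. A naive keyword-overlap retriever is enough to sketch the idea; the `title`/`content` field names and the scoring are assumptions, not the SDK's built-in ranking.

```typescript
// Hypothetical KnowledgeEntry shape — field names are assumptions.
interface KnowledgeEntry {
  title: string;
  content: string;
}

// Naive retriever: rank entries by how many query words they contain.
function makeRetriever(entries: KnowledgeEntry[]) {
  return {
    retrieve(query: string): KnowledgeEntry[] {
      const words = query.toLowerCase().split(/\W+/).filter(Boolean);
      return entries
        .map((entry) => {
          const text = `${entry.title} ${entry.content}`.toLowerCase();
          const score = words.filter((w) => text.includes(w)).length;
          return { entry, score };
        })
        .filter((scored) => scored.score > 0)     // drop irrelevant entries
        .sort((a, b) => b.score - a.score)        // best match first
        .map((scored) => scored.entry);
    },
  };
}
```

Something like `knowledgeBase={makeRetriever(entries)}` would then let the agent pull only relevant entries on demand, with `knowledgeMaxTokens` capping how much of the result reaches the prompt.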
@@ -767,7 +1129,13 @@ server.on('upgrade', geminiProxy.upgrade);
 ### Element Gating — Hide Elements from AI
 
 ```tsx
+// AI will never see or interact with this element:
 <Pressable aiIgnore={true}><Text>Admin Panel</Text></Pressable>
+
+// In copilot mode, AI must confirm before touching this element:
+<Pressable aiConfirm={true} onPress={deleteAccount}>
+  <Text>Delete Account</Text>
+</Pressable>
 ```
 
 ### Content Masking — Sanitize Before LLM Sees It
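One way to implement the `transformScreenContent` masking hook from the props table is plain regex redaction before screen text reaches the LLM. This is a sketch: the patterns below are illustrative, not an exhaustive PII filter.

```typescript
// Illustrative masking transform for the transformScreenContent prop.
// The regexes are examples only — audit your own screens for what leaks.
function maskScreenContent(content: string): string {
  return content
    // Replace email addresses with a placeholder
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, '[email]')
    // Replace 13–16 digit card-like runs (optional spaces/dashes between digits)
    .replace(/\b(?:\d[ -]?){13,16}\b/g, '[card]');
}
```

Wired up as `transformScreenContent={maskScreenContent}`, the agent would only ever see the placeholders, never the raw values.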
@@ -789,10 +1157,53 @@ server.on('upgrade', geminiProxy.upgrade);
 
 | Hook | When |
 |------|------|
+| `onBeforeTask` | Before task execution starts |
 | `onBeforeStep` | Before each agent step |
 | `onAfterStep` | After each step (with full history) |
-| `
-
+| `onAfterTask` | After task completes (success or failure) |
+
+---
+
+## 🧩 AIZone — Contextual AI Regions
+
+`AIZone` marks specific sections of your UI so the AI can operate within them with special capabilities: simplify cluttered areas, inject contextual cards, or highlight elements.
+
+```tsx
+import { AIZone } from '@mobileai/react-native';
+
+// Allow AI to simplify this zone if it's too cluttered
+<AIZone id="product-details" allowSimplify>
+  <View>
+    <Text aiPriority="high">Price: $29.99</Text>
+    <Text aiPriority="low">SKU: ABC-123</Text>
+    <Text aiPriority="low">Weight: 500g</Text>
+  </View>
+</AIZone>
+
+// Allow AI to inject contextual cards (e.g. "Need help?" dialogs)
+<AIZone id="checkout-summary" allowInjectCard allowHighlight>
+  <CheckoutSummary />
+</AIZone>
+```
+
+### `aiPriority` Attribute
+
+Tag any element with `aiPriority` to control AI visibility:
+
+| Value | Effect |
+|:---|:---|
+| `"high"` | Always rendered — surfaced first in AI context |
+| `"low"` | Hidden when AI calls `simplify_zone()` on the enclosing `AIZone` |
+
+### AIZone Props
+
+| Prop | Type | Description |
+|:---|:---|:---|
+| `id` | `string` | Unique zone identifier the AI uses to target operations |
+| `allowSimplify` | `boolean` | AI can call `simplify_zone(id)` to hide `aiPriority="low"` elements |
+| `allowHighlight` | `boolean` | AI can visually highlight elements inside this zone |
+| `allowInjectHint` | `boolean` | AI can inject a contextual text hint into this zone |
+| `allowInjectCard` | `boolean` | AI can inject a pre-built card template into this zone |
 
 ---
 
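The `simplify_zone` behaviour documented for `AIZone` can be modelled as a filter over a zone's element list: `aiPriority="low"` children drop out when the zone is simplified, everything else stays. A hypothetical sketch follows; the real SDK operates on the Fiber tree, not plain arrays, so the `ZoneElement` shape is an assumption.

```typescript
// Hypothetical element descriptor — the real SDK walks the Fiber tree instead.
interface ZoneElement {
  text: string;
  aiPriority?: 'high' | 'low';
}

// When the AI calls simplify_zone(id), low-priority elements drop out;
// untagged and high-priority elements always stay visible.
function simplifyZone(elements: ZoneElement[], simplified: boolean): ZoneElement[] {
  if (!simplified) return elements;
  return elements.filter((el) => el.aiPriority !== 'low');
}
```

Under this model, the product-details example above would keep "Price: $29.99" and any untagged children after simplification, while the SKU and weight rows disappear.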
|