@mobileai/react-native 0.5.9 → 0.5.11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +73 -71
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -32,6 +32,8 @@
  - 🧭 **Auto-navigation** — Navigates between screens to complete multi-step tasks.
  - 🧩 **Custom actions** — Expose any business logic (checkout, API calls) as AI-callable tools with `useAction`.
  - 🌐 **MCP bridge** — Let external AI agents (OpenClaw, Claude Desktop) control your app remotely.
+ - 🧠 **Knowledge base** — Give the AI domain knowledge (policies, FAQs, product info) it can query on demand. Static array or bring your own retriever.
+ - 💡 **Knowledge-only mode** — Set `enableUIControl={false}` for a lightweight AI assistant with no UI interaction — single LLM call, ~70% fewer tokens.

  ### 🎤 Voice Mode (Live Agent)
@@ -182,6 +184,77 @@ export default function RootLayout() {

  A floating chat bar appears automatically. Ask the AI to navigate, tap buttons, fill forms — it reads your live UI and acts.

+ ## 🧠 Knowledge Base
+
+ Give the AI domain-specific knowledge it can query on demand — policies, FAQs, product details, etc. The AI uses a `query_knowledge` tool to fetch relevant entries only when needed (no token waste).
+
+ ### Static Array (Simple)
+
+ Pass an array of entries — the SDK handles keyword-based retrieval internally:
+
+ ```tsx
+ import type { KnowledgeEntry } from '@mobileai/react-native';
+
+ const KNOWLEDGE: KnowledgeEntry[] = [
+   {
+     id: 'shipping',
+     title: 'Shipping Policy',
+     content: 'Free shipping on orders over $75. Standard: 5-7 days ($4.99). Express: 2-3 days ($12.99).',
+     tags: ['shipping', 'delivery', 'free shipping'],
+   },
+   {
+     id: 'returns',
+     title: 'Return Policy',
+     content: '30-day returns on all items. Refunds processed in 5-7 business days.',
+     tags: ['return', 'refund', 'exchange'],
+     screens: ['product/[id]', 'order-history'], // optional: only surface on these screens
+   },
+ ];
+
+ <AIAgent knowledgeBase={KNOWLEDGE} />
+ ```
+
+ ### Custom Retriever (Advanced)
+
+ Bring your own retrieval logic — call an API, a vector database, or any async source:
+
+ ```tsx
+ <AIAgent
+   knowledgeBase={{
+     retrieve: async (query: string, screenName?: string) => {
+       const params = new URLSearchParams({ q: query, screen: screenName ?? '' });
+       const res = await fetch(`/api/knowledge?${params}`);
+       return res.text(); // resolve to a formatted knowledge string
+     },
+   }}
+ />
+ ```
+
+ The retriever receives the user's question and the current screen name, and must return a formatted string (or a promise that resolves to one) containing the relevant knowledge.
+
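For intuition: the static-array mode and a custom retriever satisfy the same `retrieve(query, screenName)` contract. Below is a minimal, hypothetical sketch of keyword-based retrieval over `KnowledgeEntry`-shaped data — the entry shape mirrors the example above, but `retrieveByKeyword` and its matching logic are illustrative assumptions, not the SDK's actual implementation:

```typescript
// Illustrative only: a keyword matcher with the same shape as a custom
// retriever. The SDK's internal static-array retrieval may differ.
interface KnowledgeEntry {
  id: string;
  title: string;
  content: string;
  tags: string[];
  screens?: string[]; // optional: only surface on these screens
}

function retrieveByKeyword(
  entries: KnowledgeEntry[],
  query: string,
  screenName?: string
): string {
  const words = query.toLowerCase().split(/\s+/).filter(Boolean);
  const matches = entries.filter((e) => {
    // Honor the optional per-entry screen filter.
    if (e.screens && screenName && !e.screens.includes(screenName)) return false;
    const haystack = [e.title, e.content, ...e.tags].join(' ').toLowerCase();
    return words.some((w) => haystack.includes(w));
  });
  // Return one formatted string, as the retriever contract expects.
  return matches.map((e) => `## ${e.title}\n${e.content}`).join('\n\n');
}
```

Whatever the source — static array, API, or vector store — the key point is that the value handed back to the AI is a single pre-formatted string.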
+ ### Knowledge-Only Mode
+
+ Set `enableUIControl={false}` to disable all UI interactions. The AI becomes a lightweight FAQ / support assistant — no screen analysis, no multi-step agent loop, just question → answer:
+
+ ```tsx
+ <AIAgent
+   enableUIControl={false}
+   knowledgeBase={KNOWLEDGE}
+ />
+ ```
+
+ **How it differs from full mode:**
+
+ | | Full mode (default) | Knowledge-only mode |
+ |---|---|---|
+ | UI tree analysis | ✅ Full fiber walk | ❌ Skipped |
+ | Screen content sent to LLM | ✅ ~500-2000 tokens | ❌ Only screen name |
+ | Screenshots | ✅ Optional | ❌ Skipped |
+ | Agent loop | Up to 10 steps | Single LLM call |
+ | Available tools | 7 (tap, type, navigate, ...) | 2 (done, query_knowledge) |
+ | System prompt | ~1,500 tokens | ~400 tokens |
+
+ The AI still knows the current **screen name** (from navigation state, at zero token cost), so `screens`-filtered knowledge entries work correctly. It just can't see what's *on* the screen — ideal for domain Q&A where answers come from knowledge, not UI.
+
  ## 🔌 API Reference

  ### `<AIAgent>` Component
@@ -415,77 +488,6 @@ This starts two servers:
  | `ask_user(question)` | Ask the user for clarification. |
  | `query_knowledge(question)` | Search the knowledge base. Only available when `knowledgeBase` is configured. |

- ## 🧠 Knowledge Base
-
- Give the AI domain-specific knowledge it can query on demand — policies, FAQs, product details, etc. The AI uses a `query_knowledge` tool to fetch relevant entries only when needed (no token waste).
-
- ### Static Array (Simple)
-
- Pass an array of entries — the SDK handles keyword-based retrieval internally:
-
- ```tsx
- import type { KnowledgeEntry } from '@mobileai/react-native';
-
- const KNOWLEDGE: KnowledgeEntry[] = [
-   {
-     id: 'shipping',
-     title: 'Shipping Policy',
-     content: 'Free shipping on orders over $75. Standard: 5-7 days ($4.99). Express: 2-3 days ($12.99).',
-     tags: ['shipping', 'delivery', 'free shipping'],
-   },
-   {
-     id: 'returns',
-     title: 'Return Policy',
-     content: '30-day returns on all items. Refunds processed in 5-7 business days.',
-     tags: ['return', 'refund', 'exchange'],
-     screens: ['product/[id]', 'order-history'], // optional: only surface on these screens
-   },
- ];
-
- <AIAgent knowledgeBase={KNOWLEDGE} />
- ```
-
- ### Custom Retriever (Advanced)
-
- Bring your own retrieval logic — call an API, vector database, or any async source:
-
- ```tsx
- <AIAgent
-   knowledgeBase={{
-     retrieve: async (query: string, screenName?: string) => {
-       const results = await fetch(`/api/knowledge?q=${query}&screen=${screenName}`);
-       return results.json();
-     },
-   }}
- />
- ```
-
- The retriever receives the user's question and current screen name, and returns a formatted string with the relevant knowledge.
-
- ### Knowledge-Only Mode
-
- Set `enableUIControl={false}` to disable all UI interactions. The AI becomes a lightweight FAQ / support assistant — no screen analysis, no multi-step agent loop, just question → answer:
-
- ```tsx
- <AIAgent
-   enableUIControl={false}
-   knowledgeBase={KNOWLEDGE}
- />
- ```
-
- **How it differs from full mode:**
-
- | | Full mode (default) | Knowledge-only mode |
- |---|---|---|
- | UI tree analysis | ✅ Full fiber walk | ❌ Skipped |
- | Screen content sent to LLM | ✅ ~500-2000 tokens | ❌ Only screen name |
- | Screenshots | ✅ Optional | ❌ Skipped |
- | Agent loop | Up to 10 steps | Single LLM call |
- | Available tools | 7 (tap, type, navigate, ...) | 2 (done, query_knowledge) |
- | System prompt | ~1,500 tokens | ~400 tokens |
-
- The AI still knows the current **screen name** (from navigation state, zero cost), so `screens`-filtered knowledge entries work correctly. It just can't see what's *on* the screen — ideal for domain Q&A where answers come from knowledge, not UI.
-
  ## 📋 Requirements

  - React Native 0.72+
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@mobileai/react-native",
-   "version": "0.5.9",
+   "version": "0.5.11",
    "description": "Build autonomous AI agents for React Native and Expo apps. Provides AI-native UI traversal, tool calling, and structured reasoning.",
    "main": "./lib/module/index.js",
    "source": "./src/index.ts",