react-native-agentic-ai 0.5.9 → 0.5.10

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +71 -71
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -182,6 +182,77 @@ export default function RootLayout() {
 
  A floating chat bar appears automatically. Ask the AI to navigate, tap buttons, fill forms — it reads your live UI and acts.
 
+ ## 🧠 Knowledge Base
+
+ Give the AI domain-specific knowledge it can query on demand — policies, FAQs, product details, etc. The AI uses a `query_knowledge` tool to fetch relevant entries only when needed (no token waste).
+
+ ### Static Array (Simple)
+
+ Pass an array of entries — the SDK handles keyword-based retrieval internally:
+
+ ```tsx
+ import type { KnowledgeEntry } from '@mobileai/react-native';
+
+ const KNOWLEDGE: KnowledgeEntry[] = [
+   {
+     id: 'shipping',
+     title: 'Shipping Policy',
+     content: 'Free shipping on orders over $75. Standard: 5-7 days ($4.99). Express: 2-3 days ($12.99).',
+     tags: ['shipping', 'delivery', 'free shipping'],
+   },
+   {
+     id: 'returns',
+     title: 'Return Policy',
+     content: '30-day returns on all items. Refunds processed in 5-7 business days.',
+     tags: ['return', 'refund', 'exchange'],
+     screens: ['product/[id]', 'order-history'], // optional: only surface on these screens
+   },
+ ];
+
+ <AIAgent knowledgeBase={KNOWLEDGE} />
+ ```
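The README does not specify how the SDK's internal keyword-based retrieval works, but a minimal sketch can make the idea concrete. Everything below (`tokenize`, `scoreEntry`, `retrieve`, and the scoring rule) is an editorial assumption for illustration, not the SDK's actual implementation:

```typescript
// Illustrative sketch of keyword/tag retrieval over KnowledgeEntry[].
// NOT the SDK's actual algorithm — shown only to make the behavior concrete.
interface KnowledgeEntry {
  id: string;
  title: string;
  content: string;
  tags: string[];
  screens?: string[];
}

// Lowercase and split on non-word runs, dropping empty tokens.
function tokenize(text: string): string[] {
  return text.toLowerCase().split(/\W+/).filter(Boolean);
}

// Score an entry by counting query tokens that appear in its title, tags, or content.
function scoreEntry(entry: KnowledgeEntry, queryTokens: string[]): number {
  const haystack = new Set([
    ...tokenize(entry.title),
    ...entry.tags.flatMap(tokenize),
    ...tokenize(entry.content),
  ]);
  return queryTokens.filter((t) => haystack.has(t)).length;
}

// Drop entries pinned to other screens, then rank the rest by keyword overlap.
function retrieve(
  entries: KnowledgeEntry[],
  query: string,
  screenName?: string
): KnowledgeEntry[] {
  const tokens = tokenize(query);
  return entries
    .filter((e) => !e.screens || (screenName !== undefined && e.screens.includes(screenName)))
    .map((e) => ({ entry: e, score: scoreEntry(e, tokens) }))
    .filter(({ score }) => score > 0)
    .sort((a, b) => b.score - a.score)
    .map(({ entry }) => entry);
}
```

Under this sketch, an entry with a `screens` list is simply invisible on other screens, which is why `screens`-filtered entries still work in knowledge-only mode (the screen name is always known).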
214
+
215
+ ### Custom Retriever (Advanced)
216
+
217
+ Bring your own retrieval logic — call an API, vector database, or any async source:
218
+
219
+ ```tsx
+ <AIAgent
+   knowledgeBase={{
+     retrieve: async (query: string, screenName?: string) => {
+       const res = await fetch(
+         `/api/knowledge?q=${encodeURIComponent(query)}&screen=${encodeURIComponent(screenName ?? '')}`
+       );
+       return res.text(); // the endpoint responds with the knowledge as a formatted string
+     },
+   }}
+ />
+ ```
+
+ The retriever receives the user's question and current screen name, and returns a formatted string with the relevant knowledge.
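As an illustration of that contract, a purely local retriever might look like the following. The `Entry` shape mirrors `KnowledgeEntry`, but the matching rule and the string layout are editorial assumptions, not SDK behavior:

```typescript
// Hypothetical local retriever: filters entries by screen and tag match,
// then formats the hits into a single string for the LLM.
interface Entry {
  id: string;
  title: string;
  content: string;
  tags: string[];
  screens?: string[];
}

const KNOWLEDGE: Entry[] = [
  {
    id: 'shipping',
    title: 'Shipping Policy',
    content: 'Free shipping on orders over $75.',
    tags: ['shipping', 'delivery'],
  },
];

// Any readable layout satisfies the "formatted string" contract; markdown
// headings are just one convenient choice.
function formatEntries(entries: Entry[]): string {
  return entries.map((e) => `## ${e.title}\n${e.content}`).join('\n\n');
}

const knowledgeBase = {
  retrieve: async (query: string, screenName?: string): Promise<string> => {
    const q = query.toLowerCase();
    const matches = KNOWLEDGE.filter(
      (e) =>
        (!e.screens || (screenName !== undefined && e.screens.includes(screenName))) &&
        e.tags.some((t) => q.includes(t))
    );
    return matches.length > 0 ? formatEntries(matches) : 'No relevant knowledge found.';
  },
};
```

Returning an explicit "nothing found" string keeps the tool result unambiguous for the model instead of handing it an empty reply.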
+
+ ### Knowledge-Only Mode
+
+ Set `enableUIControl={false}` to disable all UI interactions. The AI becomes a lightweight FAQ / support assistant — no screen analysis, no multi-step agent loop, just question → answer:
+
+ ```tsx
+ <AIAgent
+   enableUIControl={false}
+   knowledgeBase={KNOWLEDGE}
+ />
+ ```
+
+ **How it differs from full mode:**
+
+ | | Full mode (default) | Knowledge-only mode |
+ |---|---|---|
+ | UI tree analysis | ✅ Full fiber walk | ❌ Skipped |
+ | Screen content sent to LLM | ✅ ~500-2000 tokens | ❌ Only screen name |
+ | Screenshots | ✅ Optional | ❌ Skipped |
+ | Agent loop | Up to 10 steps | Single LLM call |
+ | Available tools | 7 (tap, type, navigate, ...) | 2 (done, query_knowledge) |
+ | System prompt | ~1,500 tokens | ~400 tokens |
+
+ The AI still knows the current **screen name** (from navigation state, zero cost), so `screens`-filtered knowledge entries work correctly. It just can't see what's *on* the screen — ideal for domain Q&A where answers come from knowledge, not UI.
+
  ## 🔌 API Reference
 
  ### `<AIAgent>` Component
@@ -415,77 +486,6 @@ This starts two servers:
  | `ask_user(question)` | Ask the user for clarification. |
  | `query_knowledge(question)` | Search the knowledge base. Only available when `knowledgeBase` is configured. |
 
- ## 🧠 Knowledge Base
-
- Give the AI domain-specific knowledge it can query on demand — policies, FAQs, product details, etc. The AI uses a `query_knowledge` tool to fetch relevant entries only when needed (no token waste).
-
- ### Static Array (Simple)
-
- Pass an array of entries — the SDK handles keyword-based retrieval internally:
-
- ```tsx
- import type { KnowledgeEntry } from '@mobileai/react-native';
-
- const KNOWLEDGE: KnowledgeEntry[] = [
-   {
-     id: 'shipping',
-     title: 'Shipping Policy',
-     content: 'Free shipping on orders over $75. Standard: 5-7 days ($4.99). Express: 2-3 days ($12.99).',
-     tags: ['shipping', 'delivery', 'free shipping'],
-   },
-   {
-     id: 'returns',
-     title: 'Return Policy',
-     content: '30-day returns on all items. Refunds processed in 5-7 business days.',
-     tags: ['return', 'refund', 'exchange'],
-     screens: ['product/[id]', 'order-history'], // optional: only surface on these screens
-   },
- ];
-
- <AIAgent knowledgeBase={KNOWLEDGE} />
- ```
-
- ### Custom Retriever (Advanced)
-
- Bring your own retrieval logic — call an API, vector database, or any async source:
-
- ```tsx
- <AIAgent
-   knowledgeBase={{
-     retrieve: async (query: string, screenName?: string) => {
-       const res = await fetch(
-         `/api/knowledge?q=${encodeURIComponent(query)}&screen=${encodeURIComponent(screenName ?? '')}`
-       );
-       return res.text(); // the endpoint responds with the knowledge as a formatted string
-     },
-   }}
- />
- ```
-
- The retriever receives the user's question and current screen name, and returns a formatted string with the relevant knowledge.
-
- ### Knowledge-Only Mode
-
- Set `enableUIControl={false}` to disable all UI interactions. The AI becomes a lightweight FAQ / support assistant — no screen analysis, no multi-step agent loop, just question → answer:
-
- ```tsx
- <AIAgent
-   enableUIControl={false}
-   knowledgeBase={KNOWLEDGE}
- />
- ```
-
- **How it differs from full mode:**
-
- | | Full mode (default) | Knowledge-only mode |
- |---|---|---|
- | UI tree analysis | ✅ Full fiber walk | ❌ Skipped |
- | Screen content sent to LLM | ✅ ~500-2000 tokens | ❌ Only screen name |
- | Screenshots | ✅ Optional | ❌ Skipped |
- | Agent loop | Up to 10 steps | Single LLM call |
- | Available tools | 7 (tap, type, navigate, ...) | 2 (done, query_knowledge) |
- | System prompt | ~1,500 tokens | ~400 tokens |
-
- The AI still knows the current **screen name** (from navigation state, zero cost), so `screens`-filtered knowledge entries work correctly. It just can't see what's *on* the screen — ideal for domain Q&A where answers come from knowledge, not UI.
-
  ## 📋 Requirements
 
  - React Native 0.72+
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "react-native-agentic-ai",
-   "version": "0.5.9",
+   "version": "0.5.10",
    "description": "Build autonomous AI agents for React Native and Expo apps. Provides AI-native UI traversal, tool calling, and structured reasoning.",
    "main": "./lib/module/index.js",
    "source": "./src/index.ts",