react-native-agentic-ai 0.5.12 → 0.5.14

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +25 -23
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -186,10 +186,35 @@ export default function RootLayout() {
 
 A floating chat bar appears automatically. Ask the AI to navigate, tap buttons, fill forms — it reads your live UI and acts.
 
+### Knowledge-Only Mode (No UI Automation)
+
+Don't need UI automation? Set `enableUIControl={false}`. The AI becomes a lightweight FAQ / support assistant — no screen analysis, no multi-step agent loop, just question → answer:
+
+```tsx
+<AIAgent
+  enableUIControl={false}
+  knowledgeBase={KNOWLEDGE}
+/>
+```
+
+**How it differs from full mode:**
+
+| | Full mode (default) | Knowledge-only mode |
+|---|---|---|
+| UI tree analysis | ✅ Full fiber walk | ❌ Skipped |
+| Screen content sent to LLM | ✅ ~500-2000 tokens | ❌ Only screen name |
+| Screenshots | ✅ Optional | ❌ Skipped |
+| Agent loop | Up to 10 steps | Single LLM call |
+| Available tools | 7 (tap, type, navigate, ...) | 2 (done, query_knowledge) |
+| System prompt | ~1,500 tokens | ~400 tokens |
+
+The AI still knows the current **screen name** (from navigation state, zero cost), so `screens`-filtered knowledge entries work correctly. It just can't see what's *on* the screen — ideal for domain Q&A where answers come from knowledge, not UI.
+
 ## 🧠 Knowledge Base
 
 Give the AI domain-specific knowledge it can query on demand — policies, FAQs, product details, etc. The AI uses a `query_knowledge` tool to fetch relevant entries only when needed (no token waste).
 
+
 ### Static Array (Simple)
 
 Pass an array of entries — the SDK handles keyword-based retrieval internally:
  Pass an array of entries — the SDK handles keyword-based retrieval internally:
@@ -233,29 +258,6 @@ Bring your own retrieval logic — call an API, vector database, or any async so
 
 The retriever receives the user's question and current screen name, and returns a formatted string with the relevant knowledge.
 
-### Knowledge-Only Mode
-
-Set `enableUIControl={false}` to disable all UI interactions. The AI becomes a lightweight FAQ / support assistant — no screen analysis, no multi-step agent loop, just question → answer:
-
-```tsx
-<AIAgent
-  enableUIControl={false}
-  knowledgeBase={KNOWLEDGE}
-/>
-```
-
-**How it differs from full mode:**
-
-| | Full mode (default) | Knowledge-only mode |
-|---|---|---|
-| UI tree analysis | ✅ Full fiber walk | ❌ Skipped |
-| Screen content sent to LLM | ✅ ~500-2000 tokens | ❌ Only screen name |
-| Screenshots | ✅ Optional | ❌ Skipped |
-| Agent loop | Up to 10 steps | Single LLM call |
-| Available tools | 7 (tap, type, navigate, ...) | 2 (done, query_knowledge) |
-| System prompt | ~1,500 tokens | ~400 tokens |
-
-The AI still knows the current **screen name** (from navigation state, zero cost), so `screens`-filtered knowledge entries work correctly. It just can't see what's *on* the screen — ideal for domain Q&A where answers come from knowledge, not UI.
 
 ## 🔌 API Reference
 
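The retriever contract described in the README diff above (user question and current screen name in, formatted knowledge string out) can be sketched outside React Native. This is an illustrative stand-alone model, not the SDK's actual implementation; the `KnowledgeEntry` shape, the `screens` field, and the substring-based keyword matching here are all assumptions for the sake of the example.

```typescript
// Illustrative model of keyword-based, screen-filtered retrieval.
// The entry shape (keywords, screens) is a hypothetical stand-in,
// not the SDK's real API.
interface KnowledgeEntry {
  text: string;
  keywords: string[];
  screens?: string[]; // if set, the entry only applies on these screens
}

// Mirrors the documented contract: (question, screenName) -> string.
function retrieve(
  entries: KnowledgeEntry[],
  question: string,
  screenName: string
): string {
  const q = question.toLowerCase();
  return entries
    // Entries without a screens filter apply everywhere.
    .filter((e) => !e.screens || e.screens.includes(screenName))
    // Keep entries whose keywords appear in the question.
    .filter((e) => e.keywords.some((k) => q.includes(k.toLowerCase())))
    .map((e) => e.text)
    .join("\n");
}

// Hypothetical sample data.
const KNOWLEDGE: KnowledgeEntry[] = [
  {
    text: "Refunds are processed within 5 business days.",
    keywords: ["refund"],
    screens: ["Orders"],
  },
  {
    text: "Support hours are 9am-5pm on weekdays.",
    keywords: ["support", "hours"],
  },
];

console.log(retrieve(KNOWLEDGE, "How do refunds work?", "Orders"));
// -> "Refunds are processed within 5 business days."
```

Because the only screen-derived input is the screen name, this shape of retrieval works identically in knowledge-only mode, which is why `screens`-filtered entries keep functioning when `enableUIControl={false}`.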
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "react-native-agentic-ai",
-  "version": "0.5.12",
+  "version": "0.5.14",
   "description": "Build autonomous AI agents for React Native and Expo apps. Provides AI-native UI traversal, tool calling, and structured reasoning.",
   "main": "./lib/module/index.js",
   "source": "./src/index.ts",