react-native-agentic-ai 0.5.12 → 0.5.13

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +24 -23
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -190,6 +190,30 @@ A floating chat bar appears automatically. Ask the AI to navigate, tap buttons,
 
 Give the AI domain-specific knowledge it can query on demand — policies, FAQs, product details, etc. The AI uses a `query_knowledge` tool to fetch relevant entries only when needed (no token waste).
 
+### 💡 Knowledge-Only Mode
+
+Don't need UI automation? Set `enableUIControl={false}`. The AI becomes a lightweight FAQ / support assistant — no screen analysis, no multi-step agent loop, just question → answer:
+
+```tsx
+<AIAgent
+  enableUIControl={false}
+  knowledgeBase={KNOWLEDGE}
+/>
+```
+
+**How it differs from full mode:**
+
+| | Full mode (default) | Knowledge-only mode |
+|---|---|---|
+| UI tree analysis | ✅ Full fiber walk | ❌ Skipped |
+| Screen content sent to LLM | ✅ ~500-2000 tokens | ❌ Only screen name |
+| Screenshots | ✅ Optional | ❌ Skipped |
+| Agent loop | Up to 10 steps | Single LLM call |
+| Available tools | 7 (tap, type, navigate, ...) | 2 (done, query_knowledge) |
+| System prompt | ~1,500 tokens | ~400 tokens |
+
+The AI still knows the current **screen name** (from navigation state, zero cost), so `screens`-filtered knowledge entries work correctly. It just can't see what's *on* the screen — ideal for domain Q&A where answers come from knowledge, not UI.
+
 ### Static Array (Simple)
 
 Pass an array of entries — the SDK handles keyword-based retrieval internally:
@@ -233,29 +257,6 @@ Bring your own retrieval logic — call an API, vector database, or any async so
 
 The retriever receives the user's question and current screen name, and returns a formatted string with the relevant knowledge.
 
-### Knowledge-Only Mode
-
-Set `enableUIControl={false}` to disable all UI interactions. The AI becomes a lightweight FAQ / support assistant — no screen analysis, no multi-step agent loop, just question → answer:
-
-```tsx
-<AIAgent
-  enableUIControl={false}
-  knowledgeBase={KNOWLEDGE}
-/>
-```
-
-**How it differs from full mode:**
-
-| | Full mode (default) | Knowledge-only mode |
-|---|---|---|
-| UI tree analysis | ✅ Full fiber walk | ❌ Skipped |
-| Screen content sent to LLM | ✅ ~500-2000 tokens | ❌ Only screen name |
-| Screenshots | ✅ Optional | ❌ Skipped |
-| Agent loop | Up to 10 steps | Single LLM call |
-| Available tools | 7 (tap, type, navigate, ...) | 2 (done, query_knowledge) |
-| System prompt | ~1,500 tokens | ~400 tokens |
-
-The AI still knows the current **screen name** (from navigation state, zero cost), so `screens`-filtered knowledge entries work correctly. It just can't see what's *on* the screen — ideal for domain Q&A where answers come from knowledge, not UI.
 
 ## 🔌 API Reference
 
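The README diff above describes two retrieval paths: a static array, where the SDK handles keyword-based retrieval internally, and a custom retriever that receives the user's question and current screen name and returns a formatted string. A minimal sketch of that keyword lookup, assuming a hypothetical `KnowledgeEntry` shape (the field names `title`, `content`, `keywords`, and `screens` are illustrative, not the package's actual types):

```typescript
// Hypothetical entry shape inferred from the README text, not the package's real types.
interface KnowledgeEntry {
  title: string;
  content: string;
  keywords: string[];
  screens?: string[]; // optional screen filter, as described for `screens`-filtered entries
}

const KNOWLEDGE: KnowledgeEntry[] = [
  {
    title: "Refunds",
    content: "Refunds are processed within 5 business days.",
    keywords: ["refund", "money back"],
    screens: ["Orders"],
  },
  {
    title: "Shipping",
    content: "Standard shipping takes 3-7 days.",
    keywords: ["shipping", "delivery"],
  },
];

// Naive keyword-based retrieval in the spirit of what the README says the SDK
// does internally for static arrays: filter by screen (when an entry declares
// one), match keywords against the question, and return a formatted string.
function retrieve(question: string, screenName: string): string {
  const q = question.toLowerCase();
  const hits = KNOWLEDGE.filter(
    (entry) =>
      (!entry.screens || entry.screens.includes(screenName)) &&
      entry.keywords.some((keyword) => q.includes(keyword))
  );
  return hits.map((entry) => `${entry.title}: ${entry.content}`).join("\n");
}
```

A real custom retriever would typically call an API or vector store instead, but the input/output contract the README describes is the same: question plus current screen name in, formatted knowledge string out.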
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "react-native-agentic-ai",
-  "version": "0.5.12",
+  "version": "0.5.13",
   "description": "Build autonomous AI agents for React Native and Expo apps. Provides AI-native UI traversal, tool calling, and structured reasoning.",
   "main": "./lib/module/index.js",
   "source": "./src/index.ts",