@mobileai/react-native 0.5.13 → 0.5.15

This diff shows the changes between publicly released versions of the package, as they appear in their public registry. It is provided for informational purposes only.
Files changed (2)
  1. package/README.md +7 -5
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -34,6 +34,7 @@
  - 🌐 **MCP bridge** — Let external AI agents (OpenClaw, Claude Desktop) control your app remotely.
  - 🧠 **Knowledge base** — Give the AI domain knowledge (policies, FAQs, product info) it can query on demand. Static array or bring your own retriever.
  - 💡 **Knowledge-only mode** — Set `enableUIControl={false}` for a lightweight AI assistant with no UI interaction — single LLM call, ~70% fewer tokens.
+ - 🎙️ **Voice dictation** — Let users speak their request instead of typing. Automatically enabled if `expo-speech-recognition` is installed.
 
 
  ### 🎤 Voice Mode (Live Agent)
@@ -186,11 +187,7 @@ export default function RootLayout() {
 
  A floating chat bar appears automatically. Ask the AI to navigate, tap buttons, fill forms — it reads your live UI and acts.
 
- ## 🧠 Knowledge Base
-
- Give the AI domain-specific knowledge it can query on demand — policies, FAQs, product details, etc. The AI uses a `query_knowledge` tool to fetch relevant entries only when needed (no token waste).
-
- ### 💡 Knowledge-Only Mode
+ ### Knowledge-Only Mode (No UI Automation)
 
  Don't need UI automation? Set `enableUIControl={false}`. The AI becomes a lightweight FAQ / support assistant — no screen analysis, no multi-step agent loop, just question → answer:
 
@@ -214,6 +211,11 @@ Don't need UI automation? Set `enableUIControl={false}`. The AI becomes a lightw
 
  The AI still knows the current **screen name** (from navigation state, zero cost), so `screens`-filtered knowledge entries work correctly. It just can't see what's *on* the screen — ideal for domain Q&A where answers come from knowledge, not UI.
 
+ ## 🧠 Knowledge Base
+
+ Give the AI domain-specific knowledge it can query on demand — policies, FAQs, product details, etc. The AI uses a `query_knowledge` tool to fetch relevant entries only when needed (no token waste).
+
+
  ### Static Array (Simple)
 
  Pass an array of entries — the SDK handles keyword-based retrieval internally:
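To make the "keyword-based retrieval" behavior concrete, here is a minimal, self-contained sketch of what a `query_knowledge`-style lookup over a static entry array could do. The entry shape (`title`, `content`, `screens`) and the scoring are assumptions for illustration, not the SDK's actual implementation:

```typescript
// Hypothetical keyword retrieval over a static knowledge array.
interface KnowledgeEntry {
  title: string;
  content: string;
  screens?: string[]; // optionally restrict an entry to certain screens
}

const knowledge: KnowledgeEntry[] = [
  { title: "Returns", content: "Items can be returned within 30 days." },
  { title: "Shipping", content: "Standard shipping takes 3-5 business days." },
];

// Naive keyword match: score each entry by how many query words it contains,
// drop non-matches, and return the rest best-first.
function queryKnowledge(query: string, entries: KnowledgeEntry[]): KnowledgeEntry[] {
  const words = query.toLowerCase().split(/\s+/);
  return entries
    .map((entry) => ({
      entry,
      score: words.filter((w) =>
        (entry.title + " " + entry.content).toLowerCase().includes(w)
      ).length,
    }))
    .filter((scored) => scored.score > 0)
    .sort((a, b) => b.score - a.score)
    .map((scored) => scored.entry);
}

console.log(queryKnowledge("return policy", knowledge)[0].title); // → "Returns"
```

Because retrieval is a tool call, only the matching entries enter the prompt — which is the "no token waste" claim in the README text above.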
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@mobileai/react-native",
- "version": "0.5.13",
+ "version": "0.5.15",
  "description": "Build autonomous AI agents for React Native and Expo apps. Provides AI-native UI traversal, tool calling, and structured reasoning.",
  "main": "./lib/module/index.js",
  "source": "./src/index.ts",