react-native-ai-core 0.1.0 → 0.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE CHANGED
@@ -1,6 +1,6 @@
  MIT License
 
- Copyright (c) 2026 Alberto Fernandez
+ Copyright (c) 2026 Alberto Fernandez (@albertoroda)
 
  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files (the "Software"), to deal
  in the Software without restriction, including without limitation the rights
package/README.md CHANGED
@@ -47,7 +47,14 @@ Add to `android/gradle.properties`:
  minSdkVersion=26
  ```
 
- No additional permissions are required. The native module is auto-linked.
+ The native module is auto-linked. No manual permission declarations are needed; the library's `AndroidManifest.xml` is merged automatically into your app and includes:
+
+ - `FOREGROUND_SERVICE` — keeps the process alive during background generation
+ - `FOREGROUND_SERVICE_DATA_SYNC` — required foreground service type (Android 14+)
+
+ These permissions will appear in your compiled app manifest and may be visible in Play Store security reviews.
+
+ > **Note:** Testing requires a physical Android device. The Android emulator does not support NPU hardware or the AICore system service, so the ML Kit AICore backend will not function on it. The MediaPipe backend may run in an emulator but is not officially supported or tested in that environment.
 
  ---
 
@@ -99,6 +106,90 @@ To start a fresh conversation without releasing the model:
  await AICore.resetConversation();
  ```
 
+ ### Structured output with runtime validation
+
+ For app-internal AI features such as extraction, classification, routing, or tool orchestration, use `generateStructuredResponse(...)`.
+
+ - Validates optional structured input before generation
+ - Forces JSON-only output
+ - Extracts JSON even if the model wraps it in extra text
+ - Validates the final payload with `zod`
+ - Retries automatically with a repair prompt when validation fails
+ - Uses a stateless native request so it does not pollute chat conversation history
+ - Supports an `AbortSignal` to cancel mid-generation
+
+ ```tsx
+ import { z } from 'zod';
+ import { generateStructuredResponse } from 'react-native-ai-core';
+
+ const TicketSchema = z.object({
+   category: z.enum(['bug', 'billing', 'feature']),
+   priority: z.enum(['low', 'medium', 'high']),
+   summary: z.string(),
+   needsHuman: z.boolean(),
+ });
+
+ const ctrl = new AbortController();
+
+ const result = await generateStructuredResponse({
+   prompt: 'Classify this support request and summarize it.',
+   input: {
+     message: 'The app crashes when I try to export a PDF invoice.',
+   },
+   output: TicketSchema,
+   signal: ctrl.signal,
+ });
+
+ // Cancel mid-way
+ ctrl.abort();
+ ```
+
+ Recommended for reliability on-device:
+ - Keep the prompt short and task-specific
+ - Keep `input` compact and validated before sending it
+ - Prefer small output schemas over deeply nested ones
+ - Use this API for internal app workflows, not long-form generation
+ - Repair retries are bounded, and prompt size is trimmed internally to avoid hitting the same context limits as chat flows
+
+ There is also a concrete demo helper in [example/src/examples/structuredOutputExample.ts](example/src/examples/structuredOutputExample.ts).
+
+ ### Cancelling generation
+
+ Both chat and structured generation can be stopped at any point.
+
+ **Chat / streaming:**
+
+ ```tsx
+ import { cancelGeneration } from 'react-native-ai-core';
+
+ // Stop an ongoing generateResponse or generateResponseStream call
+ await cancelGeneration();
+ ```
+
+ For streaming, the `onComplete` callback fires normally after cancellation — `onError` is not called.
+
+ **Structured output:**
+
+ Pass an `AbortSignal` from an `AbortController` to `generateStructuredResponse`. When you call `ctrl.abort()`, generation stops (with the `'chunked'` strategy, at the next field boundary) and the promise rejects with an `Error` whose `name` is `'AbortError'`.
+
+ ```tsx
+ const ctrl = new AbortController();
+
+ // Start generation
+ const promise = generateStructuredResponse({ ..., signal: ctrl.signal });
+
+ // Cancel from a button handler
+ ctrl.abort();
+
+ try {
+   await promise;
+ } catch (err) {
+   if (err.name === 'AbortError') {
+     console.log('Cancelled by user');
+   }
+ }
+ ```
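
The `AbortError` contract shown above can be reproduced for any pending promise with the standard `AbortSignal` pattern. The helper below is a generic, library-independent sketch (`withAbort` is hypothetical, not an export of `react-native-ai-core`):

```typescript
// Generic pattern: reject a pending promise with an AbortError as soon as
// the signal fires, matching the error contract described above.
function withAbort<T>(promise: Promise<T>, signal: AbortSignal): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const onAbort = () => {
      const err = new Error('Aborted');
      err.name = 'AbortError';
      reject(err);
    };
    if (signal.aborted) {
      onAbort();
      return;
    }
    signal.addEventListener('abort', onAbort, { once: true });
    promise.then(
      (value) => {
        signal.removeEventListener('abort', onAbort);
        resolve(value);
      },
      (reason) => {
        signal.removeEventListener('abort', onAbort);
        reject(reason);
      },
    );
  });
}
```

Note that this only rejects the JavaScript promise; cancelling the underlying native work is what the library's own `signal` support is for.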
+
  ---
 
  ## API Reference
@@ -120,6 +211,23 @@ Returns `true` on success. Throws on failure.
 
  ---
 
+ ### `cancelGeneration(): Promise<void>`
+
+ Cancels the in-progress generation immediately.
+
+ - For **streaming** (`generateResponseStream`): stops the token stream and fires `onComplete` (not `onError`).
+ - For **blocking** (`generateResponse`): rejects the pending promise with code `CANCELLED`.
+ - Safe to call even when no generation is running.
+
+ ```tsx
+ await AICore.cancelGeneration();
+ // or named export:
+ import { cancelGeneration } from 'react-native-ai-core';
+ await cancelGeneration();
+ ```
+
+ ---
+
  ### `generateResponse(prompt: string): Promise<string>`
 
  Generates a complete response in a single call (no streaming); the promise resolves once the full output is ready.
@@ -134,6 +242,47 @@ const response = await AICore.generateResponse('Tell me a joke');
 
  ---
 
+ ### `generateStructuredResponse(options): Promise<T>`
+
+ Generates stateless structured JSON and validates it against a user-defined `zod` schema.
+
+ ```tsx
+ import { z } from 'zod';
+
+ const OutputSchema = z.object({
+   intent: z.enum(['search', 'reply', 'ignore']),
+   confidence: z.number(),
+ });
+
+ const ctrl = new AbortController();
+
+ const output = await generateStructuredResponse({
+   prompt: 'Determine the next action for this message.',
+   input: { message: 'Can you send me the invoice again?' },
+   output: OutputSchema,
+   signal: ctrl.signal,
+ });
+ ```
+
+ Options:
+
+ | Option | Type | Description |
+ |---|---|---|
+ | `prompt` | `string` | Natural language instruction |
+ | `input` | `unknown` | Optional structured input object |
+ | `inputSchema` | `ZodType` | Optional schema to validate `input` before generation |
+ | `output` | `ZodType` | Required schema to validate the model output |
+ | `strategy` | `'single' \| 'chunked'` | `'single'` (default) generates the whole JSON in one call. `'chunked'` walks the schema field by field — use it for large or complex schemas |
+ | `maxRetries` | `number` | Repair attempts when validation fails (default `2`) |
+ | `maxContinuations` | `number` | Max continuation calls when JSON is truncated (default `8`) |
+ | `timeoutMs` | `number` | Per-call timeout in ms (default `300000`) |
+ | `onProgress` | `(field, done) => void` | Called for each field during `'chunked'` generation |
+ | `signal` | `AbortSignal` | Pass a signal to cancel mid-generation. Rejects with an `Error` whose `name` is `'AbortError'` |
+
+ Throws `StructuredOutputError` if valid JSON matching the schema cannot be produced after retries.
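
To illustrate what the `'chunked'` strategy means conceptually, here is a minimal schema-walker sketch. It is not the library's implementation: the validator map stands in for a `zod` schema, and `generateField` stands in for one model call per field:

```typescript
// Conceptual sketch of 'chunked' generation: walk the output schema
// field by field, generate and validate one field at a time, and report
// progress after each field completes.
type FieldValidator = (raw: string) => unknown;

function generateChunked(
  schema: Record<string, FieldValidator>,
  generateField: (name: string) => string,          // one model call per field
  onProgress?: (field: string, done: number) => void,
): Record<string, unknown> {
  const result: Record<string, unknown> = {};
  Object.keys(schema).forEach((name, i) => {
    // Each field is generated and validated independently, so one large
    // JSON payload never has to fit in a single model response.
    result[name] = schema[name](generateField(name));
    onProgress?.(name, i + 1);
  });
  return result;
}
```

This per-field shape is why `'chunked'` suits large schemas, and why aborting can take effect at a field boundary rather than instantly.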
+
+ ---
+
  ### `generateResponseStream(prompt: string, callbacks: StreamCallbacks): () => void`
 
  Generates a response token by token via streaming. Returns a cleanup function to remove event listeners.
@@ -217,6 +366,7 @@ A ready-to-use React hook is available in the example app as a reference impleme
  - Streaming with incremental message updates
  - Conversation history reset on clear
  - Error state management
+ - `stopGeneration()` — calls `cancelGeneration()` to abort the in-progress response
 
  See [`example/src/hooks/useAICore.ts`](example/src/hooks/useAICore.ts).
 
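
The hook's stop wiring can be reduced to a small, framework-free sketch. `makeStopHandler` is illustrative (only `cancelGeneration` exists in the package); in the real hook the state setter comes from React:

```typescript
// Framework-free sketch of stopGeneration wiring. Dependencies are injected
// so the sketch does not need the native module or React.
function makeStopHandler(
  cancelGeneration: () => Promise<void>,
  setGenerating: (generating: boolean) => void,
): () => Promise<void> {
  return async () => {
    await cancelGeneration(); // native side stops the stream; onComplete still fires
    setGenerating(false);     // clear the "generating" UI state right away
  };
}
```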
@@ -249,7 +399,7 @@ Dependency: `com.google.mediapipe:tasks-genai:0.10.22`
  - [ ] Model quantization options
  - [ ] System prompt / persona configuration
  - [ ] Token count estimation
- - [ ] Abort/cancel streaming mid-generation
+ - [x] Abort/cancel streaming mid-generation
  - [ ] Web support (WebGPU / WASM)
 
  ---
@@ -266,7 +416,7 @@ Contributions, issues and feature requests are welcome.
 
  ## License
 
- MIT © [Alberto Fernandez](https://github.com/albertofernandezroda)
+ MIT © [Alberto Fernandez](https://github.com/albertoroda)
 
  ---
 
package/android/src/main/AndroidManifest.xml CHANGED
@@ -1,2 +1,14 @@
  <manifest xmlns:android="http://schemas.android.com/apk/res/android">
+
+   <!-- Keeps the process alive while AI generation runs in background -->
+   <uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
+   <uses-permission android:name="android.permission.FOREGROUND_SERVICE_DATA_SYNC" />
+
+   <application>
+     <service
+       android:name=".InferenceService"
+       android:exported="false"
+       android:foregroundServiceType="dataSync" />
+   </application>
+
  </manifest>