ak-gemini 1.2.0 → 2.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,391 +1,356 @@
1
- # AK-Gemini
1
+ # ak-gemini
2
2
 
3
- **Generic, type-safe, and highly configurable wrapper for Google's Gemini AI JSON transformation.**
4
- Use this to power LLM-driven data pipelines, JSON mapping, or any automated AI transformation step, locally or in cloud functions.
5
-
6
- ---
7
-
8
- ## Features
9
-
10
- * **Model-Agnostic:** Use any Gemini model (`gemini-2.5-flash` by default)
11
- * **Declarative Few-shot Examples:** Seed transformations using example mappings, with support for custom keys (`PROMPT`, `ANSWER`, `CONTEXT`, or your own)
12
- * **Automatic Validation & Repair:** Validate outputs with your own async function; auto-repair failed payloads with LLM feedback loop (exponential backoff, fully configurable)
13
- * **Token Counting & Safety:** Preview the *exact* Gemini token consumption for any operation—including all examples, instructions, and your input—before sending, so you can avoid window errors and manage costs.
14
- * **Conversation Management:** Clear conversation history while preserving examples, or send stateless one-off messages that don't affect history
15
- * **Response Metadata:** Access actual model version and token counts from API responses for billing verification and debugging
16
- * **Strong TypeScript/JSDoc Typings:** All public APIs fully typed (see `/types`)
17
- * **Minimal API Surface:** Dead simple, no ceremony—init, seed, transform, validate.
18
- * **Robust Logging:** Pluggable logger for all steps, easy debugging
19
-
20
- ---
21
-
22
- ## Install
3
+ **Modular, type-safe wrapper for Google's Gemini AI.** Five class exports for different interaction patterns — JSON transformation, chat, stateless messages, tool-using agents, and code-writing agents — all sharing a common base.
23
4
 
24
5
  ```sh
25
6
  npm install ak-gemini
26
7
  ```
27
8
 
28
- Requires Node.js 18+, and [@google/genai](https://www.npmjs.com/package/@google/genai).
9
+ Requires Node.js 18+ and [@google/genai](https://www.npmjs.com/package/@google/genai).
29
10
 
30
11
  ---
31
12
 
32
- ## Usage
33
-
34
- ### 1. **Setup**
35
-
36
- Set your `GEMINI_API_KEY` environment variable:
13
+ ## Quick Start
37
14
 
38
15
  ```sh
39
- export GEMINI_API_KEY=sk-your-gemini-api-key
16
+ export GEMINI_API_KEY=your-key
40
17
  ```
41
18
 
42
- or pass it directly in the constructor options.
19
+ ```javascript
20
+ import { Transformer, Chat, Message, ToolAgent, CodeAgent } from 'ak-gemini';
21
+ ```
43
22
 
44
23
  ---
45
24
 
46
- ### 2. **Basic Example**
25
+ ## Classes
26
+
27
+ ### Transformer — JSON Transformation
47
28
 
48
- ```js
49
- import AITransformer from 'ak-gemini';
29
+ Transform structured data using few-shot examples with validation and retry.
50
30
 
51
- const transformer = new AITransformer({
52
- modelName: 'gemini-2.5-flash', // or your preferred Gemini model
53
- sourceKey: 'INPUT', // Custom prompt key (default: 'PROMPT')
54
- targetKey: 'OUTPUT', // Custom answer key (default: 'ANSWER')
55
- contextKey: 'CONTEXT', // Optional, for per-example context
56
- maxRetries: 2, // Optional, for validation-repair loops
57
- // responseSchema: { ... }, // Optional, strict output typing
31
+ ```javascript
32
+ const transformer = new Transformer({
33
+ modelName: 'gemini-2.5-flash',
34
+ sourceKey: 'INPUT',
35
+ targetKey: 'OUTPUT'
58
36
  });
59
37
 
60
- const examples = [
38
+ await transformer.init();
39
+ await transformer.seed([
61
40
  {
62
- CONTEXT: "Generate professional profiles with emoji representations",
63
- INPUT: { "name": "Alice" },
64
- OUTPUT: { "name": "Alice", "profession": "data scientist", "life_as_told_by_emoji": ["🔬", "💡", "📊", "🧠", "🌟"] }
41
+ INPUT: { name: 'Alice' },
42
+ OUTPUT: { name: 'Alice', role: 'engineer', emoji: '👩‍💻' }
65
43
  }
66
- ];
67
-
68
- await transformer.init();
69
- await transformer.seed(examples);
44
+ ]);
70
45
 
71
- const result = await transformer.message({ name: "Bob" });
72
- console.log(result);
73
- // → { name: "Bob", profession: "...", life_as_told_by_emoji: [ ... ] }
46
+ const result = await transformer.send({ name: 'Bob' });
47
+ // → { name: 'Bob', role: '...', emoji: '...' }
74
48
  ```
75
49
 
76
- ---
77
-
78
- ### 3. **Token Window Safety/Preview**
50
+ **Validation & self-healing:**
79
51
 
80
- Before calling `.message()` or `.seed()`, you can preview the INPUT token usage that will be sent to Gemini—*including* your system instructions, examples, and user input. This is vital for avoiding window errors and managing context size:
52
+ ```javascript
53
+ const result = await transformer.send({ name: 'Bob' }, {}, async (output) => {
54
+ if (!output.role) throw new Error('Missing role field');
55
+ return output;
56
+ });
57
+ ```
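Internally this amounts to a validate-and-repair loop with exponential backoff (governed by `maxRetries` and `retryDelay`). A minimal sketch of the pattern, where `transform` and `repair` are hypothetical stand-ins for the library's model calls, not its actual internals:

```javascript
// Sketch of a validate-and-repair loop with exponential backoff.
// `transform` and `repair` are hypothetical stand-ins for model calls.
async function withValidation(input, validator, transform, repair, maxRetries = 3, retryDelay = 1000) {
  let payload = await transform(input);
  for (let attempt = 1; ; attempt++) {
    try {
      return await validator(payload); // validator throws on bad output
    } catch (err) {
      if (attempt > maxRetries) {
        throw new Error(`Validation failed after ${attempt} attempts: ${err.message}`);
      }
      // Exponential backoff: retryDelay, 2x, 4x, ...
      await new Promise(r => setTimeout(r, retryDelay * 2 ** (attempt - 1)));
      payload = await repair(payload, err.message); // ask the model to fix it
    }
  }
}
```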
81
58
 
82
- ```js
83
- const { inputTokens } = await transformer.estimate({ name: "Bob" });
84
- console.log(`Input tokens: ${inputTokens}`);
59
+ ### Chat — Multi-Turn Conversation
85
60
 
86
- // Optional: abort or trim if over limit
87
- if (inputTokens > 32000) throw new Error("Request too large for selected Gemini model");
61
+ ```javascript
62
+ const chat = new Chat({
63
+ systemPrompt: 'You are a helpful assistant.'
64
+ });
88
65
 
89
- // After the call, check actual usage (input + output)
90
- await transformer.message({ name: "Bob" });
91
- const usage = transformer.getLastUsage();
92
- console.log(`Actual usage: ${usage.promptTokens} in, ${usage.responseTokens} out`);
66
+ const r1 = await chat.send('My name is Alice.');
67
+ const r2 = await chat.send('What is my name?');
68
+ // r2.text → "Alice"
93
69
  ```
94
70
 
95
- ---
96
-
97
- ### 4. **Automatic Validation & Self-Healing**
71
+ ### Message — Stateless One-Off
98
72
 
99
- You can pass a custom async validator; if it fails, the transformer will attempt to self-correct using LLM feedback, retrying up to `maxRetries` times:
73
+ Each call is independent; no history is maintained.
100
74
 
101
- ```js
102
- const validator = async (payload) => {
103
- if (!payload.profession || !Array.isArray(payload.life_as_told_by_emoji)) {
104
- throw new Error('Invalid profile format');
75
+ ```javascript
76
+ const msg = new Message({
77
+ systemPrompt: 'Extract entities as JSON.',
78
+ responseMimeType: 'application/json',
79
+ responseSchema: {
80
+ type: 'object',
81
+ properties: {
82
+ entities: { type: 'array', items: { type: 'string' } }
83
+ }
105
84
  }
106
- return payload;
107
- };
85
+ });
108
86
 
109
- const validPayload = await transformer.transformWithValidation({ name: "Lynn" }, validator);
110
- console.log(validPayload);
87
+ const result = await msg.send('Alice works at Acme in New York.');
88
+ // result.data → { entities: ['Alice', 'Acme', 'New York'] }
111
89
  ```
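Even with a `responseSchema`, a local shape check before consuming `result.data` is cheap insurance. A sketch matching the schema above (the helper name is illustrative, not part of the library):

```javascript
// Local sanity check on structured output before using it downstream.
// Expects the { entities: string[] } shape declared in the responseSchema.
function assertEntities(data) {
  if (!data || !Array.isArray(data.entities) || !data.entities.every(e => typeof e === 'string')) {
    throw new Error('Unexpected shape: expected { entities: string[] }');
  }
  return data.entities;
}
```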
112
90
 
113
- ---
114
-
115
- ### 5. **Conversation Management**
116
-
117
- Manage chat history to control costs and isolate requests:
118
-
119
- ```js
120
- // Clear conversation history while preserving seeded examples
121
- await transformer.clearConversation();
122
-
123
- // Send a stateless message that doesn't affect chat history
124
- const result = await transformer.message({ query: "one-off question" }, { stateless: true });
91
+ ### ToolAgent — Agent with User-Provided Tools
92
+
93
+ Provide tool declarations and an executor function. The agent manages the tool-use loop automatically.
94
+
95
+ ```javascript
96
+ const agent = new ToolAgent({
97
+ systemPrompt: 'You are a research assistant.',
98
+ tools: [
99
+ {
100
+ name: 'http_get',
101
+ description: 'Fetch a URL',
102
+ parametersJsonSchema: {
103
+ type: 'object',
104
+ properties: { url: { type: 'string' } },
105
+ required: ['url']
106
+ }
107
+ }
108
+ ],
109
+ toolExecutor: async (toolName, args) => {
110
+ if (toolName === 'http_get') {
111
+ const res = await fetch(args.url);
112
+ return { status: res.status, body: await res.text() };
113
+ }
114
+ },
115
+ onBeforeExecution: async (toolName, args) => {
116
+ console.log(`About to call ${toolName}`);
117
+ return true; // return false to deny
118
+ }
119
+ });
125
120
 
126
- // Check actual model and token usage from last API call
127
- console.log(transformer.lastResponseMetadata);
128
- // { modelVersion: 'gemini-2.5-flash-001', requestedModel: 'gemini-2.5-flash',
129
- // promptTokens: 150, responseTokens: 42, totalTokens: 192, timestamp: 1703... }
121
+ const result = await agent.chat('Fetch https://api.example.com/data');
122
+ console.log(result.text); // Agent's summary
123
+ console.log(result.toolCalls); // [{ name, args, result }]
130
124
  ```
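With more than a couple of tools, a dispatch table keeps `toolExecutor` flat instead of growing an if-chain. A sketch (handlers beyond `http_get` are hypothetical examples):

```javascript
// Dispatch-table pattern for toolExecutor: map tool names to handlers.
// The `add` handler is a hypothetical example alongside http_get.
const handlers = {
  http_get: async ({ url }) => {
    const res = await fetch(url);
    return { status: res.status, body: await res.text() };
  },
  add: async ({ a, b }) => ({ sum: a + b })
};

const toolExecutor = async (toolName, args) => {
  const handler = handlers[toolName];
  if (!handler) throw new Error(`Unknown tool: ${toolName}`);
  return handler(args);
};
```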
131
125
 
132
- ---
133
-
134
- ## API
126
+ **Streaming:**
135
127
 
136
- ### Constructor
137
-
138
- ```js
139
- new AITransformer(options)
128
+ ```javascript
129
+ for await (const event of agent.stream('Fetch the data')) {
130
+ if (event.type === 'text') process.stdout.write(event.text);
131
+ if (event.type === 'tool_call') console.log(`Calling ${event.toolName}...`);
132
+ if (event.type === 'tool_result') console.log(`Result:`, event.result);
133
+ if (event.type === 'done') console.log('Done!');
134
+ }
140
135
  ```
141
136
 
142
- | Option | Type | Default | Description |
143
- | ------------------ | ------ | ------------------ | ------------------------------------------------- |
144
- | modelName | string | 'gemini-2.5-flash' | Gemini model to use |
145
- | sourceKey | string | 'PROMPT' | Key for prompt/example input |
146
- | targetKey | string | 'ANSWER' | Key for expected output in examples |
147
- | contextKey | string | 'CONTEXT' | Key for per-example context (optional) |
148
- | examplesFile | string | null | Path to JSON file containing examples |
149
- | exampleData | array | null | Inline array of example objects |
150
- | responseSchema | object | null | Optional JSON schema for strict output validation |
151
- | maxRetries | number | 3 | Retries for validation+rebuild loop |
152
- | retryDelay | number | 1000 | Initial retry delay in ms (exponential backoff) |
153
- | logLevel | string | 'info' | Log level: 'trace', 'debug', 'info', 'warn', 'error', 'fatal', or 'none' |
154
- | chatConfig | object | ... | Gemini chat config overrides |
155
- | systemInstructions | string/null/false | (default prompt) | System prompt for Gemini. Pass `null` or `false` to disable. |
156
- | maxOutputTokens | number | 50000 | Maximum tokens in generated response |
157
- | thinkingConfig | object | null | Thinking features config (see below) |
158
- | enableGrounding | boolean | false | Enable Google Search grounding (WARNING: $35/1k queries) |
159
- | labels | object | null | Billing labels for cost attribution |
160
- | apiKey | string | env var | Gemini API key (or use `GEMINI_API_KEY` env var) |
161
- | vertexai | boolean | false | Use Vertex AI instead of Gemini API |
162
- | project | string | env var | GCP project ID (for Vertex AI) |
163
- | location | string | 'global' | GCP region (for Vertex AI) |
164
- | googleAuthOptions | object | null | Auth options for Vertex AI (keyFilename, credentials) |
165
-
166
- ---
167
-
168
- ### Methods
169
-
170
- #### `await transformer.init()`
171
-
172
- Initializes Gemini chat session (idempotent).
173
-
174
- #### `await transformer.seed(examples?)`
175
-
176
- Seeds the model with example transformations (uses keys from constructor).
177
- You can omit `examples` to use the `examplesFile` (if provided).
178
-
179
- #### `await transformer.message(sourcePayload, options?)`
180
-
181
- Transforms input JSON to output JSON using the seeded examples and system instructions. Throws if estimated token window would be exceeded.
182
-
183
- **Options:**
184
- - `stateless: true` — Send a one-off message without affecting chat history (uses `generateContent` instead of chat)
185
- - `labels: {}` — Per-message billing labels
186
-
187
- #### `await transformer.estimate(sourcePayload)`
188
-
189
- Returns `{ inputTokens }` — the estimated INPUT tokens for the request (system instructions + all examples + your sourcePayload).
190
- Use this to preview token window safety and manage costs before sending.
191
-
192
- **Note:** This only estimates input tokens. Output tokens cannot be predicted before the API call. Use `getLastUsage()` after `message()` to see actual consumption.
193
-
194
- #### `await transformer.transformWithValidation(sourcePayload, validatorFn, options?)`
195
-
196
- Runs transformation, validates with your async validator, and (optionally) repairs payload using LLM until valid or retries are exhausted.
197
- Throws if all attempts fail.
198
-
199
- #### `await transformer.rebuild(lastPayload, errorMessage)`
200
-
201
- Given a failed payload and error message, uses LLM to generate a corrected payload.
202
-
203
- #### `await transformer.reset()`
204
-
205
- Resets the Gemini chat session, clearing all history/examples.
206
-
207
- #### `transformer.getHistory()`
208
-
209
- Returns the current chat history (for debugging).
210
-
211
- #### `await transformer.clearConversation()`
212
-
213
- Clears conversation history while preserving seeded examples. Useful for starting fresh user sessions without re-seeding.
214
-
215
- #### `transformer.getLastUsage()`
216
-
217
- Returns structured usage data for billing verification. Token counts are **cumulative across all retry attempts** - if validation failed and a retry was needed, you see the total tokens consumed, not just the final successful call. Returns `null` if no API call has been made yet.
137
+ ### CodeAgent — Agent That Writes and Executes Code
138
+
139
+ Instead of calling tools one by one, the model writes JavaScript that can do everything — read files, write files, run commands — in a single script. Inspired by the [code mode](https://blog.cloudflare.com/code-mode/) philosophy.
140
+
141
+ ```javascript
142
+ const agent = new CodeAgent({
143
+ workingDirectory: '/path/to/my/project',
144
+ onCodeExecution: (code, output) => {
145
+ console.log('Ran:', code.slice(0, 100));
146
+ console.log('Output:', output.stdout);
147
+ },
148
+ onBeforeExecution: async (code) => {
149
+ // Review code before execution
150
+ console.log('About to run:', code);
151
+ return true; // return false to deny
152
+ }
153
+ });
218
154
 
219
- ```js
220
- const usage = transformer.getLastUsage();
221
- // {
222
- // promptTokens: 300, // CUMULATIVE input tokens across all attempts
223
- // responseTokens: 84, // CUMULATIVE output tokens across all attempts
224
- // totalTokens: 384, // CUMULATIVE total tokens
225
- // attempts: 2, // Number of attempts (1 = first try success, 2+ = retries needed)
226
- // modelVersion: 'gemini-2.5-flash-001', // Actual model that responded
227
- // requestedModel: 'gemini-2.5-flash', // Model you requested
228
- // timestamp: 1703... // When response was received
229
- // }
155
+ const result = await agent.chat('Find all TODO comments in the codebase');
156
+ console.log(result.text); // Agent's summary
157
+ console.log(result.codeExecutions); // [{ code, output, stderr, exitCode }]
230
158
  ```
231
159
 
232
- ---
233
-
234
- ### Properties
235
-
236
- #### `transformer.lastResponseMetadata`
237
-
238
- After each API call, contains metadata from the response:
239
-
240
- ```js
241
- {
242
- modelVersion: string | null, // Actual model version that responded (e.g., 'gemini-2.5-flash-001')
243
- requestedModel: string, // Model you requested (e.g., 'gemini-2.5-flash')
244
- promptTokens: number, // Tokens in the prompt
245
- responseTokens: number, // Tokens in the response
246
- totalTokens: number, // Total tokens used
247
- timestamp: number // When response was received
160
+ **How it works:**
161
+ 1. On `init()`, gathers codebase context (file tree + key files like package.json)
162
+ 2. Injects context into the system prompt so the model understands the project
163
+ 3. Model writes JavaScript using the `execute_code` tool
164
+ 4. Code runs in a Node.js child process that inherits `process.env`
165
+ 5. Output (stdout/stderr) feeds back to the model
166
+ 6. Model decides if more work is needed
167
+
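Step 4 can be approximated with Node's built-in `child_process`. A simplified sketch of the idea, assuming synchronous execution for clarity; it is not the library's actual runner:

```javascript
// Simplified sketch of step 4: run a script in a Node.js child process
// with a timeout, capturing stdout/stderr. Not the library's actual runner.
import { execFileSync } from 'node:child_process';

function runScript(code, { cwd = process.cwd(), timeout = 30000 } = {}) {
  try {
    const stdout = execFileSync('node', ['-e', code], {
      cwd,
      timeout,           // kills the child (SIGTERM) if it runs too long
      env: process.env,  // child inherits the parent environment
      encoding: 'utf8'
    });
    return { stdout, stderr: '', exitCode: 0 };
  } catch (err) {
    return { stdout: err.stdout ?? '', stderr: err.stderr ?? String(err), exitCode: err.status ?? 1 };
  }
}
```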
168
+ **Streaming:**
169
+
170
+ ```javascript
171
+ for await (const event of agent.stream('Refactor the auth module')) {
172
+ if (event.type === 'text') process.stdout.write(event.text);
173
+ if (event.type === 'code') console.log('\n[Running code...]');
174
+ if (event.type === 'output') console.log('[Output]:', event.stdout);
175
+ if (event.type === 'done') console.log('\nDone!');
248
176
  }
249
177
  ```
250
178
 
251
- Useful for verifying billing, debugging model behavior, and tracking token usage.
252
-
253
179
  ---
254
180
 
255
- ## Examples
181
+ ## Stopping Agents
256
182
 
257
- ### Seed with Custom Example Keys
183
+ Both `ToolAgent` and `CodeAgent` support a `stop()` method to cancel execution mid-loop. This is useful for implementing user-facing cancel buttons or safety limits.
258
184
 
259
- ```js
260
- const transformer = new AITransformer({
261
- sourceKey: 'INPUT',
262
- targetKey: 'OUTPUT',
263
- contextKey: 'CTX'
264
- });
185
+ ```javascript
186
+ const codeAgent = new CodeAgent({ workingDirectory: '.' }); // codeAgent.stop() works the same way
265
187
 
266
- await transformer.init();
267
- await transformer.seed([
268
- {
269
- CTX: "You are a dog expert.",
270
- INPUT: { breed: "golden retriever" },
271
- OUTPUT: { breed: "golden retriever", size: "large", friendly: true }
188
+ // Stop from a callback
189
+ const agent = new ToolAgent({
190
+ tools: [...],
191
+ toolExecutor: myExecutor,
192
+ onBeforeExecution: async (toolName, args) => {
193
+ if (toolName === 'dangerous_tool') {
194
+ agent.stop(); // Stop the agent entirely
195
+ return false; // Deny this specific execution
196
+ }
197
+ return true;
272
198
  }
273
- ]);
199
+ });
274
200
 
275
- const dog = await transformer.message({ breed: "chihuahua" });
201
+ // Stop externally (e.g., from a timeout or user action)
202
+ setTimeout(() => agent.stop(), 60_000);
203
+ const result = await agent.chat('Do some work');
276
204
  ```
277
205
 
278
- ---
279
-
280
- ### Use With Validation and Retry
281
-
282
- ```js
283
- const result = await transformer.transformWithValidation(
284
- { name: "Bob" },
285
- async (output) => {
286
- if (!output.name || !output.profession) throw new Error("Missing fields");
287
- return output;
288
- }
289
- );
290
- ```
206
+ For `CodeAgent`, `stop()` also kills any currently running child process via SIGTERM.
291
207
 
292
208
  ---
293
209
 
294
- ## Vertex AI Authentication
210
+ ## Shared Features
295
211
 
296
- Use Vertex AI instead of the Gemini API for enterprise features, VPC controls, and GCP billing integration.
212
+ All classes extend `BaseGemini` and share these features:
297
213
 
298
- ### With Service Account Key File
214
+ ### Authentication
299
215
 
300
- ```js
301
- const transformer = new AITransformer({
302
- vertexai: true,
303
- project: 'my-gcp-project',
304
- location: 'us-central1', // Optional: defaults to 'global' endpoint
305
- googleAuthOptions: {
306
- keyFilename: './service-account.json'
307
- }
308
- });
309
- ```
310
-
311
- ### With Application Default Credentials
216
+ ```javascript
217
+ // Gemini API (default)
218
+ new Chat({ apiKey: 'your-key' }); // or GEMINI_API_KEY env var
312
219
 
313
- ```js
314
- // Uses GOOGLE_APPLICATION_CREDENTIALS env var or `gcloud auth application-default login`
315
- const transformer = new AITransformer({
316
- vertexai: true,
317
- project: 'my-gcp-project' // or GOOGLE_CLOUD_PROJECT env var
318
- });
220
+ // Vertex AI
221
+ new Chat({ vertexai: true, project: 'my-gcp-project' });
319
222
  ```
320
223
 
321
- ---
224
+ ### Token Estimation
322
225
 
323
- ## Advanced Configuration
226
+ ```javascript
227
+ const { inputTokens } = await instance.estimate({ some: 'payload' });
228
+ const cost = await instance.estimateCost({ some: 'payload' });
229
+ ```
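One common use of the estimate is a pre-flight guard that aborts before the API call if the input would overflow the model's context window. A sketch, with an illustrative limit rather than any real model's:

```javascript
// Pre-flight guard: throw before sending if the estimated input would
// exceed the context window. The 32000 default is illustrative only.
function checkWindow(inputTokens, limit = 32000) {
  if (inputTokens > limit) {
    throw new Error(`Request too large: ${inputTokens} tokens > ${limit} limit`);
  }
  return inputTokens;
}

// const { inputTokens } = await instance.estimate(payload);
// checkWindow(inputTokens);
```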
324
230
 
325
- ### Disabling System Instructions
231
+ ### Usage Tracking
326
232
 
327
- By default, the transformer uses built-in system instructions optimized for JSON transformation. You can provide custom instructions or disable them entirely:
233
+ ```javascript
234
+ const usage = instance.getLastUsage();
235
+ // { promptTokens, responseTokens, totalTokens, attempts, modelVersion, requestedModel, timestamp }
236
+ ```
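Because the counts are cumulative across retry attempts, a usage record can feed cost accounting directly. A sketch with placeholder per-million-token prices (not real Gemini rates):

```javascript
// Turn a getLastUsage() record into an estimated cost.
// Prices here are hypothetical placeholders, not real Gemini rates.
function estimateCostUSD(usage, pricePerMInput = 0.30, pricePerMOutput = 2.50) {
  return (usage.promptTokens / 1e6) * pricePerMInput +
         (usage.responseTokens / 1e6) * pricePerMOutput;
}
```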
328
237
 
329
- ```js
330
- // Custom system instructions
331
- new AITransformer({ systemInstructions: "You are a helpful assistant..." });
238
+ ### Few-Shot Seeding
332
239
 
333
- // Disable system instructions entirely (use Gemini's default behavior)
334
- new AITransformer({ systemInstructions: null });
335
- new AITransformer({ systemInstructions: false });
240
+ ```javascript
241
+ await instance.seed([
242
+ { PROMPT: { x: 1 }, ANSWER: { y: 2 } }
243
+ ]);
336
244
  ```
337
245
 
338
246
  ### Thinking Configuration
339
247
 
340
- For models that support extended thinking (like `gemini-2.5-flash`):
341
-
342
- ```js
343
- const transformer = new AITransformer({
248
+ ```javascript
249
+ new Chat({
344
250
  modelName: 'gemini-2.5-flash',
345
- thinkingConfig: {
346
- thinkingBudget: 1024, // Token budget for thinking
347
- }
251
+ thinkingConfig: { thinkingBudget: 1024 }
348
252
  });
349
253
  ```
350
254
 
351
- ### Billing Labels
352
-
353
- Labels flow through to GCP billing reports for cost attribution:
255
+ ### Billing Labels (Vertex AI)
354
256
 
355
- ```js
356
- const transformer = new AITransformer({
357
- labels: {
358
- client: 'acme_corp',
359
- app: 'data_pipeline',
360
- environment: 'production'
361
- }
257
+ ```javascript
258
+ new Transformer({
259
+ vertexai: true,
260
+ project: 'my-project',
261
+ labels: { app: 'pipeline', env: 'prod' }
362
262
  });
363
-
364
- // Override per-message
365
- await transformer.message(payload, { labels: { request_type: 'batch' } });
366
263
  ```
367
264
 
368
265
  ---
369
266
 
370
- ## Token Window Management & Error Handling
371
-
372
- * Throws on missing credentials (API key for Gemini API, or project ID for Vertex AI)
373
- * `.message()` and `.seed()` will *estimate* and prevent calls that would exceed Gemini's model window
374
- * All API and parsing errors surfaced as `Error` with context
375
- * Validator and retry failures include the number of attempts and last error
267
+ ## Constructor Options
268
+
269
+ All classes accept `BaseGeminiOptions`:
270
+
271
+ | Option | Type | Default | Description |
272
+ |--------|------|---------|-------------|
273
+ | `modelName` | string | `'gemini-2.5-flash'` | Gemini model to use |
274
+ | `systemPrompt` | string | varies by class | System prompt |
275
+ | `apiKey` | string | env var | Gemini API key |
276
+ | `vertexai` | boolean | `false` | Use Vertex AI |
277
+ | `project` | string | env var | GCP project ID |
278
+ | `location` | string | `'global'` | GCP region |
279
+ | `chatConfig` | object | — | Gemini chat config overrides |
280
+ | `thinkingConfig` | object | — | Thinking features config |
281
+ | `maxOutputTokens` | number | `50000` | Max tokens in response (`null` removes limit) |
282
+ | `logLevel` | string | based on NODE_ENV | `'trace'`\|`'debug'`\|`'info'`\|`'warn'`\|`'error'`\|`'none'` |
283
+ | `labels` | object | — | Billing labels (Vertex AI) |
284
+
285
+ ### Transformer-Specific
286
+
287
+ | Option | Type | Default | Description |
288
+ |--------|------|---------|-------------|
289
+ | `sourceKey`/`promptKey` | string | `'PROMPT'` | Key for input in examples |
290
+ | `targetKey`/`answerKey` | string | `'ANSWER'` | Key for output in examples |
291
+ | `contextKey` | string | `'CONTEXT'` | Key for context in examples |
292
+ | `maxRetries` | number | `3` | Retry attempts for validation |
293
+ | `retryDelay` | number | `1000` | Initial retry delay (ms) |
294
+ | `responseSchema` | object | — | JSON schema for output validation |
295
+ | `asyncValidator` | function | — | Global async validator |
296
+ | `enableGrounding` | boolean | `false` | Enable Google Search grounding |
297
+
298
+ ### ToolAgent-Specific
299
+
300
+ | Option | Type | Default | Description |
301
+ |--------|------|---------|-------------|
302
+ | `tools` | array | — | Tool declarations (FunctionDeclaration format) |
303
+ | `toolExecutor` | function | — | `async (toolName, args) => result` |
304
+ | `maxToolRounds` | number | `10` | Max tool-use loop iterations |
305
+ | `onToolCall` | function | — | Notification callback when tool is called |
306
+ | `onBeforeExecution` | function | — | `async (toolName, args) => boolean` — gate execution |
307
+
308
+ ### CodeAgent-Specific
309
+
310
+ | Option | Type | Default | Description |
311
+ |--------|------|---------|-------------|
312
+ | `workingDirectory` | string | `process.cwd()` | Directory for code execution |
313
+ | `maxRounds` | number | `10` | Max code execution loop iterations |
314
+ | `timeout` | number | `30000` | Per-execution timeout (ms) |
315
+ | `onBeforeExecution` | function | — | `async (code) => boolean` — gate execution |
316
+ | `onCodeExecution` | function | — | Notification after execution |
317
+
318
+ ### Message-Specific
319
+
320
+ | Option | Type | Default | Description |
321
+ |--------|------|---------|-------------|
322
+ | `responseSchema` | object | — | Schema for structured output |
323
+ | `responseMimeType` | string | — | e.g. `'application/json'` |
376
324
 
377
325
  ---
378
326
 
379
- ## Testing
327
+ ## Exports
380
328
 
381
- * **Jest test suite included**
382
- * Real API integration tests as well as local unit tests
383
- * 100% coverage for all error cases, configuration options, edge cases
329
+ ```javascript
330
+ // Named exports
331
+ import { Transformer, Chat, Message, ToolAgent, CodeAgent, BaseGemini, log } from 'ak-gemini';
332
+ import { extractJSON, attemptJSONRecovery } from 'ak-gemini';
384
333
 
385
- Run tests with:
334
+ // Default export (namespace)
335
+ import AI from 'ak-gemini';
336
+ new AI.Transformer({ ... });
337
+
338
+ // CommonJS
339
+ const { Transformer, Chat } = require('ak-gemini');
340
+ ```
341
+
342
+ ---
343
+
344
+ ## Testing
386
345
 
387
346
  ```sh
388
347
  npm test
389
348
  ```
390
349
 
350
+ All tests use real Gemini API calls (no mocks). Rate limiting (429 errors) can cause intermittent failures.
351
+
391
352
  ---
353
+
354
+ ## Migration from v1.x
355
+
356
+ See [MIGRATION.md](./MIGRATION.md) for a detailed guide on upgrading from v1.x to v2.0.