@memberjunction/ai-gemini 4.0.0 → 4.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +140 -0
  2. package/package.json +3 -3
  3. package/readme.md +99 -320
package/README.md ADDED
@@ -0,0 +1,140 @@
+ # @memberjunction/ai-gemini
+
+ MemberJunction AI provider for Google Gemini models. This package provides both LLM and image generation capabilities, supporting Gemini Pro, Flash, and advanced image generation models with native multimodal support, thinking/reasoning, and streaming.
+
+ ## Architecture
+
+ ```mermaid
+ graph TD
+ A["GeminiLLM<br/>(Chat Provider)"] -->|extends| B["BaseLLM<br/>(@memberjunction/ai)"]
+ C["GeminiImageGenerator<br/>(Image Provider)"] -->|extends| D["BaseImageGenerator<br/>(@memberjunction/ai)"]
+ A -->|wraps| E["GoogleGenAI<br/>(@google/genai)"]
+ C -->|wraps| E
+ A -->|provides| F["Chat + Streaming"]
+ A -->|provides| G["Thinking/Reasoning<br/>Budget Control"]
+ C -->|provides| H["Image Generation<br/>+ Editing + Variations"]
+ B -->|registered via| I["@RegisterClass"]
+ D -->|registered via| I
+
+ style A fill:#7c5295,stroke:#563a6b,color:#fff
+ style C fill:#7c5295,stroke:#563a6b,color:#fff
+ style B fill:#2d6a9f,stroke:#1a4971,color:#fff
+ style D fill:#2d6a9f,stroke:#1a4971,color:#fff
+ style E fill:#2d8659,stroke:#1a5c3a,color:#fff
+ style F fill:#b8762f,stroke:#8a5722,color:#fff
+ style G fill:#b8762f,stroke:#8a5722,color:#fff
+ style H fill:#b8762f,stroke:#8a5722,color:#fff
+ style I fill:#b8762f,stroke:#8a5722,color:#fff
+ ```
+
+ ## Features
+
+ ### LLM (GeminiLLM)
+ - **Chat Completions**: Full conversational AI with system instructions
+ - **Streaming**: Real-time response streaming with chunk processing
+ - **Thinking/Reasoning**: Configurable thinking budget for Gemini 2.5+ and thinking levels for Gemini 3+
+ - **Multimodal Input**: Native support for text, images, audio, video, and file inputs
+ - **Message Alternation**: Automatic handling of Gemini's role alternation requirements
+ - **Safety Handling**: Detection and reporting of content blocking with detailed safety ratings
+ - **Effort Level Mapping**: Maps MJ effort levels (1-100) to Gemini thinking budgets (0-24576)
+
+ ### Image Generation (GeminiImageGenerator)
+ - **Text-to-Image**: Generate images using Gemini 3 Pro Image model
+ - **Image Editing**: Edit existing images using multimodal context
+ - **Image Variations**: Create variations of existing images
+ - **Resolution Control**: Support for sizes up to 4K (3840x2160)
+ - **Style and Quality**: Configurable style and quality parameters
+
+ ## Installation
+
+ ```bash
+ npm install @memberjunction/ai-gemini
+ ```
+
+ ## Usage
+
+ ### Chat Completion
+
+ ```typescript
+ import { GeminiLLM } from '@memberjunction/ai-gemini';
+
+ const llm = new GeminiLLM('your-google-api-key');
+
+ const result = await llm.ChatCompletion({
+ model: 'gemini-2.5-flash',
+ messages: [
+ { role: 'system', content: 'You are a helpful assistant.' },
+ { role: 'user', content: 'Explain quantum computing.' }
+ ],
+ temperature: 0.7
+ });
+ ```
+
+ ### Streaming with Thinking
+
+ ```typescript
+ const result = await llm.ChatCompletion({
+ model: 'gemini-2.5-pro',
+ messages: [{ role: 'user', content: 'Solve this math problem step by step.' }],
+ effortLevel: '75', // High thinking budget
+ streaming: true,
+ streamingCallbacks: {
+ OnContent: (content) => process.stdout.write(content)
+ }
+ });
+
+ // Access thinking content
+ console.log('Thinking:', result.data.choices[0].message.thinking);
+ ```
+
+ ### Image Generation
+
+ ```typescript
+ import { GeminiImageGenerator } from '@memberjunction/ai-gemini';
+
+ const generator = new GeminiImageGenerator('your-google-api-key');
+
+ const result = await generator.GenerateImage({
+ prompt: 'A futuristic city at night',
+ model: 'gemini-3-pro-image-preview',
+ size: '2048x2048'
+ });
+ ```
+
+ ## Thinking Budget / Effort Level
+
+ The provider maps MJ effort levels to Gemini's thinking system:
+
+ | Effort Level | Gemini 2.5 (Budget) | Gemini 3+ (Level) |
+ |-------------|---------------------|-------------------|
+ | 1-5 (Flash only) | 0 (disabled) | MINIMAL |
+ | 1-33 | 1024-4096 | LOW |
+ | 34-66 | 4097-12288 | MEDIUM |
+ | 67-100 | 12289-24576 | HIGH |
+
+ ## Supported Parameters
+
+ | Parameter | Supported | Notes |
+ |-----------|-----------|-------|
+ | temperature | Yes | Default 0.5 |
+ | topP | Yes | Nucleus sampling |
+ | topK | Yes | Top-K sampling |
+ | seed | Yes | Deterministic outputs |
+ | stopSequences | Yes | Custom stop sequences |
+ | effortLevel | Yes | Maps to thinking budget/level |
+ | responseFormat | Yes | JSON and text modes |
+ | streaming | Yes | Real-time streaming |
+ | frequencyPenalty | No | Not supported by Gemini |
+ | presencePenalty | No | Not supported by Gemini |
+ | minP | No | Not supported by Gemini |
+
+ ## Class Registration
+
+ - `GeminiLLM` -- Registered via `@RegisterClass(BaseLLM, 'GeminiLLM')`
+ - `GeminiImageGenerator` -- Registered via `@RegisterClass(BaseImageGenerator, 'GeminiImageGenerator')`
+
+ ## Dependencies
+
+ - `@memberjunction/ai` - Core AI abstractions
+ - `@memberjunction/global` - Class registration
+ - `@google/genai` - Google GenAI SDK
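The effort-level table in the new README (1-33 → LOW, 34-66 → MEDIUM, 67-100 → HIGH, with budgets 1024-24576) can be sketched as a small mapping helper. This is an illustrative sketch only: `mapEffortToThinking` and the linear interpolation within each budget band are assumptions, not the package's actual implementation, and the Flash-only "thinking disabled" row (effort 1-5) is omitted for simplicity.

```typescript
// Hypothetical helper illustrating the documented effort -> thinking mapping.
// The interpolation within each band is an assumption for illustration.
type ThinkingLevel = 'LOW' | 'MEDIUM' | 'HIGH';

interface ThinkingConfig {
  budget: number;        // Gemini 2.5-style thinking token budget
  level: ThinkingLevel;  // Gemini 3+-style discrete thinking level
}

function mapEffortToThinking(effort: number): ThinkingConfig {
  // Clamp to the documented 1-100 effort range
  const e = Math.min(100, Math.max(1, effort));
  if (e <= 33) {
    // 1-33 -> budget 1024-4096, level LOW
    return { budget: Math.round(1024 + ((e - 1) / 32) * (4096 - 1024)), level: 'LOW' };
  }
  if (e <= 66) {
    // 34-66 -> budget 4097-12288, level MEDIUM
    return { budget: Math.round(4097 + ((e - 34) / 32) * (12288 - 4097)), level: 'MEDIUM' };
  }
  // 67-100 -> budget 12289-24576, level HIGH
  return { budget: Math.round(12289 + ((e - 67) / 33) * (24576 - 12289)), level: 'HIGH' };
}
```

The band boundaries come straight from the README table; only the within-band interpolation is invented here.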
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "@memberjunction/ai-gemini",
  "type": "module",
- "version": "4.0.0",
+ "version": "4.1.0",
  "description": "MemberJunction Wrapper for Google Gemini AI Models",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
@@ -20,8 +20,8 @@
  "typescript": "^5.9.3"
  },
  "dependencies": {
- "@memberjunction/ai": "4.0.0",
- "@memberjunction/global": "4.0.0",
+ "@memberjunction/ai": "4.1.0",
+ "@memberjunction/global": "4.1.0",
  "@google/genai": "^1.40.0"
  },
  "repository": {
package/readme.md CHANGED
@@ -1,19 +1,49 @@
  # @memberjunction/ai-gemini
 
- A comprehensive wrapper for Google's Gemini AI models that seamlessly integrates with the MemberJunction AI framework, providing access to Google's powerful generative AI capabilities.
+ MemberJunction AI provider for Google Gemini models. This package provides both LLM and image generation capabilities, supporting Gemini Pro, Flash, and advanced image generation models with native multimodal support, thinking/reasoning, and streaming.
+
+ ## Architecture
+
+ ```mermaid
+ graph TD
+ A["GeminiLLM<br/>(Chat Provider)"] -->|extends| B["BaseLLM<br/>(@memberjunction/ai)"]
+ C["GeminiImageGenerator<br/>(Image Provider)"] -->|extends| D["BaseImageGenerator<br/>(@memberjunction/ai)"]
+ A -->|wraps| E["GoogleGenAI<br/>(@google/genai)"]
+ C -->|wraps| E
+ A -->|provides| F["Chat + Streaming"]
+ A -->|provides| G["Thinking/Reasoning<br/>Budget Control"]
+ C -->|provides| H["Image Generation<br/>+ Editing + Variations"]
+ B -->|registered via| I["@RegisterClass"]
+ D -->|registered via| I
+
+ style A fill:#7c5295,stroke:#563a6b,color:#fff
+ style C fill:#7c5295,stroke:#563a6b,color:#fff
+ style B fill:#2d6a9f,stroke:#1a4971,color:#fff
+ style D fill:#2d6a9f,stroke:#1a4971,color:#fff
+ style E fill:#2d8659,stroke:#1a5c3a,color:#fff
+ style F fill:#b8762f,stroke:#8a5722,color:#fff
+ style G fill:#b8762f,stroke:#8a5722,color:#fff
+ style H fill:#b8762f,stroke:#8a5722,color:#fff
+ style I fill:#b8762f,stroke:#8a5722,color:#fff
+ ```
 
  ## Features
 
- - **Google Gemini Integration**: Connect to Google's state-of-the-art Gemini models using the official @google/genai SDK
- - **Standardized Interface**: Implements MemberJunction's BaseLLM abstract class
- - **Streaming Support**: Full support for streaming responses with real-time token generation
- - **Multimodal Support**: Handle text, images, audio, video, and file content
- - **Message Formatting**: Automatic conversion between MemberJunction and Gemini message formats
- - **Effort Level Support**: Leverage Gemini's reasoning mode for higher-quality responses
- - **Error Handling**: Robust error handling with detailed reporting
- - **Chat Support**: Full support for chat-based interactions with conversation history
- - **Temperature Control**: Fine-tune generation creativity
- - **Response Format Control**: Request specific response MIME types
+ ### LLM (GeminiLLM)
+ - **Chat Completions**: Full conversational AI with system instructions
+ - **Streaming**: Real-time response streaming with chunk processing
+ - **Thinking/Reasoning**: Configurable thinking budget for Gemini 2.5+ and thinking levels for Gemini 3+
+ - **Multimodal Input**: Native support for text, images, audio, video, and file inputs
+ - **Message Alternation**: Automatic handling of Gemini's role alternation requirements
+ - **Safety Handling**: Detection and reporting of content blocking with detailed safety ratings
+ - **Effort Level Mapping**: Maps MJ effort levels (1-100) to Gemini thinking budgets (0-24576)
+
+ ### Image Generation (GeminiImageGenerator)
+ - **Text-to-Image**: Generate images using Gemini 3 Pro Image model
+ - **Image Editing**: Edit existing images using multimodal context
+ - **Image Variations**: Create variations of existing images
+ - **Resolution Control**: Support for sizes up to 4K (3840x2160)
+ - **Style and Quality**: Configurable style and quality parameters
 
  ## Installation
 
@@ -21,341 +51,90 @@ A comprehensive wrapper for Google's Gemini AI models that seamlessly integrates
  npm install @memberjunction/ai-gemini
  ```
 
- ## Requirements
-
- - Node.js 16+
- - A Google AI Studio API key
- - MemberJunction Core libraries
-
  ## Usage
 
- ### Basic Setup
-
- ```typescript
- import { GeminiLLM } from '@memberjunction/ai-gemini';
-
- // Initialize with your Google AI API key
- const geminiLLM = new GeminiLLM('your-google-ai-api-key');
- ```
-
  ### Chat Completion
 
  ```typescript
- import { ChatParams } from '@memberjunction/ai';
-
- // Create chat parameters
- const chatParams: ChatParams = {
- model: 'gemini-pro', // or 'gemini-pro-vision' for multimodal
- messages: [
- { role: 'system', content: 'You are a helpful assistant.' },
- { role: 'user', content: 'What are the key features of the Gemini AI model?' }
- ],
- temperature: 0.7,
- maxOutputTokens: 1000
- };
-
- // Get a response
- try {
- const response = await geminiLLM.ChatCompletion(chatParams);
- if (response.success) {
- console.log('Response:', response.data.choices[0].message.content);
- console.log('Time elapsed:', response.timeElapsed, 'ms');
- } else {
- console.error('Error:', response.errorMessage);
- }
- } catch (error) {
- console.error('Exception:', error);
- }
- ```
-
- ### Streaming Chat Completion
-
- ```typescript
- import { StreamingChatCallbacks } from '@memberjunction/ai';
-
- // Define streaming callbacks
- const streamCallbacks: StreamingChatCallbacks = {
- onToken: (token: string) => {
- process.stdout.write(token); // Print each token as it arrives
- },
- onComplete: (fullResponse: string) => {
- console.log('\n\nComplete response received');
- },
- onError: (error: Error) => {
- console.error('Streaming error:', error);
- }
- };
-
- // Use streaming
- const streamParams: ChatParams = {
- model: 'gemini-pro',
- messages: [
- { role: 'user', content: 'Write a short story about a robot.' }
- ],
- streaming: true,
- streamingCallbacks: streamCallbacks
- };
-
- await geminiLLM.ChatCompletion(streamParams);
- ```
-
- ### Multimodal Content
-
- ```typescript
- import { ChatMessageContent } from '@memberjunction/ai';
-
- // Create multimodal content
- const multimodalContent: ChatMessageContent = [
- { type: 'text', content: 'What do you see in this image?' },
- { type: 'image_url', content: 'base64_encoded_image_data_here' }
- ];
-
- const multimodalParams: ChatParams = {
- model: 'gemini-pro-vision',
- messages: [
- { role: 'user', content: multimodalContent }
- ]
- };
-
- const response = await geminiLLM.ChatCompletion(multimodalParams);
- ```
-
- ### Enhanced Reasoning with Effort Level
-
- ```typescript
- // Use effort level to enable Gemini's full reasoning mode
- const reasoningParams: ChatParams = {
- model: 'gemini-pro',
- messages: [
- { role: 'user', content: 'Solve this complex logic puzzle...' }
- ],
- effortLevel: 'high' // Enables full reasoning mode
- };
-
- const response = await geminiLLM.ChatCompletion(reasoningParams);
- ```
+ import { GeminiLLM } from '@memberjunction/ai-gemini';
 
- ### Direct Access to Gemini Client
+ const llm = new GeminiLLM('your-google-api-key');
 
- ```typescript
- // Access the underlying GoogleGenAI client for advanced usage
- const geminiClient = geminiLLM.GeminiClient;
-
- // Use the client directly if needed for custom operations
- const chat = geminiClient.chats.create({
- model: 'gemini-pro',
- history: []
+ const result = await llm.ChatCompletion({
+ model: 'gemini-2.5-flash',
+ messages: [
+ { role: 'system', content: 'You are a helpful assistant.' },
+ { role: 'user', content: 'Explain quantum computing.' }
+ ],
+ temperature: 0.7
  });
  ```
 
- ## Supported Models
-
- Google Gemini provides several models with different capabilities:
-
- - `gemini-pro`: General-purpose text model
- - `gemini-pro-vision`: Multimodal model that can process images and text
- - `gemini-ultra`: Google's most advanced model (when available)
-
- Check the [Google AI documentation](https://ai.google.dev/models/gemini) for the latest list of supported models.
-
- ## API Reference
-
- ### GeminiLLM Class
-
- A class that extends BaseLLM to provide Google Gemini-specific functionality.
-
- #### Constructor
+ ### Streaming with Thinking
 
  ```typescript
- new GeminiLLM(apiKey: string)
- ```
-
- Creates a new instance of the Gemini LLM wrapper.
-
- **Parameters:**
- - `apiKey`: Your Google AI Studio API key
-
- #### Properties
-
- - `GeminiClient`: (read-only) Returns the underlying GoogleGenAI client instance
- - `SupportsStreaming`: (read-only) Returns `true` - Gemini supports streaming responses
-
- #### Methods
-
- ##### ChatCompletion(params: ChatParams): Promise<ChatResult>
-
- Perform a chat completion with Gemini models.
-
- **Parameters:**
- - `params`: Chat parameters including model, messages, temperature, etc.
-
- **Returns:**
- - Promise resolving to a `ChatResult` with the model's response
-
- ##### SummarizeText(params: SummarizeParams): Promise<SummarizeResult>
-
- Not implemented yet - will throw an error if called.
-
- ##### ClassifyText(params: ClassifyParams): Promise<ClassifyResult>
-
- Not implemented yet - will throw an error if called.
-
- #### Static Methods
-
- ##### MapMJMessageToGeminiHistoryEntry(message: ChatMessage): Content
-
- Converts a MemberJunction ChatMessage to Gemini's Content format.
-
- **Parameters:**
- - `message`: MemberJunction ChatMessage object
-
- **Returns:**
- - Gemini Content object with proper role mapping
-
- ##### MapMJContentToGeminiParts(content: ChatMessageContent): Array<Part>
-
- Converts MemberJunction message content to Gemini Parts array.
-
- **Parameters:**
- - `content`: String or array of content parts
-
- **Returns:**
- - Array of Gemini Part objects
-
- ## Response Format Control
-
- Control the format of Gemini responses using the `responseFormat` parameter:
-
- ```typescript
- const params: ChatParams = {
- // ...other parameters
- responseFormat: 'text/plain', // Regular text response
- };
-
- // For structured data
- const jsonParams: ChatParams = {
- // ...other parameters
- responseFormat: 'application/json', // Request JSON response
- };
- ```
-
- ## Error Handling
-
- The wrapper provides detailed error information:
+ const result = await llm.ChatCompletion({
+ model: 'gemini-2.5-pro',
+ messages: [{ role: 'user', content: 'Solve this math problem step by step.' }],
+ effortLevel: '75', // High thinking budget
+ streaming: true,
+ streamingCallbacks: {
+ OnContent: (content) => process.stdout.write(content)
+ }
+ });
 
- ```typescript
- try {
- const response = await geminiLLM.ChatCompletion(params);
- if (!response.success) {
- console.error('Error:', response.errorMessage);
- console.error('Status:', response.statusText);
- console.error('Exception:', response.exception);
- }
- } catch (error) {
- console.error('Exception occurred:', error);
- }
+ // Access thinking content
+ console.log('Thinking:', result.data.choices[0].message.thinking);
  ```
 
- ## Message Handling
-
- The wrapper handles proper message formatting and role conversion between MemberJunction's format and Google Gemini's expected format:
-
- - MemberJunction's `system` and `user` roles are converted to Gemini's `user` role
- - MemberJunction's `assistant` role is converted to Gemini's `model` role
- - Messages are automatically spaced to ensure alternating roles as required by Gemini
- - Multimodal content is properly converted with appropriate MIME types
-
- ## Content Type Support
-
- The wrapper supports various content types with automatic MIME type mapping:
-
- - **Text**: Standard text messages
- - **Images**: `image_url` type → `image/jpeg` MIME type
- - **Audio**: `audio_url` type → `audio/mpeg` MIME type
- - **Video**: `video_url` type → `video/mp4` MIME type
- - **Files**: `file_url` type → `application/octet-stream` MIME type
-
- ## Integration with MemberJunction
-
- This package is designed to work seamlessly with the MemberJunction AI framework:
+ ### Image Generation
 
  ```typescript
- import { AIEngine } from '@memberjunction/ai';
- import { GeminiLLM } from '@memberjunction/ai-gemini';
+ import { GeminiImageGenerator } from '@memberjunction/ai-gemini';
 
- // Register the Gemini provider with the AI engine
- const aiEngine = new AIEngine();
- const geminiProvider = new GeminiLLM('your-api-key');
+ const generator = new GeminiImageGenerator('your-google-api-key');
 
- // Use through the AI engine's unified interface
- const result = await aiEngine.ChatCompletion({
- provider: 'GeminiLLM',
- model: 'gemini-pro',
- messages: [/* ... */]
+ const result = await generator.GenerateImage({
+ prompt: 'A futuristic city at night',
+ model: 'gemini-3-pro-image-preview',
+ size: '2048x2048'
  });
  ```
 
- ## Performance Considerations
-
- - **Streaming**: Use streaming for long responses to improve perceived performance
- - **Effort Level**: Use the `effortLevel` parameter judiciously as it increases latency and cost
- - **Model Selection**: Choose the appropriate model based on your needs (text-only vs multimodal)
- - **Message Spacing**: The wrapper automatically handles message spacing, adding minimal overhead
+ ## Thinking Budget / Effort Level
 
- ## Limitations
+ The provider maps MJ effort levels to Gemini's thinking system:
 
- Currently, the wrapper implements:
- - ✅ Chat completion functionality (streaming and non-streaming)
- - ✅ Multimodal content support
- - ✅ Effort level configuration for enhanced reasoning
- - ❌ `SummarizeText` functionality (not implemented)
- - ❌ `ClassifyText` functionality (not implemented)
- - ❌ Detailed token usage reporting (Gemini doesn't provide this)
-
- ## Dependencies
-
- - `@google/genai` (v0.14.0): Official Google GenAI SDK
- - `@memberjunction/ai` (v2.43.0): MemberJunction AI core framework
- - `@memberjunction/global` (v2.43.0): MemberJunction global utilities
-
- ## Development
-
- ### Building
-
- ```bash
- npm run build
- ```
-
- ### Testing
-
- Tests are not currently implemented. To add tests:
-
- ```bash
- npm test
- ```
+ | Effort Level | Gemini 2.5 (Budget) | Gemini 3+ (Level) |
+ |-------------|---------------------|-------------------|
+ | 1-5 (Flash only) | 0 (disabled) | MINIMAL |
+ | 1-33 | 1024-4096 | LOW |
+ | 34-66 | 4097-12288 | MEDIUM |
+ | 67-100 | 12289-24576 | HIGH |
 
  ## Supported Parameters
 
- The Gemini provider supports the following LLM parameters:
-
- **Supported:**
- - `temperature` - Controls randomness in the output (0.0-1.0)
- - `maxOutputTokens` - Maximum number of tokens to generate
- - `topP` - Nucleus sampling threshold (0.0-1.0)
- - `topK` - Limits vocabulary to top K tokens
- - `seed` - For deterministic outputs
- - `stopSequences` - Array of sequences where the API will stop generating
- - `responseFormat` - Output format as MIME type (text/plain, application/json, etc.)
-
- **Not Supported:**
- - `frequencyPenalty` - Not available in Gemini API
- - `presencePenalty` - Not available in Gemini API
- - `minP` - Not available in Gemini API
+ | Parameter | Supported | Notes |
+ |-----------|-----------|-------|
+ | temperature | Yes | Default 0.5 |
+ | topP | Yes | Nucleus sampling |
+ | topK | Yes | Top-K sampling |
+ | seed | Yes | Deterministic outputs |
+ | stopSequences | Yes | Custom stop sequences |
+ | effortLevel | Yes | Maps to thinking budget/level |
+ | responseFormat | Yes | JSON and text modes |
+ | streaming | Yes | Real-time streaming |
+ | frequencyPenalty | No | Not supported by Gemini |
+ | presencePenalty | No | Not supported by Gemini |
+ | minP | No | Not supported by Gemini |
+
+ ## Class Registration
+
+ - `GeminiLLM` -- Registered via `@RegisterClass(BaseLLM, 'GeminiLLM')`
+ - `GeminiImageGenerator` -- Registered via `@RegisterClass(BaseImageGenerator, 'GeminiImageGenerator')`
 
- ## License
-
- ISC
-
- ## Contributing
+ ## Dependencies
 
- For bug reports, feature requests, or contributions, please visit the [MemberJunction repository](https://github.com/MemberJunction/MJ).
+ - `@memberjunction/ai` - Core AI abstractions
+ - `@memberjunction/global` - Class registration
+ - `@google/genai` - Google GenAI SDK
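The removed "Message Handling" section describes the role conversion this provider performs: MJ `system` and `user` roles become Gemini's `user` role, `assistant` becomes `model`, and messages are adjusted so roles strictly alternate. A rough sketch of that behavior follows. `toGeminiHistory` is a hypothetical helper, not the package's API (the actual static method is `MapMJMessageToGeminiHistoryEntry`), and merging adjacent same-role turns is just one way to enforce alternation; the readme says the package "spaces" messages, which may mean inserting placeholder turns instead.

```typescript
// Hypothetical sketch of MJ -> Gemini role mapping with enforced alternation.
type MJRole = 'system' | 'user' | 'assistant';

interface MJMessage { role: MJRole; content: string; }
interface GeminiContent { role: 'user' | 'model'; text: string; }

function toGeminiHistory(messages: MJMessage[]): GeminiContent[] {
  const out: GeminiContent[] = [];
  for (const m of messages) {
    // system and user both map to Gemini's 'user'; assistant maps to 'model'
    const role: 'user' | 'model' = m.role === 'assistant' ? 'model' : 'user';
    const last = out[out.length - 1];
    if (last && last.role === role) {
      // Merge adjacent same-role turns so roles strictly alternate
      last.text += '\n' + m.content;
    } else {
      out.push({ role, text: m.content });
    }
  }
  return out;
}
```

This keeps the Gemini-side constraint (alternating `user`/`model` turns) satisfied regardless of how the MJ-side conversation interleaves system prompts and user messages.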