@memberjunction/ai-mistral 4.0.0 → 4.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +114 -0
  2. package/package.json +3 -3
  3. package/readme.md +72 -361
package/README.md ADDED
@@ -0,0 +1,114 @@
+ # @memberjunction/ai-mistral
+
+ MemberJunction AI provider for Mistral AI. This package provides both LLM and embedding capabilities using Mistral's models, implementing `BaseLLM` and `BaseEmbeddings` from `@memberjunction/ai`.
+
+ ## Architecture
+
+ ```mermaid
+ graph TD
+     A["MistralLLM<br/>(Provider)"] -->|extends| B["BaseLLM<br/>(@memberjunction/ai)"]
+     C["MistralEmbedding<br/>(Provider)"] -->|extends| D["BaseEmbeddings<br/>(@memberjunction/ai)"]
+     A -->|wraps| E["Mistral Client<br/>(@mistralai/mistralai)"]
+     C -->|wraps| E
+     A -->|provides| F["Chat Completions<br/>+ Streaming"]
+     C -->|provides| G["Text Embeddings"]
+     B -->|registered via| H["@RegisterClass"]
+     D -->|registered via| H
+
+     style A fill:#7c5295,stroke:#563a6b,color:#fff
+     style C fill:#7c5295,stroke:#563a6b,color:#fff
+     style B fill:#2d6a9f,stroke:#1a4971,color:#fff
+     style D fill:#2d6a9f,stroke:#1a4971,color:#fff
+     style E fill:#2d8659,stroke:#1a5c3a,color:#fff
+     style F fill:#b8762f,stroke:#8a5722,color:#fff
+     style G fill:#b8762f,stroke:#8a5722,color:#fff
+     style H fill:#b8762f,stroke:#8a5722,color:#fff
+ ```
+
+ ## Features
+
+ - **Chat Completions**: Conversational AI with Mistral Large, Medium, Small, and open models
+ - **Streaming**: Real-time response streaming support
+ - **Text Embeddings**: Vector embeddings via Mistral's embedding models
+ - **Thinking/Reasoning**: Extraction of thinking content from reasoning models
+ - **JSON Mode**: Response format control for structured outputs
+ - **Multimodal Support**: Handling of image content in messages
+
+ ## Installation
+
+ ```bash
+ npm install @memberjunction/ai-mistral
+ ```
+
+ ## Usage
+
+ ### Chat Completion
+
+ ```typescript
+ import { MistralLLM } from '@memberjunction/ai-mistral';
+
+ const llm = new MistralLLM('your-mistral-api-key');
+
+ const result = await llm.ChatCompletion({
+   model: 'mistral-large-latest',
+   messages: [
+     { role: 'user', content: 'Explain transformers in machine learning.' }
+   ],
+   temperature: 0.7
+ });
+
+ if (result.success) {
+   console.log(result.data.choices[0].message.content);
+ }
+ ```
+
+ ### Streaming
+
+ ```typescript
+ const result = await llm.ChatCompletion({
+   model: 'mistral-small-latest',
+   messages: [{ role: 'user', content: 'Write a poem.' }],
+   streaming: true,
+   streamingCallbacks: {
+     OnContent: (content) => process.stdout.write(content),
+     OnComplete: () => console.log('\nDone!')
+   }
+ });
+ ```
+
+ ### Embeddings
+
+ ```typescript
+ import { MistralEmbedding } from '@memberjunction/ai-mistral';
+
+ const embedder = new MistralEmbedding('your-mistral-api-key');
+
+ const result = await embedder.EmbedText({
+   text: 'Sample text for embedding',
+   model: 'mistral-embed'
+ });
+
+ console.log(`Dimensions: ${result.vector.length}`);
+ ```
+
+ ## Supported Parameters
+
+ | Parameter | Supported | Notes |
+ |-----------|-----------|-------|
+ | temperature | Yes | Controls randomness |
+ | maxOutputTokens | Yes | Maximum response length |
+ | topP | Yes | Nucleus sampling |
+ | seed | Yes | Deterministic outputs |
+ | responseFormat | Yes | JSON mode support |
+ | streaming | Yes | Real-time streaming |
+
+ ## Class Registration
+
+ - `MistralLLM` -- Registered via `@RegisterClass(BaseLLM, 'MistralLLM')`
+ - `MistralEmbedding` -- Registered via `@RegisterClass(BaseEmbeddings, 'MistralEmbedding')`
+
+ ## Dependencies
+
+ - `@memberjunction/ai` - Core AI abstractions
+ - `@memberjunction/global` - Class registration
+ - `@mistralai/mistralai` - Official Mistral AI SDK
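A note on the `streamingCallbacks` contract shown in the new README: it is just a pair of plain functions, so it can be exercised against any async chunk source. The sketch below substitutes a hypothetical `fakeStream` generator for a real Mistral call; only the callback shape (`OnContent`, `OnComplete`) is taken from the README, everything else is illustrative.

```typescript
// Shape of the streaming callbacks documented in the README.
interface StreamingCallbacks {
  OnContent: (content: string) => void;
  OnComplete: () => void;
}

// Hypothetical chunk source standing in for a real Mistral streaming call.
async function* fakeStream(): AsyncGenerator<string> {
  for (const chunk of ['Roses ', 'are ', 'red.']) {
    yield chunk;
  }
}

// Drives the callbacks the way a streaming completion would:
// OnContent once per chunk, OnComplete after the stream is exhausted.
async function drive(callbacks: StreamingCallbacks): Promise<string> {
  let full = '';
  for await (const chunk of fakeStream()) {
    full += chunk;
    callbacks.OnContent(chunk);
  }
  callbacks.OnComplete();
  return full;
}

let printed = '';
drive({
  OnContent: (content) => { printed += content; },
  OnComplete: () => console.log(printed), // prints "Roses are red."
});
```

Because the callbacks are decoupled from the transport, the same pair can accumulate into a buffer (as here) or write straight to stdout as in the README example.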
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
    "name": "@memberjunction/ai-mistral",
    "type": "module",
-   "version": "4.0.0",
+   "version": "4.1.0",
    "description": "MemberJunction Wrapper for Mistral AI's AI Models",
    "main": "dist/index.js",
    "types": "dist/index.d.ts",
@@ -20,8 +20,8 @@
      "typescript": "^5.9.3"
    },
    "dependencies": {
-     "@memberjunction/ai": "4.0.0",
-     "@memberjunction/global": "4.0.0",
+     "@memberjunction/ai": "4.1.0",
+     "@memberjunction/global": "4.1.0",
      "@mistralai/mistralai": "^1.14.0",
      "axios-retry": "4.5.0"
    },
package/readme.md CHANGED
@@ -1,18 +1,38 @@
  # @memberjunction/ai-mistral

- A comprehensive wrapper for Mistral AI's models, enabling seamless integration with the MemberJunction AI framework for natural language processing and embedding tasks.
+ MemberJunction AI provider for Mistral AI. This package provides both LLM and embedding capabilities using Mistral's models, implementing `BaseLLM` and `BaseEmbeddings` from `@memberjunction/ai`.
+
+ ## Architecture
+
+ ```mermaid
+ graph TD
+     A["MistralLLM<br/>(Provider)"] -->|extends| B["BaseLLM<br/>(@memberjunction/ai)"]
+     C["MistralEmbedding<br/>(Provider)"] -->|extends| D["BaseEmbeddings<br/>(@memberjunction/ai)"]
+     A -->|wraps| E["Mistral Client<br/>(@mistralai/mistralai)"]
+     C -->|wraps| E
+     A -->|provides| F["Chat Completions<br/>+ Streaming"]
+     C -->|provides| G["Text Embeddings"]
+     B -->|registered via| H["@RegisterClass"]
+     D -->|registered via| H
+
+     style A fill:#7c5295,stroke:#563a6b,color:#fff
+     style C fill:#7c5295,stroke:#563a6b,color:#fff
+     style B fill:#2d6a9f,stroke:#1a4971,color:#fff
+     style D fill:#2d6a9f,stroke:#1a4971,color:#fff
+     style E fill:#2d8659,stroke:#1a5c3a,color:#fff
+     style F fill:#b8762f,stroke:#8a5722,color:#fff
+     style G fill:#b8762f,stroke:#8a5722,color:#fff
+     style H fill:#b8762f,stroke:#8a5722,color:#fff
+ ```

  ## Features

- - **Mistral AI Integration**: Connect to Mistral's powerful language models and embedding models
- - **Standardized Interface**: Implements MemberJunction's BaseLLM and BaseEmbeddings abstract classes
- - **Streaming Support**: Full support for streaming chat completions
- - **Token Usage Tracking**: Automatic tracking of prompt and completion tokens
- - **Response Format Control**: Support for standard text and JSON response formats
- - **Multi-Modal Support**: Handles text, images, and documents in chat messages
- - **Error Handling**: Robust error handling with detailed reporting
- - **Chat Completion**: Full support for chat-based interactions with Mistral models
- - **Text Embeddings**: Generate vector embeddings for text using Mistral's embedding models
+ - **Chat Completions**: Conversational AI with Mistral Large, Medium, Small, and open models
+ - **Streaming**: Real-time response streaming support
+ - **Text Embeddings**: Vector embeddings via Mistral's embedding models
+ - **Thinking/Reasoning**: Extraction of thinking content from reasoning models
+ - **JSON Mode**: Response format control for structured outputs
+ - **Multimodal Support**: Handling of image content in messages

  ## Installation

@@ -20,384 +40,75 @@ A comprehensive wrapper for Mistral AI's models, enabling seamless integration w
  npm install @memberjunction/ai-mistral
  ```

- ## Requirements
-
- - Node.js 16+
- - TypeScript 5.4.5+
- - A Mistral AI API key
- - MemberJunction Core libraries (@memberjunction/ai, @memberjunction/global)
-
  ## Usage

- ### Basic Setup
-
- ```typescript
- import { MistralLLM } from '@memberjunction/ai-mistral';
-
- // Initialize with your Mistral API key
- const mistralLLM = new MistralLLM('your-mistral-api-key');
- ```
-
  ### Chat Completion

  ```typescript
- import { ChatParams, ChatMessageRole } from '@memberjunction/ai';
-
- // Create chat parameters
- const chatParams: ChatParams = {
-   model: 'mistral-large-latest', // or other models like 'open-mistral-7b', 'mistral-small-latest'
-   messages: [
-     { role: ChatMessageRole.system, content: 'You are a helpful assistant.' },
-     { role: ChatMessageRole.user, content: 'What are the main principles of machine learning?' }
-   ],
-   temperature: 0.7,
-   maxOutputTokens: 1000
- };
-
- // Get a response
- try {
-   const response = await mistralLLM.ChatCompletion(chatParams);
-   if (response.success) {
-     console.log('Response:', response.data.choices[0].message.content);
-     console.log('Token Usage:', response.data.usage);
-     console.log('Time Elapsed (ms):', response.timeElapsed);
-   } else {
-     console.error('Error:', response.errorMessage);
-   }
- } catch (error) {
-   console.error('Exception:', error);
- }
- ```
-
- ### JSON Response Format
+ import { MistralLLM } from '@memberjunction/ai-mistral';

- ```typescript
- // Request a structured JSON response
- const jsonParams: ChatParams = {
-   model: 'mistral-large-latest',
-   messages: [
-     { role: ChatMessageRole.system, content: 'You are a helpful assistant that responds in JSON format.' },
-     { role: ChatMessageRole.user, content: 'Give me data about the top 3 machine learning algorithms in JSON format' }
-   ],
-   maxOutputTokens: 1000,
-   responseFormat: 'JSON' // This will add the appropriate response_format parameter
- };
+ const llm = new MistralLLM('your-mistral-api-key');

- const jsonResponse = await mistralLLM.ChatCompletion(jsonParams);
+ const result = await llm.ChatCompletion({
+   model: 'mistral-large-latest',
+   messages: [
+     { role: 'user', content: 'Explain transformers in machine learning.' }
+   ],
+   temperature: 0.7
+ });

- // Parse the JSON response
- if (jsonResponse.success) {
-   const structuredData = JSON.parse(jsonResponse.data.choices[0].message.content);
-   console.log('Structured Data:', structuredData);
+ if (result.success) {
+   console.log(result.data.choices[0].message.content);
  }
  ```

- ### Streaming Chat Completion
-
- ```typescript
- // Mistral supports streaming responses
- const streamParams: ChatParams = {
-   model: 'mistral-large-latest',
-   messages: [
-     { role: ChatMessageRole.system, content: 'You are a helpful assistant.' },
-     { role: ChatMessageRole.user, content: 'Write a short story about AI' }
-   ],
-   maxOutputTokens: 1000,
-   stream: true, // Enable streaming
-   streamCallback: (content: string) => {
-     // Handle each chunk of streamed content
-     process.stdout.write(content);
-   }
- };
-
- const streamResponse = await mistralLLM.ChatCompletion(streamParams);
- console.log('\nStreaming complete!');
- console.log('Total tokens:', streamResponse.data.usage);
- ```
-
- ### Multi-Modal Messages
+ ### Streaming

  ```typescript
- // Mistral supports images and documents in messages
- const multiModalParams: ChatParams = {
-   model: 'mistral-large-latest',
-   messages: [
-     {
-       role: ChatMessageRole.user,
-       content: [
-         { type: 'text', content: 'What do you see in this image?' },
-         { type: 'image_url', content: 'https://example.com/image.jpg' }
-       ]
+ const result = await llm.ChatCompletion({
+   model: 'mistral-small-latest',
+   messages: [{ role: 'user', content: 'Write a poem.' }],
+   streaming: true,
+   streamingCallbacks: {
+     OnContent: (content) => process.stdout.write(content),
+     OnComplete: () => console.log('\nDone!')
    }
-   ],
-   maxOutputTokens: 1000
- };
-
- // For documents
- const documentParams: ChatParams = {
-   model: 'mistral-large-latest',
-   messages: [
-     {
-       role: ChatMessageRole.user,
-       content: [
-         { type: 'text', content: 'Summarize this document' },
-         { type: 'file_url', content: 'https://example.com/document.pdf' } // Converted to document_url for Mistral
-       ]
-     }
-   ],
-   maxOutputTokens: 1000
- };
+ });
  ```

- ### Text Embeddings
+ ### Embeddings

  ```typescript
  import { MistralEmbedding } from '@memberjunction/ai-mistral';
- import { EmbedTextParams, EmbedTextsParams } from '@memberjunction/ai';
-
- // Initialize the embedding client
- const mistralEmbedding = new MistralEmbedding('your-mistral-api-key');
-
- // Embed a single text
- const embedParams: EmbedTextParams = {
-   text: 'Machine learning is a subset of artificial intelligence.',
-   model: 'mistral-embed' // Optional, defaults to 'mistral-embed'
- };
-
- const embedResult = await mistralEmbedding.EmbedText(embedParams);
- console.log('Embedding vector dimensions:', embedResult.vector.length); // 1024 dimensions
- console.log('Token usage:', embedResult.ModelUsage);
-
- // Embed multiple texts
- const multiEmbedParams: EmbedTextsParams = {
-   texts: [
-     'Natural language processing enables computers to understand human language.',
-     'Deep learning uses neural networks with multiple layers.',
-     'Computer vision allows machines to interpret visual information.'
-   ],
-   model: 'mistral-embed'
- };

- const multiEmbedResult = await mistralEmbedding.EmbedTexts(multiEmbedParams);
- console.log('Number of embeddings:', multiEmbedResult.vectors.length);
- console.log('Total token usage:', multiEmbedResult.ModelUsage);
+ const embedder = new MistralEmbedding('your-mistral-api-key');

- // Get available embedding models
- const embeddingModels = await mistralEmbedding.GetEmbeddingModels();
- console.log('Available models:', embeddingModels);
- ```
-
- ### Direct Access to Mistral Client
-
- ```typescript
- // Access the underlying Mistral client for advanced usage
- const mistralClient = mistralLLM.Client;
+ const result = await embedder.EmbedText({
+   text: 'Sample text for embedding',
+   model: 'mistral-embed'
+ });

- // Use the client directly if needed
- const modelList = await mistralClient.models.list();
- console.log('Available models:', modelList);
- ```
-
- ## Supported Models
-
- Mistral AI offers several models with different capabilities and price points:
-
- - `mistral-large-latest`: Mistral's most powerful model (at the time of writing)
- - `mistral-medium-latest`: Mid-tier model balancing performance and cost
- - `mistral-small-latest`: Smaller, more efficient model
- - `open-mistral-7b`: Open-source 7B parameter model
- - `open-mixtral-8x7b`: Open-source mixture-of-experts model
-
- Check the [Mistral AI documentation](https://docs.mistral.ai/) for the latest list of supported models.
-
- ## API Reference
-
- ### MistralLLM Class
-
- A class that extends BaseLLM to provide Mistral-specific functionality.
-
- #### Constructor
-
- ```typescript
- new MistralLLM(apiKey: string)
- ```
-
- #### Properties
-
- - `Client`: (read-only) Returns the underlying Mistral client instance
- - `SupportsStreaming`: (read-only) Returns `true` - Mistral supports streaming
-
- #### Methods
-
- - `ChatCompletion(params: ChatParams): Promise<ChatResult>` - Perform a chat completion (supports both streaming and non-streaming)
- - `SummarizeText(params: SummarizeParams): Promise<SummarizeResult>` - Not implemented yet
- - `ClassifyText(params: ClassifyParams): Promise<ClassifyResult>` - Not implemented yet
-
- ### MistralEmbedding Class
-
- A class that extends BaseEmbeddings to provide Mistral embedding functionality.
-
- #### Constructor
-
- ```typescript
- new MistralEmbedding(apiKey: string)
- ```
-
- #### Properties
-
- - `Client`: (read-only) Returns the underlying Mistral client instance
-
- #### Methods
-
- - `EmbedText(params: EmbedTextParams): Promise<EmbedTextResult>` - Generate embedding for a single text
- - `EmbedTexts(params: EmbedTextsParams): Promise<EmbedTextsResult>` - Generate embeddings for multiple texts
- - `GetEmbeddingModels(): Promise<any>` - Get list of available embedding models
-
- ## Response Format Control
-
- The wrapper supports different response formats:
-
- ```typescript
- // For JSON responses
- const params: ChatParams = {
-   // ...other parameters
-   responseFormat: 'JSON'
- };
-
- // For regular text responses (default)
- const textParams: ChatParams = {
-   // ...other parameters
-   // No need to specify responseFormat for regular text
- };
- ```
-
- ## Error Handling
-
- The wrapper provides detailed error information:
-
- ```typescript
- try {
-   const response = await mistralLLM.ChatCompletion(params);
-   if (!response.success) {
-     console.error('Error:', response.errorMessage);
-     console.error('Status:', response.statusText);
-     console.error('Exception:', response.exception);
-   }
- } catch (error) {
-   console.error('Exception occurred:', error);
- }
- ```
-
- ## Token Usage Tracking
-
- Monitor token usage for billing and quota management:
-
- ```typescript
- const response = await mistralLLM.ChatCompletion(params);
- if (response.success) {
-   console.log('Prompt Tokens:', response.data.usage.promptTokens);
-   console.log('Completion Tokens:', response.data.usage.completionTokens);
-   console.log('Total Tokens:', response.data.usage.totalTokens);
- }
- ```
-
- ## Special Behaviors
-
- ### Message Formatting
- - The wrapper automatically ensures Mistral's requirement that the last message must be from 'user' or 'tool'
- - If the last message is not from a user, a placeholder user message "ok" is automatically appended
-
- ### Multi-Modal Content
- - Image URLs are passed through as `image_url` type
- - File URLs are converted to `document_url` type for Mistral compatibility
- - Unsupported content types are filtered out with a warning
-
- ## Limitations
-
- Currently, the wrapper implements:
- - Chat completion functionality with full streaming support
- - Text embedding functionality with single and batch processing
- - Token usage tracking for both chat and embeddings
-
- Not yet implemented:
- - `SummarizeText` functionality
- - `ClassifyText` functionality
- - `effortLevel`/`reasoning_effort` parameter (not currently supported by Mistral API)
-
- ## Integration with MemberJunction
-
- This package is designed to work seamlessly with the MemberJunction AI framework:
-
- ### Class Registration
- Both `MistralLLM` and `MistralEmbedding` are automatically registered with the MemberJunction class factory using the `@RegisterClass` decorator:
-
- ```typescript
- // Classes are registered and can be instantiated via the class factory
- import { ClassFactory } from '@memberjunction/global';
-
- const mistralLLM = ClassFactory.CreateInstance<BaseLLM>(BaseLLM, 'MistralLLM', apiKey);
- const mistralEmbedding = ClassFactory.CreateInstance<BaseEmbeddings>(BaseEmbeddings, 'MistralEmbedding', apiKey);
- ```
-
- ### Tree-Shaking Prevention
- The package exports loader functions to prevent tree-shaking:
-
- ```typescript
- import { LoadMistralLLM, LoadMistralEmbedding } from '@memberjunction/ai-mistral';
-
- // Call these in your application initialization to ensure classes are registered
- LoadMistralLLM();
- LoadMistralEmbedding();
- ```
-
- ## Dependencies
-
- - `@mistralai/mistralai`: ^1.6.0 - Official Mistral AI Node.js SDK
- - `@memberjunction/ai`: 2.43.0 - MemberJunction AI core framework
- - `@memberjunction/global`: 2.43.0 - MemberJunction global utilities
- - `axios-retry`: 4.3.0 - Retry mechanism for API calls
-
- ## Development
-
- ### Building
-
- ```bash
- npm run build
- ```
-
- ### Development Mode
-
- ```bash
- npm start
+ console.log(`Dimensions: ${result.vector.length}`);
  ```

  ## Supported Parameters

- The Mistral provider supports the following LLM parameters:
+ | Parameter | Supported | Notes |
+ |-----------|-----------|-------|
+ | temperature | Yes | Controls randomness |
+ | maxOutputTokens | Yes | Maximum response length |
+ | topP | Yes | Nucleus sampling |
+ | seed | Yes | Deterministic outputs |
+ | responseFormat | Yes | JSON mode support |
+ | streaming | Yes | Real-time streaming |

- **Supported:**
- - `temperature` - Controls randomness in the output (0.0-1.0)
- - `maxOutputTokens` - Maximum number of tokens to generate
- - `topP` - Nucleus sampling threshold (0.0-1.0)
- - `topK` - Limits vocabulary to top K tokens
- - `seed` - For deterministic outputs (passed as `randomSeed` to Mistral API)
- - `stopSequences` - Array of sequences where the API will stop generating
- - `responseFormat` - Output format (Text or JSON)
+ ## Class Registration

- **Not Supported:**
- - `frequencyPenalty` - Not available in Mistral API
- - `presencePenalty` - Not available in Mistral API
- - `minP` - Not available in Mistral API
+ - `MistralLLM` -- Registered via `@RegisterClass(BaseLLM, 'MistralLLM')`
+ - `MistralEmbedding` -- Registered via `@RegisterClass(BaseEmbeddings, 'MistralEmbedding')`

- ## License
-
- ISC
-
- ## Contributing
+ ## Dependencies

- When contributing to this package:
- 1. Follow the MemberJunction code style guide
- 2. Ensure all TypeScript types are properly defined
- 3. Add appropriate error handling
- 4. Update documentation for any new features
- 5. Test with various Mistral models
+ - `@memberjunction/ai` - Core AI abstractions
+ - `@memberjunction/global` - Class registration
+ - `@mistralai/mistralai` - Official Mistral AI SDK
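Both versions of the readme reference registration via `@RegisterClass`. The key-based factory lookup that such a decorator enables can be sketched in self-contained form; the `ClassFactory`, `BaseLLM`, and `MistralLLM` below are simplified stand-ins, not the actual `@memberjunction/global` or `@memberjunction/ai` implementations.

```typescript
// Simplified stand-in for the registration pattern behind
// @RegisterClass(BaseLLM, 'MistralLLM'). This is NOT the real
// @memberjunction/global ClassFactory; it only illustrates the
// key-based lookup such a decorator enables.
type Ctor<T> = new (...args: any[]) => T;
type AbstractCtor<T> = abstract new (...args: any[]) => T;

class ClassFactory {
  private static registry = new Map<string, Ctor<unknown>>();

  // Equivalent in spirit to applying @RegisterClass(base, key) to impl.
  static Register<T>(base: AbstractCtor<T>, key: string, impl: Ctor<T>): void {
    ClassFactory.registry.set(`${base.name}:${key}`, impl);
  }

  static CreateInstance<T>(base: AbstractCtor<T>, key: string, ...args: any[]): T {
    const impl = ClassFactory.registry.get(`${base.name}:${key}`);
    if (!impl) throw new Error(`No class registered for ${base.name}:${key}`);
    return new (impl as Ctor<T>)(...args);
  }
}

// Minimal stand-ins for the abstractions in @memberjunction/ai.
abstract class BaseLLM {
  constructor(protected apiKey: string) {}
  abstract ProviderName(): string;
}

class MistralLLM extends BaseLLM {
  ProviderName(): string { return 'Mistral'; }
}

ClassFactory.Register(BaseLLM, 'MistralLLM', MistralLLM);

// Callers resolve the provider by key, never naming MistralLLM directly.
const llm = ClassFactory.CreateInstance(BaseLLM, 'MistralLLM', 'your-api-key');
console.log(llm.ProviderName()); // prints "Mistral"
```

This indirection is also why the package ships `LoadMistralLLM()` / `LoadMistralEmbedding()` loader functions (documented in the 4.0.0 readme): if nothing imports the concrete class, bundlers may tree-shake it away before it can register.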