modelmix 1.0.2 → 1.1.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,14 +1,15 @@
-# 🧬 ModelMix: Integrate Multiple AI Language Models with Ease 🚀
+# 🧬 ModelMix: Unified API for Diverse AI Language Models
 
 **ModelMix** is a versatile module that enables seamless integration of various language models from different providers through a unified interface. With ModelMix, you can effortlessly manage and utilize multiple AI models while controlling parallel requests to avoid provider restrictions.
 
-## Features
+## Features
 
 - **Unified Interface**: Interact with multiple AI models through a single, coherent API.
 - **Request Control**: Manage the number of parallel requests to adhere to provider limitations.
 - **Flexible Integration**: Easily integrate popular models like OpenAI and Anthropic, as well as custom models such as Perplexity.
+- **History Tracking:** Automatically logs the conversation history with model responses, allowing you to limit the number of historical messages with max_history.
 
-## Installation 📦
+## 📦 Installation
 
 First, install the ModelMix package:
 
@@ -22,7 +23,7 @@ You'll also need to install the respective SDKs for each model provider you plan
 npm install openai @anthropic-ai/sdk dotenv
 ```
 
-## Usage 🛠️
+## 🛠️ Usage
 
 Here's a quick example to get you started:
 
@@ -37,14 +38,22 @@ Here's a quick example to get you started:
 
 ```javascript
 import 'dotenv/config'
-const env = process.env;
-
 import OpenAI from 'openai';
 import Anthropic from '@anthropic-ai/sdk';
 
 import { ModelMix, OpenAIModel, AnthropicModel, CustomModel } from 'model-mix';
 
-const driver = new ModelMix({ system: "You are ALF from Melmac.", max_tokens: 200 });
+const env = process.env;
+
+const driver = new ModelMix({
+    options: {
+        max_tokens: 200,
+    },
+    config: {
+        system: "You are ALF from Melmac.",
+        max_history: 2
+    }
+});
 
 driver.attach(new OpenAIModel(new OpenAI({ apiKey: env.OPENAI_API_KEY })));
 driver.attach(new AnthropicModel(new Anthropic({ apiKey: env.ANTHROPIC_API_KEY })));
@@ -52,7 +61,8 @@ Here's a quick example to get you started:
     config: {
         url: 'https://api.perplexity.ai/chat/completions',
         bearer: env.PPLX_API_KEY,
-        prefix: ["pplx", "llama", "mixtral"]
+        prefix: ["pplx", "llama", "mixtral"],
+        system: "You are my personal assistant."
     }
 }));
 ```
@@ -60,75 +70,87 @@ Here's a quick example to get you started:
 3. **Generate responses from different models**:
 
 ```javascript
-const question = 'Have you ever eaten a cat?';
+const claude = await driver.create('claude-3-sonnet-20240229', { temperature: 0.5 });
+await claude.addImage("./watson.png")
+const imageDescription = await claude.addText("describe the image").message();
+console.log(imageDescription);
+
+const gpt = await driver.create('gpt-4o', { temperature: 0.5 });
+const question = await gpt.addText("Have you ever eaten a cat?").message();
+console.log(question);
+
+const pplx = await driver.create('pplx-70b-online', { max_tokens: 500 });
+await pplx.addText('How much is ETH trading in USD?');
+const news = await pplx.addText('What are the 3 most recent Ethereum news?').message();
+console.log(news);
+```
 
-console.log("OpenAI - gpt-4o");
-const txtGPT = await driver.create(question, 'gpt-4o', { max_tokens: 100 });
-console.log(txtGPT);
+4. Find the files for this example at: [/ModelMix/demo](https://github.com/clasen/ModelMix/tree/master/demo).
 
-console.log("-------\n");
 
-console.log("Anthropic - claude-3-sonnet-20240229");
-const txtClaude = await driver.create(question, 'claude-3-sonnet-20240229', { temperature: 0.5 });
-console.log(txtClaude);
+## 📚 ModelMix Class Overview
 
-console.log("-------\n");
+#### ModelMix
 
-console.log("Perplexity - pplx-70b-online");
-const txtPPLX = await driver.create(question, 'pplx-70b-online');
-console.log(txtPPLX);
-```
+**Constructor**
 
-## ModelMix Class Overview 📚
+```javascript
+new ModelMix(args = { options: {}, config: {} })
+```
 
-### `ModelMix` Class
+- **args**: Configuration object with `options` and `config` properties.
 
-#### Constructor
-- **options**: An optional configuration object for setting default values.
+**Methods**
 
-#### Methods
-- **`attach(modelInstance)`**: Attach a new model instance to the ModelMix.
-  - **modelInstance**: An instance of a model class (e.g., `OpenAIModel`, `AnthropicModel`, `CustomModel`).
+- `attach(modelInstance)`: Attaches a model instance to the `ModelMix`.
+- `create(modelKey, overOptions = {})`: Creates a new `MessageHandler` for the specified model.
 
-- **`create(prompt, modelKey, options = {})`**: Create a request to a specific model.
-  - **prompt**: The input prompt to send to the model.
-  - **modelKey**: The model key to identify which model to use.
-  - **options**: Optional configuration overrides for the request.
+#### MessageHandler
 
-- **`processQueue(modelEntry)`**: Process the request queue for a specific model instance.
-  - **modelEntry**: The model instance whose queue will be processed.
+- **mix**: Instance of `ModelMix`.
+- **modelEntry**: Model entry object.
+- **options**: Options for the message handler.
+- **config**: Configuration for the message handler.
 
-### `OpenAIModel` Class
+**Methods**
 
-#### Constructor
-- **openai**: An instance of the OpenAI client.
-- **args**: Configuration and options for the model.
+- `new()`: Initializes a new message handler instance.
+- `addText(text, config = { role: "user" })`: Adds a text message.
+- `addImage(filePath, config = { role: "user" })`: Adds an image message from a file path.
+- `message()`: Sends the message and returns the response.
+- `raw()`: Sends the message and returns the raw response data.
 
-#### Methods
-- **`create(prompt, options = {})`**: Send a prompt to the OpenAI model and return the response.
-  - **prompt**: The input prompt to send to the model.
-  - **options**: Optional configuration overrides for the request.
+#### OpenAIModel
 
-### `AnthropicModel` Class
+**Constructor**
 
-#### Constructor
-- **anthropic**: An instance of the Anthropic client.
-- **args**: Configuration and options for the model.
+```javascript
+new OpenAIModel(openai, args = { options: {}, config: {} })
+```
+
+- **openai**: Instance of the OpenAI client.
+- **args**: Configuration object with `options` and `config` properties.
+
+#### AnthropicModel
+
+**Constructor**
 
-#### Methods
-- **`create(prompt, options = {})`**: Send a prompt to the Anthropic model and return the response.
-  - **prompt**: The input prompt to send to the model.
-  - **options**: Optional configuration overrides for the request.
+```javascript
+new AnthropicModel(anthropic, args = { options: {}, config: {} })
+```
+
+- **anthropic**: Instance of the Anthropic client.
+- **args**: Configuration object with `options` and `config` properties.
 
-### `CustomModel` Class
+#### CustomModel
 
-#### Constructor
-- **args**: Configuration and options for the model.
+**Constructor**
+
+```javascript
+new CustomModel(args = { config: {}, options: {} })
+```
 
-#### Methods
-- **`create(prompt, options = {})`**: Send a prompt to a custom model and return the response.
-  - **prompt**: The input prompt to send to the model.
-  - **options**: Optional configuration overrides for the request.
+- **args**: Configuration object with `config` and `options` properties.
 
 ## 🤝 Contributing
 
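As an annotation on the constructor change above: the 1.1.x API splits provider sampling parameters (`options`) from ModelMix behavior such as the system prompt and history cap (`config`), each shallow-merged over its own defaults. The `buildMix` helper below is a hypothetical sketch of that merge, not part of the package.

```javascript
// Hypothetical helper (not in modelmix) mirroring the 1.1.x constructor
// shown in this diff: provider parameters go in `options`,
// ModelMix behavior (system prompt, history cap) goes in `config`.
function buildMix(args = { options: {}, config: {} }) {
    const defaultOptions = {
        model: 'gpt-4o',
        max_tokens: 2000,
        temperature: 1,
        top_p: 1,
        ...args.options
    };
    const config = {
        system: 'You are an assistant.',
        max_history: 5,
        ...args.config
    };
    return { defaultOptions, config };
}

const mix = buildMix({
    options: { max_tokens: 200 },
    config: { system: "You are ALF from Melmac.", max_history: 2 }
});
console.log(mix.defaultOptions.max_tokens); // 200
console.log(mix.config.max_history);        // 2
```

Unspecified keys keep their defaults, so `mix.defaultOptions.model` is still `'gpt-4o'` here.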
package/demo/demo.mjs CHANGED
@@ -1,13 +1,22 @@
 
 import 'dotenv/config'
-const env = process.env;
-
 import OpenAI from 'openai';
 import Anthropic from '@anthropic-ai/sdk';
 
 import { ModelMix, OpenAIModel, AnthropicModel, CustomModel } from '../index.js';
 
-const driver = new ModelMix({ system: "You are ALF from Melmac.", max_tokens: 200 });
+const env = process.env;
+
+
+const driver = new ModelMix({
+    options: {
+        max_tokens: 200,
+    },
+    config: {
+        system: "You are ALF from Melmac.",
+        max_history: 2
+    }
+});
 
 driver.attach(new OpenAIModel(new OpenAI({ apiKey: env.OPENAI_API_KEY })));
 driver.attach(new AnthropicModel(new Anthropic({ apiKey: env.ANTHROPIC_API_KEY })));
@@ -15,35 +24,21 @@ driver.attach(new CustomModel({
     config: {
         url: 'https://api.perplexity.ai/chat/completions',
         bearer: env.PPLX_API_KEY,
-        prefix: ["pplx", "llama", "mixtral"]
+        prefix: ["pplx", "llama", "mixtral"],
+        system: "You are my personal assistant."
     }
 }));
 
-const question = 'Have you ever eaten a cat?';
-
-console.log("OpenAI - gpt-4o");
-const txtGPT = await driver.create(question, 'gpt-4o', { max_tokens: 100 });
-console.log(txtGPT);
-/*
-OpenAI - gpt-4o
-No way, I have not eaten a cat, at least not since I crash-landed on Earth! I've learned a lot about how pets are cherished members of the family here. So while cats were considered a delicacy on Melmac, I've since adapted to Earth customs and have a new appreciation for them, even though I still might joke about it from time to time! Have you got any snacks lying around that aren’t feline?
-*/
-
-console.log("-------\n");
-
-console.log("Anthropic - claude-3-sonnet-20240229");
-const txtClaude = await driver.create(question, 'claude-3-sonnet-20240229', { temperature: 0.5 });
-console.log(txtClaude);
-/*
-Anthropic - claude-3-sonnet-20240229
-No, I'm an AI assistant created by Anthropic to be helpful, harmless, and honest. I don't actually eat anything since I'm an artificial intelligence without a physical body.
-*/
-
-console.log("-------\n");
-
-console.log("Perplexity - pplx-70b-online");
-const txtPPLX = await driver.create(question, 'pplx-70b-online');
-console.log(txtPPLX);
-/*
-As ALF from Melmac, I can assure you that I have not personally eaten a cat. Cats are not consumed in my homeworld, as our diet is not Earth-based. On Earth, however, there are instances where cat meat has been consumed in various cultures, historically and in times of necessity. The search results mention that cat meat is considered food in some countries, such as parts of Asia, and there have been reports of it being consumed during famine periods in places like Italy. It's important to note that the consumption of cat meat is often frowned upon and has faced opposition, particularly in regions where cats are popular as pets.
-*/
+const claude = await driver.create('claude-3-sonnet-20240229', { temperature: 0.5 });
+await claude.addImage("./watson.png")
+const imageDescription = await claude.addText("describe the image").message();
+console.log(imageDescription);
+
+const gpt = await driver.create('gpt-4o', { temperature: 0.5 });
+const question = await gpt.addText("Have you ever eaten a cat?").message();
+console.log(question);
+
+const pplx = await driver.create('pplx-70b-online', { max_tokens: 500 });
+await pplx.addText('How much is ETH trading in USD?');
+const news = await pplx.addText('What are the 3 most recent Ethereum news?').message();
+console.log(news);
Binary file
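In the updated demo, the two back-to-back `pplx.addText(...)` calls are sent as one user turn. That merging is done by the `groupByRoles` helper added to `index.js` in this release; here is a standalone copy of the function for illustration:

```javascript
// Standalone copy of the groupByRoles helper added to index.js in this
// release: all messages sharing a role are merged into a single entry,
// so the demo's two pplx.addText(...) calls become one user turn.
function groupByRoles(messages) {
    return messages.reduce((acc, message) => {
        const existingRole = acc.find(item => item.role === message.role);
        if (existingRole) {
            existingRole.content = existingRole.content.concat(message.content);
        } else {
            acc.push({ role: message.role, content: Array.isArray(message.content) ? message.content : [message.content] });
        }
        return acc;
    }, []);
}

const merged = groupByRoles([
    { role: 'user', content: [{ type: 'text', text: 'How much is ETH trading in USD?' }] },
    { role: 'user', content: [{ type: 'text', text: 'What are the 3 most recent Ethereum news?' }] }
]);
console.log(merged.length);            // 1
console.log(merged[0].content.length); // 2
```

Note that the merge is by role anywhere in the list (via `find`), not only for adjacent messages.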
package/index.js CHANGED
@@ -1,16 +1,23 @@
 const axios = require('axios');
+const fs = require('fs').promises;
+const mime = require('mime-types');
 
 class ModelMix {
-    constructor(options = {}) {
+    constructor(args = { options: {}, config: {} }) {
         this.models = {};
         this.defaultOptions = {
             model: 'gpt-4o',
             max_tokens: 2000,
             temperature: 1,
-            system: 'You are an assistant.',
             top_p: 1,
-            ...options
+            ...args.options
         };
+
+        this.config = {
+            system: 'You are an assistant.',
+            max_history: 5, // Default max history
+            ...args.config
+        }
     }
 
     attach(modelInstance) {
@@ -20,7 +27,7 @@ class ModelMix {
         modelInstance.active_requests = 0;
     }
 
-    async create(prompt, modelKey, options = {}) {
+    async create(modelKey, overOptions = {}) {
         const modelEntry = Object.values(this.models).find(entry =>
             entry.config.prefix.some(p => modelKey.startsWith(p))
         );
@@ -29,12 +36,10 @@ class ModelMix {
            throw new Error(`Model with prefix matching ${modelKey} is not attached.`);
         }
 
-        const config = { ...this.defaultOptions, ...modelEntry.options, ...options, model: modelKey };
+        const options = { ...this.defaultOptions, ...modelEntry.options, ...overOptions, model: modelKey };
+        const config = { ...this.config, ...modelEntry.config };
 
-        return new Promise((resolve, reject) => {
-            modelEntry.queue.push({ prompt, config, resolve, reject });
-            this.processQueue(modelEntry);
-        });
+        return new MessageHandler(this, modelEntry, options, config);
     }
 
     async processQueue(modelEntry) {
@@ -50,7 +55,7 @@ class ModelMix {
         modelEntry.active_requests++;
 
         try {
-            const result = await modelEntry.create(nextTask.prompt, nextTask.config);
+            const result = await modelEntry.create(nextTask.args);
             nextTask.resolve(result);
         } catch (error) {
             nextTask.reject(error);
@@ -61,6 +66,110 @@ class ModelMix {
         }
     }
 }
 
+class MessageHandler {
+    constructor(mix, modelEntry, options, config) {
+        this.mix = mix;
+        this.modelEntry = modelEntry;
+        this.options = options;
+        this.config = config;
+        this.messages = [];
+    }
+
+    new() {
+        this.messages = [];
+        return this;
+    }
+
+    addText(text, config = { role: "user" }) {
+        const content = [{
+            type: "text",
+            text
+        }];
+
+        this.messages.push({ ...config, content });
+        return this;
+    }
+
+    async addImage(filePath, config = { role: "user" }) {
+        try {
+            const imageBuffer = await fs.readFile(filePath);
+            const mimeType = mime.lookup(filePath);
+
+            if (!mimeType || !mimeType.startsWith('image/')) {
+                throw new Error('Invalid image file type');
+            }
+
+            const data = imageBuffer.toString('base64');
+
+            const imageMessage = {
+                ...config,
+                content: [
+                    {
+                        type: "image",
+                        "source": {
+                            type: "base64",
+                            media_type: mimeType,
+                            data
+                        }
+                    }
+                ]
+            };
+
+            this.messages.push(imageMessage);
+        } catch (error) {
+            console.error('Error reading the image file:', error);
+        }
+
+        return this;
+    }
+
+    async message() {
+        const response = await this.execute();
+        return response.message;
+    }
+
+    async raw() {
+        const data = await this.execute();
+        return data.response;
+    }
+
+    groupByRoles(messages) {
+        return messages.reduce((acc, message) => {
+            const existingRole = acc.find(item => item.role === message.role);
+            if (existingRole) {
+                existingRole.content = existingRole.content.concat(message.content);
+            } else {
+                acc.push({ role: message.role, content: Array.isArray(message.content) ? message.content : [message.content] });
+            }
+            return acc;
+        }, []);
+    }
+
+    async execute() {
+
+        this.messages = this.groupByRoles(this.messages);
+
+        if (this.messages.length === 0) { // Only system message is present
+            throw new Error("No user messages have been added. Use addMessage(prompt) to add a message.");
+        }
+
+        this.messages = this.messages.slice(-this.config.max_history);
+        this.options.messages = this.messages;
+
+        return new Promise((resolve, reject) => {
+            this.modelEntry.queue.push({
+                args: { options: this.options, config: this.config },
+                resolve: (result) => {
+                    this.messages.push({ role: "assistant", content: result.message });
+                    resolve(result);
+                },
+                reject
+            });
+            this.mix.processQueue(this.modelEntry);
+        });
+    }
+}
+
 class OpenAIModel {
     constructor(openai, args = { options: {}, config: {} }) {
         this.openai = openai;
@@ -79,17 +188,32 @@ class OpenAIModel {
         }
     }
 
-    async create(prompt, options = {}) {
-        options.messages = [
-            { role: "system", content: options.system },
-            { role: "user", content: prompt }
-        ];
-
-        delete options.system;
+    async create(args = { options: {}, config: {} }) {
 
-        const response = await this.openai.chat.completions.create(options);
+        args.options.messages = [{ role: 'system', content: args.config.system }, ...args.options.messages || []];
+        args.options.messages = this.convertMessages(args.options.messages);
+        const response = await this.openai.chat.completions.create(args.options);
+        return { response, message: response.choices[0].message.content };
+    }
 
-        return response.choices[0].message.content;
+    convertMessages(messages) {
+        return messages.map(message => {
+            if (message.role === 'user' && message.content instanceof Array) {
+                message.content = message.content.map(content => {
+                    if (content.type === 'image') {
+                        const { type, media_type, data } = content.source;
+                        return {
+                            type: 'image_url',
+                            image_url: {
+                                url: `data:${media_type};${type},${data}`
+                            }
+                        };
+                    }
+                    return content;
+                });
+            }
+            return message;
+        });
     }
 }
 
@@ -108,19 +232,14 @@ class AnthropicModel {
         }
     }
 
-    async create(prompt, options = {}) {
-        options.messages = [
-            { role: "user", content: prompt }
-        ];
+    async create(args = { config: {}, options: {} }) {
 
-        const response = await this.anthropic.messages.create(options);
+        args.options.system = args.config.system;
 
+        const response = await this.anthropic.messages.create(args.options);
         const responseText = response.content[0].text;
 
-        // Add a short delay to avoid hitting rate limits
-        await new Promise(resolve => setTimeout(resolve, 1000));
-
-        return responseText.trim();
+        return { response, message: responseText.trim() };
     }
 }
 
@@ -144,19 +263,10 @@ class CustomModel {
         };
     }
 
-    async create(prompt, options = {}) {
-        const mergedOptions = {
-            ...this.options,
-            ...options,
-            messages: [
-                { role: "system", content: options.system || "" },
-                { role: "user", content: prompt }
-            ]
-        };
-
-        delete mergedOptions.system;
-
-        const response = await axios.post(this.config.url, mergedOptions, {
+    async create(args = { config: {}, options: {} }) {
+        args.options.messages = [{ role: 'system', content: args.config.system }, ...args.options.messages || []];
+
+        const response = await axios.post(this.config.url, args.options, {
             headers: {
                 'accept': 'application/json',
                 'authorization': `Bearer ${this.config.bearer}`,
@@ -164,8 +274,8 @@ class CustomModel {
             }
         });
 
-        return response.data.choices[0].message.content;
+        return { response: response.data, message: response.data.choices[0].message.content };
     }
 }
 
-module.exports = { OpenAIModel, AnthropicModel, CustomModel, ModelMix };
+module.exports = { OpenAIModel, AnthropicModel, CustomModel, ModelMix };
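One note on `convertMessages` above: messages are stored internally in Anthropic's image-block shape, and `OpenAIModel` rewrites them into OpenAI-style `image_url` parts using a base64 `data:` URL. Below is a standalone copy of the function, exercised with a dummy payload (the base64 string is illustrative, not a real image):

```javascript
// Standalone copy of OpenAIModel.convertMessages from this diff:
// Anthropic-style base64 image blocks in user messages are rewritten
// into OpenAI image_url parts carrying a data: URL.
function convertMessages(messages) {
    return messages.map(message => {
        if (message.role === 'user' && message.content instanceof Array) {
            message.content = message.content.map(content => {
                if (content.type === 'image') {
                    const { type, media_type, data } = content.source;
                    return {
                        type: 'image_url',
                        image_url: { url: `data:${media_type};${type},${data}` }
                    };
                }
                return content;
            });
        }
        return message;
    });
}

const [msg] = convertMessages([{
    role: 'user',
    content: [{ type: 'image', source: { type: 'base64', media_type: 'image/png', data: 'iVBORw0KGgo=' } }]
}]);
console.log(msg.content[0].image_url.url); // data:image/png;base64,iVBORw0KGgo=
```

Text parts and non-user messages pass through unchanged, which is why the system message prepended in `create` is unaffected.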
package/package.json CHANGED
@@ -1,7 +1,7 @@
 {
   "name": "modelmix",
-  "version": "1.0.2",
-  "description": "🧬 ModelMix - Integrate AI Models with Ease.",
+  "version": "1.1.2",
+  "description": "🧬 ModelMix - Unified API for Diverse AI Language Models.",
   "main": "index.js",
   "repository": {
     "type": "git",