modelmix 1.2.0 → 2.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,13 +1,13 @@
  # 🧬 ModelMix: Unified API for Diverse AI Language Models
 
- **ModelMix** is a versatile module that enables seamless integration of various language models from different providers through a unified interface. With ModelMix, you can effortlessly manage and utilize multiple AI models while controlling parallel requests to avoid provider restrictions.
+ **ModelMix** is a versatile module that enables seamless integration of various language models from different providers through a unified interface. With ModelMix, you can effortlessly manage and utilize multiple AI models while controlling parallel requests to avoid provider restrictions.
 
- ## ✨ Features
+ ## ✨ Features
 
  - **Unified Interface**: Interact with multiple AI models through a single, coherent API.
- - **Request Control**: Manage the number of parallel requests to adhere to provider limitations.
- - **Flexible Integration**: Easily integrate popular models like OpenAI and Anthropic, as well as custom models such as Perplexity.
- - **History Tracking:** Automatically logs the conversation history with model responses, allowing you to limit the number of historical messages with max_history.
+ - **Request Control**: Manage the number of parallel requests to adhere to provider limitations (`max_request`).
+ - **Flexible Integration**: Easily integrate popular models like OpenAI, Anthropic, Perplexity, Ollama, or custom models.
+ - **History Tracking**: Automatically logs the conversation history with model responses, allowing you to limit the number of historical messages with `max_history`.
 
  ## 📦 Installation
 
@@ -17,10 +17,10 @@ First, install the ModelMix package:
  npm install modelmix
  ```
 
- You'll also need to install the respective SDKs for each model provider you plan to use:
+ Also, install dotenv to manage environment variables:
 
  ```bash
- npm install openai @anthropic-ai/sdk dotenv
+ npm install dotenv
  ```
 
  ## 🛠️ Usage
@@ -37,80 +37,94 @@ Here's a quick example to get you started:
  2. **Create and configure your models**:
 
  ```javascript
- import 'dotenv/config'
- import OpenAI from 'openai';
- import Anthropic from '@anthropic-ai/sdk';
-
- import { ModelMix, OpenAIModel, AnthropicModel, CustomModel } from 'modelmix';
+ import 'dotenv/config';
+ import { ModelMix, MixOpenAI, MixAnthropic, MixPerplexity, MixOllama } from 'modelmix';
 
  const env = process.env;
 
- const driver = new ModelMix({
+ const mmix = new ModelMix({
  options: {
  max_tokens: 200,
  },
  config: {
  system: "You are ALF from Melmac.",
- max_history: 2
+ max_history: 2,
+ max_request: 1,
  }
  });
 
- driver.attach(new OpenAIModel(new OpenAI({ apiKey: env.OPENAI_API_KEY })));
- driver.attach(new AnthropicModel(new Anthropic({ apiKey: env.ANTHROPIC_API_KEY })));
- driver.attach(new CustomModel({
+ mmix.attach(new MixOpenAI({ config: { apiKey: env.OPENAI_API_KEY } }));
+ mmix.attach(new MixAnthropic({ config: { apiKey: env.ANTHROPIC_API_KEY } }));
+ mmix.attach(new MixPerplexity({
  config: {
- url: 'https://api.perplexity.ai/chat/completions',
- bearer: env.PPLX_API_KEY,
- prefix: ["pplx", "llama", "mixtral"],
+ apiKey: env.PPLX_API_KEY
+ },
+ options: {
  system: "You are my personal assistant."
  }
  }));
+ mmix.attach(new MixOllama({
+ config: {
+ url: 'http://localhost:11434/api/chat',
+ prefix: ['llava'],
+ },
+ options: {
+ temperature: 0.5,
+ }
+ }));
  ```
 
  3. **Generate responses from different models**:
 
  ```javascript
- const claude = await driver.create('claude-3-sonnet-20240229', { temperature: 0.5 });
- await claude.addImage("./watson.png")
- const imageDescription = await claude.addText("describe the image").message();
+ console.log("\n" + '--------| gpt-4o |--------');
+ const gpt = mmix.create('gpt-4o', { temperature: 0.5 }).addText("Have you ever eaten a cat?");
+ console.log(await gpt.message());
+
+ console.log("\n" + '--------| claude-3-sonnet-20240229 |--------');
+ const claude = mmix.create('claude-3-sonnet-20240229', { temperature: 0.5 });
+ claude.addImage("./watson.png");
+ const imageDescription = await claude.addText("Describe the image").message();
  console.log(imageDescription);
 
- const gpt = await driver.create('gpt-4o', { temperature: 0.5 });
- const question = await gpt.addText("Have you ever eaten a cat?").message();
- console.log(question);
-
- const pplx = await driver.create('pplx-70b-online', { max_tokens: 500 });
- await pplx.addText('How much is ETH trading in USD?');
+ console.log("\n" + '--------| pplx-70b-online |--------');
+ const pplx = mmix.create('pplx-70b-online', { max_tokens: 500 });
+ pplx.addText('How much is ETH trading in USD?');
  const news = await pplx.addText('What are the 3 most recent Ethereum news?').message();
  console.log(news);
- ```
 
+ console.log("\n" + '--------| ollama (llava:latest) |--------');
+ await mmix.create('llava:latest')
+ .addImage("./watson.png")
+ .addText("What is the predominant color?")
+ .stream((data) => { console.log(data.message); });
+ ```
  4. Find the files for this example at: [/ModelMix/demo](https://github.com/clasen/ModelMix/tree/master/demo).
-
-
+
  ## 📚 ModelMix Class Overview
 
  ```javascript
  new ModelMix(args = { options: {}, config: {} })
  ```
+
  - **args**: Configuration object with `options` and `config` properties.
  - **options**: This object contains default options that are applied to all models. These options can be overridden when creating a specific model instance. Examples of default options include:
  - `max_tokens`: Sets the maximum number of tokens to generate, e.g., 2000.
  - `temperature`: Controls the randomness of the model's output, e.g., 1.
  - `top_p`: Controls the diversity of the output, e.g., 1.
- - ...
+ - ...(Additional default options can be added as needed)
  - **config**: This object contains configuration settings that control the behavior of the `ModelMix` instance. These settings can also be overridden for specific model instances. Examples of configuration settings include:
  - `system`: Sets the default system message for the model, e.g., "You are an assistant."
  - `max_history`: Limits the number of historical messages to retain, e.g., 5.
- - `max_request`: Limits the number of parallel request.
- - ...
+ - `max_request`: Limits the number of parallel requests, e.g., 1.
+ - ...(Additional configuration parameters can be added as needed)
 
  **Methods**
 
  - `attach(modelInstance)`: Attaches a model instance to the `ModelMix`.
  - `create(modelKey, overOptions = {})`: Creates a new `MessageHandler` for the specified model.
 
- #### MessageHandler
+ ### MessageHandler Class Overview
 
  **Methods**
 
@@ -119,42 +133,69 @@ new ModelMix(args = { options: {}, config: {} })
  - `addImage(filePath, config = { role: "user" })`: Adds an image message from a file path.
  - `message()`: Sends the message and returns the response.
  - `raw()`: Sends the message and returns the raw response data.
+ - `stream(callback)`: Sends the message and streams the response, invoking the callback with each streamed part.
 
- #### OpenAIModel
+ ### MixCustom Class Overview
 
  ```javascript
- new OpenAIModel(openai, args = { options: {}, config: {} })
+ new MixCustom(args = { config: {}, options: {}, headers: {} })
  ```
 
- - **openai**: Instance of the OpenAI client.
- - **args**: Configuration object with `options` and `config` properties.
+ - **args**: Configuration object with `config`, `options`, and `headers` properties.
+ - **config**:
+ - `url`: The endpoint URL to which the model sends requests.
+ - `prefix`: An array of strings used as a prefix for requests.
+ - ...(Additional configuration parameters can be added as needed)
+ - **options**: This object contains default options that are applied to all models. These options can be overridden when creating a specific model instance. Examples of default options include:
+ - `max_tokens`: Sets the maximum number of tokens to generate, e.g., 2000.
+ - `temperature`: Controls the randomness of the model's output, e.g., 1.
+ - `top_p`: Controls the diversity of the output, e.g., 1.
+ - ...(Additional default options can be added as needed)
+ - **headers**:
+ - `authorization`: The authorization header, typically including a Bearer token for API access.
+ - `x-api-key`: A custom header for API key if needed.
+ - ...(Additional headers can be added as needed)
 
- #### AnthropicModel
+ ### MixOpenAI Class Overview
 
  ```javascript
- new AnthropicModel(anthropic, args = { options: {}, config: {} })
+ new MixOpenAI(args = { config: {}, options: {} })
  ```
 
- - **anthropic**: Instance of the Anthropic client.
- - **args**: Configuration object with `options` and `config` properties.
+ - **args**: Configuration object with `config` and `options` properties.
+ - **config**: Specific configuration settings for OpenAI, including the `apiKey`.
+ - **options**: Default options for OpenAI model instances.
 
- #### CustomModel
+ ### MixAnthropic Class Overview
 
  ```javascript
- new CustomModel(args = { config: {}, options: {} })
+ new MixAnthropic(args = { config: {}, options: {} })
  ```
 
  - **args**: Configuration object with `config` and `options` properties.
- - **config**:
- - `url`:
- - `bearer`:
- - `prefix`:
- - ...
- - **options**: This object contains default options that are applied to all models. These options can be overridden when creating a specific model instance. Examples of default options include:
- - `max_tokens`: Sets the maximum number of tokens to generate, e.g., 2000.
- - `temperature`: Controls the randomness of the model's output, e.g., 1.
- - `top_p`: Controls the diversity of the output, e.g., 1.
- - ...
+ - **config**: Specific configuration settings for Anthropic, including the `apiKey`.
+ - **options**: Default options for Anthropic model instances.
+
+ ### MixPerplexity Class Overview
+
+ ```javascript
+ new MixPerplexity(args = { config: {}, options: {} })
+ ```
+
+ - **args**: Configuration object with `config` and `options` properties.
+ - **config**: Specific configuration settings for Perplexity, including the `apiKey`.
+ - **options**: Default options for Perplexity model instances.
+
+ ### MixOllama Class Overview
+
+ ```javascript
+ new MixOllama(args = { config: {}, options: {} })
+ ```
+
+ - **args**: Configuration object with `config` and `options` properties.
+ - **config**: Specific configuration settings for Ollama.
+ - `url`: The endpoint URL to which the model sends requests.
+ - **options**: Default options for Ollama model instances.
 
  ## 🤝 Contributing
 
@@ -0,0 +1,28 @@
+ import 'dotenv/config'
+
+ import { ModelMix, MixCustom } from '../index.js';
+
+ const env = process.env;
+
+ const mmix = new ModelMix({
+ options: {
+ max_tokens: 200,
+ },
+ config: {
+ system: 'You are ALF from Melmac.',
+ max_history: 2
+ }
+ });
+
+ mmix.attach(new MixCustom({
+ config: {
+ url: 'https://api.perplexity.ai/chat/completions',
+ prefix: ["pplx", "llama", "mixtral"],
+ },
+ headers: {
+ 'authorization': `Bearer ${env.PPLX_API_KEY}`
+ }
+ }));
+
+ const r = await mmix.create('pplx-70b-online').addText('do you like cats?').message()
+ console.log(r)
package/demo/demo.mjs CHANGED
@@ -1,9 +1,5 @@
-
  import 'dotenv/config'
- import OpenAI from 'openai';
- import Anthropic from '@anthropic-ai/sdk';
-
- import { ModelMix, OpenAIModel, AnthropicModel, CustomModel } from '../index.js';
+ import { ModelMix, MixOpenAI, MixAnthropic, MixPerplexity, MixOllama } from '../index.js';
 
  const env = process.env;
 
@@ -12,35 +8,50 @@ const mmix = new ModelMix({
  max_tokens: 200,
  },
  config: {
- system: "You are ALF from Melmac.",
- max_history: 2
+ system: 'You are ALF from Melmac.',
+ max_history: 2,
+ max_request: 1,
  }
  });
 
- mmix.attach(new OpenAIModel(new OpenAI({ apiKey: env.OPENAI_API_KEY })));
- mmix.attach(new AnthropicModel(new Anthropic({ apiKey: env.ANTHROPIC_API_KEY })));
- mmix.attach(new CustomModel({
+
+ mmix.attach(new MixOpenAI({ config: { apiKey: env.OPENAI_API_KEY } }));
+ mmix.attach(new MixAnthropic({ config: { apiKey: env.ANTHROPIC_API_KEY } }));
+ mmix.attach(new MixPerplexity({
+ config: {
+ apiKey: env.PPLX_API_KEY
+ },
+ system: 'You are my personal assistant.'
+ }));
+ mmix.attach(new MixOllama({
  config: {
- url: 'https://api.perplexity.ai/chat/completions',
- bearer: env.PPLX_API_KEY,
- prefix: ["pplx", "llama", "mixtral"],
- system: "You are my personal assistant."
+ url: 'http://localhost:11434/api/chat',
+ prefix: ['llava'],
+ },
+ options: {
+ temperature: 0.5,
  }
  }));
 
+
  console.log("\n" + '--------| gpt-4o |--------');
- const gpt = await mmix.create('gpt-4o', { temperature: 0.5 });
- const question = await gpt.addText("Have you ever eaten a cat?").message();
- console.log(question);
+ const gpt = mmix.create('gpt-4o', { temperature: 0.5 }).addText("Have you ever eaten a cat?");
+ console.log(await gpt.message());
 
  console.log("\n" + '--------| claude-3-sonnet-20240229 |--------');
- const claude = await mmix.create('claude-3-sonnet-20240229', { temperature: 0.5 });
- await claude.addImage("./watson.png")
- const imageDescription = await claude.addText("describe the image").message();
+ const claude = mmix.create('claude-3-sonnet-20240229', { temperature: 0.5 });
+ claude.addImage('./watson.png');
+ const imageDescription = await claude.addText('describe the image').message();
  console.log(imageDescription);
 
  console.log("\n" + '--------| pplx-70b-online |--------');
- const pplx = await mmix.create('pplx-70b-online', { max_tokens: 500 });
- await pplx.addText('How much is ETH trading in USD?');
+ const pplx = mmix.create('pplx-70b-online', { max_tokens: 500 });
+ pplx.addText('How much is ETH trading in USD?');
  const news = await pplx.addText('What are the 3 most recent Ethereum news?').message();
  console.log(news);
+
+ console.log("\n" + '--------| ollama (llava:latest) |--------');
+ await mmix.create('llava:latest')
+ .addImage('./watson.png')
+ .addText('what is the predominant color?')
+ .stream((data) => { console.log(data.message); });
@@ -84,6 +84,27 @@
  "node": ">= 0.8"
  }
  },
+ "node_modules/debug": {
+ "version": "4.3.4",
+ "resolved": "https://registry.npmjs.org/debug/-/debug-4.3.4.tgz",
+ "integrity": "sha512-PRWFHuSU3eDtQJPvnNY7Jcket1j0t5OuOsFzPPzsekD52Zl8qUfFIPEiswXqIvHWGVHOgX+7G/vCNNhehwxfkQ==",
+ "dependencies": {
+ "ms": "2.1.2"
+ },
+ "engines": {
+ "node": ">=6.0"
+ },
+ "peerDependenciesMeta": {
+ "supports-color": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/debug/node_modules/ms": {
+ "version": "2.1.2",
+ "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz",
+ "integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w=="
+ },
  "node_modules/delayed-stream": {
  "version": "1.0.0",
  "resolved": "https://registry.npmjs.org/delayed-stream/-/delayed-stream-1.0.0.tgz",
@@ -157,6 +178,14 @@
  "ms": "^2.0.0"
  }
  },
+ "node_modules/lemonlog": {
+ "version": "1.1.2",
+ "resolved": "https://registry.npmjs.org/lemonlog/-/lemonlog-1.1.2.tgz",
+ "integrity": "sha512-ztAM2mVrr9+CLoKvXKCJeGb6jNNDZUnATlcIouFuj7ymyNKXXZhx4gXyQ5lsW6oLx5KXKb/vLclXVdcqJ5sHVA==",
+ "dependencies": {
+ "debug": "^4.3.4"
+ }
+ },
  "node_modules/mime-db": {
  "version": "1.52.0",
  "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz",
@@ -0,0 +1,20 @@
+ (The MIT License)
+
+ Copyright (c) 2014-2017 TJ Holowaychuk <tj@vision-media.ca>
+ Copyright (c) 2018-2021 Josh Junon
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy of this software
+ and associated documentation files (the 'Software'), to deal in the Software without restriction,
+ including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so,
+ subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all copies or substantial
+ portions of the Software.
+
+ THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
+ LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+