modelmix 4.1.8 → 4.2.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,4 +1,4 @@
- # 🧬 ModelMix: Unified API for Diverse AI LLM
+ # 🧬 ModelMix: Reliable interface with automatic fallback for AI LLMs
 
  **ModelMix** is a versatile module that enables seamless integration of various language models from different providers through a unified interface. With ModelMix, you can effortlessly manage and utilize multiple AI models while controlling request rates to avoid provider restrictions. The module also supports the Model Context Protocol (MCP), allowing you to enhance your models with powerful capabilities like web search, code execution, and custom functions.
 
@@ -8,7 +8,7 @@ Ever found yourself wanting to integrate AI models into your projects but worrie
 
  - **Unified Interface**: Interact with multiple AI models through a single, coherent API.
  - **Request Rate Control**: Manage the rate of requests to adhere to provider limitations using Bottleneck.
- - **Flexible Integration**: Easily integrate popular models like OpenAI, Anthropic, Gemini, Perplexity, Groq, Together AI, Lambda, Ollama, LM Studio or custom models.
+ - **Flexible Integration**: Easily integrate popular models like OpenAI, Anthropic, Gemini, Perplexity, Groq, Together AI, Lambda, OpenRouter, Ollama, LM Studio or custom models.
  - **History Tracking**: Automatically logs the conversation history with model responses, allowing you to limit the number of historical messages with `max_history`.
  - **Model Fallbacks**: Automatically try different models if one fails or is unavailable.
  - **Chain Multiple Models**: Create powerful chains of models that work together, with automatic fallback if one fails.
@@ -27,6 +27,7 @@ Only the API keys you plan to use are required.
  ```plaintext
  ANTHROPIC_API_KEY="sk-ant-..."
  OPENAI_API_KEY="sk-proj-..."
+ OPENROUTER_API_KEY="sk-or-..."
  MINIMAX_API_KEY="your-minimax-key..."
  ...
  GEMINI_API_KEY="AIza..."
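
The new demo files in this release load these keys with Node's built-in `process.loadEnvFile()` (available since Node.js 20.12). A minimal sketch of the same setup, assuming the package's CommonJS entry point:

```javascript
// Sketch: load the .env keys above, then let each provider read its key
// (e.g. OPENROUTER_API_KEY) from process.env. An apiKey passed via
// config would take precedence where the provider supports it.
process.loadEnvFile();
const { ModelMix } = require('modelmix');

const ai = ModelMix.new();
```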
@@ -77,6 +78,16 @@ const ETH = ModelMix.new()
  console.log(ETH.price);
  ```
 
+ **This example uses providers with free quotas (OpenRouter, Groq, Cerebras) - just get the API key and you're ready to go. If one model runs out of quota, ModelMix automatically falls back to the next model in the chain.**
+ ```javascript
+ ModelMix.new()
+     .gptOss()
+     .kimiK2()
+     .deepseekR1()
+     .hermes3()
+     .addText('What is the capital of France?');
+ ```
+
  This pattern allows you to:
  - Chain multiple models together
  - Automatically fall back to the next model if one fails
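
The README snippet above only builds the chain; executing it is an awaited `message()` call, as in the new `demo/free.js` shown later in this diff. A sketch, assuming an ES module context with top-level await:

```javascript
// Based on the new demo/free.js: build the fallback chain, then await
// message() to send the prompt; if a free-tier model fails or runs out
// of quota, the next attached model is tried.
const ai = ModelMix.new()
    .gptOss()
    .kimiK2()
    .deepseekR1()
    .hermes3()
    .addText('What is the capital of France?');

const response = await ai.message();
console.log(response);
```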
@@ -139,6 +150,8 @@ Here's a comprehensive list of available methods:
  | `gemini25flash()` | Google | gemini-2.5-flash | [\$0.30 / \$2.50][3] |
  | `grok4()` | Grok | grok-4-0709 | [\$3.00 / \$15.00][6] |
  | `grok41[think]()` | Grok | grok-4-1-fast | [\$0.20 / \$0.50][6] |
+ | `deepseekV32()` | Fireworks | fireworks/models/deepseek-v3p2 | [\$0.56 / \$1.68][10] |
+ | `GLM47()` | Fireworks | fireworks/models/glm-4p7 | [\$0.55 / \$2.19][10] |
  | `minimaxM21()` | MiniMax | MiniMax-M2.1 | [\$0.30 / \$1.20][9] |
  | `sonar()` | Perplexity | sonar | [\$1.00 / \$1.00][4] |
  | `sonarPro()` | Perplexity | sonar-pro | [\$3.00 / \$15.00][4] |
@@ -158,6 +171,7 @@ Here's a comprehensive list of available methods:
  [7]: https://www.together.ai/pricing "Together AI"
  [8]: https://lambda.ai/inference "Lambda Pricing"
  [9]: https://www.minimax.io/price "MiniMax Pricing"
+ [10]: https://fireworks.ai/pricing#serverless-pricing "Fireworks Pricing"
 
  Each method accepts optional `options` and `config` parameters to customize the model's behavior. For example:
 
@@ -406,6 +420,16 @@ new MixOpenAI(args = { config: {}, options: {} })
  - **config**: Specific configuration settings for OpenAI, including the `apiKey`.
  - **options**: Default options for OpenAI model instances.
 
+ ### MixOpenRouter Class Overview
+
+ ```javascript
+ new MixOpenRouter(args = { config: {}, options: {} })
+ ```
+
+ - **args**: Configuration object with `config` and `options` properties.
+ - **config**: Specific configuration settings for OpenRouter, including the `apiKey`.
+ - **options**: Default options for OpenRouter model instances.
+
  ### MixAnthropic Class Overview
 
  ```javascript
@@ -0,0 +1,23 @@
+ process.loadEnvFile();
+ import { ModelMix } from '../index.js';
+
+ async function main() {
+     try {
+         const ai = ModelMix.new();
+
+         const response = await ai
+             .GLM46()
+             .deepseekV32()
+             .GLM47()
+             .addText('What is the capital of France?')
+             .message();
+
+         console.log(response);
+
+     } catch (error) {
+         console.error('Error:', error.message);
+     }
+ }
+
+ main();
+
package/demo/free.js ADDED
@@ -0,0 +1,19 @@
+ import { fileURLToPath } from 'url';
+ import { dirname, join } from 'path';
+
+ const __filename = fileURLToPath(import.meta.url);
+ const __dirname = dirname(__filename);
+ process.loadEnvFile(join(__dirname, '../.env'));
+ import { ModelMix } from '../index.js';
+
+ const ai = ModelMix.new({ config: { debug: true } })
+     .gptOss()
+     .kimiK2()
+     .deepseekR1()
+     .hermes3()
+     .addText('What is the capital of France?');
+
+ const response = await ai.message();
+ console.log('Response:', response);
+
+
package/index.js CHANGED
@@ -12,7 +12,7 @@ const { MCPToolsManager } = require('./mcp-tools');
 
  class ModelMix {
 
-     constructor({ options = {}, config = {} } = {}) {
+     constructor({ options = {}, config = {}, mix = {} } = {}) {
          this.models = [];
          this.messages = [];
          this.tools = {};
@@ -38,6 +38,8 @@ class ModelMix {
              bottleneck: defaultBottleneckConfig,
              ...config
          }
+         const freeMix = { openrouter: true, cerebras: true, groq: true, together: false, lambda: false };
+         this.mix = { ...freeMix, ...mix };
 
          this.limiter = new Bottleneck(this.config.bottleneck);
 
@@ -49,8 +51,8 @@ class ModelMix {
          return this;
      }
 
-     static new({ options = {}, config = {} } = {}) {
-         return new ModelMix({ options, config });
+     static new({ options = {}, config = {}, mix = {} } = {}) {
+         return new ModelMix({ options, config, mix });
      }
 
      new() {
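
Taken together, the two hunks above add an instance-wide `mix` that merges over free-tier defaults (`openrouter`, `cerebras`, and `groq` enabled; `together` and `lambda` disabled). A sketch of how a caller might override those defaults, assuming the CommonJS entry point:

```javascript
const { ModelMix } = require('modelmix');

// This instance prefers Together AI and turns the OpenRouter free tier
// off; unspecified keys (cerebras, groq, lambda) keep their defaults.
const ai = ModelMix.new({
    mix: { together: true, openrouter: false }
});

// Model helpers consult this.mix when deciding which providers to attach.
ai.gptOss().kimiK2().addText('Hello');
```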
@@ -130,10 +132,12 @@ class ModelMix {
      gpt52chat({ options = {}, config = {} } = {}) {
          return this.attach('gpt-5.2-chat-latest', new MixOpenAI({ options, config }));
      }
-     gptOss({ options = {}, config = {}, mix = { together: false, cerebras: false, groq: true } } = {}) {
-         if (mix.together) return this.attach('openai/gpt-oss-120b', new MixTogether({ options, config }));
-         if (mix.cerebras) return this.attach('gpt-oss-120b', new MixCerebras({ options, config }));
-         if (mix.groq) return this.attach('openai/gpt-oss-120b', new MixGroq({ options, config }));
+     gptOss({ options = {}, config = {}, mix = {} } = {}) {
+         mix = { ...this.mix, ...mix };
+         if (mix.together) this.attach('openai/gpt-oss-120b', new MixTogether({ options, config }));
+         if (mix.cerebras) this.attach('gpt-oss-120b', new MixCerebras({ options, config }));
+         if (mix.groq) this.attach('openai/gpt-oss-120b', new MixGroq({ options, config }));
+         if (mix.openrouter) this.attach('openai/gpt-oss-120b:free', new MixOpenRouter({ options, config }));
          return this;
      }
      opus45({ options = {}, config = {} } = {}) {
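
Note the behavioral change in `gptOss()`: instead of returning at the first enabled provider, every enabled provider is now attached as a fallback, and a per-call `mix` merges over the instance-wide one. A sketch of a per-call override:

```javascript
// Only the Groq variant of gpt-oss-120b is attached for this call;
// other helpers in the chain still follow the instance-wide mix.
const ai = ModelMix.new()
    .gptOss({ mix: { openrouter: false, cerebras: false, groq: true } })
    .kimiK2()
    .addText('Hello');
```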
@@ -218,39 +222,50 @@ class ModelMix {
          return this;
      }
 
-     scout({ options = {}, config = {}, mix = { groq: true, together: false, cerebras: false } } = {}) {
+     scout({ options = {}, config = {}, mix = {} } = {}) {
+         mix = { ...this.mix, ...mix };
          if (mix.groq) this.attach('meta-llama/llama-4-scout-17b-16e-instruct', new MixGroq({ options, config }));
          if (mix.together) this.attach('meta-llama/Llama-4-Scout-17B-16E-Instruct', new MixTogether({ options, config }));
          if (mix.cerebras) this.attach('llama-4-scout-17b-16e-instruct', new MixCerebras({ options, config }));
          return this;
      }
-     maverick({ options = {}, config = {}, mix = { groq: true, together: false, lambda: false } } = {}) {
+     maverick({ options = {}, config = {}, mix = {} } = {}) {
+         mix = { ...this.mix, ...mix };
          if (mix.groq) this.attach('meta-llama/llama-4-maverick-17b-128e-instruct', new MixGroq({ options, config }));
          if (mix.together) this.attach('meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8', new MixTogether({ options, config }));
          if (mix.lambda) this.attach('llama-4-maverick-17b-128e-instruct-fp8', new MixLambda({ options, config }));
          return this;
      }
 
-     deepseekR1({ options = {}, config = {}, mix = { groq: true, together: false, cerebras: false } } = {}) {
+     deepseekR1({ options = {}, config = {}, mix = {} } = {}) {
+         mix = { ...this.mix, ...mix };
          if (mix.groq) this.attach('deepseek-r1-distill-llama-70b', new MixGroq({ options, config }));
          if (mix.together) this.attach('deepseek-ai/DeepSeek-R1', new MixTogether({ options, config }));
          if (mix.cerebras) this.attach('deepseek-r1-distill-llama-70b', new MixCerebras({ options, config }));
+         if (mix.openrouter) this.attach('deepseek/deepseek-r1-0528:free', new MixOpenRouter({ options, config }));
          return this;
      }
 
-     hermes3({ options = {}, config = {}, mix = { lambda: true } } = {}) {
-         this.attach('Hermes-3-Llama-3.1-405B-FP8', new MixLambda({ options, config }));
+     hermes3({ options = {}, config = {}, mix = {} } = {}) {
+         mix = { ...this.mix, ...mix };
+         if (mix.lambda) this.attach('Hermes-3-Llama-3.1-405B-FP8', new MixLambda({ options, config }));
+         if (mix.openrouter) this.attach('nousresearch/hermes-3-llama-3.1-405b:free', new MixOpenRouter({ options, config }));
          return this;
      }
 
-     kimiK2({ options = {}, config = {}, mix = { together: false, groq: true } } = {}) {
+     kimiK2({ options = {}, config = {}, mix = {} } = {}) {
+         mix = { ...this.mix, ...mix };
          if (mix.together) this.attach('moonshotai/Kimi-K2-Instruct-0905', new MixTogether({ options, config }));
          if (mix.groq) this.attach('moonshotai/kimi-k2-instruct-0905', new MixGroq({ options, config }));
+         if (mix.openrouter) this.attach('moonshotai/kimi-k2:free', new MixOpenRouter({ options, config }));
          return this;
      }
 
-     kimiK2think({ options = {}, config = {} } = {}) {
-         return this.attach('moonshotai/Kimi-K2-Thinking', new MixTogether({ options, config }));
+     kimiK2think({ options = {}, config = {}, mix = { together: true } } = {}) {
+         mix = { ...this.mix, ...mix };
+         if (mix.together) this.attach('moonshotai/Kimi-K2-Thinking', new MixTogether({ options, config }));
+         if (mix.openrouter) this.attach('moonshotai/kimi-k2-thinking', new MixOpenRouter({ options, config }));
+         return this;
      }
 
      lmstudio({ options = {}, config = {} } = {}) {
@@ -269,6 +284,31 @@ class ModelMix {
          return this.attach('MiniMax-M2-Stable', new MixMiniMax({ options, config }));
      }
 
+     deepseekV32({ options = {}, config = {}, mix = {} } = {}) {
+         mix = { ...this.mix, ...mix };
+         if (mix.fireworks) this.attach('accounts/fireworks/models/deepseek-v3p2', new MixFireworks({ options, config }));
+         if (mix.openrouter) this.attach('deepseek/deepseek-v3.2', new MixOpenRouter({ options, config }));
+         return this;
+     }
+
+     GLM47({ options = {}, config = {}, mix = { fireworks: true } } = {}) {
+         if (mix.fireworks) this.attach('accounts/fireworks/models/glm-4p7', new MixFireworks({ options, config }));
+         if (mix.openrouter) this.attach('z-ai/glm-4.7', new MixOpenRouter({ options, config }));
+         return this;
+     }
+
+     GLM46({ options = {}, config = {}, mix = { cerebras: true } } = {}) {
+         mix = { ...this.mix, ...mix };
+         if (mix.cerebras) this.attach('zai-glm-4.6', new MixCerebras({ options, config }));
+         return this;
+     }
+
+     GLM45({ options = {}, config = {}, mix = { openrouter: true } } = {}) {
+         mix = { ...this.mix, ...mix };
+         if (mix.openrouter) this.attach('z-ai/glm-4.5-air:free', new MixOpenRouter({ options, config }));
+         return this;
+     }
+
      addText(text, { role = "user" } = {}) {
          const content = [{
              type: "text",
@@ -1147,6 +1187,21 @@ class MixOpenAI extends MixCustom {
      }
  }
 
+ class MixOpenRouter extends MixOpenAI {
+     getDefaultConfig(customConfig) {
+
+         if (!process.env.OPENROUTER_API_KEY) {
+             throw new Error('OpenRouter API key not found. Please provide it in config or set OPENROUTER_API_KEY environment variable.');
+         }
+
+         return MixCustom.prototype.getDefaultConfig.call(this, {
+             url: 'https://openrouter.ai/api/v1/chat/completions',
+             apiKey: process.env.OPENROUTER_API_KEY,
+             ...customConfig
+         });
+     }
+ }
+
  class MixAnthropic extends MixCustom {
 
      static thinkingOptions = {
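
`MixOpenRouter` extends `MixOpenAI` for request handling but takes its base config from `MixCustom`, pointed at OpenRouter's OpenAI-compatible endpoint. Since the built-in helpers all route through `attach()`, any OpenRouter model id should work the same way; a hedged sketch (the model id below is illustrative, not one this release ships a helper for):

```javascript
// Illustrative model id; requires OPENROUTER_API_KEY in the environment.
const { ModelMix, MixOpenRouter } = require('modelmix');

const ai = ModelMix.new()
    .attach('mistralai/mistral-7b-instruct:free', new MixOpenRouter({ options: {}, config: {} }))
    .addText('Say hello');
```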
@@ -1584,6 +1639,21 @@ class MixCerebras extends MixCustom {
      }
  }
 
+ class MixFireworks extends MixCustom {
+     getDefaultConfig(customConfig) {
+
+         if (!process.env.FIREWORKS_API_KEY) {
+             throw new Error('Fireworks API key not found. Please provide it in config or set FIREWORKS_API_KEY environment variable.');
+         }
+
+         return super.getDefaultConfig({
+             url: 'https://api.fireworks.ai/inference/v1/chat/completions',
+             apiKey: process.env.FIREWORKS_API_KEY,
+             ...customConfig
+         });
+     }
+ }
+
  class MixGoogle extends MixCustom {
      getDefaultConfig(customConfig) {
          return super.getDefaultConfig({
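
`MixFireworks` follows the same provider template. Worth noting from the method hunks above: `fireworks` is not part of the free-tier defaults, so the Fireworks route of `deepseekV32()` only attaches when enabled explicitly; a sketch:

```javascript
// Requires FIREWORKS_API_KEY; the OpenRouter fallback of deepseekV32()
// still attaches because openrouter defaults to true in the mix.
const ai = ModelMix.new()
    .deepseekV32({ mix: { fireworks: true } })
    .addText('What is the capital of France?');
```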
@@ -1775,4 +1845,4 @@ class MixGoogle extends MixCustom {
      }
  }
 
- module.exports = { MixCustom, ModelMix, MixAnthropic, MixMiniMax, MixOpenAI, MixPerplexity, MixOllama, MixLMStudio, MixGroq, MixTogether, MixGrok, MixCerebras, MixGoogle };
+ module.exports = { MixCustom, ModelMix, MixAnthropic, MixMiniMax, MixOpenAI, MixOpenRouter, MixPerplexity, MixOllama, MixLMStudio, MixGroq, MixTogether, MixGrok, MixCerebras, MixGoogle, MixFireworks };
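
Both new provider classes are exported from the package's CommonJS entry point, so they can be required directly:

```javascript
// New in 4.2.2: MixOpenRouter and MixFireworks are part of the public API.
const { ModelMix, MixOpenRouter, MixFireworks } = require('modelmix');
```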
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
      "name": "modelmix",
-     "version": "4.1.8",
-     "description": "🧬 ModelMix - Unified API for Diverse AI LLM.",
+     "version": "4.2.2",
+     "description": "🧬 Reliable interface with automatic fallback for AI LLMs.",
      "main": "index.js",
      "repository": {
          "type": "git",
@@ -25,19 +25,18 @@
      "gpt5",
      "opus",
      "sonnet",
-     "multimodal",
-     "m2",
+     "openrouter",
      "gemini",
-     "ollama",
+     "glm",
      "lmstudio",
-     "nano",
      "deepseek",
      "oss",
      "k2",
      "reasoning",
      "minimax",
-     "cerebras",
      "thinking",
+     "fireworks",
+     "glm",
      "clasen"
  ],
  "author": "Martin Clasen",