modelmix 3.4.0 → 3.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/MODELS.md CHANGED
@@ -60,6 +60,33 @@ All providers inherit from `MixCustom` base class which provides common function
60
60
  - Removes `max_tokens` and `temperature` for o1/o3 models
61
61
  - Converts image messages to base64 data URLs
62
62
 
63
+ #### Function Calling
64
+
65
+ **CALL**
66
+ ```javascript
67
+ {
68
+ role: 'assistant',
69
+ tool_calls: [
70
+ {
71
+ id: 'call_GibonUAFsx7yHs20AhmzELG9',
72
+ type: 'function',
73
+ function: {
74
+ name: 'brave_web_search',
75
+ arguments: '{"query":"Pope Francis death"}'
76
+ }
77
+ }
78
+ ]
79
+ }
80
+ ```
81
+ **USE**
82
+ ```javascript
83
+ {
84
+ role: "tool",
85
+ tool_call_id: "call_GibonUAFsx7yHs20AhmzELG9",
86
+ content: "Pope Francis death 2022-12-15"
87
+ }
88
+ ```
89
+
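The tool itself is declared on the request's `tools` array. A minimal sketch of the shape this release builds in `MixOpenAI.getOptionsTools` (the `brave_web_search` description and schema here are illustrative, not taken from the package):

```javascript
// Tool declaration for OpenAI-style endpoints (schema illustrative):
const options = {
  tools: [{
    type: 'function',
    function: {
      name: 'brave_web_search',
      description: 'Search the web with the Brave Search API',
      parameters: {
        type: 'object',
        properties: { query: { type: 'string' } },
        required: ['query']
      }
    }
  }]
};
```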
63
90
  ### Anthropic (MixAnthropic)
64
91
  - **Base URL**: `https://api.anthropic.com/v1/messages`
65
92
  - **Input Format**:
@@ -105,6 +132,41 @@ All providers inherit from `MixCustom` base class which provides common function
105
132
  - Uses `x-api-key` header instead of `authorization`
106
133
  - Requires `anthropic-version` header
107
134
 
135
+ #### Function Calling
136
+
137
+ **CALL**
138
+ ```javascript
139
+ {
140
+ role: 'assistant',
141
+ content: [
142
+ {
143
+ type: 'text',
144
+ text: "I'll search for information about Pope Francis's death."
145
+ },
146
+ {
147
+ type: 'tool_use',
148
+ id: 'toolu_018YeoPLbQwE6WKLSJipkGLE',
149
+ name: 'brave_web_search',
150
+ input: { query: 'When did Pope Francis die?' }
151
+ }
152
+ ]
153
+ }
154
+ ```
155
+ **USE**
156
+ ```javascript
157
+ {
158
+ role: 'user',
159
+ content: [
160
+ {
161
+ type: 'tool_result',
162
+ tool_use_id: 'toolu_018YeoPLbQwE6WKLSJipkGLE',
163
+ content: 'Pope Francis died on April 21, 2025.'
164
+ }
165
+ ]
166
+ }
167
+ ```
168
+
169
+
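The matching tool declaration uses Anthropic's `input_schema` field; this release emits it with `type: 'custom'` (see `MixAnthropic.getOptionsTools`). A minimal sketch, with an illustrative schema:

```javascript
// Tool declaration for Anthropic (schema illustrative):
const options = {
  tools: [{
    type: 'custom',
    name: 'brave_web_search',
    description: 'Search the web with the Brave Search API',
    input_schema: {
      type: 'object',
      properties: { query: { type: 'string' } },
      required: ['query']
    }
  }]
};
```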
108
170
  ### Perplexity (MixPerplexity)
109
171
  - **Base URL**: `https://api.perplexity.ai/chat/completions`
110
172
  - **Input Format**: Same as OpenAI
@@ -196,6 +258,40 @@ All providers inherit from `MixCustom` base class which provides common function
196
258
  - Pro: More capable, better for complex tasks
197
259
  - Pro Exp: Experimental version with latest features
198
260
 
261
+ #### Function Calling
262
+
263
+ **CALL**
264
+ ```javascript
265
+ {
266
+ role: 'model',
267
+ parts: [
268
+ {
269
+ functionCall: {
270
+ name: 'getWeather',
271
+ args: { city: 'Tokyo' },
272
+ }
273
+ },
274
+ ],
275
+ }
276
+ ```
277
+
278
+ **USE**
279
+ ```javascript
280
+ {
281
+ role: 'user',
282
+ parts: [
283
+ {
284
+ functionResponse: {
285
+ name: 'getWeather',
286
+ response: {
287
+ output: '20 degrees',
288
+ },
289
+ }
290
+ },
291
+ ],
292
+ }
293
+ ```
294
+
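Gemini receives the same tools wrapped in `functionDeclarations` (see `MixGoogle.getOptionsTools`). A minimal sketch, with an illustrative schema:

```javascript
// Tool declaration for Gemini (schema illustrative):
const options = {
  tools: [{
    functionDeclarations: [{
      name: 'getWeather',
      description: 'Get the current weather for a city',
      parameters: {
        type: 'object',
        properties: { city: { type: 'string' } },
        required: ['city']
      }
    }]
  }]
};
```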
199
295
  ### Cerebras (MixCerebras)
200
296
  - **Base URL**: `https://api.cerebras.ai/v1/chat/completions`
201
297
  - **Input Format**: Same as Together
package/README.md CHANGED
@@ -1,47 +1,43 @@
1
1
  # 🧬 ModelMix: Unified API for Diverse AI LLM
2
2
 
3
- **ModelMix** is a versatile module that enables seamless integration of various language models from different providers through a unified interface. With ModelMix, you can effortlessly manage and utilize multiple AI models while controlling request rates to avoid provider restrictions.
3
+ **ModelMix** is a versatile module that enables seamless integration of various language models from different providers through a unified interface. With ModelMix, you can effortlessly manage and utilize multiple AI models while controlling request rates to avoid provider restrictions. The module also supports the Model Context Protocol (MCP), allowing you to enhance your models with powerful capabilities like web search, code execution, and custom functions.
4
4
 
5
- Are you one of those developers who wants to apply language models to everything? Do you need a reliable fallback system to ensure your application never fails? ModelMix is the answer! It allows you to chain multiple models together, automatically falling back to the next model if one fails, ensuring your application always gets a response.
5
+ Ever found yourself wanting to integrate AI models into your projects but worried about reliability? ModelMix helps you build resilient AI applications by chaining multiple models together. If one model fails, it automatically switches to the next one, ensuring your application keeps running smoothly.
6
6
 
7
7
  ## ✨ Features
8
8
 
9
9
  - **Unified Interface**: Interact with multiple AI models through a single, coherent API.
10
10
  - **Request Rate Control**: Manage the rate of requests to adhere to provider limitations using Bottleneck.
11
- - **Flexible Integration**: Easily integrate popular models like OpenAI, Anthropic, Perplexity, Groq, Together AI, Ollama, LM Studio, Google Gemini or custom models.
11
+ - **Flexible Integration**: Easily integrate popular models like OpenAI, Anthropic, Gemini, Perplexity, Groq, Together AI, Lambda, Ollama, LM Studio or custom models.
12
12
  - **History Tracking**: Automatically logs the conversation history with model responses, allowing you to limit the number of historical messages with `max_history`.
13
13
  - **Model Fallbacks**: Automatically try different models if one fails or is unavailable.
14
14
  - **Chain Multiple Models**: Create powerful chains of models that work together, with automatic fallback if one fails.
15
-
16
- ## 📦 Installation
17
-
18
- First, install the ModelMix package:
19
-
20
- ```bash
21
- npm install modelmix
22
- ```
23
-
24
- Recommended: install dotenv to manage environment variables:
25
-
26
- ```bash
27
- npm install dotenv
28
- ```
15
+ - **Model Context Protocol (MCP) Support**: Seamlessly integrate external tools and capabilities like web search, code execution, or custom functions through the Model Context Protocol standard.
29
16
 
30
17
  ## 🛠️ Usage
31
18
 
32
- Here's a quick example to get you started:
33
-
34
- 1. **Setup your environment variables (.env file)**:
35
- ```plaintext
36
- OPENAI_API_KEY="your_openai_api_key"
37
- ANTHROPIC_API_KEY="your_anthropic_api_key"
38
- PPLX_API_KEY="your_perplexity_api_key"
39
- GROQ_API_KEY="your_groq_api_key"
40
- TOGETHER_API_KEY="your_together_api_key"
41
- GOOGLE_API_KEY="your_google_api_key"
42
- ```
19
+ 1. **Install the ModelMix package:**
20
+ We also recommend installing dotenv to manage environment variables:
21
+
22
+ ```bash
23
+ npm install modelmix dotenv
24
+ ```
25
+
26
+ 2. **Setup your environment variables (.env file)**:
27
+ ```plaintext
28
+ ANTHROPIC_API_KEY="sk-ant-..."
29
+ OPENAI_API_KEY="sk-proj-..."
30
+ PPLX_API_KEY="pplx-..."
31
+ GROQ_API_KEY="gsk_..."
32
+ TOGETHER_API_KEY="49a96..."
33
+ XAI_API_KEY="xai-..."
34
+ CEREBRAS_API_KEY="csk-..."
35
+ GOOGLE_API_KEY="AIza..."
36
+ LAMBDA_API_KEY="secret_..."
37
+ BRAVE_API_KEY="BSA0..._fm"
38
+ ```
43
39
 
44
- 2. **Create and configure your models**:
40
+ 3. **Create and configure your models**:
45
41
 
46
42
  ```javascript
47
43
  import 'dotenv/config';
@@ -56,16 +52,17 @@ const outputExample = { countries: [{ name: "", capital: "" }] };
56
52
  console.log(await model.json(outputExample));
57
53
  ```
58
54
 
55
+ **Basic setup with system prompt and debug mode**
59
56
  ```javascript
60
- // Basic setup with system prompt and debug mode
61
57
  const setup = {
62
58
  config: {
63
59
  system: "You are ALF, if they ask your name, respond with 'ALF'.",
64
60
  debug: true
65
61
  }
66
62
  };
67
-
68
- // Chain multiple models with automatic fallback
63
+ ```
64
+ **Chain multiple models with automatic fallback**
65
+ ```javascript
69
66
  const model = await ModelMix.new(setup)
70
67
  .sonnet37think() // (main model) Anthropic claude-3-7-sonnet-20250219
71
68
  .o4mini() // (fallback 1) OpenAI o4-mini
@@ -77,8 +74,8 @@ const model = await ModelMix.new(setup)
77
74
  console.log(await model.message());
78
75
  ```
79
76
 
77
+ **Use Perplexity to get the price of ETH**
80
78
  ```javascript
81
-
82
79
  const ETH = ModelMix.new()
83
80
  .sonar() // Perplexity sonar
84
81
  .addText('How much is ETH trading in USD?')
@@ -92,6 +89,29 @@ This pattern allows you to:
92
89
  - Get structured JSON responses when needed
93
90
  - Keep your code clean and maintainable
94
91
 
92
+ ## 🔧 Model Context Protocol (MCP) Integration
93
+
94
+ ModelMix makes it incredibly easy to enhance your AI models with powerful capabilities through the Model Context Protocol. With just a few lines of code, you can add features like web search, code execution, or any custom functionality to your models.
95
+
96
+ ### Example: Adding Web Search Capability
97
+
98
+ ```javascript
99
+ const mmix = ModelMix.new({ config: { max_history: 10 } }).gpt41nano();
100
+ mmix.setSystem('You are an assistant and today is ' + new Date().toISOString());
101
+
102
+ // Add web search capability through MCP
103
+ await mmix.addMCP('@modelcontextprotocol/server-brave-search');
104
+ mmix.addText('Use Internet: When did the last Christian pope die?');
105
+ console.log(await mmix.message());
106
+ ```
107
+
108
+ This simple integration allows your model to:
109
+ - Search the web in real-time
110
+ - Access up-to-date information
111
+ - Combine AI reasoning with external data
112
+
113
+ The Model Context Protocol makes it easy to add any capability to your models, from web search to code execution, database queries, or custom functions. All with just a few lines of code!
114
+
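Under the hood, `addMCP` launches the server with `npx -y <package> [...args]` and forwards your environment (minus the LLM provider API keys) to the child process, which is how `BRAVE_API_KEY` reaches the Brave search server. Any extra arguments are passed through to the server; for example (a sketch — the filesystem server and its path argument are illustrative, not part of this package):

```javascript
// Hypothetical: arguments after the package name are forwarded to the MCP server via npx.
await mmix.addMCP('@modelcontextprotocol/server-filesystem', '/tmp');
```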
95
115
  ## ⚡️ Shorthand Methods
96
116
 
97
117
  ModelMix provides convenient shorthand methods for quickly accessing different AI models.
@@ -120,6 +140,7 @@ Here's a comprehensive list of available methods:
120
140
  | `qwen3()` | Together | Qwen3-235B-A22B-fp8-tput | [\$0.20 / \$0.60][7] |
121
141
  | `scout()` | Groq | Llama-4-Scout-17B-16E-Instruct | [\$0.11 / \$0.34][5] |
122
142
  | `maverick()` | Groq | Maverick-17B-128E-Instruct-FP8 | [\$0.20 / \$0.60][5] |
143
+ | `hermes3()` | Lambda | Hermes-3-Llama-3.1-405B-FP8 | [\$0.80 / \$0.80][8] |
123
144
 
124
145
  [1]: https://openai.com/api/pricing/ "Pricing | OpenAI"
125
146
  [2]: https://docs.anthropic.com/en/docs/about-claude/pricing "Pricing - Anthropic"
@@ -128,6 +149,7 @@ Here's a comprehensive list of available methods:
128
149
  [5]: https://groq.com/pricing/ "Groq Pricing"
129
150
  [6]: https://docs.x.ai/docs/models "xAI"
130
151
  [7]: https://www.together.ai/pricing "Together AI"
152
+ [8]: https://lambda.ai/inference "Lambda Pricing"
131
153
 
132
154
  Each method accepts optional `options` and `config` parameters to customize the model's behavior. For example:
133
155
 
package/demo/default.env CHANGED
@@ -1,6 +1,10 @@
1
- ANTHROPIC_API_KEY=""
2
- OPENAI_API_KEY=""
3
- PPLX_API_KEY=""
4
- GROQ_API_KEY=""
5
- TOGETHER_API_KEY=""
6
- XAI_API_KEY=""
1
+ ANTHROPIC_API_KEY="sk-ant-..."
2
+ OPENAI_API_KEY="sk-proj-..."
3
+ PPLX_API_KEY="pplx-..."
4
+ GROQ_API_KEY="gsk_..."
5
+ TOGETHER_API_KEY="49a96..."
6
+ XAI_API_KEY="xai-..."
7
+ CEREBRAS_API_KEY="csk-..."
8
+ GOOGLE_API_KEY="AIza..."
9
+ LAMBDA_API_KEY="secret_..."
10
+ BRAVE_API_KEY="BSA0..._fm"
package/demo/json.mjs ADDED
@@ -0,0 +1,13 @@
1
+ import 'dotenv/config'
2
+ import { ModelMix } from '../index.js';
3
+
4
+ const model = await ModelMix.new({ config: { debug: true } })
5
+ .scout({ config: { temperature: 0 } })
6
+ .o4mini()
7
+ .sonnet37think()
8
+ .gpt45()
9
+ .gemini25flash()
10
+ .addText("Name and capital of 3 South American countries.")
11
+
12
+ const jsonResult = await model.json({ countries: [{ name: "", capital: "" }] });
13
+ console.log(jsonResult);
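For reference, this demo prints an object matching the template passed to `json()`, along these lines (illustrative values; actual output depends on the model):

```javascript
// Illustrative output shape (values depend on the model):
const expected = {
  countries: [
    { name: 'Argentina', capital: 'Buenos Aires' },
    { name: 'Chile', capital: 'Santiago' },
    { name: 'Peru', capital: 'Lima' }
  ]
};
```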
package/demo/mcp.mjs ADDED
@@ -0,0 +1,11 @@
1
+ import 'dotenv/config';
2
+ import { ModelMix } from '../index.js';
3
+
4
+ const mmix = ModelMix.new({ config: { max_history: 10 } }).gpt41nano();
5
+ mmix.setSystem('You are an assistant and today is ' + new Date().toISOString());
6
+
7
+ // Add web search capability through MCP
8
+ await mmix.addMCP('@modelcontextprotocol/server-brave-search');
9
+
10
+ mmix.addText('Use Internet: When did the last Christian pope die?');
11
+ console.log(await mmix.message());
package/demo/package-lock.json CHANGED
@@ -10,7 +10,8 @@
10
10
  "license": "ISC",
11
11
  "dependencies": {
12
12
  "@anthropic-ai/sdk": "^0.20.9",
13
- "dotenv": "^16.5.0"
13
+ "dotenv": "^16.5.0",
14
+ "lemonlog": "^1.1.4"
14
15
  }
15
16
  },
16
17
  ".api/apis/pplx": {
@@ -93,6 +94,23 @@
93
94
  "node": ">= 0.8"
94
95
  }
95
96
  },
97
+ "node_modules/debug": {
98
+ "version": "4.4.1",
99
+ "resolved": "https://registry.npmjs.org/debug/-/debug-4.4.1.tgz",
100
+ "integrity": "sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ==",
101
+ "license": "MIT",
102
+ "dependencies": {
103
+ "ms": "^2.1.3"
104
+ },
105
+ "engines": {
106
+ "node": ">=6.0"
107
+ },
108
+ "peerDependenciesMeta": {
109
+ "supports-color": {
110
+ "optional": true
111
+ }
112
+ }
113
+ },
96
114
  "node_modules/delayed-stream": {
97
115
  "version": "1.0.0",
98
116
  "resolved": "https://registry.npmjs.org/delayed-stream/-/delayed-stream-1.0.0.tgz",
@@ -167,6 +185,15 @@
167
185
  "ms": "^2.0.0"
168
186
  }
169
187
  },
188
+ "node_modules/lemonlog": {
189
+ "version": "1.1.4",
190
+ "resolved": "https://registry.npmjs.org/lemonlog/-/lemonlog-1.1.4.tgz",
191
+ "integrity": "sha512-NWcXK7Nl+K5E0xxzcux9ktR9hiRSSjK0xpFbuAm/qy/wg5TuIYGfg+lIDWthVvFscrSsYCPLyPk/AvxY+w7n6A==",
192
+ "license": "MIT",
193
+ "dependencies": {
194
+ "debug": "^4.1.1"
195
+ }
196
+ },
170
197
  "node_modules/mime-db": {
171
198
  "version": "1.52.0",
172
199
  "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz",
package/demo/package.json CHANGED
@@ -10,6 +10,7 @@
10
10
  "license": "ISC",
11
11
  "dependencies": {
12
12
  "@anthropic-ai/sdk": "^0.20.9",
13
- "dotenv": "^16.5.0"
13
+ "dotenv": "^16.5.0",
14
+ "lemonlog": "^1.1.4"
14
15
  }
15
16
  }
package/demo/short.mjs CHANGED
@@ -9,26 +9,12 @@ const setup = {
9
9
  }
10
10
  };
11
11
 
12
- const result = await ModelMix.new(setup)
13
- .scout({ config: { temperature: 0 } })
14
- .addText("What's your name?")
15
- .message();
16
-
17
- console.log(result);
18
-
19
- const model = await ModelMix.new({ config: { debug: true } })
20
- .scout({ config: { temperature: 0 } })
21
- .o4mini()
22
- .sonnet37think()
23
- .gpt45()
24
- .gemini25flash()
25
- .addText("Name and capital of 3 South American countries.")
26
-
27
- const jsonResult = await model.json({ countries: [{ name: "", capital: "" }] });
28
-
29
- console.log(jsonResult);
30
-
31
- model.addText("Name and capital of 1 South American countries.")
32
-
33
- const jsonResult2 = await model.json({ countries: [{ name: "", capital: "" }] });
34
- console.log(jsonResult2);
12
+ const mmix = await ModelMix.new(setup)
13
+ .sonnet37think() // (main model) Anthropic claude-3-7-sonnet-20250219
14
+ .o4mini() // (fallback 1) OpenAI o4-mini
15
+ .gemini25proExp({ config: { temperature: 0 } }) // (fallback 2) Google gemini-2.5-pro-exp-03-25
16
+ .gpt41nano() // (fallback 3) OpenAI gpt-4.1-nano
17
+ .grok3mini() // (fallback 4) Grok grok-3-mini-beta
18
+ .addText("What's your name?");
19
+
20
+ console.log(await mmix.message());
package/index.js CHANGED
@@ -5,11 +5,16 @@ const log = require('lemonlog')('ModelMix');
5
5
  const Bottleneck = require('bottleneck');
6
6
  const path = require('path');
7
7
  const generateJsonSchema = require('./schema');
8
+ const { Client } = require("@modelcontextprotocol/sdk/client/index.js");
9
+ const { StdioClientTransport } = require("@modelcontextprotocol/sdk/client/stdio.js");
8
10
 
9
11
  class ModelMix {
10
12
  constructor({ options = {}, config = {} } = {}) {
11
13
  this.models = [];
12
14
  this.messages = [];
15
+ this.tools = {};
16
+ this.toolClient = {};
17
+ this.mcp = {};
13
18
  this.options = {
14
19
  max_tokens: 5000,
15
20
  temperature: 1, // 1 --> More creative, 0 --> More deterministic.
@@ -129,8 +134,9 @@ class ModelMix {
129
134
  return this.attach('grok-3-mini-beta', new MixGrok({ options, config }));
130
135
  }
131
136
 
132
- qwen3({ options = {}, config = {}, mix = { together: true } } = {}) {
137
+ qwen3({ options = {}, config = {}, mix = { together: true, cerebras: false } } = {}) {
133
138
  if (mix.together) this.attach('Qwen/Qwen3-235B-A22B-fp8-tput', new MixTogether({ options, config }));
139
+ if (mix.cerebras) this.attach('qwen-3-32b', new MixCerebras({ options, config }));
134
140
  return this;
135
141
  }
136
142
 
@@ -140,9 +146,10 @@ class ModelMix {
140
146
  if (mix.cerebras) this.attach('llama-4-scout-17b-16e-instruct', new MixCerebras({ options, config }));
141
147
  return this;
142
148
  }
143
- maverick({ options = {}, config = {}, mix = { groq: true, together: false } } = {}) {
149
+ maverick({ options = {}, config = {}, mix = { groq: true, together: false, lambda: false } } = {}) {
144
150
  if (mix.groq) this.attach('meta-llama/llama-4-maverick-17b-128e-instruct', new MixGroq({ options, config }));
145
151
  if (mix.together) this.attach('meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8', new MixTogether({ options, config }));
152
+ if (mix.lambda) this.attach('llama-4-maverick-17b-128e-instruct-fp8', new MixLambda({ options, config }));
146
153
  return this;
147
154
  }
148
155
 
@@ -153,6 +160,11 @@ class ModelMix {
153
160
  return this;
154
161
  }
155
162
 
163
+ hermes3({ options = {}, config = {}, mix = { lambda: true } } = {}) {
164
+ if (mix.lambda) this.attach('Hermes-3-Llama-3.1-405B-FP8', new MixLambda({ options, config }));
165
+ return this;
166
+ }
167
+
156
168
  addText(text, { role = "user" } = {}) {
157
169
  const content = [{
158
170
  type: "text",
@@ -319,10 +331,11 @@ class ModelMix {
319
331
  groupByRoles(messages) {
320
332
  return messages.reduce((acc, currentMessage, index) => {
321
333
  if (index === 0 || currentMessage.role !== messages[index - 1].role) {
322
- acc.push({
323
- role: currentMessage.role,
324
- content: currentMessage.content
325
- });
334
+ // acc.push({
335
+ // role: currentMessage.role,
336
+ // content: currentMessage.content
337
+ // });
338
+ acc.push(currentMessage);
326
339
  } else {
327
340
  acc[acc.length - 1].content = acc[acc.length - 1].content.concat(currentMessage.content);
328
341
  }
@@ -390,10 +403,12 @@ class ModelMix {
390
403
  const currentModel = this.models[i];
391
404
  const currentModelKey = currentModel.key;
392
405
  const providerInstance = currentModel.provider;
406
+ const optionsTools = providerInstance.getOptionsTools(this.tools);
393
407
 
394
408
  let options = {
395
409
  ...this.options,
396
410
  ...providerInstance.options,
411
+ ...optionsTools,
397
412
  model: currentModelKey
398
413
  };
399
414
 
@@ -414,7 +429,29 @@ class ModelMix {
414
429
 
415
430
  const result = await providerInstance.create({ options, config });
416
431
 
417
- this.messages.push({ role: "assistant", content: result.message });
432
+ if (result.toolCalls.length > 0) {
433
+
434
+ if (result.message) {
435
+ if (result.signature) {
436
+ this.messages.push({
437
+ role: "assistant", content: [{
438
+ type: "thinking",
439
+ thinking: result.think,
440
+ signature: result.signature
441
+ }]
442
+ });
443
+ } else {
444
+ this.addText(result.message, { role: "assistant" });
445
+ }
446
+ }
447
+
448
+ this.messages.push({ role: "assistant", content: result.toolCalls, tool_calls: result.toolCalls });
449
+
450
+ const content = await this.processToolCalls(result.toolCalls);
451
+ this.messages.push({ role: 'tool', content });
452
+
453
+ return this.execute();
454
+ }
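For reference, one tool round-trip appended by the block above leaves the history in roughly this shape before `execute()` retries (field names taken from the pushes above; values illustrative):

```javascript
// Illustrative history entries after one tool round-trip:
const appended = [
  {
    role: 'assistant',
    content: [{ id: 'call_...', type: 'function', function: { name: 'brave_web_search', arguments: '{"query":"..."}' } }],
    tool_calls: [/* same array as content */]
  },
  { role: 'tool', content: [{ name: 'brave_web_search', tool_call_id: 'call_...', content: '...' }] }
];
```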
418
455
 
419
456
  if (config.debug) {
420
457
  log.debug(`Request successful with model: ${currentModelKey}`);
@@ -444,6 +481,69 @@ class ModelMix {
444
481
  throw lastError || new Error("Failed to get response from any model, and no specific error was caught.");
445
482
  });
446
483
  }
484
+
485
+ async processToolCalls(toolCalls) {
486
+ const result = []
487
+
488
+ for (const toolCall of toolCalls) {
489
+ const client = this.toolClient[toolCall.function.name];
490
+
491
+ const response = await client.callTool({
492
+ name: toolCall.function.name,
493
+ arguments: JSON.parse(toolCall.function.arguments)
494
+ });
495
+
496
+ result.push({
497
+ name: toolCall.function.name,
498
+ tool_call_id: toolCall.id,
499
+ content: response.content.map(item => item.text).join("\n")
500
+ });
501
+ }
502
+ return result;
503
+ }
504
+
505
+ async addMCP() {
506
+
507
+ const key = arguments[0];
508
+
509
+ if (this.mcp[key]) {
510
+ log.info(`MCP ${key} already attached.`);
511
+ return;
512
+ }
513
+
514
+ if (this.config.max_history < 3) {
515
+ log.warn(`MCP ${key} requires at least 3 max_history. Setting to 3.`);
516
+ this.config.max_history = 3;
517
+ }
518
+
519
+ const env = {}
520
+ for (const key in process.env) {
521
+ if (['OPENAI', 'ANTHR', 'GOOGLE', 'GROQ', 'TOGET', 'LAMBDA', 'PPLX', 'XAI', 'CEREBR'].some(prefix => key.startsWith(prefix))) continue;
522
+ env[key] = process.env[key];
523
+ }
524
+
525
+ const transport = new StdioClientTransport({
526
+ command: "npx",
527
+ args: ["-y", ...arguments],
528
+ env
529
+ });
530
+
531
+ // Crear el cliente MCP
532
+ this.mcp[key] = new Client({
533
+ name: key,
534
+ version: "1.0.0"
535
+ });
536
+
537
+ await this.mcp[key].connect(transport);
538
+
539
+ const { tools } = await this.mcp[key].listTools();
540
+ this.tools[key] = tools;
541
+
542
+ for (const tool of tools) {
543
+ this.toolClient[tool.name] = this.mcp[key];
544
+ }
545
+
546
+ }
447
547
  }
448
548
 
449
549
  class MixCustom {
@@ -477,8 +577,15 @@ class MixCustom {
477
577
  };
478
578
  }
479
579
 
580
+ convertMessages(messages, config) {
581
+ return MixOpenAI.convertMessages(messages, config);
582
+ }
583
+
480
584
  async create({ config = {}, options = {} } = {}) {
481
585
  try {
586
+
587
+ options.messages = this.convertMessages(options.messages, config);
588
+
482
589
  if (config.debug) {
483
590
  log.debug("config");
484
591
  log.info(config);
@@ -562,33 +669,64 @@ class MixCustom {
562
669
  }
563
670
 
564
671
  extractDelta(data) {
565
- if (data.choices && data.choices[0].delta.content) return data.choices[0].delta.content;
566
- return '';
672
+ return data.choices[0].delta.content;
567
673
  }
568
674
 
569
- extractMessage(data) {
570
- if (data.choices && data.choices[0].message.content) return data.choices[0].message.content.trim();
571
- return '';
675
+ static extractMessage(data) {
676
+ const message = data.choices[0].message?.content?.trim() || '';
677
+ const endTagIndex = message.indexOf('</think>');
678
+ if (message.startsWith('<think>') && endTagIndex !== -1) {
679
+ return message.substring(endTagIndex + 8).trim();
680
+ }
681
+ return message;
572
682
  }
573
683
 
574
- processResponse(response) {
575
- let message = this.extractMessage(response.data);
576
-
577
- if (message.startsWith('<think>')) {
578
- const endTagIndex = message.indexOf('</think>');
579
- if (endTagIndex !== -1) {
580
- const think = message.substring(7, endTagIndex).trim();
581
- message = message.substring(endTagIndex + 8).trim();
582
- return { response: response.data, message, think };
684
+ static extractThink(data) {
685
+
686
+ if (data.choices[0].message?.reasoning_content) {
687
+ return data.choices[0].message.reasoning_content;
688
+ }
689
+
690
+ const message = data.choices[0].message?.content?.trim() || '';
691
+ const endTagIndex = message.indexOf('</think>');
692
+ if (message.startsWith('<think>') && endTagIndex !== -1) {
693
+ return message.substring(7, endTagIndex).trim();
694
+ }
695
+ return null;
696
+ }
697
+
698
+ static extractToolCalls(data) {
699
+ return data.choices[0].message?.tool_calls?.map(call => ({
700
+ id: call.id,
701
+ type: 'function',
702
+ function: {
703
+ name: call.function.name,
704
+ arguments: call.function.arguments
583
705
  }
706
+ })) || []
707
+ }
708
+
709
+ processResponse(response) {
710
+ return {
711
+ message: MixCustom.extractMessage(response.data),
712
+ think: MixCustom.extractThink(response.data),
713
+ toolCalls: MixCustom.extractToolCalls(response.data),
714
+ response: response.data
584
715
  }
716
+ }
585
717
 
586
- return { response: response.data, message };
718
+ getOptionsTools(tools) {
719
+ return MixOpenAI.getOptionsTools(tools);
587
720
  }
588
721
  }
589
722
 
590
723
  class MixOpenAI extends MixCustom {
591
724
  getDefaultConfig(customConfig) {
725
+
726
+ if (!process.env.OPENAI_API_KEY) {
727
+ throw new Error('OpenAI API key not found. Please provide it in config or set OPENAI_API_KEY environment variable.');
728
+ }
729
+
592
730
  return super.getDefaultConfig({
593
731
  url: 'https://api.openai.com/v1/chat/completions',
594
732
  apiKey: process.env.OPENAI_API_KEY,
@@ -597,9 +735,6 @@ class MixOpenAI extends MixCustom {
597
735
  }
598
736
 
599
737
  async create({ config = {}, options = {} } = {}) {
600
- if (!this.config.apiKey) {
601
- throw new Error('OpenAI API key not found. Please provide it in config or set OPENAI_API_KEY environment variable.');
602
- }
603
738
 
604
739
  // Remove max_tokens and temperature for o1/o3 models
605
740
  if (options.model?.startsWith('o')) {
@@ -607,35 +742,76 @@ class MixOpenAI extends MixCustom {
607
742
  delete options.temperature;
608
743
  }
609
744
 
610
- const content = config.system + config.systemExtra;
611
- options.messages = [{ role: 'system', content }, ...options.messages || []];
612
- options.messages = MixOpenAI.convertMessages(options.messages);
613
745
  return super.create({ config, options });
614
746
  }
615
747
 
616
- static convertMessages(messages) {
617
- return messages.map(message => {
618
- if (message.role === 'user' && message.content instanceof Array) {
619
- message.content = message.content.map(content => {
748
+ static convertMessages(messages, config) {
749
+
750
+ const content = config.system + config.systemExtra;
751
+ messages = [{ role: 'system', content }, ...messages || []];
752
+
753
+ const results = []
754
+ for (const message of messages) {
755
+
756
+ if (message.tool_calls) {
757
+ results.push({ role: 'assistant', tool_calls: message.tool_calls })
758
+ continue;
759
+ }
760
+
761
+ if (message.role === 'tool') {
762
+ for (const content of message.content) {
763
+ results.push({ role: 'tool', ...content })
764
+ }
765
+ continue;
766
+ }
767
+
768
+ if (Array.isArray(message.content))
769
+ for (const content of message.content) {
620
770
  if (content.type === 'image') {
621
771
  const { type, media_type, data } = content.source;
622
- return {
772
+ message.content = {
623
773
  type: 'image_url',
624
774
  image_url: {
625
775
  url: `data:${media_type};${type},${data}`
626
776
  }
627
777
  };
628
778
  }
629
- return content;
779
+ }
780
+
781
+ results.push(message);
782
+ }
783
+ return results;
784
+ }
785
+
786
+ static getOptionsTools(tools) {
787
+ const options = {};
788
+ options.tools = [];
789
+ for (const tool in tools) {
790
+ for (const item of tools[tool]) {
791
+ options.tools.push({
792
+ type: 'function',
793
+ function: {
794
+ name: item.name,
795
+ description: item.description,
796
+ parameters: item.inputSchema
797
+ }
630
798
  });
631
799
  }
632
- return message;
633
- });
800
+ }
801
+
802
+ // options.tool_choice = "auto";
803
+
804
+ return options;
634
805
  }
635
806
  }
636
807
 
637
808
  class MixAnthropic extends MixCustom {
638
809
  getDefaultConfig(customConfig) {
810
+
811
+ if (!process.env.ANTHROPIC_API_KEY) {
812
+ throw new Error('Anthropic API key not found. Please provide it in config or set ANTHROPIC_API_KEY environment variable.');
813
+ }
814
+
639
815
  return super.getDefaultConfig({
640
816
  url: 'https://api.anthropic.com/v1/messages',
641
817
  apiKey: process.env.ANTHROPIC_API_KEY,
@@ -644,9 +820,6 @@ class MixAnthropic extends MixCustom {
644
820
  }
645
821
 
646
822
  async create({ config = {}, options = {} } = {}) {
647
- if (!this.config.apiKey) {
648
- throw new Error('Anthropic API key not found. Please provide it in config or set ANTHROPIC_API_KEY environment variable.');
649
- }
650
823
 
651
824
  // Remove top_p for thinking
652
825
  if (options.thinking) {
@@ -659,6 +832,39 @@ class MixAnthropic extends MixCustom {
659
832
  return super.create({ config, options });
660
833
  }
661
834
 
835
+ convertMessages(messages, config) {
836
+ return MixAnthropic.convertMessages(messages, config);
837
+ }
838
+
839
+ static convertMessages(messages, config) {
840
+ return messages.map(message => {
841
+ if (message.role === 'tool') {
842
+ return {
843
+ role: "user",
844
+ content: message.content.map(content => ({
845
+ type: "tool_result",
846
+ tool_use_id: content.tool_call_id,
847
+ content: content.content
848
+ }))
849
+ }
850
+ }
851
+
852
+ message.content = message.content.map(content => {
853
+ if (content.type === 'function') {
854
+ return {
855
+ type: 'tool_use',
856
+ id: content.id,
857
+ name: content.function.name,
858
+ input: JSON.parse(content.function.arguments)
859
+ }
860
+ }
861
+ return content;
862
+ });
863
+
864
+ return message;
865
+ });
866
+ }
867
+
662
868
  getDefaultHeaders(customHeaders) {
663
869
  return super.getDefaultHeaders({
664
870
  'x-api-key': this.config.apiKey,
@@ -672,29 +878,77 @@ class MixAnthropic extends MixCustom {
672
878
  return '';
673
879
  }
674
880
 
675
- processResponse(response) {
676
- if (response.data.content) {
881
+ static extractToolCalls(data) {
677
882
 
678
- if (response.data.content?.[1]?.text) {
883
+ return data.content.map(item => {
884
+ if (item.type === 'tool_use') {
679
885
  return {
680
- think: response.data.content[0]?.thinking,
681
- message: response.data.content[1].text,
682
- response: response.data
683
- }
886
+ id: item.id,
887
+ type: 'function',
888
+ function: {
889
+ name: item.name,
890
+ arguments: JSON.stringify(item.input)
891
+ }
892
+ };
684
893
  }
894
+ return null;
895
+ }).filter(item => item !== null);
896
+ }
685
897
 
686
- if (response.data.content[0].text) {
687
- return {
688
- message: response.data.content[0].text,
689
- response: response.data
690
- }
898
+ static extractMessage(data) {
899
+ if (data.content?.[1]?.text) {
900
+ return data.content[1].text;
901
+ }
902
+ return data.content[0].text;
903
+ }
904
+
905
+ static extractThink(data) {
906
+ return data.content[0]?.thinking || null;
907
+ }
908
+
909
+ static extractSignature(data) {
910
+ return data.content[0]?.signature || null;
911
+ }
912
+
913
+ processResponse(response) {
914
+ return {
915
+ message: MixAnthropic.extractMessage(response.data),
916
+ think: MixAnthropic.extractThink(response.data),
917
+ toolCalls: MixAnthropic.extractToolCalls(response.data),
918
+ response: response.data,
919
+ signature: MixAnthropic.extractSignature(response.data)
920
+ }
921
+ }
922
+
923
+ getOptionsTools(tools) {
924
+ return MixAnthropic.getOptionsTools(tools);
925
+ }
926
+
927
+ static getOptionsTools(tools) {
928
+ const options = {};
929
+ options.tools = [];
930
+ for (const tool in tools) {
931
+ for (const item of tools[tool]) {
932
+ options.tools.push({
933
+ type: 'custom',
934
+ name: item.name,
935
+ description: item.description,
936
+ input_schema: item.inputSchema
937
+ });
691
938
  }
692
939
  }
940
+
941
+ return options;
693
942
  }
694
943
  }
695
944
 
696
945
  class MixPerplexity extends MixCustom {
697
946
  getDefaultConfig(customConfig) {
947
+
948
+ if (!process.env.PPLX_API_KEY) {
949
+ throw new Error('Perplexity API key not found. Please provide it in config or set PPLX_API_KEY environment variable.');
950
+ }
951
+
698
952
  return super.getDefaultConfig({
699
953
  url: 'https://api.perplexity.ai/chat/completions',
700
954
  apiKey: process.env.PPLX_API_KEY,
@@ -713,10 +967,6 @@ class MixPerplexity extends MixCustom {
713
967
  };
714
968
  }
715
969
 
716
- if (!this.config.apiKey) {
717
- throw new Error('Perplexity API key not found. Please provide it in config or set PPLX_API_KEY environment variable.');
718
- }
719
-
720
970
  const content = config.system + config.systemExtra;
721
971
  options.messages = [{ role: 'system', content }, ...options.messages || []];
722
972
  return super.create({ config, options });
@@ -743,19 +993,18 @@ class MixOllama extends MixCustom {
743
993
  return '';
744
994
  }
745
995
 
746
- async create({ config = {}, options = {} } = {}) {
747
-
748
- options.messages = MixOllama.convertMessages(options.messages);
749
- const content = config.system + config.systemExtra;
750
- options.messages = [{ role: 'system', content }, ...options.messages || []];
751
- return super.create({ config, options });
752
- }
753
-
754
996
  extractMessage(data) {
755
997
  return data.message.content.trim();
756
998
  }
757
999
 
758
- static convertMessages(messages) {
1000
+ convertMessages(messages, config) {
1001
+ return MixOllama.convertMessages(messages, config);
1002
+ }
1003
+
1004
+ static convertMessages(messages, config) {
1005
+ const content = config.system + config.systemExtra;
1006
+ messages = [{ role: 'system', content }, ...messages || []];
1007
+
759
1008
  return messages.map(entry => {
760
1009
  let content = '';
761
1010
  let images = [];
@@ -779,26 +1028,31 @@ class MixOllama extends MixCustom {
779
1028
 
780
1029
  class MixGrok extends MixOpenAI {
781
1030
  getDefaultConfig(customConfig) {
1031
+
1032
+ if (!process.env.XAI_API_KEY) {
1033
+ throw new Error('Grok API key not found. Please provide it in config or set XAI_API_KEY environment variable.');
1034
+ }
1035
+
782
1036
  return super.getDefaultConfig({
783
1037
  url: 'https://api.x.ai/v1/chat/completions',
784
1038
  apiKey: process.env.XAI_API_KEY,
785
1039
  ...customConfig
786
1040
  });
787
1041
  }
1042
+ }
788
1043
 
789
- processResponse(response) {
790
- const message = this.extractMessage(response.data);
791
-
792
- const output = {
793
- message: message,
794
- response: response.data
795
- }
1044
+ class MixLambda extends MixCustom {
1045
+ getDefaultConfig(customConfig) {
796
1046
 
797
- if (response.data.choices[0].message.reasoning_content) {
798
- output.think = response.data.choices[0].message.reasoning_content;
1047
+ if (!process.env.LAMBDA_API_KEY) {
1048
+ throw new Error('Lambda API key not found. Please provide it in config or set LAMBDA_API_KEY environment variable.');
799
1049
  }
800
1050
 
801
- return output;
1051
+ return super.getDefaultConfig({
1052
+ url: 'https://api.lambda.ai/v1/chat/completions',
1053
+ apiKey: process.env.LAMBDA_API_KEY,
1054
+ ...customConfig
1055
+ });
802
1056
  }
803
1057
  }
804
1058
 
@@ -809,38 +1063,30 @@ class MixLMStudio extends MixCustom {
809
1063
  ...customConfig
810
1064
  });
811
1065
  }
812
-
813
- async create({ config = {}, options = {} } = {}) {
814
- const content = config.system + config.systemExtra;
815
- options.messages = [{ role: 'system', content }, ...options.messages || []];
816
- options.messages = MixOpenAI.convertMessages(options.messages);
817
- return super.create({ config, options });
818
- }
819
1066
  }
820
1067
 
821
1068
  class MixGroq extends MixCustom {
822
1069
  getDefaultConfig(customConfig) {
1070
+
1071
+ if (!process.env.GROQ_API_KEY) {
1072
+ throw new Error('Groq API key not found. Please provide it in config or set GROQ_API_KEY environment variable.');
1073
+ }
1074
+
823
1075
  return super.getDefaultConfig({
824
1076
  url: 'https://api.groq.com/openai/v1/chat/completions',
825
1077
  apiKey: process.env.GROQ_API_KEY,
826
1078
  ...customConfig
827
1079
  });
828
1080
  }
829
-
830
- async create({ config = {}, options = {} } = {}) {
831
- if (!this.config.apiKey) {
832
- throw new Error('Groq API key not found. Please provide it in config or set GROQ_API_KEY environment variable.');
833
- }
834
-
835
- const content = config.system + config.systemExtra;
836
- options.messages = [{ role: 'system', content }, ...options.messages || []];
837
- options.messages = MixOpenAI.convertMessages(options.messages);
838
- return super.create({ config, options });
839
- }
840
1081
  }
841
1082
 
842
1083
  class MixTogether extends MixCustom {
843
1084
  getDefaultConfig(customConfig) {
1085
+
1086
+ if (!process.env.TOGETHER_API_KEY) {
1087
+ throw new Error('Together API key not found. Please provide it in config or set TOGETHER_API_KEY environment variable.');
1088
+ }
1089
+
844
1090
  return super.getDefaultConfig({
845
1091
  url: 'https://api.together.xyz/v1/chat/completions',
846
1092
  apiKey: process.env.TOGETHER_API_KEY,
@@ -854,44 +1100,21 @@ class MixTogether extends MixCustom {
854
1100
  ...customOptions
855
1101
  };
856
1102
  }
857
-
858
- static convertMessages(messages) {
859
- return messages.map(message => {
860
- if (message.content instanceof Array) {
861
- message.content = message.content.map(content => content.text).join("\n\n");
862
- }
863
- return message;
864
- });
865
- }
866
-
867
- async create({ config = {}, options = {} } = {}) {
868
- if (!this.config.apiKey) {
869
- throw new Error('Together API key not found. Please provide it in config or set TOGETHER_API_KEY environment variable.');
870
- }
871
-
872
- const content = config.system + config.systemExtra;
873
- options.messages = [{ role: 'system', content }, ...options.messages || []];
874
- options.messages = MixTogether.convertMessages(options.messages);
875
-
876
- return super.create({ config, options });
877
- }
878
1103
  }
879
1104
 
880
1105
  class MixCerebras extends MixCustom {
881
1106
  getDefaultConfig(customConfig) {
1107
+
1108
+ if (!process.env.CEREBRAS_API_KEY) {
1109
+ throw new Error('Cerebras API key not found. Please provide it in config or set CEREBRAS_API_KEY environment variable.');
1110
+ }
1111
+
882
1112
  return super.getDefaultConfig({
883
1113
  url: 'https://api.cerebras.ai/v1/chat/completions',
884
1114
  apiKey: process.env.CEREBRAS_API_KEY,
885
1115
  ...customConfig
886
1116
  });
887
1117
  }
888
-
889
- async create({ config = {}, options = {} } = {}) {
890
- const content = config.system + config.systemExtra;
891
- options.messages = [{ role: 'system', content }, ...options.messages || []];
892
- options.messages = MixTogether.convertMessages(options.messages);
893
- return super.create({ config, options });
894
- }
895
1118
  }
896
1119
 
897
1120
  class MixGoogle extends MixCustom {
@@ -899,7 +1122,6 @@ class MixGoogle extends MixCustom {
899
1122
  return super.getDefaultConfig({
900
1123
  url: 'https://generativelanguage.googleapis.com/v1beta/models',
901
1124
  apiKey: process.env.GOOGLE_API_KEY,
902
- ...customConfig
903
1125
  });
904
1126
  }
905
1127
 
@@ -910,40 +1132,54 @@ class MixGoogle extends MixCustom {
910
1132
  };
911
1133
  }
912
1134
 
913
- getDefaultOptions(customOptions) {
914
- return {
915
- generationConfig: {
916
- responseMimeType: "text/plain"
917
- },
918
- ...customOptions
919
- };
920
- }
921
-
922
- static convertMessages(messages) {
1135
+ static convertMessages(messages, config) {
923
1136
  return messages.map(message => {
924
- const parts = [];
925
1137
 
926
- if (message.content instanceof Array) {
927
- message.content.forEach(content => {
1138
+ if (!Array.isArray(message.content)) return message;
1139
+ const role = (message.role === 'assistant' || message.role === 'tool') ? 'model' : 'user'
1140
+
1141
+ if (message.role === 'tool') {
1142
+ return {
1143
+ role,
1144
+ parts: message.content.map(content => ({
1145
+ functionResponse: {
1146
+ name: content.name,
1147
+ response: {
1148
+ output: content.content,
1149
+ },
1150
+ }
1151
+ }))
1152
+ }
1153
+ }
1154
+
1155
+ return {
1156
+ role,
1157
+ parts: message.content.map(content => {
928
1158
  if (content.type === 'text') {
929
- parts.push({ text: content.text });
930
- } else if (content.type === 'image') {
931
- parts.push({
1159
+ return { text: content.text };
1160
+ }
1161
+
1162
+ if (content.type === 'image') {
1163
+ return {
932
1164
  inline_data: {
933
1165
  mime_type: content.source.media_type,
934
1166
  data: content.source.data
935
1167
  }
936
- });
1168
+ }
937
1169
  }
938
- });
939
- } else {
940
- parts.push({ text: message.content });
941
- }
942
1170
 
943
- return {
944
- role: message.role === 'assistant' ? 'model' : 'user',
945
- parts
946
- };
1171
+ if (content.type === 'function') {
1172
+ return {
1173
+ functionCall: {
1174
+ name: content.function.name,
1175
+ args: JSON.parse(content.function.arguments)
1176
+ }
1177
+ }
1178
+ }
1179
+
1180
+ return content;
1181
+ })
1182
+ }
947
1183
  });
948
1184
  }
949
1185
 
@@ -952,30 +1188,38 @@ class MixGoogle extends MixCustom {
952
1188
  throw new Error('Google API key not found. Please provide it in config or set GOOGLE_API_KEY environment variable.');
953
1189
  }
954
1190
 
955
- const modelId = options.model || 'gemini-2.5-flash-preview-04-17';
956
1191
  const generateContentApi = options.stream ? 'streamGenerateContent' : 'generateContent';
957
1192
 
958
- // Construct the full URL with model ID, API endpoint, and API key
959
- const fullUrl = `${this.config.url}/${modelId}:${generateContentApi}?key=${this.config.apiKey}`;
1193
+ const fullUrl = `${this.config.url}/${options.model}:${generateContentApi}?key=${this.config.apiKey}`;
960
1194
 
961
- // Convert messages to Gemini format
962
- const contents = MixGoogle.convertMessages(options.messages);
963
1195
 
964
- // Add system message if present
965
- if (config.system || config.systemExtra) {
966
- contents.unshift({
967
- role: 'user',
968
- parts: [{ text: (config.system || '') + (config.systemExtra || '') }]
969
- });
1196
+ const content = config.system + config.systemExtra;
1197
+ const systemInstruction = { parts: [{ text: content }] };
1198
+
1199
+ options.messages = MixGoogle.convertMessages(options.messages);
1200
+
1201
+ const generationConfig = {
1202
+ topP: options.top_p,
1203
+ maxOutputTokens: options.max_tokens,
970
1204
  }
971
1205
 
972
- // Prepare the request payload
1206
+ generationConfig.responseMimeType = "text/plain";
1207
+
973
1208
  const payload = {
974
- contents,
975
- generationConfig: options.generationConfig || this.getDefaultOptions().generationConfig
1209
+ generationConfig,
1210
+ systemInstruction,
1211
+ contents: options.messages,
1212
+ tools: options.tools
976
1213
  };
977
1214
 
978
1215
  try {
1216
+ if (config.debug) {
1217
+ log.debug("config");
1218
+ log.info(config);
1219
+ log.debug("payload");
1220
+ log.inspect(payload);
1221
+ }
1222
+
979
1223
  if (options.stream) {
980
1224
  throw new Error('Stream is not supported for Gemini');
981
1225
  } else {
@@ -988,9 +1232,59 @@ class MixGoogle extends MixCustom {
988
1232
  }
989
1233
  }
990
1234
 
991
- extractMessage(data) {
1235
+ processResponse(response) {
1236
+ return {
1237
+ message: MixGoogle.extractMessage(response.data),
1238
+ think: null,
1239
+ toolCalls: MixGoogle.extractToolCalls(response.data),
1240
+ response: response.data
1241
+ }
1242
+ }
1243
+
1244
+ static extractToolCalls(data) {
1245
+ return data.candidates?.[0]?.content?.parts?.map(part => {
1246
+ if (part.functionCall) {
1247
+ return {
1248
+ id: part.functionCall.id,
1249
+ type: 'function',
1250
+ function: {
1251
+ name: part.functionCall.name,
1252
+ arguments: JSON.stringify(part.functionCall.args)
1253
+ }
1254
+ };
1255
+ }
1256
+ return null;
1257
+ }).filter(item => item !== null) || [];
1258
+ }
1259
+
1260
+ static extractMessage(data) {
992
1261
  return data.candidates?.[0]?.content?.parts?.[0]?.text;
993
1262
  }
1263
+
1264
+ static getOptionsTools(tools) {
1265
+ const functionDeclarations = [];
1266
+ for (const tool in tools) {
1267
+ for (const item of tools[tool]) {
1268
+ functionDeclarations.push({
1269
+ name: item.name,
1270
+ description: item.description,
1271
+ parameters: item.inputSchema
1272
+ });
1273
+ }
1274
+ }
1275
+
1276
+ const options = {
1277
+ tools: [{
1278
+ functionDeclarations
1279
+ }]
1280
+ };
1281
+
1282
+ return options;
1283
+ }
1284
+
1285
+ getOptionsTools(tools) {
1286
+ return MixGoogle.getOptionsTools(tools);
1287
+ }
994
1288
  }
995
1289
 
996
1290
  module.exports = { MixCustom, ModelMix, MixAnthropic, MixOpenAI, MixPerplexity, MixOllama, MixLMStudio, MixGroq, MixTogether, MixGrok, MixCerebras, MixGoogle };
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "modelmix",
3
- "version": "3.4.0",
3
+ "version": "3.5.0",
4
4
  "description": "🧬 ModelMix - Unified API for Diverse AI LLM.",
5
5
  "main": "index.js",
6
6
  "repository": {
@@ -8,6 +8,7 @@
8
8
  "url": "git+https://github.com/clasen/ModelMix.git"
9
9
  },
10
10
  "keywords": [
11
+ "mcp",
11
12
  "llm",
12
13
  "ai",
13
14
  "model",
@@ -48,6 +49,7 @@
48
49
  },
49
50
  "homepage": "https://github.com/clasen/ModelMix#readme",
50
51
  "dependencies": {
52
+ "@modelcontextprotocol/sdk": "^1.11.2",
51
53
  "axios": "^1.8.4",
52
54
  "bottleneck": "^2.19.5",
53
55
  "lemonlog": "^1.1.2"