modelmix 3.3.8 → 3.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/MODELS.md CHANGED
@@ -60,6 +60,33 @@ All providers inherit from `MixCustom` base class which provides common function
60
60
  - Removes `max_tokens` and `temperature` for o1/o3 models
61
61
  - Converts image messages to base64 data URLs
62
62
 
63
+ #### Function Calling
64
+
65
+ **CALL**
66
+ ```js
67
+ {
68
+ role: 'assistant',
69
+ tool_calls: [
70
+ {
71
+ id: 'call_GibonUAFsx7yHs20AhmzELG9',
72
+ type: 'function',
73
+ function: {
74
+ name: 'brave_web_search',
75
+ arguments: '{"query":"Pope Francis death"}'
76
+ }
77
+ }
78
+ ]
79
+ }
80
+ ```
81
+ **USE**
82
+ ```js
83
+ {
84
+ role: "tool",
85
+ tool_call_id: "call_GibonUAFsx7yHs20AhmzELG9",
86
+ content: "Pope Francis death 2022-12-15"
87
+ }
88
+ ```
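
For context, this round trip is enabled by a `tools` array sent with the request; `getOptionsTools` assembles it from the MCP server's `listTools()` result. A minimal sketch of the shape (the description and JSON Schema below are illustrative, not taken from the package):

```js
// Sketch: request-side tool declaration that enables the CALL/USE flow above.
// name/description/parameters normally come from the MCP server's listTools().
const options = {
    tools: [{
        type: 'function',
        function: {
            name: 'brave_web_search',                  // tool the model may call
            description: 'Search the web with Brave',  // illustrative description
            parameters: {                               // JSON Schema for the arguments
                type: 'object',
                properties: { query: { type: 'string' } },
                required: ['query']
            }
        }
    }]
};
```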
89
+
63
90
  ### Anthropic (MixAnthropic)
64
91
  - **Base URL**: `https://api.anthropic.com/v1/messages`
65
92
  - **Input Format**:
@@ -105,6 +132,41 @@ All providers inherit from `MixCustom` base class which provides common function
105
132
  - Uses `x-api-key` header instead of `authorization`
106
133
  - Requires `anthropic-version` header
107
134
 
135
+ #### Function Calling
136
+
137
+ **CALL**
138
+ ```js
139
+ {
140
+ role: 'assistant',
141
+ content: [
142
+ {
143
+ type: 'text',
144
+ text: "I'll search for information about Pope Francis's death."
145
+ },
146
+ {
147
+ type: 'tool_use',
148
+ id: 'toolu_018YeoPLbQwE6WKLSJipkGLE',
149
+ name: 'brave_web_search',
150
+ input: { query: 'When did Pope Francis die?' }
151
+ }
152
+ ]
153
+ }
154
+ ```
155
+ **USE**
156
+ ```js
157
+ {
158
+ role: 'user',
159
+ content: [
160
+ {
161
+ type: 'tool_result',
162
+ tool_use_id: 'toolu_018YeoPLbQwE6WKLSJipkGLE',
163
+ content: 'Pope Francis died on April 21, 2025.'
164
+ }
165
+ ]
166
+ }
167
+ ```
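
For context, Anthropic declares the same tool with a flat shape, matching what `MixAnthropic.getOptionsTools` builds from MCP tool listings (the description and schema below are illustrative):

```js
// Sketch: Anthropic-style tool declaration built from MCP listTools() results.
const options = {
    tools: [{
        type: 'custom',
        name: 'brave_web_search',
        description: 'Search the web with Brave',  // illustrative description
        input_schema: {                             // JSON Schema for the tool input
            type: 'object',
            properties: { query: { type: 'string' } },
            required: ['query']
        }
    }]
};
```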
168
+
108
170
  ### Perplexity (MixPerplexity)
109
171
  - **Base URL**: `https://api.perplexity.ai/chat/completions`
110
172
  - **Input Format**: Same as OpenAI
@@ -196,6 +258,40 @@ All providers inherit from `MixCustom` base class which provides common function
196
258
  - Pro: More capable, better for complex tasks
197
259
  - Pro Exp: Experimental version with latest features
198
260
 
261
+ #### Function Calling
262
+
263
+ **CALL**
264
+ ```js
265
+ {
266
+ role: 'model',
267
+ parts: [
268
+ {
269
+ functionCall: {
270
+ name: 'getWeather',
271
+ args: { "city": "Tokyo" },
272
+ }
273
+ },
274
+ ],
275
+ }
276
+ ```
277
+
278
+ **USE**
279
+ ```js
280
+ {
281
+ role: 'user',
282
+ parts: [
283
+ {
284
+ functionResponse: {
285
+ name: 'getWeather',
286
+ response: {
287
+ output: '20 degrees',
288
+ },
289
+ }
290
+ },
291
+ ],
292
+ }
293
+ ```
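
For context, Gemini groups tool declarations under `functionDeclarations`, as `MixGoogle.getOptionsTools` assembles them from MCP tool listings (the description and schema below are illustrative):

```js
// Sketch: Gemini-style tool declaration built from MCP listTools() results.
const options = {
    tools: [{
        functionDeclarations: [{
            name: 'getWeather',
            description: 'Get the current weather for a city',  // illustrative
            parameters: {                                        // JSON Schema for args
                type: 'object',
                properties: { city: { type: 'string' } },
                required: ['city']
            }
        }]
    }]
};
```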
294
+
199
295
  ### Cerebras (MixCerebras)
200
296
  - **Base URL**: `https://api.cerebras.ai/v1/chat/completions`
201
297
  - **Input Format**: Same as Together
package/README.md CHANGED
@@ -1,47 +1,43 @@
1
1
  # 🧬 ModelMix: Unified API for Diverse AI LLM
2
2
 
3
- **ModelMix** is a versatile module that enables seamless integration of various language models from different providers through a unified interface. With ModelMix, you can effortlessly manage and utilize multiple AI models while controlling request rates to avoid provider restrictions.
3
+ **ModelMix** is a versatile module that enables seamless integration of various language models from different providers through a unified interface. With ModelMix, you can effortlessly manage and utilize multiple AI models while controlling request rates to avoid provider restrictions. The module also supports the Model Context Protocol (MCP), allowing you to enhance your models with powerful capabilities like web search, code execution, and custom functions.
4
4
 
5
- Are you one of those developers who wants to apply language models to everything? Do you need a reliable fallback system to ensure your application never fails? ModelMix is the answer! It allows you to chain multiple models together, automatically falling back to the next model if one fails, ensuring your application always gets a response.
5
+ Ever found yourself wanting to integrate AI models into your projects but worried about reliability? ModelMix helps you build resilient AI applications by chaining multiple models together. If one model fails, it automatically switches to the next one, ensuring your application keeps running smoothly.
6
6
 
7
7
  ## ✨ Features
8
8
 
9
9
  - **Unified Interface**: Interact with multiple AI models through a single, coherent API.
10
10
  - **Request Rate Control**: Manage the rate of requests to adhere to provider limitations using Bottleneck.
11
- - **Flexible Integration**: Easily integrate popular models like OpenAI, Anthropic, Perplexity, Groq, Together AI, Ollama, LM Studio, Google Gemini or custom models.
11
+ - **Flexible Integration**: Easily integrate popular models like OpenAI, Anthropic, Gemini, Perplexity, Groq, Together AI, Lambda, Ollama, LM Studio or custom models.
12
12
  - **History Tracking**: Automatically logs the conversation history with model responses, allowing you to limit the number of historical messages with `max_history`.
13
13
  - **Model Fallbacks**: Automatically try different models if one fails or is unavailable.
14
14
  - **Chain Multiple Models**: Create powerful chains of models that work together, with automatic fallback if one fails.
15
-
16
- ## 📦 Installation
17
-
18
- First, install the ModelMix package:
19
-
20
- ```bash
21
- npm install modelmix
22
- ```
23
-
24
- Recommended: install dotenv to manage environment variables:
25
-
26
- ```bash
27
- npm install dotenv
28
- ```
15
+ - **Model Context Protocol (MCP) Support**: Seamlessly integrate external tools and capabilities like web search, code execution, or custom functions through the Model Context Protocol standard.
29
16
 
30
17
  ## 🛠️ Usage
31
18
 
32
- Here's a quick example to get you started:
33
-
34
- 1. **Setup your environment variables (.env file)**:
35
- ```plaintext
36
- OPENAI_API_KEY="your_openai_api_key"
37
- ANTHROPIC_API_KEY="your_anthropic_api_key"
38
- PPLX_API_KEY="your_perplexity_api_key"
39
- GROQ_API_KEY="your_groq_api_key"
40
- TOGETHER_API_KEY="your_together_api_key"
41
- GOOGLE_API_KEY="your_google_api_key"
42
- ```
19
+ 1. **Install the ModelMix package:**
20
+ We also recommend installing dotenv to manage environment variables:
21
+
22
+ ```bash
23
+ npm install modelmix dotenv
24
+ ```
25
+
26
+ 2. **Setup your environment variables (.env file)**:
27
+ ```plaintext
28
+ ANTHROPIC_API_KEY="sk-ant-..."
29
+ OPENAI_API_KEY="sk-proj-..."
30
+ PPLX_API_KEY="pplx-..."
31
+ GROQ_API_KEY="gsk_..."
32
+ TOGETHER_API_KEY="49a96..."
33
+ XAI_API_KEY="xai-..."
34
+ CEREBRAS_API_KEY="csk-..."
35
+ GOOGLE_API_KEY="AIza..."
36
+ LAMBDA_API_KEY="secret_..."
37
+ BRAVE_API_KEY="BSA0..._fm"
38
+ ```
43
39
 
44
- 2. **Create and configure your models**:
40
+ 3. **Create and configure your models**:
45
41
 
46
42
  ```javascript
47
43
  import 'dotenv/config';
@@ -56,16 +52,17 @@ const outputExample = { countries: [{ name: "", capital: "" }] };
56
52
  console.log(await model.json(outputExample));
57
53
  ```
58
54
 
55
+ **Basic setup with system prompt and debug mode**
59
56
  ```javascript
60
- // Basic setup with system prompt and debug mode
61
57
  const setup = {
62
58
  config: {
63
59
  system: "You are ALF, if they ask your name, respond with 'ALF'.",
64
60
  debug: true
65
61
  }
66
62
  };
67
-
68
- // Chain multiple models with automatic fallback
63
+ ```
64
+ **Chain multiple models with automatic fallback**
65
+ ```javascript
69
66
  const model = await ModelMix.new(setup)
70
67
  .sonnet37think() // (main model) Anthropic claude-3-7-sonnet-20250219
71
68
  .o4mini() // (fallback 1) OpenAI o4-mini
@@ -77,8 +74,8 @@ const model = await ModelMix.new(setup)
77
74
  console.log(await model.message());
78
75
  ```
79
76
 
77
+ **Use Perplexity to get the price of ETH**
80
78
  ```javascript
81
-
82
79
  const ETH = ModelMix.new()
83
80
  .sonar() // Perplexity sonar
84
81
  .addText('How much is ETH trading in USD?')
@@ -92,6 +89,29 @@ This pattern allows you to:
92
89
  - Get structured JSON responses when needed
93
90
  - Keep your code clean and maintainable
94
91
 
92
+ ## 🔧 Model Context Protocol (MCP) Integration
93
+
94
+ ModelMix makes it incredibly easy to enhance your AI models with powerful capabilities through the Model Context Protocol. With just a few lines of code, you can add features like web search, code execution, or any custom functionality to your models.
95
+
96
+ ### Example: Adding Web Search Capability
97
+
98
+ ```javascript
99
+ const mmix = ModelMix.new({ config: { max_history: 10 } }).gpt41nano();
100
+ mmix.setSystem('You are an assistant and today is ' + new Date().toISOString());
101
+
102
+ // Add web search capability through MCP
103
+ await mmix.addMCP('@modelcontextprotocol/server-brave-search');
104
+ mmix.addText('Use Internet: When did the last Christian pope die?');
105
+ console.log(await mmix.message());
106
+ ```
107
+
108
+ This simple integration allows your model to:
109
+ - Search the web in real-time
110
+ - Access up-to-date information
111
+ - Combine AI reasoning with external data
112
+
113
+ The Model Context Protocol makes it easy to add any capability to your models, from web search to code execution, database queries, or custom functions. All with just a few lines of code!
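
MCP tools also compose with the structured-output helper. A minimal sketch combining both (the prompt and output shape here are illustrative):

```javascript
import 'dotenv/config';
import { ModelMix } from 'modelmix';

const mmix = ModelMix.new({ config: { max_history: 10 } }).gpt41nano();

// Tool calls append assistant/tool message pairs to the history,
// which is why addMCP() enforces max_history >= 3.
await mmix.addMCP('@modelcontextprotocol/server-brave-search');

mmix.addText('Use Internet: list 3 recent AI model releases.');
console.log(await mmix.json({ releases: [{ name: '', vendor: '' }] }));
```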
114
+
95
115
  ## ⚡️ Shorthand Methods
96
116
 
97
117
  ModelMix provides convenient shorthand methods for quickly accessing different AI models.
@@ -117,9 +137,10 @@ Here's a comprehensive list of available methods:
117
137
  | `grok3mini()` | Grok | grok-3-mini-beta | [\$0.30 / \$0.50][6] |
118
138
  | `sonar()` | Perplexity | sonar | [\$1.00 / \$1.00][4] |
119
139
  | `sonarPro()` | Perplexity | sonar-pro | [\$3.00 / \$15.00][4] |
120
- | `qwen3()` | Groq | Qwen3-235B-A22B-fp8-tput | [\$0.29 / \$0.39][5] |
140
+ | `qwen3()` | Together | Qwen3-235B-A22B-fp8-tput | [\$0.20 / \$0.60][7] |
121
141
  | `scout()` | Groq | Llama-4-Scout-17B-16E-Instruct | [\$0.11 / \$0.34][5] |
122
142
  | `maverick()` | Groq | Maverick-17B-128E-Instruct-FP8 | [\$0.20 / \$0.60][5] |
143
+ | `hermes3()` | Lambda | Hermes-3-Llama-3.1-405B-FP8 | [\$0.80 / \$0.80][8] |
123
144
 
124
145
  [1]: https://openai.com/api/pricing/ "Pricing | OpenAI"
125
146
  [2]: https://docs.anthropic.com/en/docs/about-claude/pricing "Pricing - Anthropic"
@@ -127,6 +148,8 @@ Here's a comprehensive list of available methods:
127
148
  [4]: https://docs.perplexity.ai/guides/pricing "Pricing - Perplexity"
128
149
  [5]: https://groq.com/pricing/ "Groq Pricing"
129
150
  [6]: https://docs.x.ai/docs/models "xAI"
151
+ [7]: https://www.together.ai/pricing "Together AI"
152
+ [8]: https://lambda.ai/inference "Lambda Pricing"
130
153
 
131
154
  Each method accepts optional `options` and `config` parameters to customize the model's behavior. For example:
132
155
 
package/demo/default.env CHANGED
@@ -1,6 +1,10 @@
1
- ANTHROPIC_API_KEY=""
2
- OPENAI_API_KEY=""
3
- PPLX_API_KEY=""
4
- GROQ_API_KEY=""
5
- TOGETHER_API_KEY=""
6
- XAI_API_KEY=""
1
+ ANTHROPIC_API_KEY="sk-ant-..."
2
+ OPENAI_API_KEY="sk-proj-..."
3
+ PPLX_API_KEY="pplx-..."
4
+ GROQ_API_KEY="gsk_..."
5
+ TOGETHER_API_KEY="49a96..."
6
+ XAI_API_KEY="xai-..."
7
+ CEREBRAS_API_KEY="csk-..."
8
+ GOOGLE_API_KEY="AIza..."
9
+ LAMBDA_API_KEY="secret_..."
10
+ BRAVE_API_KEY="BSA0..._fm"
package/demo/json.mjs ADDED
@@ -0,0 +1,13 @@
1
+ import 'dotenv/config'
2
+ import { ModelMix } from '../index.js';
3
+
4
+ const model = await ModelMix.new({ config: { debug: true } })
5
+ .scout({ config: { temperature: 0 } })
6
+ .o4mini()
7
+ .sonnet37think()
8
+ .gpt45()
9
+ .gemini25flash()
10
+ .addText("Name and capital of 3 South American countries.")
11
+
12
+ const jsonResult = await model.json({ countries: [{ name: "", capital: "" }] });
13
+ console.log(jsonResult);
package/demo/mcp.mjs ADDED
@@ -0,0 +1,11 @@
1
+ import 'dotenv/config';
2
+ import { ModelMix } from '../index.js';
3
+
4
+ const mmix = ModelMix.new({ config: { max_history: 10 } }).gpt41nano();
5
+ mmix.setSystem('You are an assistant and today is ' + new Date().toISOString());
6
+
7
+ // Add web search capability through MCP
8
+ await mmix.addMCP('@modelcontextprotocol/server-brave-search');
9
+
10
+ mmix.addText('Use Internet: When did the last Christian pope die?');
11
+ console.log(await mmix.message());
package/demo/package-lock.json CHANGED
@@ -10,7 +10,8 @@
10
10
  "license": "ISC",
11
11
  "dependencies": {
12
12
  "@anthropic-ai/sdk": "^0.20.9",
13
- "dotenv": "^16.5.0"
13
+ "dotenv": "^16.5.0",
14
+ "lemonlog": "^1.1.4"
14
15
  }
15
16
  },
16
17
  ".api/apis/pplx": {
@@ -93,6 +94,23 @@
93
94
  "node": ">= 0.8"
94
95
  }
95
96
  },
97
+ "node_modules/debug": {
98
+ "version": "4.4.1",
99
+ "resolved": "https://registry.npmjs.org/debug/-/debug-4.4.1.tgz",
100
+ "integrity": "sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ==",
101
+ "license": "MIT",
102
+ "dependencies": {
103
+ "ms": "^2.1.3"
104
+ },
105
+ "engines": {
106
+ "node": ">=6.0"
107
+ },
108
+ "peerDependenciesMeta": {
109
+ "supports-color": {
110
+ "optional": true
111
+ }
112
+ }
113
+ },
96
114
  "node_modules/delayed-stream": {
97
115
  "version": "1.0.0",
98
116
  "resolved": "https://registry.npmjs.org/delayed-stream/-/delayed-stream-1.0.0.tgz",
@@ -167,6 +185,15 @@
167
185
  "ms": "^2.0.0"
168
186
  }
169
187
  },
188
+ "node_modules/lemonlog": {
189
+ "version": "1.1.4",
190
+ "resolved": "https://registry.npmjs.org/lemonlog/-/lemonlog-1.1.4.tgz",
191
+ "integrity": "sha512-NWcXK7Nl+K5E0xxzcux9ktR9hiRSSjK0xpFbuAm/qy/wg5TuIYGfg+lIDWthVvFscrSsYCPLyPk/AvxY+w7n6A==",
192
+ "license": "MIT",
193
+ "dependencies": {
194
+ "debug": "^4.1.1"
195
+ }
196
+ },
170
197
  "node_modules/mime-db": {
171
198
  "version": "1.52.0",
172
199
  "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz",
package/demo/package.json CHANGED
@@ -10,6 +10,7 @@
10
10
  "license": "ISC",
11
11
  "dependencies": {
12
12
  "@anthropic-ai/sdk": "^0.20.9",
13
- "dotenv": "^16.5.0"
13
+ "dotenv": "^16.5.0",
14
+ "lemonlog": "^1.1.4"
14
15
  }
15
16
  }
package/demo/short.mjs CHANGED
@@ -9,26 +9,12 @@ const setup = {
9
9
  }
10
10
  };
11
11
 
12
- const result = await ModelMix.new(setup)
13
- .scout({ config: { temperature: 0 } })
14
- .addText("What's your name?")
15
- .message();
16
-
17
- console.log(result);
18
-
19
- const model = await ModelMix.new({ config: { debug: true } })
20
- .scout({ config: { temperature: 0 } })
21
- .o4mini()
22
- .sonnet37think()
23
- .gpt45()
24
- .gemini25flash()
25
- .addText("Name and capital of 3 South American countries.")
26
-
27
- const jsonResult = await model.json({ countries: [{ name: "", capital: "" }] });
28
-
29
- console.log(jsonResult);
30
-
31
- model.addText("Name and capital of 1 South American countries.")
32
-
33
- const jsonResult2 = await model.json({ countries: [{ name: "", capital: "" }] });
34
- console.log(jsonResult2);
12
+ const mmix = await ModelMix.new(setup)
13
+ .sonnet37think() // (main model) Anthropic claude-3-7-sonnet-20250219
14
+ .o4mini() // (fallback 1) OpenAI o4-mini
15
+ .gemini25proExp({ config: { temperature: 0 } }) // (fallback 2) Google gemini-2.5-pro-exp-03-25
16
+ .gpt41nano() // (fallback 3) OpenAI gpt-4.1-nano
17
+ .grok3mini() // (fallback 4) Grok grok-3-mini-beta
18
+ .addText("What's your name?");
19
+
20
+ console.log(await mmix.message());
package/index.js CHANGED
@@ -5,11 +5,16 @@ const log = require('lemonlog')('ModelMix');
5
5
  const Bottleneck = require('bottleneck');
6
6
  const path = require('path');
7
7
  const generateJsonSchema = require('./schema');
8
+ const { Client } = require("@modelcontextprotocol/sdk/client/index.js");
9
+ const { StdioClientTransport } = require("@modelcontextprotocol/sdk/client/stdio.js");
8
10
 
9
11
  class ModelMix {
10
12
  constructor({ options = {}, config = {} } = {}) {
11
13
  this.models = [];
12
14
  this.messages = [];
15
+ this.tools = {};
16
+ this.toolClient = {};
17
+ this.mcp = {};
13
18
  this.options = {
14
19
  max_tokens: 5000,
15
20
  temperature: 1, // 1 --> More creative, 0 --> More deterministic.
@@ -129,9 +134,9 @@ class ModelMix {
129
134
  return this.attach('grok-3-mini-beta', new MixGrok({ options, config }));
130
135
  }
131
136
 
132
- qwen3({ options = {}, config = {}, mix = { groq: true, together: false } } = {}) {
133
- if (mix.groq) this.attach('qwen-qwq-32b', new MixGroq({ options, config }));
137
+ qwen3({ options = {}, config = {}, mix = { together: true, cerebras: false } } = {}) {
134
138
  if (mix.together) this.attach('Qwen/Qwen3-235B-A22B-fp8-tput', new MixTogether({ options, config }));
139
+ if (mix.cerebras) this.attach('qwen-3-32b', new MixCerebras({ options, config }));
135
140
  return this;
136
141
  }
137
142
 
@@ -141,9 +146,10 @@ class ModelMix {
141
146
  if (mix.cerebras) this.attach('llama-4-scout-17b-16e-instruct', new MixCerebras({ options, config }));
142
147
  return this;
143
148
  }
144
- maverick({ options = {}, config = {}, mix = { groq: true, together: false } } = {}) {
149
+ maverick({ options = {}, config = {}, mix = { groq: true, together: false, lambda: false } } = {}) {
145
150
  if (mix.groq) this.attach('meta-llama/llama-4-maverick-17b-128e-instruct', new MixGroq({ options, config }));
146
151
  if (mix.together) this.attach('meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8', new MixTogether({ options, config }));
152
+ if (mix.lambda) this.attach('llama-4-maverick-17b-128e-instruct-fp8', new MixLambda({ options, config }));
147
153
  return this;
148
154
  }
149
155
 
@@ -154,6 +160,11 @@ class ModelMix {
154
160
  return this;
155
161
  }
156
162
 
163
+ hermes3({ options = {}, config = {}, mix = { lambda: true } } = {}) {
164
+ this.attach('Hermes-3-Llama-3.1-405B-FP8', new MixLambda({ options, config }));
165
+ return this;
166
+ }
167
+
157
168
  addText(text, { role = "user" } = {}) {
158
169
  const content = [{
159
170
  type: "text",
@@ -320,10 +331,11 @@ class ModelMix {
320
331
  groupByRoles(messages) {
321
332
  return messages.reduce((acc, currentMessage, index) => {
322
333
  if (index === 0 || currentMessage.role !== messages[index - 1].role) {
323
- acc.push({
324
- role: currentMessage.role,
325
- content: currentMessage.content
326
- });
338
+ acc.push(currentMessage);
327
339
  } else {
328
340
  acc[acc.length - 1].content = acc[acc.length - 1].content.concat(currentMessage.content);
329
341
  }
@@ -391,10 +403,12 @@ class ModelMix {
391
403
  const currentModel = this.models[i];
392
404
  const currentModelKey = currentModel.key;
393
405
  const providerInstance = currentModel.provider;
406
+ const optionsTools = providerInstance.getOptionsTools(this.tools);
394
407
 
395
408
  let options = {
396
409
  ...this.options,
397
410
  ...providerInstance.options,
411
+ ...optionsTools,
398
412
  model: currentModelKey
399
413
  };
400
414
 
@@ -415,7 +429,29 @@ class ModelMix {
415
429
 
416
430
  const result = await providerInstance.create({ options, config });
417
431
 
418
- this.messages.push({ role: "assistant", content: result.message });
432
+ if (result.toolCalls.length > 0) {
433
+
434
+ if (result.message) {
435
+ if (result.signature) {
436
+ this.messages.push({
437
+ role: "assistant", content: [{
438
+ type: "thinking",
439
+ thinking: result.think,
440
+ signature: result.signature
441
+ }]
442
+ });
443
+ } else {
444
+ this.addText(result.message, { role: "assistant" });
445
+ }
446
+ }
447
+
448
+ this.messages.push({ role: "assistant", content: result.toolCalls, tool_calls: result.toolCalls });
449
+
450
+ const content = await this.processToolCalls(result.toolCalls);
451
+ this.messages.push({ role: 'tool', content });
452
+
453
+ return this.execute();
454
+ }
419
455
 
420
456
  if (config.debug) {
421
457
  log.debug(`Request successful with model: ${currentModelKey}`);
@@ -445,6 +481,69 @@ class ModelMix {
445
481
  throw lastError || new Error("Failed to get response from any model, and no specific error was caught.");
446
482
  });
447
483
  }
484
+
485
+ async processToolCalls(toolCalls) {
486
+ const result = []
487
+
488
+ for (const toolCall of toolCalls) {
489
+ const client = this.toolClient[toolCall.function.name];
490
+
491
+ const response = await client.callTool({
492
+ name: toolCall.function.name,
493
+ arguments: JSON.parse(toolCall.function.arguments)
494
+ });
495
+
496
+ result.push({
497
+ name: toolCall.function.name,
498
+ tool_call_id: toolCall.id,
499
+ content: response.content.map(item => item.text).join("\n")
500
+ });
501
+ }
502
+ return result;
503
+ }
504
+
505
+ async addMCP(...packages) {
506
+
507
+ const key = packages[0];
508
+
509
+ if (this.mcp[key]) {
510
+ log.info(`MCP ${key} already attached.`);
511
+ return;
512
+ }
513
+
514
+ if (this.config.max_history < 3) {
515
+ log.warn(`MCP ${key} requires at least 3 max_history. Setting to 3.`);
516
+ this.config.max_history = 3;
517
+ }
518
+
519
+ const env = {}
520
+ // Pass the environment through, but keep provider API keys out of the MCP child process
+ for (const envKey in process.env) {
521
+ if (['OPENAI', 'ANTHR', 'GOOGLE', 'GROQ', 'TOGET', 'LAMBDA', 'PPLX', 'XAI', 'CEREBR'].some(prefix => envKey.startsWith(prefix))) continue;
522
+ env[envKey] = process.env[envKey];
523
+ }
524
+
525
+ const transport = new StdioClientTransport({
526
+ command: "npx",
527
+ args: ["-y", ...arguments],
528
+ env
529
+ });
530
+
531
+ // Create the MCP client
532
+ this.mcp[key] = new Client({
533
+ name: key,
534
+ version: "1.0.0"
535
+ });
536
+
537
+ await this.mcp[key].connect(transport);
538
+
539
+ const { tools } = await this.mcp[key].listTools();
540
+ this.tools[key] = tools;
541
+
542
+ for (const tool of tools) {
543
+ this.toolClient[tool.name] = this.mcp[key];
544
+ }
545
+
546
+ }
448
547
  }
449
548
 
450
549
  class MixCustom {
@@ -478,8 +577,15 @@ class MixCustom {
478
577
  };
479
578
  }
480
579
 
580
+ convertMessages(messages, config) {
581
+ return MixOpenAI.convertMessages(messages, config);
582
+ }
583
+
481
584
  async create({ config = {}, options = {} } = {}) {
482
585
  try {
586
+
587
+ options.messages = this.convertMessages(options.messages, config);
588
+
483
589
  if (config.debug) {
484
590
  log.debug("config");
485
591
  log.info(config);
@@ -563,33 +669,64 @@ class MixCustom {
563
669
  }
564
670
 
565
671
  extractDelta(data) {
566
- if (data.choices && data.choices[0].delta.content) return data.choices[0].delta.content;
567
- return '';
672
+ return data.choices[0].delta.content;
568
673
  }
569
674
 
570
- extractMessage(data) {
571
- if (data.choices && data.choices[0].message.content) return data.choices[0].message.content.trim();
572
- return '';
675
+ static extractMessage(data) {
676
+ const message = data.choices[0].message?.content?.trim() || '';
677
+ const endTagIndex = message.indexOf('</think>');
678
+ if (message.startsWith('<think>') && endTagIndex !== -1) {
679
+ return message.substring(endTagIndex + 8).trim();
680
+ }
681
+ return message;
573
682
  }
574
683
 
575
- processResponse(response) {
576
- let message = this.extractMessage(response.data);
577
-
578
- if (message.startsWith('<think>')) {
579
- const endTagIndex = message.indexOf('</think>');
580
- if (endTagIndex !== -1) {
581
- const think = message.substring(7, endTagIndex).trim();
582
- message = message.substring(endTagIndex + 8).trim();
583
- return { response: response.data, message, think };
684
+ static extractThink(data) {
685
+
686
+ if (data.choices[0].message?.reasoning_content) {
687
+ return data.choices[0].message.reasoning_content;
688
+ }
689
+
690
+ const message = data.choices[0].message?.content?.trim() || '';
691
+ const endTagIndex = message.indexOf('</think>');
692
+ if (message.startsWith('<think>') && endTagIndex !== -1) {
693
+ return message.substring(7, endTagIndex).trim();
694
+ }
695
+ return null;
696
+ }
697
+
698
+ static extractToolCalls(data) {
699
+ return data.choices[0].message?.tool_calls?.map(call => ({
700
+ id: call.id,
701
+ type: 'function',
702
+ function: {
703
+ name: call.function.name,
704
+ arguments: call.function.arguments
584
705
  }
706
+ })) || []
707
+ }
708
+
709
+ processResponse(response) {
710
+ return {
711
+ message: MixCustom.extractMessage(response.data),
712
+ think: MixCustom.extractThink(response.data),
713
+ toolCalls: MixCustom.extractToolCalls(response.data),
714
+ response: response.data
585
715
  }
716
+ }
586
717
 
587
- return { response: response.data, message };
718
+ getOptionsTools(tools) {
719
+ return MixOpenAI.getOptionsTools(tools);
588
720
  }
589
721
  }
590
722
 
591
723
  class MixOpenAI extends MixCustom {
592
724
  getDefaultConfig(customConfig) {
725
+
726
+ if (!process.env.OPENAI_API_KEY) {
727
+ throw new Error('OpenAI API key not found. Please provide it in config or set OPENAI_API_KEY environment variable.');
728
+ }
729
+
593
730
  return super.getDefaultConfig({
594
731
  url: 'https://api.openai.com/v1/chat/completions',
595
732
  apiKey: process.env.OPENAI_API_KEY,
@@ -598,9 +735,6 @@ class MixOpenAI extends MixCustom {
598
735
  }
599
736
 
600
737
  async create({ config = {}, options = {} } = {}) {
601
- if (!this.config.apiKey) {
602
- throw new Error('OpenAI API key not found. Please provide it in config or set OPENAI_API_KEY environment variable.');
603
- }
604
738
 
605
739
  // Remove max_tokens and temperature for o1/o3 models
606
740
  if (options.model?.startsWith('o')) {
@@ -608,35 +742,76 @@ class MixOpenAI extends MixCustom {
608
742
  delete options.temperature;
609
743
  }
610
744
 
611
- const content = config.system + config.systemExtra;
612
- options.messages = [{ role: 'system', content }, ...options.messages || []];
613
- options.messages = MixOpenAI.convertMessages(options.messages);
614
745
  return super.create({ config, options });
615
746
  }
616
747
 
617
- static convertMessages(messages) {
618
- return messages.map(message => {
619
- if (message.role === 'user' && message.content instanceof Array) {
620
- message.content = message.content.map(content => {
748
+ static convertMessages(messages, config) {
749
+
750
+ const content = config.system + config.systemExtra;
751
+ messages = [{ role: 'system', content }, ...messages || []];
752
+
753
+ const results = []
754
+ for (const message of messages) {
755
+
756
+ if (message.tool_calls) {
757
+ results.push({ role: 'assistant', tool_calls: message.tool_calls })
758
+ continue;
759
+ }
760
+
761
+ if (message.role === 'tool') {
762
+ for (const content of message.content) {
763
+ results.push({ role: 'tool', ...content })
764
+ }
765
+ continue;
766
+ }
767
+
768
+ if (Array.isArray(message.content))
769
+ for (const [i, content] of message.content.entries()) {
621
770
  if (content.type === 'image') {
622
771
  const { type, media_type, data } = content.source;
623
- return {
772
+ message.content[i] = {
624
773
  type: 'image_url',
625
774
  image_url: {
626
775
  url: `data:${media_type};${type},${data}`
627
776
  }
628
777
  };
629
778
  }
630
- return content;
779
+ }
780
+
781
+ results.push(message);
782
+ }
783
+ return results;
784
+ }
785
+
786
+ static getOptionsTools(tools) {
787
+ const options = {};
788
+ options.tools = [];
789
+ for (const tool in tools) {
790
+ for (const item of tools[tool]) {
791
+ options.tools.push({
792
+ type: 'function',
793
+ function: {
794
+ name: item.name,
795
+ description: item.description,
796
+ parameters: item.inputSchema
797
+ }
631
798
  });
632
799
  }
633
- return message;
634
- });
800
+ }
801
+
802
+ // options.tool_choice = "auto";
803
+
804
+ return options;
635
805
  }
636
806
  }
637
807
 
638
808
  class MixAnthropic extends MixCustom {
639
809
  getDefaultConfig(customConfig) {
810
+
811
+ if (!process.env.ANTHROPIC_API_KEY) {
812
+ throw new Error('Anthropic API key not found. Please provide it in config or set ANTHROPIC_API_KEY environment variable.');
813
+ }
814
+
640
815
  return super.getDefaultConfig({
641
816
  url: 'https://api.anthropic.com/v1/messages',
642
817
  apiKey: process.env.ANTHROPIC_API_KEY,
@@ -645,9 +820,6 @@ class MixAnthropic extends MixCustom {
645
820
  }
646
821
 
647
822
  async create({ config = {}, options = {} } = {}) {
648
- if (!this.config.apiKey) {
649
- throw new Error('Anthropic API key not found. Please provide it in config or set ANTHROPIC_API_KEY environment variable.');
650
- }
651
823
 
652
824
  // Remove top_p for thinking
653
825
  if (options.thinking) {
@@ -660,6 +832,39 @@ class MixAnthropic extends MixCustom {
660
832
  return super.create({ config, options });
661
833
  }
662
834
 
835
+ convertMessages(messages, config) {
836
+ return MixAnthropic.convertMessages(messages, config);
837
+ }
838
+
839
+ static convertMessages(messages, config) {
840
+ return messages.map(message => {
841
+ if (message.role === 'tool') {
842
+ return {
843
+ role: "user",
844
+ content: message.content.map(content => ({
845
+ type: "tool_result",
846
+ tool_use_id: content.tool_call_id,
847
+ content: content.content
848
+ }))
849
+ }
850
+ }
851
+
852
+ message.content = message.content.map(content => {
853
+ if (content.type === 'function') {
854
+ return {
855
+ type: 'tool_use',
856
+ id: content.id,
857
+ name: content.function.name,
858
+ input: JSON.parse(content.function.arguments)
859
+ }
860
+ }
861
+ return content;
862
+ });
863
+
864
+ return message;
865
+ });
866
+ }
867
+
663
868
  getDefaultHeaders(customHeaders) {
664
869
  return super.getDefaultHeaders({
665
870
  'x-api-key': this.config.apiKey,
@@ -673,29 +878,77 @@ class MixAnthropic extends MixCustom {
673
878
  return '';
674
879
  }
675
880
 
676
- processResponse(response) {
677
- if (response.data.content) {
881
+ static extractToolCalls(data) {
678
882
 
679
- if (response.data.content?.[1]?.text) {
883
+ return data.content.map(item => {
884
+ if (item.type === 'tool_use') {
680
885
  return {
681
- think: response.data.content[0]?.thinking,
682
- message: response.data.content[1].text,
683
- response: response.data
684
- }
886
+ id: item.id,
887
+ type: 'function',
888
+ function: {
889
+ name: item.name,
890
+ arguments: JSON.stringify(item.input)
891
+ }
892
+ };
685
893
  }
894
+ return null;
895
+ }).filter(item => item !== null);
896
+ }
686
897
 
687
- if (response.data.content[0].text) {
688
- return {
689
- message: response.data.content[0].text,
690
- response: response.data
691
- }
898
+ static extractMessage(data) {
899
+ if (data.content?.[1]?.text) {
900
+ return data.content[1].text;
901
+ }
902
+ return data.content[0].text;
903
+ }
904
+
905
+ static extractThink(data) {
906
+ return data.content[0]?.thinking || null;
907
+ }
908
+
909
+ static extractSignature(data) {
910
+ return data.content[0]?.signature || null;
911
+ }
912
+
913
+ processResponse(response) {
914
+ return {
915
+ message: MixAnthropic.extractMessage(response.data),
916
+ think: MixAnthropic.extractThink(response.data),
917
+ toolCalls: MixAnthropic.extractToolCalls(response.data),
918
+ response: response.data,
919
+ signature: MixAnthropic.extractSignature(response.data)
920
+ }
921
+ }
922
+
923
+ getOptionsTools(tools) {
924
+ return MixAnthropic.getOptionsTools(tools);
925
+ }
926
+
927
+ static getOptionsTools(tools) {
928
+ const options = {};
929
+ options.tools = [];
930
+ for (const tool in tools) {
931
+ for (const item of tools[tool]) {
932
+ options.tools.push({
933
+ type: 'custom',
934
+ name: item.name,
935
+ description: item.description,
936
+ input_schema: item.inputSchema
937
+ });
692
938
  }
693
939
  }
940
+
941
+ return options;
694
942
  }
695
943
  }
696
944
 
697
945
  class MixPerplexity extends MixCustom {
698
946
  getDefaultConfig(customConfig) {
947
+
948
+ if (!process.env.PPLX_API_KEY) {
949
+ throw new Error('Perplexity API key not found. Please provide it in config or set PPLX_API_KEY environment variable.');
950
+ }
951
+
699
952
  return super.getDefaultConfig({
700
953
  url: 'https://api.perplexity.ai/chat/completions',
701
954
  apiKey: process.env.PPLX_API_KEY,
@@ -714,10 +967,6 @@ class MixPerplexity extends MixCustom {
714
967
  };
715
968
  }
716
969
 
717
- if (!this.config.apiKey) {
718
- throw new Error('Perplexity API key not found. Please provide it in config or set PPLX_API_KEY environment variable.');
719
- }
720
-
721
970
  const content = config.system + config.systemExtra;
722
971
  options.messages = [{ role: 'system', content }, ...options.messages || []];
723
972
  return super.create({ config, options });
@@ -744,19 +993,18 @@ class MixOllama extends MixCustom {
744
993
  return '';
745
994
  }
746
995
 
747
- async create({ config = {}, options = {} } = {}) {
748
-
749
- options.messages = MixOllama.convertMessages(options.messages);
750
- const content = config.system + config.systemExtra;
751
- options.messages = [{ role: 'system', content }, ...options.messages || []];
752
- return super.create({ config, options });
753
- }
754
-
755
996
  extractMessage(data) {
756
997
  return data.message.content.trim();
757
998
  }
758
999
 
759
- static convertMessages(messages) {
1000
+ convertMessages(messages, config) {
1001
+ return MixOllama.convertMessages(messages, config);
1002
+ }
1003
+
1004
+ static convertMessages(messages, config) {
1005
+ const content = config.system + config.systemExtra;
1006
+ messages = [{ role: 'system', content }, ...messages || []];
1007
+
760
1008
  return messages.map(entry => {
761
1009
  let content = '';
762
1010
  let images = [];
@@ -780,26 +1028,31 @@ class MixOllama extends MixCustom {
780
1028
 
781
1029
  class MixGrok extends MixOpenAI {
782
1030
  getDefaultConfig(customConfig) {
1031
+
1032
+ if (!process.env.XAI_API_KEY) {
1033
+ throw new Error('Grok API key not found. Please provide it in config or set XAI_API_KEY environment variable.');
1034
+ }
1035
+
783
1036
  return super.getDefaultConfig({
784
1037
  url: 'https://api.x.ai/v1/chat/completions',
785
1038
  apiKey: process.env.XAI_API_KEY,
786
1039
  ...customConfig
787
1040
  });
788
1041
  }
1042
+ }
789
1043
 
790
- processResponse(response) {
791
- const message = this.extractMessage(response.data);
792
-
793
- const output = {
794
- message: message,
795
- response: response.data
796
- }
1044
+ class MixLambda extends MixCustom {
1045
+ getDefaultConfig(customConfig) {
797
1046
 
798
- if (response.data.choices[0].message.reasoning_content) {
799
- output.think = response.data.choices[0].message.reasoning_content;
1047
+ if (!process.env.LAMBDA_API_KEY) {
1048
+ throw new Error('Lambda API key not found. Please provide it in config or set LAMBDA_API_KEY environment variable.');
800
1049
  }
801
1050
 
802
- return output;
1051
+ return super.getDefaultConfig({
1052
+ url: 'https://api.lambda.ai/v1/chat/completions',
1053
+ apiKey: process.env.LAMBDA_API_KEY,
1054
+ ...customConfig
1055
+ });
803
1056
  }
804
1057
  }
805
1058
 
@@ -810,38 +1063,30 @@ class MixLMStudio extends MixCustom {
810
1063
  ...customConfig
811
1064
  });
812
1065
  }
813
-
814
- async create({ config = {}, options = {} } = {}) {
815
- const content = config.system + config.systemExtra;
816
- options.messages = [{ role: 'system', content }, ...options.messages || []];
817
- options.messages = MixOpenAI.convertMessages(options.messages);
818
- return super.create({ config, options });
819
- }
820
1066
  }
821
1067
 
822
1068
  class MixGroq extends MixCustom {
823
1069
  getDefaultConfig(customConfig) {
1070
+
1071
+ if (!process.env.GROQ_API_KEY) {
1072
+ throw new Error('Groq API key not found. Please provide it in config or set GROQ_API_KEY environment variable.');
1073
+ }
1074
+
824
1075
  return super.getDefaultConfig({
825
1076
  url: 'https://api.groq.com/openai/v1/chat/completions',
826
1077
  apiKey: process.env.GROQ_API_KEY,
827
1078
  ...customConfig
828
1079
  });
829
1080
  }
830
-
831
- async create({ config = {}, options = {} } = {}) {
832
- if (!this.config.apiKey) {
833
- throw new Error('Groq API key not found. Please provide it in config or set GROQ_API_KEY environment variable.');
834
- }
835
-
836
- const content = config.system + config.systemExtra;
837
- options.messages = [{ role: 'system', content }, ...options.messages || []];
838
- options.messages = MixOpenAI.convertMessages(options.messages);
839
- return super.create({ config, options });
840
- }
841
1081
  }
842
1082
 
843
1083
  class MixTogether extends MixCustom {
844
1084
  getDefaultConfig(customConfig) {
1085
+
1086
+ if (!process.env.TOGETHER_API_KEY) {
1087
+ throw new Error('Together API key not found. Please provide it in config or set TOGETHER_API_KEY environment variable.');
1088
+ }
1089
+
845
1090
  return super.getDefaultConfig({
846
1091
  url: 'https://api.together.xyz/v1/chat/completions',
847
1092
  apiKey: process.env.TOGETHER_API_KEY,
@@ -855,44 +1100,21 @@ class MixTogether extends MixCustom {
855
1100
  ...customOptions
856
1101
  };
857
1102
  }
858
-
859
- static convertMessages(messages) {
860
- return messages.map(message => {
861
- if (message.content instanceof Array) {
862
- message.content = message.content.map(content => content.text).join("\n\n");
863
- }
864
- return message;
865
- });
866
- }
867
-
868
- async create({ config = {}, options = {} } = {}) {
869
- if (!this.config.apiKey) {
870
- throw new Error('Together API key not found. Please provide it in config or set TOGETHER_API_KEY environment variable.');
871
- }
872
-
873
- const content = config.system + config.systemExtra;
874
- options.messages = [{ role: 'system', content }, ...options.messages || []];
875
- options.messages = MixTogether.convertMessages(options.messages);
876
-
877
- return super.create({ config, options });
878
- }
879
1103
  }
880
1104
 
881
1105
  class MixCerebras extends MixCustom {
882
1106
  getDefaultConfig(customConfig) {
1107
+
1108
+ if (!process.env.CEREBRAS_API_KEY) {
1109
+ throw new Error('Cerebras API key not found. Please provide it in config or set CEREBRAS_API_KEY environment variable.');
1110
+ }
1111
+
883
1112
  return super.getDefaultConfig({
884
1113
  url: 'https://api.cerebras.ai/v1/chat/completions',
885
1114
  apiKey: process.env.CEREBRAS_API_KEY,
886
1115
  ...customConfig
887
1116
  });
888
1117
  }
889
-
890
- async create({ config = {}, options = {} } = {}) {
891
- const content = config.system + config.systemExtra;
892
- options.messages = [{ role: 'system', content }, ...options.messages || []];
893
- options.messages = MixTogether.convertMessages(options.messages);
894
- return super.create({ config, options });
895
- }
896
1118
  }
897
1119
 
898
1120
  class MixGoogle extends MixCustom {
@@ -900,7 +1122,6 @@ class MixGoogle extends MixCustom {
900
1122
  return super.getDefaultConfig({
901
1123
  url: 'https://generativelanguage.googleapis.com/v1beta/models',
902
1124
  apiKey: process.env.GOOGLE_API_KEY,
903
- ...customConfig
904
1125
  });
905
1126
  }
906
1127
 
@@ -911,40 +1132,54 @@ class MixGoogle extends MixCustom {
911
1132
  };
912
1133
  }
913
1134
 
914
- getDefaultOptions(customOptions) {
915
- return {
916
- generationConfig: {
917
- responseMimeType: "text/plain"
918
- },
919
- ...customOptions
920
- };
921
- }
922
-
923
- static convertMessages(messages) {
1135
+ static convertMessages(messages, config) {
924
1136
  return messages.map(message => {
925
- const parts = [];
926
1137
 
927
- if (message.content instanceof Array) {
928
- message.content.forEach(content => {
1138
+ if (!Array.isArray(message.content)) return message;
1139
+ const role = (message.role === 'assistant' || message.role === 'tool') ? 'model' : 'user'
1140
+
1141
+ if (message.role === 'tool') {
1142
+ return {
1143
+ role,
1144
+ parts: message.content.map(content => ({
1145
+ functionResponse: {
1146
+ name: content.name,
1147
+ response: {
1148
+ output: content.content,
1149
+ },
1150
+ }
1151
+ }))
1152
+ }
1153
+ }
1154
+
1155
+ return {
1156
+ role,
1157
+ parts: message.content.map(content => {
929
1158
  if (content.type === 'text') {
930
- parts.push({ text: content.text });
931
- } else if (content.type === 'image') {
932
- parts.push({
1159
+ return { text: content.text };
1160
+ }
1161
+
1162
+ if (content.type === 'image') {
1163
+ return {
933
1164
  inline_data: {
934
1165
  mime_type: content.source.media_type,
935
1166
  data: content.source.data
936
1167
  }
937
- });
1168
+ }
938
1169
  }
939
- });
940
- } else {
941
- parts.push({ text: message.content });
942
- }
943
1170
 
944
- return {
945
- role: message.role === 'assistant' ? 'model' : 'user',
946
- parts
947
- };
1171
+ if (content.type === 'function') {
1172
+ return {
1173
+ functionCall: {
1174
+ name: content.function.name,
1175
+ args: JSON.parse(content.function.arguments)
1176
+ }
1177
+ }
1178
+ }
1179
+
1180
+ return content;
1181
+ })
1182
+ }
948
1183
  });
949
1184
  }
950
1185
 
@@ -953,30 +1188,38 @@ class MixGoogle extends MixCustom {
953
1188
  throw new Error('Google API key not found. Please provide it in config or set GOOGLE_API_KEY environment variable.');
954
1189
  }
955
1190
 
956
- const modelId = options.model || 'gemini-2.5-flash-preview-04-17';
957
1191
  const generateContentApi = options.stream ? 'streamGenerateContent' : 'generateContent';
958
1192
 
959
- // Construct the full URL with model ID, API endpoint, and API key
960
- const fullUrl = `${this.config.url}/${modelId}:${generateContentApi}?key=${this.config.apiKey}`;
1193
+ const fullUrl = `${this.config.url}/${options.model}:${generateContentApi}?key=${this.config.apiKey}`;
961
1194
 
962
- // Convert messages to Gemini format
963
- const contents = MixGoogle.convertMessages(options.messages);
964
1195
 
965
- // Add system message if present
966
- if (config.system || config.systemExtra) {
967
- contents.unshift({
968
- role: 'user',
969
- parts: [{ text: (config.system || '') + (config.systemExtra || '') }]
970
- });
1196
+ const content = config.system + config.systemExtra;
1197
+ const systemInstruction = { parts: [{ text: content }] };
1198
+
1199
+ options.messages = MixGoogle.convertMessages(options.messages);
1200
+
1201
+ const generationConfig = {
1202
+ topP: options.top_p,
1203
+ maxOutputTokens: options.max_tokens,
971
1204
  }
972
1205
 
973
- // Prepare the request payload
1206
+ generationConfig.responseMimeType = "text/plain";
1207
+
974
1208
  const payload = {
975
- contents,
976
- generationConfig: options.generationConfig || this.getDefaultOptions().generationConfig
1209
+ generationConfig,
1210
+ systemInstruction,
1211
+ contents: options.messages,
1212
+ tools: options.tools
977
1213
  };
978
1214
 
979
1215
  try {
1216
+ if (config.debug) {
1217
+ log.debug("config");
1218
+ log.info(config);
1219
+ log.debug("payload");
1220
+ log.inspect(payload);
1221
+ }
1222
+
980
1223
  if (options.stream) {
981
1224
  throw new Error('Stream is not supported for Gemini');
982
1225
  } else {
@@ -989,9 +1232,59 @@ class MixGoogle extends MixCustom {
989
1232
  }
990
1233
  }
991
1234
 
992
- extractMessage(data) {
1235
+ processResponse(response) {
1236
+ return {
1237
+ message: MixGoogle.extractMessage(response.data),
1238
+ think: null,
1239
+ toolCalls: MixGoogle.extractToolCalls(response.data),
1240
+ response: response.data
1241
+ }
1242
+ }
1243
+
1244
+ static extractToolCalls(data) {
1245
+ return data.candidates?.[0]?.content?.parts?.map(part => {
1246
+ if (part.functionCall) {
1247
+ return {
1248
+ id: part.functionCall.id,
1249
+ type: 'function',
1250
+ function: {
1251
+ name: part.functionCall.name,
1252
+ arguments: JSON.stringify(part.functionCall.args)
1253
+ }
1254
+ };
1255
+ }
1256
+ return null;
1257
+ }).filter(item => item !== null) || [];
1258
+ }
1259
+
1260
+ static extractMessage(data) {
993
1261
  return data.candidates?.[0]?.content?.parts?.[0]?.text;
994
1262
  }
1263
+
1264
+ static getOptionsTools(tools) {
1265
+ const functionDeclarations = [];
1266
+ for (const tool in tools) {
1267
+ for (const item of tools[tool]) {
1268
+ functionDeclarations.push({
1269
+ name: item.name,
1270
+ description: item.description,
1271
+ parameters: item.inputSchema
1272
+ });
1273
+ }
1274
+ }
1275
+
1276
+ const options = {
1277
+ tools: [{
1278
+ functionDeclarations
1279
+ }]
1280
+ };
1281
+
1282
+ return options;
1283
+ }
1284
+
1285
+ getOptionsTools(tools) {
1286
+ return MixGoogle.getOptionsTools(tools);
1287
+ }
995
1288
  }
996
1289
 
997
1290
  module.exports = { MixCustom, ModelMix, MixAnthropic, MixOpenAI, MixPerplexity, MixOllama, MixLMStudio, MixGroq, MixTogether, MixGrok, MixCerebras, MixGoogle };
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "modelmix",
3
- "version": "3.3.8",
3
+ "version": "3.5.0",
4
4
  "description": "🧬 ModelMix - Unified API for Diverse AI LLM.",
5
5
  "main": "index.js",
6
6
  "repository": {
@@ -8,6 +8,7 @@
8
8
  "url": "git+https://github.com/clasen/ModelMix.git"
9
9
  },
10
10
  "keywords": [
11
+ "mcp",
11
12
  "llm",
12
13
  "ai",
13
14
  "model",
@@ -48,6 +49,7 @@
48
49
  },
49
50
  "homepage": "https://github.com/clasen/ModelMix#readme",
50
51
  "dependencies": {
52
+ "@modelcontextprotocol/sdk": "^1.11.2",
51
53
  "axios": "^1.8.4",
52
54
  "bottleneck": "^2.19.5",
53
55
  "lemonlog": "^1.1.2"