modelmix 4.4.4 → 4.4.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -47,7 +47,7 @@ import { ModelMix } from 'modelmix';
 
 // Get structured JSON responses
 const model = ModelMix.new()
- .sonnet45() // Anthropic claude-sonnet-4-20250514
+ .sonnet46() // Anthropic claude-sonnet-4-6
 .addText("Name and capital of 3 South American countries.");
 
 const outputExample = { countries: [{ name: "", capital: "" }] };
@@ -65,7 +65,7 @@ const setup = {
 };
 
 const model = await ModelMix.new(setup)
- .sonnet45() // (main model) Anthropic claude-sonnet-4-5-20250929
+ .sonnet46() // (main model) Anthropic claude-sonnet-4-6
 .gpt5mini() // (fallback 2) OpenAI gpt-5-mini
 .gemini3flash({ config: { temperature: 0 } }) // (fallback 3) Google gemini-3-flash
 .grok3mini() // (fallback 4) Grok grok-3-mini
@@ -146,9 +146,8 @@ Here's a comprehensive list of available methods:
 | `gptOss()` | Together | gpt-oss-120B | [\$0.15 / \$0.60][7] |
 | `opus46[think]()` | Anthropic | claude-opus-4-6 | [\$5.00 / \$25.00][2] |
 | `opus45[think]()` | Anthropic | claude-opus-4-5-20251101 | [\$5.00 / \$25.00][2] |
- | `opus41[think]()` | Anthropic | claude-opus-4-1-20250805 | [\$15.00 / \$75.00][2] |
+ | `sonnet46[think]()`| Anthropic | claude-sonnet-4-6 | [\$3.00 / \$15.00][2] |
 | `sonnet45[think]()`| Anthropic | claude-sonnet-4-5-20250929 | [\$3.00 / \$15.00][2] |
- | `sonnet4[think]()` | Anthropic | claude-sonnet-4-20250514 | [\$3.00 / \$15.00][2] |
 | `haiku35()` | Anthropic | claude-3-5-haiku-20241022 | [\$0.80 / \$4.00][2] |
 | `haiku45[think]()` | Anthropic | claude-haiku-4-5-20251001 | [\$1.00 / \$5.00][2] |
 | `gemini3pro()` | Google | gemini-3-pro-preview | [\$2.00 / \$12.00][3] |
@@ -208,7 +207,7 @@ ModelMix includes a simple but powerful templating system. You can write your sy
 ### Basic example with `replace`
 
 ```javascript
- const gpt = ModelMix.new().gpt5mini();
+ const gpt = ModelMix.new().gpt52();
 
 gpt.addText('Write a short story about a {animal} that lives in {place}.');
 gpt.replace({ '{animal}': 'cat', '{place}': 'a haunted castle' });
@@ -325,19 +324,14 @@ console.log(result);
 
 ### Adding field descriptions
 
- The second argument lets you describe each field so the model understands exactly what you expect:
+ The second argument lets you describe each field so the model understands exactly what you expect. Descriptions can be **strings** (simple) or **descriptor objects** (with metadata):
 
 ```javascript
- const model = ModelMix.new()
- .gpt5mini()
- .addText('Name and capital of 3 South American countries.');
-
 const result = await model.json(
 { countries: [{ name: "Argentina", capital: "BUENOS AIRES" }] },
 { countries: [{ name: "name of the country", capital: "capital of the country in uppercase" }] },
 { addNote: true }
 );
- console.log(result);
 // { countries: [
 //   { name: "Brazil", capital: "BRASILIA" },
 //   { name: "Colombia", capital: "BOGOTA" },
@@ -345,7 +339,40 @@ console.log(result);
 // ]}
 ```
 
- The example values (like `"Argentina"` and `"BUENOS AIRES"`) show the model the expected format, while the descriptions clarify what each field should contain.
+ ### Enhanced descriptors
+
+ Descriptions support **descriptor objects** with `description`, `required`, `enum`, and `default`:
+
+ ```javascript
+ const result = await model.json(
+ { name: 'martin', age: 22, sex: 'm' },
+ {
+ name: { description: 'Name of the actor', required: false },
+ age: 'Age of the actor', // string still works
+ sex: { description: 'Gender', enum: ['m', 'f', null], default: 'm' }
+ }
+ );
+ ```
+
+ | Property | Type | Default | Description |
+ | --- | --- | --- | --- |
+ | `description` | `string` | — | Field description for the model |
+ | `required` | `boolean` | `true` | If `false`, field is removed from `required` and type becomes nullable |
+ | `enum` | `array` | — | Allowed values. If includes `null`, type auto-becomes nullable |
+ | `default` | `any` | — | Default value for the field |
+
+ You can mix strings and descriptor objects freely in the same descriptions parameter.
+
+ ### Array auto-wrap
+
+ When you pass a top-level array as the example, ModelMix automatically wraps it for better LLM compatibility and unwraps the result transparently:
+
+ ```javascript
+ const result = await model.json([{ name: 'martin' }]);
+ // result is an array: [{ name: "Martin" }, { name: "Carlos" }, ...]
+ ```
+
+ Internally, the array is wrapped as `{ out: [...] }` so the model receives a proper object schema, then `result.out` is returned automatically.
 
 ### Options
 
@@ -491,15 +518,16 @@ new ModelMix(args = { options: {}, config: {} })
 - `tokens`: Object with `input`, `output`, and `total` token counts
 - `response`: The raw API response
 - `stream(callback)`: Sends the message and streams the response, invoking the callback with each streamed part.
- - `json(schemaExample, descriptions = {})`: Forces the model to return a response in a specific JSON format.
- - `schemaExample`: Optional example of the JSON structure to be returned.
- - `descriptions`: Optional descriptions for each field in the JSON structure
+ - `json(schemaExample, descriptions = {}, options = {})`: Forces the model to return a response in a specific JSON format.
+ - `schemaExample`: Example of the JSON structure to be returned. Top-level arrays are auto-wrapped for better LLM compatibility.
+ - `descriptions`: Descriptions for each field can be strings or descriptor objects with `{ description, required, enum, default }`.
+ - `options`: `{ addSchema: true, addExample: false, addNote: false }`
 - Returns a Promise that resolves to the structured JSON response
 - Example:
 ```javascript
 const response = await handler.json(
 { time: '24:00:00', message: 'Hello' },
- { time: 'Time in format HH:MM:SS' }
+ { time: 'Time in format HH:MM:SS', message: { description: 'Greeting', required: false } }
 );
 ```
 - `block({ addText = true })`: Forces the model to return a response in a specific block format.
package/demo/json.js CHANGED
@@ -2,7 +2,7 @@ process.loadEnvFile();
 import { ModelMix } from '../index.js';
 
 const model = await ModelMix.new({ options: { max_tokens: 10000 }, config: { debug: 3 } })
- .gemini3flash()
+ .sonnet46()
 // .gptOss()
 // .scout({ config: { temperature: 0 } })
 // .o4mini()
@@ -11,17 +11,19 @@ const model = await ModelMix.new({ options: { max_tokens: 10000 }, config: { deb
 // .gemini25flash()
 .addText("Name and capital of 3 South American countries.")
 
- const jsonResult = await model.json({
- countries: [{
- name: "Argentina",
- capital: "BUENOS AIRES"
- }]
+ const jsonResult = await model.json([{
+ name: "Argentina",
+ capital: "BUENOS AIRES"
 }, {
- countries: [{
- name: "name of the country",
- capital: "capital of the country in uppercase"
- }]
- }, { addNote: true });
+ name: "Brazil",
+ capital: "BRASILIA"
+ }, {
+ name: "Colombia",
+ capital: "BOGOTA"
+ }], [{
+ name: { description: "name of the country", enum: ["Perú", "Colombia", "Argentina"] },
+ capital: "capital of the country in uppercase"
+ }], { addNote: true });
 
 console.log(jsonResult);
 console.log(model.lastRaw.tokens);
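The demo above now passes a top-level array as the example (plus an array of descriptors). A minimal standalone sketch of the auto-wrap behavior it relies on — not the library itself, just the same wrap/unwrap logic in isolation:

```javascript
// Sketch: json() wraps a top-level array example as { out: [...] } so the
// model receives a proper object schema, then unwraps the reply transparently.
function wrapForSchema(schemaExample, schemaDescription) {
  let isArrayWrap = false;
  if (Array.isArray(schemaExample)) {
    isArrayWrap = true;
    schemaExample = { out: schemaExample };
    if (Array.isArray(schemaDescription)) {
      schemaDescription = { out: schemaDescription };
    }
  }
  return { isArrayWrap, schemaExample, schemaDescription };
}

function unwrapResult(parsed, isArrayWrap) {
  return isArrayWrap ? parsed.out : parsed;
}

const { isArrayWrap, schemaExample } = wrapForSchema(
  [{ name: 'Argentina' }],
  [{ name: 'name of the country' }]
);
console.log(schemaExample);        // { out: [ { name: 'Argentina' } ] }
console.log(unwrapResult({ out: [{ name: 'Brazil' }] }, isArrayWrap)); // [ { name: 'Brazil' } ]
```

Callers never see the `out` wrapper: an array in means an array out.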
package/index.js CHANGED
@@ -31,6 +31,7 @@ const MODEL_PRICING = {
 'claude-opus-4-6': [5.00, 25.00],
 'claude-opus-4-5-20251101': [5.00, 25.00],
 'claude-opus-4-1-20250805': [15.00, 75.00],
+ 'claude-sonnet-4-6': [3.00, 15.00],
 'claude-sonnet-4-5-20250929': [3.00, 15.00],
 'claude-sonnet-4-20250514': [3.00, 15.00],
 'claude-3-5-haiku-20241022': [0.80, 4.00],
@@ -112,7 +113,7 @@ class ModelMix {
 
 this.config = {
 system: 'You are an assistant.',
- max_history: 1, // Default max history
+ max_history: 0, // 0=no history (stateless), N=keep last N messages, -1=unlimited
 debug: 0, // 0=silent, 1=minimal, 2=readable summary, 3=full (no truncate), 4=verbose (raw details)
 bottleneck: defaultBottleneckConfig,
 roundRobin: false, // false=fallback mode, true=round robin rotation
@@ -305,6 +306,14 @@ class ModelMix {
 options = { ...MixAnthropic.thinkingOptions, ...options };
 return this.attach('claude-sonnet-4-20250514', new MixAnthropic({ options, config }));
 }
+ sonnet46({ options = {}, config = {} } = {}) {
+ return this.attach('claude-sonnet-4-6', new MixAnthropic({ options, config }));
+ }
+ sonnet46think({ options = {}, config = {} } = {}) {
+ options = { ...MixAnthropic.thinkingOptions, ...options };
+ return this.attach('claude-sonnet-4-6', new MixAnthropic({ options, config }));
+ }
+
 sonnet45({ options = {}, config = {} } = {}) {
 return this.attach('claude-sonnet-4-5-20250929', new MixAnthropic({ options, config }));
 }
@@ -640,6 +649,15 @@ class ModelMix {
 
 async json(schemaExample = null, schemaDescription = {}, { type = 'json_object', addExample = false, addSchema = true, addNote = false } = {}) {
 
+ let isArrayWrap = false;
+ if (Array.isArray(schemaExample)) {
+ isArrayWrap = true;
+ schemaExample = { out: schemaExample };
+ if (Array.isArray(schemaDescription)) {
+ schemaDescription = { out: schemaDescription };
+ }
+ }
+
 let options = {
 response_format: { type },
 stream: false,
@@ -666,7 +684,8 @@
 }
 }
 const { message } = await this.execute({ options, config });
- return JSON.parse(this._extractBlock(message));
+ const parsed = JSON.parse(this._extractBlock(message));
+ return isArrayWrap ? parsed.out : parsed;
 }
 
 _extractBlock(response) {
@@ -760,26 +779,26 @@
 await this.processImages();
 this.applyTemplate();
 
- // Smart message slicing to preserve tool call sequences
+ // Smart message slicing based on max_history:
+ // 0 = no history (stateless), N = keep last N messages, -1 = unlimited
 if (this.config.max_history > 0) {
 let sliceStart = Math.max(0, this.messages.length - this.config.max_history);
 
- // If we're slicing and there's a tool message at the start,
- // ensure we include the preceding assistant message with tool_calls
- while (sliceStart > 0 &&
- sliceStart < this.messages.length &&
- this.messages[sliceStart].role === 'tool') {
- sliceStart--;
- // Also need to include the assistant message with tool_calls
- if (sliceStart > 0 &&
- this.messages[sliceStart].role === 'assistant' &&
- this.messages[sliceStart].tool_calls) {
+ // If we're slicing into the middle of a tool interaction,
+ // backtrack to include the full sequence (user → assistant/tool_calls → tool results)
+ while (sliceStart > 0 && sliceStart < this.messages.length) {
+ const msg = this.messages[sliceStart];
+ if (msg.role === 'tool' || (msg.role === 'assistant' && msg.tool_calls)) {
+ sliceStart--;
+ } else {
 break;
 }
 }
 
 this.messages = this.messages.slice(sliceStart);
 }
+ // max_history = -1: unlimited, no slicing
+ // max_history = 0: no history, messages only contain what was added since last call
 
 this.messages = this.groupByRoles(this.messages);
 this.options.messages = this.messages;
@@ -900,11 +919,12 @@
 this.messages.push({
 role: 'tool',
 tool_call_id: toolResult.tool_call_id,
+ name: toolResult.name,
 content: toolResult.content
 });
 }
 
- return this.execute();
+ return this.execute({ options, config });
 }
 
 // debug level 1: Just success indicator
@@ -941,6 +961,29 @@
 if (currentConfig.debug >= 1) console.log('');
 
 this.lastRaw = result;
+
+ // Manage conversation history based on max_history setting
+ if (this.config.max_history === 0) {
+ // Stateless: clear messages so next call starts fresh
+ this.messages = [];
+ } else if (result.message) {
+ // Persist assistant response for multi-turn conversations
+ if (result.signature) {
+ this.messages.push({
+ role: "assistant", content: [{
+ type: "thinking",
+ thinking: result.think,
+ signature: result.signature
+ }, {
+ type: "text",
+ text: result.message
+ }]
+ });
+ } else {
+ this.addText(result.message, { role: "assistant" });
+ }
+ }
+
 return result;
 
 } catch (error) {
@@ -1043,7 +1086,7 @@
 return;
 }
 
- if (this.config.max_history < 3) {
+ if (this.config.max_history >= 0 && this.config.max_history < 3) {
 log.warn(`MCP ${key} requires at least 3 max_history. Setting to 3.`);
 this.config.max_history = 3;
 }
@@ -1079,7 +1122,7 @@
 
 addTool(toolDefinition, callback) {
 
- if (this.config.max_history < 3) {
+ if (this.config.max_history >= 0 && this.config.max_history < 3) {
 log.warn(`MCP ${toolDefinition.name} requires at least 3 max_history. Setting to 3.`);
 this.config.max_history = 3;
 }
@@ -1531,6 +1574,18 @@ class MixAnthropic extends MixCustom {
 
 return filteredMessages.map(message => {
 if (message.role === 'tool') {
+ // Handle new format: tool_call_id directly on message
+ if (message.tool_call_id) {
+ return {
+ role: "user",
+ content: [{
+ type: "tool_result",
+ tool_use_id: message.tool_call_id,
+ content: message.content
+ }]
+ }
+ }
+ // Handle old format: content is an array
 return {
 role: "user",
 content: message.content.map(content => ({
@@ -1961,13 +2016,33 @@ class MixGoogle extends MixCustom {
 if (message.role === 'assistant' && message.tool_calls) {
 return {
 role: 'model',
- parts: message.tool_calls.map(toolCall => ({
- functionCall: {
- name: toolCall.function.name,
- args: JSON.parse(toolCall.function.arguments)
- },
- thought_signature: toolCall.thought_signature || ""
- }))
+ parts: message.tool_calls.map(toolCall => {
+ const part = {
+ functionCall: {
+ name: toolCall.function.name,
+ args: JSON.parse(toolCall.function.arguments)
+ }
+ };
+ if (toolCall.thought_signature) {
+ part.thoughtSignature = toolCall.thought_signature;
+ }
+ return part;
+ })
+ }
+ }
+
+ // Handle new tool result format: tool_call_id and name directly on message
+ if (message.role === 'tool' && message.name) {
+ return {
+ role: 'user',
+ parts: [{
+ functionResponse: {
+ name: message.name,
+ response: {
+ output: message.content,
+ },
+ }
+ }]
 }
 }
 
@@ -1975,6 +2050,7 @@
 const role = (message.role === 'assistant' || message.role === 'tool') ? 'model' : 'user'
 
 if (message.role === 'tool') {
+ // Handle old format: content is an array of {name, content}
 return {
 role,
 parts: message.content.map(content => ({
@@ -2017,6 +2093,22 @@
 })
 }
 });
+
+ // Merge consecutive user messages containing only functionResponse parts
+ // Google requires all function responses for a turn in a single message
+ return converted.reduce((acc, msg) => {
+ if (acc.length > 0) {
+ const prev = acc[acc.length - 1];
+ if (prev.role === 'user' && msg.role === 'user' &&
+ prev.parts.every(p => p.functionResponse) &&
+ msg.parts.every(p => p.functionResponse)) {
+ prev.parts.push(...msg.parts);
+ return acc;
+ }
+ }
+ acc.push(msg);
+ return acc;
+ }, []);
 }
 
 async create({ config = {}, options = {} } = {}) {
@@ -2042,7 +2134,13 @@ class MixGoogle extends MixCustom {
 generationConfig.topP = options.top_p;
 }
 
- generationConfig.responseMimeType = "text/plain";
+ // Gemini does not support responseMimeType when function calling is used
+ const hasTools = options.tools && options.tools.length > 0 &&
+ options.tools.some(t => t.functionDeclarations && t.functionDeclarations.length > 0);
+
+ if (!hasTools) {
+ generationConfig.responseMimeType = "text/plain";
+ }
 
 const payload = {
 generationConfig,
@@ -2124,6 +2222,21 @@
 };
 }
 
+ static stripUnsupportedSchemaProps(schema) {
+ if (!schema || typeof schema !== 'object') return schema;
+ const cleaned = { ...schema };
+ delete cleaned.default;
+ if (cleaned.properties) {
+ cleaned.properties = Object.fromEntries(
+ Object.entries(cleaned.properties).map(([key, value]) => [key, MixGoogle.stripUnsupportedSchemaProps(value)])
+ );
+ }
+ if (cleaned.items) {
+ cleaned.items = MixGoogle.stripUnsupportedSchemaProps(cleaned.items);
+ }
+ return cleaned;
+ }
+
 static getOptionsTools(tools) {
 const functionDeclarations = [];
 for (const tool in tools) {
@@ -2131,7 +2244,7 @@
 functionDeclarations.push({
 name: item.name,
 description: item.description,
- parameters: item.inputSchema
+ parameters: MixGoogle.stripUnsupportedSchemaProps(item.inputSchema)
 });
 }
 }
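The rewritten `max_history` slicing in the diff above can be exercised on its own. A sketch (message shape simplified to `{ role, tool_calls? }` for illustration; not the library itself):

```javascript
// Standalone sketch of the new max_history slicing: N > 0 keeps the last N
// messages, but backtracks so a tool-call sequence is never cut in half.
// 0 means stateless (messages are cleared after execute), -1 means unlimited.
function sliceHistory(messages, maxHistory) {
  if (maxHistory <= 0) return messages;
  let sliceStart = Math.max(0, messages.length - maxHistory);
  while (sliceStart > 0 && sliceStart < messages.length) {
    const msg = messages[sliceStart];
    if (msg.role === 'tool' || (msg.role === 'assistant' && msg.tool_calls)) {
      sliceStart--; // back up to include the whole tool interaction
    } else {
      break;
    }
  }
  return messages.slice(sliceStart);
}

const history = [
  { role: 'user' },
  { role: 'assistant', tool_calls: [{ id: '1' }] },
  { role: 'tool' },
  { role: 'user' }
];
// "Last 2" would start at the tool result, so the slice backtracks
// past the assistant tool_calls message and keeps all four:
console.log(sliceHistory(history, 2).length); // 4
```

This explains why `addTool()`/MCP bump `max_history` to at least 3: a tool round-trip is never shorter than user → assistant(tool_calls) → tool.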
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
 "name": "modelmix",
- "version": "4.4.4",
+ "version": "4.4.7",
 "description": "🧬 Reliable interface with automatic fallback for AI LLMs.",
 "main": "index.js",
 "repository": {
@@ -73,4 +73,4 @@
 "test:live.mcp": "mocha test/live.mcp.js --timeout 60000 --require dotenv/config --require test/setup.js",
 "test:tokens": "mocha test/tokens.test.js --timeout 10000 --require dotenv/config --require test/setup.js"
 }
- }
+ }
package/schema.js CHANGED
@@ -1,3 +1,29 @@
+ const META_KEYS = new Set(['description', 'required', 'enum', 'default', 'nullable']);
+
+ function isDescriptor(value) {
+ if (!value || typeof value !== 'object' || Array.isArray(value)) return false;
+ const keys = Object.keys(value);
+ return keys.length > 0 && keys.every(k => META_KEYS.has(k));
+ }
+
+ function makeNullable(fieldSchema) {
+ if (!fieldSchema.type) return fieldSchema;
+ if (Array.isArray(fieldSchema.type)) {
+ if (!fieldSchema.type.includes('null')) fieldSchema.type.push('null');
+ } else {
+ fieldSchema.type = [fieldSchema.type, 'null'];
+ }
+ return fieldSchema;
+ }
+
+ function getNestedDescriptions(desc) {
+ if (!desc) return {};
+ if (typeof desc === 'string') return {};
+ if (Array.isArray(desc)) return desc[0] || {};
+ if (isDescriptor(desc)) return {};
+ return desc;
+ }
+
 function generateJsonSchema(example, descriptions = {}) {
 function detectType(key, value) {
 if (value === null) return { type: 'null' };
@@ -32,7 +58,7 @@ function generateJsonSchema(example, descriptions = {}) {
 if (typeof value[0] === 'object' && !Array.isArray(value[0])) {
 return {
 type: 'array',
- items: generateJsonSchema(value[0], descriptions[key] || {})
+ items: generateJsonSchema(value[0], getNestedDescriptions(descriptions[key]))
 };
 } else {
 return {
@@ -42,7 +68,7 @@ function generateJsonSchema(example, descriptions = {}) {
 }
 }
 if (typeof value === 'object') {
- return generateJsonSchema(value, descriptions[key] || {});
+ return generateJsonSchema(value, getNestedDescriptions(descriptions[key]));
 }
 return {};
 }
@@ -65,13 +91,31 @@ function generateJsonSchema(example, descriptions = {}) {
 
 for (const key in example) {
 const fieldSchema = detectType(key, example[key]);
+ const desc = descriptions[key];
+ let isRequired = true;
 
- if (descriptions[key] && typeof fieldSchema === 'object') {
- fieldSchema.description = descriptions[key];
+ if (desc) {
+ if (typeof desc === 'string') {
+ fieldSchema.description = desc;
+ } else if (typeof desc === 'object' && !Array.isArray(desc) && isDescriptor(desc)) {
+ if (desc.description) fieldSchema.description = desc.description;
+ if (desc.enum) fieldSchema.enum = desc.enum;
+ if (desc.default !== undefined) fieldSchema.default = desc.default;
+ if (desc.required === false) {
+ isRequired = false;
+ makeNullable(fieldSchema);
+ }
+ if (desc.nullable === true) {
+ makeNullable(fieldSchema);
+ }
+ if (desc.enum && desc.enum.includes(null)) {
+ makeNullable(fieldSchema);
+ }
+ }
 }
 
 schema.properties[key] = fieldSchema;
- schema.required.push(key);
+ if (isRequired) schema.required.push(key);
 }
 
 return schema;
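The helper functions added to schema.js above are small enough to exercise directly. `isDescriptor` decides whether a value is schema metadata or nested field descriptions, and `makeNullable` widens a field's type; both copied from the diff, with illustrative calls:

```javascript
const META_KEYS = new Set(['description', 'required', 'enum', 'default', 'nullable']);

// A value is a descriptor object only if every key is schema metadata;
// anything else is treated as nested field descriptions.
function isDescriptor(value) {
  if (!value || typeof value !== 'object' || Array.isArray(value)) return false;
  const keys = Object.keys(value);
  return keys.length > 0 && keys.every(k => META_KEYS.has(k));
}

// Widens a field's type to allow null (used for required: false,
// nullable: true, and enums that include null).
function makeNullable(fieldSchema) {
  if (!fieldSchema.type) return fieldSchema;
  if (Array.isArray(fieldSchema.type)) {
    if (!fieldSchema.type.includes('null')) fieldSchema.type.push('null');
  } else {
    fieldSchema.type = [fieldSchema.type, 'null'];
  }
  return fieldSchema;
}

console.log(isDescriptor({ description: 'Gender', enum: ['m', 'f'] })); // true
console.log(isDescriptor({ name: 'nested field description' }));        // false
console.log(makeNullable({ type: 'string' }));                          // { type: [ 'string', 'null' ] }
```

The metadata-keys-only rule is what lets strings, nested objects, and descriptor objects coexist in one descriptions parameter without ambiguity.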
@@ -75,8 +75,8 @@ Chain shorthand methods to attach providers. First model is primary; others are
 
 ```javascript
 const model = ModelMix.new()
- .sonnet45() // primary
- .gpt5mini() // fallback 1
+ .sonnet46() // primary
+ .gpt52() // fallback 1
 .gemini3flash() // fallback 2
 .addText("Hello!")
 ```
@@ -86,7 +86,7 @@ If `sonnet45` fails, it automatically tries `gpt5mini`, then `gemini3flash`.
 ## Available Model Shorthands
 
 - **OpenAI**: `gpt52` `gpt51` `gpt5` `gpt5mini` `gpt5nano` `gpt41` `gpt41mini` `gpt41nano`
- - **Anthropic**: `opus46` `opus45` `sonnet45` `sonnet4` `haiku45` `haiku35` (thinking variants: add `think` suffix)
+ - **Anthropic**: `opus46` `opus45` `sonnet46` `sonnet45` `haiku45` `haiku35` (thinking variants: add `think` suffix)
 - **Google**: `gemini3pro` `gemini3flash` `gemini25pro` `gemini25flash`
 - **Grok**: `grok4` `grok41` (thinking variant available)
 - **Perplexity**: `sonar` `sonarPro`
@@ -125,6 +125,36 @@ const result = await ModelMix.new()
 
 `json()` signature: `json(schemaExample, schemaDescription?, { addSchema, addExample, addNote }?)`
 
+ #### Enhanced descriptors
+
+ Descriptions can be **strings** or **descriptor objects** with metadata:
+
+ ```javascript
+ const result = await model.json(
+ { name: 'martin', age: 22, sex: 'Male' },
+ {
+ name: { description: 'Name of the actor', required: false },
+ age: 'Age of the actor', // string still works
+ sex: { description: 'Gender', enum: ['Male', 'Female', null] }
+ }
+ );
+ ```
+
+ Descriptor properties:
+ - `description` (string) — field description
+ - `required` (boolean, default `true`) — if `false`: removed from required array, type becomes nullable
+ - `enum` (array) — allowed values; if includes `null`, type auto-becomes nullable
+ - `default` (any) — default value
+
+ #### Array auto-wrap
+
+ Top-level arrays are auto-wrapped as `{ out: [...] }` for better LLM compatibility, and unwrapped on return:
+
+ ```javascript
+ const result = await model.json([{ name: 'martin' }]);
+ // result is an array: [{ name: "Martin" }, { name: "Carlos" }, ...]
+ ```
+
 ### Stream a response
 
 ```javascript
@@ -282,7 +312,7 @@ const reply = await chat.message(); // "Martin"
 - Store API keys in `.env` and load with `dotenv/config` or `process.loadEnvFile()`. Never hardcode keys.
 - Chain models for resilience: primary model first, fallbacks after.
 - When using MCP tools or `addTool()`, set `max_history` to at least 3.
- - Use `.json()` for structured output instead of parsing text manually.
+ - Use `.json()` for structured output instead of parsing text manually. Use descriptor objects `{ description, required, enum, default }` in descriptions for richer schema control.
 - Use `.message()` for simple text, `.raw()` when you need tokens/thinking/toolCalls.
 - For thinking models, append `think` to the method name (e.g. `sonnet45think()`).
 - Template placeholders use `{key}` syntax in both system prompts and user messages.
@@ -302,7 +332,7 @@ const reply = await chat.message(); // "Martin"
 | `.replace({})` | `this` | Set placeholder replacements |
 | `.replaceKeyFromFile(key, path)` | `this` | Replace placeholder with file content |
 | `.message()` | `Promise<string>` | Get text response |
- | `.json(example, desc?, opts?)` | `Promise<object>` | Get structured JSON |
+ | `.json(example, desc?, opts?)` | `Promise<object\|array>` | Get structured JSON. Descriptions support descriptor objects `{ description, required, enum, default }`. Top-level arrays auto-wrapped |
 | `.raw()` | `Promise<{message, think, toolCalls, tokens, response}>` | Full response |
 | `.lastRaw` | `object \| null` | Full response from last `message()`/`json()`/`block()`/`stream()` call |
 | `.stream(callback)` | `Promise` | Stream response |