@aihubmix/ai-sdk-provider 2.0.3 → 2.0.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,35 +1,5 @@
  # @aihubmix/ai-sdk-provider
 
- ## 2.0.2
-
- ### Patch Changes
-
- b6c5f35: - Downgraded Zod to ^3.25.76 to keep stability with the AI SDK ecosystem.
- Removed debug logging.
- 7a9e37e: - Added `aihubmixChatProviderOptionsSchema` and `aihubmixResponsesProviderOptionsSchema` to support all Vercel AI SDK OpenAI provider options.
- Exported the option types and schemas from the main entry point.
- Improved the model configuration for `gpt-5-pro` and `gpt-5-codex`.
- Enhanced `AihubmixTranscriptionModel` with better file-naming logic.
-
- ## 2.0.0
-
- ### Major Changes
-
- 2dc97a3: Upgraded to AI SDK v6
-
- Upgraded all AI SDK dependencies to v6-compatible versions
- Upgraded the provider interface from V2 to V3 (LanguageModelV3, ProviderV3, etc.)
- Updated the tool factory function API (createProviderToolFactory)
- Added the specificationVersion and embeddingModel properties
- Bumped the version number to 1.0.0
-
- ### Patch Changes
-
- f3b2c86: - Added `aihubmixChatProviderOptionsSchema` and `aihubmixResponsesProviderOptionsSchema` to support all Vercel AI SDK OpenAI provider options.
- Exported the option types and schemas from the main entry point.
- Improved the model configuration for `gpt-5-pro` and `gpt-5-codex`.
- Enhanced `AihubmixTranscriptionModel` with better file-naming logic.
-
  ## 1.0.1
 
  ### Patch Changes
package/README.ja.md CHANGED
@@ -14,7 +14,17 @@ Built-in app-code; requesting models this way
  The **[Aihubmix provider](https://sdk.vercel.ai/providers/community-providers/aihubmix)** for the [AI SDK](https://ai-sdk.dev/docs)
  One gateway, infinite models; one-stop requests: OpenAI, Claude, Gemini, DeepSeek, Qwen, and more than 500 AI models.
 
- > **📦 Version 1.0.1** - Compatible with AI SDK v6
+ ## Setup for AI SDK v6
+
+ ```bash
+ npm i @aihubmix/ai-sdk-provider
+ ```
+
+ ### (LEGACY) Setup for AI SDK v5
+
+ ```bash
+ npm i @aihubmix/ai-sdk-provider@0.0.6
+ ```
 
  ## Supported Features
 
@@ -30,14 +40,6 @@ The Aihubmix provider supports the following AI features:
  - **Transcription**: speech-to-text conversion
  - **Tools**: web search and other tools
 
- ## Setup
-
- The Aihubmix provider is available in the `@aihubmix/ai-sdk-provider` module. You can install it from [@aihubmix/ai-sdk-provider](https://www.npmjs.com/package/@aihubmix/ai-sdk-provider)
-
- ```bash
- npm i @aihubmix/ai-sdk-provider
- ```
-
  ## Provider Instance
 
  You can import the default provider instance `aihubmix` from `@aihubmix/ai-sdk-provider`:
@@ -299,7 +301,7 @@ const { text } = await generateText({
 
  ## Additional Resources
 
- - [Aihubmix Provider Repository](https://github.com/inferera/aihubmix)
+ - [Aihubmix Provider Repository](https://github.com/AIhubmix/ai-sdk-provider)
  - [Aihubmix Documentation](https://docs.aihubmix.com/en)
  - [Aihubmix Dashboard](https://aihubmix.com)
  - [Aihubmix Business Cooperation](mailto:business@aihubmix.com)
package/README.md CHANGED
@@ -14,8 +14,17 @@ Built-in app-code; using this method to request all models offers a 10% discount
  The **[Aihubmix provider](https://sdk.vercel.ai/providers/community-providers/aihubmix)** for the [AI SDK](https://ai-sdk.dev/docs)
  One Gateway, Infinite Models; one-stop request: OpenAI, Claude, Gemini, DeepSeek, Qwen, and over 500 AI models.
 
- > **📦 Version 1.0.1** - Compatible with AI SDK v6
+ ## Setup for AI SDK v6
 
+ ```bash
+ npm i @aihubmix/ai-sdk-provider
+ ```
+
+ ### (LEGACY) Setup for AI SDK v5
+
+ ```bash
+ npm i @aihubmix/ai-sdk-provider@0.0.6
+ ```
 
  ## Supported Features
 
@@ -32,14 +41,6 @@ The Aihubmix provider supports the following AI features:
  - **Tools**: Web search and other tools
 
 
- ## Setup
-
- The Aihubmix provider is available in the `@aihubmix/ai-sdk-provider` module. You can install it with [@aihubmix/ai-sdk-provider](https://www.npmjs.com/package/@aihubmix/ai-sdk-provider)
-
- ```bash
- npm i @aihubmix/ai-sdk-provider
- ```
-
  ## Provider Instance
 
  You can import the default provider instance `aihubmix` from `@aihubmix/ai-sdk-provider`:
@@ -302,7 +303,7 @@ const { text } = await generateText({
 
  ## Additional Resources
 
- - [Aihubmix Provider Repository](https://github.com/inferera/aihubmix)
+ - [Aihubmix Provider Repository](https://github.com/AIhubmix/ai-sdk-provider)
  - [Aihubmix Documentation](https://docs.aihubmix.com/en)
  - [Aihubmix Dashboard](https://aihubmix.com)
  - [Aihubmix Cooperation](mailto:business@aihubmix.com)
package/README.zh.md CHANGED
@@ -14,7 +14,17 @@
  The **[Aihubmix provider](https://sdk.vercel.ai/providers/community-providers/aihubmix)** for the [AI SDK](https://ai-sdk.dev/docs)
  One gateway, infinite models; one-stop requests: OpenAI, Claude, Gemini, DeepSeek, Qwen, and more than 500 AI models.
 
- > **📦 Version 1.0.1** - Compatible with AI SDK v6
+ ## Install the AI SDK v6 version
+
+ ```bash
+ npm i @aihubmix/ai-sdk-provider
+ ```
+
+ ### (Legacy) Install the AI SDK v5 version
+
+ ```bash
+ npm i @aihubmix/ai-sdk-provider@0.0.6
+ ```
 
  ## Supported Features
 
@@ -30,14 +40,6 @@ The Aihubmix provider supports the following AI features:
  - **Transcription**: speech-to-text conversion
  - **Tools**: web search and other tools
 
- ## Setup
-
- Aihubmix is available in the `@aihubmix/ai-sdk-provider` module. You can install it via [@aihubmix/ai-sdk-provider](https://www.npmjs.com/package/@aihubmix/ai-sdk-provider)
-
- ```bash
- npm i @aihubmix/ai-sdk-provider
- ```
-
  ## Provider Instance
 
  You can import the default provider instance `aihubmix` from `@aihubmix/ai-sdk-provider`:
@@ -299,7 +301,7 @@ const { text } = await generateText({
 
  ## Additional Resources
 
- - [Aihubmix Provider Repository](https://github.com/inferera/aihubmix)
+ - [Aihubmix Provider Repository](https://github.com/AIhubmix/ai-sdk-provider)
  - [Aihubmix Documentation](https://docs.aihubmix.com/en)
  - [Aihubmix Dashboard](https://aihubmix.com)
  - [Aihubmix Business Cooperation](mailto:business@aihubmix.com)
package/dist/index.d.mts CHANGED
@@ -1,6 +1,6 @@
- import * as _ai_sdk_provider_utils from '@ai-sdk/provider-utils';
- import { InferSchema, FetchFunction } from '@ai-sdk/provider-utils';
  import { ProviderV3, LanguageModelV3, EmbeddingModelV3, ImageModelV3, TranscriptionModelV3, SpeechModelV3 } from '@ai-sdk/provider';
+ import * as _ai_sdk_provider_utils from '@ai-sdk/provider-utils';
+ import { FetchFunction } from '@ai-sdk/provider-utils';
 
  declare const webSearchToolFactory: _ai_sdk_provider_utils.ProviderToolFactory<{}, {
  /**
@@ -198,58 +198,9 @@ declare const aihubmixTools: {
  webSearch: (args?: Parameters<typeof webSearchToolFactory>[0]) => _ai_sdk_provider_utils.Tool<{}, unknown>;
  };
 
- /**
- * Provider options schema for the OpenAI Chat language model
- * Kept consistent with the Vercel AI SDK's openaiChatLanguageModelOptions
- */
- declare const aihubmixChatProviderOptionsSchema: _ai_sdk_provider_utils.LazySchema<{
- logitBias?: Record<string, number> | undefined;
- logprobs?: number | boolean | undefined;
- parallelToolCalls?: boolean | undefined;
- user?: string | undefined;
- reasoningEffort?: "low" | "high" | "medium" | "none" | "minimal" | "xhigh" | undefined;
- maxCompletionTokens?: number | undefined;
- store?: boolean | undefined;
- metadata?: Record<string, string> | undefined;
- prediction?: Record<string, any> | undefined;
- serviceTier?: "auto" | "flex" | "priority" | "default" | undefined;
- strictJsonSchema?: boolean | undefined;
- textVerbosity?: "low" | "high" | "medium" | undefined;
- promptCacheKey?: string | undefined;
- promptCacheRetention?: "in_memory" | "24h" | undefined;
- safetyIdentifier?: string | undefined;
- systemMessageMode?: "system" | "developer" | "remove" | undefined;
- forceReasoning?: boolean | undefined;
- }>;
- /**
- * Provider options schema for the OpenAI Responses API
- */
- declare const aihubmixResponsesProviderOptionsSchema: _ai_sdk_provider_utils.LazySchema<{
- logprobs?: number | boolean | undefined;
- parallelToolCalls?: boolean | null | undefined;
- user?: string | null | undefined;
- reasoningEffort?: string | null | undefined;
- store?: boolean | null | undefined;
- metadata?: any;
- serviceTier?: "auto" | "flex" | "priority" | "default" | null | undefined;
- strictJsonSchema?: boolean | null | undefined;
- textVerbosity?: "low" | "high" | "medium" | null | undefined;
- promptCacheKey?: string | null | undefined;
- promptCacheRetention?: "in_memory" | "24h" | null | undefined;
- safetyIdentifier?: string | null | undefined;
- systemMessageMode?: "system" | "developer" | "remove" | undefined;
- forceReasoning?: boolean | undefined;
- conversation?: string | null | undefined;
- include?: ("reasoning.encrypted_content" | "file_search_call.results" | "message.output_text.logprobs")[] | null | undefined;
- instructions?: string | null | undefined;
- maxToolCalls?: number | null | undefined;
- previousResponseId?: string | null | undefined;
- reasoningSummary?: string | null | undefined;
- truncation?: "auto" | "disabled" | null | undefined;
- }>;
- type AihubmixChatProviderOptions = InferSchema<typeof aihubmixChatProviderOptionsSchema>;
- type AihubmixResponsesProviderOptions = InferSchema<typeof aihubmixResponsesProviderOptionsSchema>;
- type OpenAIProviderSettings = AihubmixChatProviderOptions;
+ interface OpenAIProviderSettings {
+ [key: string]: unknown;
+ }
  interface AihubmixProvider extends ProviderV3 {
  (deploymentId: string, settings?: OpenAIProviderSettings): LanguageModelV3;
  readonly specificationVersion: 'v3';
@@ -276,4 +227,4 @@ interface AihubmixProviderSettings {
  declare function createAihubmix(options?: AihubmixProviderSettings): AihubmixProvider;
  declare const aihubmix: AihubmixProvider;
 
- export { type AihubmixChatProviderOptions, type AihubmixProvider, type AihubmixProviderSettings, type AihubmixResponsesProviderOptions, aihubmix, aihubmixChatProviderOptionsSchema, aihubmixResponsesProviderOptionsSchema, createAihubmix };
+ export { type AihubmixProvider, type AihubmixProviderSettings, aihubmix, createAihubmix };
package/dist/index.d.ts CHANGED
@@ -1,6 +1,6 @@
- import * as _ai_sdk_provider_utils from '@ai-sdk/provider-utils';
- import { InferSchema, FetchFunction } from '@ai-sdk/provider-utils';
  import { ProviderV3, LanguageModelV3, EmbeddingModelV3, ImageModelV3, TranscriptionModelV3, SpeechModelV3 } from '@ai-sdk/provider';
+ import * as _ai_sdk_provider_utils from '@ai-sdk/provider-utils';
+ import { FetchFunction } from '@ai-sdk/provider-utils';
 
  declare const webSearchToolFactory: _ai_sdk_provider_utils.ProviderToolFactory<{}, {
  /**
@@ -198,58 +198,9 @@ declare const aihubmixTools: {
  webSearch: (args?: Parameters<typeof webSearchToolFactory>[0]) => _ai_sdk_provider_utils.Tool<{}, unknown>;
  };
 
- /**
- * Provider options schema for the OpenAI Chat language model
- * Kept consistent with the Vercel AI SDK's openaiChatLanguageModelOptions
- */
- declare const aihubmixChatProviderOptionsSchema: _ai_sdk_provider_utils.LazySchema<{
- logitBias?: Record<string, number> | undefined;
- logprobs?: number | boolean | undefined;
- parallelToolCalls?: boolean | undefined;
- user?: string | undefined;
- reasoningEffort?: "low" | "high" | "medium" | "none" | "minimal" | "xhigh" | undefined;
- maxCompletionTokens?: number | undefined;
- store?: boolean | undefined;
- metadata?: Record<string, string> | undefined;
- prediction?: Record<string, any> | undefined;
- serviceTier?: "auto" | "flex" | "priority" | "default" | undefined;
- strictJsonSchema?: boolean | undefined;
- textVerbosity?: "low" | "high" | "medium" | undefined;
- promptCacheKey?: string | undefined;
- promptCacheRetention?: "in_memory" | "24h" | undefined;
- safetyIdentifier?: string | undefined;
- systemMessageMode?: "system" | "developer" | "remove" | undefined;
- forceReasoning?: boolean | undefined;
- }>;
- /**
- * Provider options schema for the OpenAI Responses API
- */
- declare const aihubmixResponsesProviderOptionsSchema: _ai_sdk_provider_utils.LazySchema<{
- logprobs?: number | boolean | undefined;
- parallelToolCalls?: boolean | null | undefined;
- user?: string | null | undefined;
- reasoningEffort?: string | null | undefined;
- store?: boolean | null | undefined;
- metadata?: any;
- serviceTier?: "auto" | "flex" | "priority" | "default" | null | undefined;
- strictJsonSchema?: boolean | null | undefined;
- textVerbosity?: "low" | "high" | "medium" | null | undefined;
- promptCacheKey?: string | null | undefined;
- promptCacheRetention?: "in_memory" | "24h" | null | undefined;
- safetyIdentifier?: string | null | undefined;
- systemMessageMode?: "system" | "developer" | "remove" | undefined;
- forceReasoning?: boolean | undefined;
- conversation?: string | null | undefined;
- include?: ("reasoning.encrypted_content" | "file_search_call.results" | "message.output_text.logprobs")[] | null | undefined;
- instructions?: string | null | undefined;
- maxToolCalls?: number | null | undefined;
- previousResponseId?: string | null | undefined;
- reasoningSummary?: string | null | undefined;
- truncation?: "auto" | "disabled" | null | undefined;
- }>;
- type AihubmixChatProviderOptions = InferSchema<typeof aihubmixChatProviderOptionsSchema>;
- type AihubmixResponsesProviderOptions = InferSchema<typeof aihubmixResponsesProviderOptionsSchema>;
- type OpenAIProviderSettings = AihubmixChatProviderOptions;
+ interface OpenAIProviderSettings {
+ [key: string]: unknown;
+ }
  interface AihubmixProvider extends ProviderV3 {
  (deploymentId: string, settings?: OpenAIProviderSettings): LanguageModelV3;
  readonly specificationVersion: 'v3';
@@ -276,4 +227,4 @@ interface AihubmixProviderSettings {
  declare function createAihubmix(options?: AihubmixProviderSettings): AihubmixProvider;
  declare const aihubmix: AihubmixProvider;
 
- export { type AihubmixChatProviderOptions, type AihubmixProvider, type AihubmixProviderSettings, type AihubmixResponsesProviderOptions, aihubmix, aihubmixChatProviderOptionsSchema, aihubmixResponsesProviderOptionsSchema, createAihubmix };
+ export { type AihubmixProvider, type AihubmixProviderSettings, aihubmix, createAihubmix };
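The typings change above is effectively breaking for consumers: `OpenAIProviderSettings` was previously inferred from the now-removed Zod schemas, so option keys and literal values were checked at compile time, whereas in 2.0.5 it is an open index signature. A minimal sketch of the practical effect, with the interface copied locally from the new declaration (nothing here imports the package itself):

```typescript
// Local copy of the 2.0.5 declaration from dist/index.d.ts.
interface OpenAIProviderSettings {
  [key: string]: unknown;
}

// Under the old inferred type, `reasoningEffort` had to be one of the
// allowed literals and unknown keys were compile errors. Under the new
// index signature, any key with any value now type-checks.
const settings: OpenAIProviderSettings = {
  reasoningEffort: "medium",
  someUnknownOption: 42, // hypothetical key: accepted in 2.0.5, a type error in 2.0.3
};

console.log(Object.keys(settings).join(","));
```

Misspelled or invalid options therefore surface at the API rather than at compile time after this upgrade.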
package/dist/index.js CHANGED
@@ -21,17 +21,16 @@ var __toCommonJS = (mod) => __copyProps(__defProp({}, "__esModule", { value: tru
  var index_exports = {};
  __export(index_exports, {
  aihubmix: () => aihubmix,
- aihubmixChatProviderOptionsSchema: () => aihubmixChatProviderOptionsSchema,
- aihubmixResponsesProviderOptionsSchema: () => aihubmixResponsesProviderOptionsSchema,
  createAihubmix: () => createAihubmix
  });
  module.exports = __toCommonJS(index_exports);
 
  // src/aihubmix-provider.ts
+ var import_openai_compatible = require("@ai-sdk/openai-compatible");
  var import_internal = require("@ai-sdk/openai/internal");
- var import_internal2 = require("@ai-sdk/google/internal");
+ var import_internal2 = require("@ai-sdk/anthropic/internal");
+ var import_internal3 = require("@ai-sdk/google/internal");
  var import_provider_utils6 = require("@ai-sdk/provider-utils");
- var import_zod6 = require("zod");
 
  // src/tool/code-interpreter.ts
  var import_provider_utils = require("@ai-sdk/provider-utils");
@@ -298,191 +297,6 @@ var aihubmixTools = {
  };
 
  // src/aihubmix-provider.ts
- var aihubmixChatProviderOptionsSchema = (0, import_provider_utils6.lazySchema)(
- () => (0, import_provider_utils6.zodSchema)(
- import_zod6.z.object({
- /**
- * Modifies the likelihood of specified tokens appearing in the generated content.
- * Accepts a JSON object mapping token IDs (as strings) to bias values between -100 and 100.
- */
- logitBias: import_zod6.z.record(import_zod6.z.string(), import_zod6.z.number()).optional(),
- /**
- * Return the log probabilities of the tokens.
- * Set to true to return the log probabilities of the generated tokens.
- * Set to a number to return the log probabilities of the top n tokens.
- */
- logprobs: import_zod6.z.union([import_zod6.z.boolean(), import_zod6.z.number()]).optional(),
- /**
- * Whether to enable parallel tool calls. Defaults to true.
- */
- parallelToolCalls: import_zod6.z.boolean().optional(),
- /**
- * A unique identifier representing the end user, which helps OpenAI monitor and detect abuse.
- */
- user: import_zod6.z.string().optional(),
- /**
- * Reasoning effort for reasoning models. Defaults to `medium`.
- */
- reasoningEffort: import_zod6.z.enum(["none", "minimal", "low", "medium", "high", "xhigh"]).optional(),
- /**
- * Maximum number of completion tokens to generate. Useful for reasoning models.
- */
- maxCompletionTokens: import_zod6.z.number().optional(),
- /**
- * Whether to enable persistence in the Responses API.
- */
- store: import_zod6.z.boolean().optional(),
- /**
- * Metadata associated with the request.
- */
- metadata: import_zod6.z.record(import_zod6.z.string().max(64), import_zod6.z.string().max(512)).optional(),
- /**
- * Parameters for prediction mode.
- */
- prediction: import_zod6.z.record(import_zod6.z.string(), import_zod6.z.any()).optional(),
- /**
- * Service tier for the request.
- * - 'auto': default service tier
- * - 'flex': 50% cheaper but with higher latency (o3, o4-mini, gpt-5 only)
- * - 'priority': faster processing (requires enterprise)
- * - 'default': standard pricing and performance
- */
- serviceTier: import_zod6.z.enum(["auto", "flex", "priority", "default"]).optional(),
- /**
- * Whether to use strict JSON schema validation.
- * @default true
- */
- strictJsonSchema: import_zod6.z.boolean().optional(),
- /**
- * Controls the verbosity of the model's responses.
- * Lower values produce more concise responses; higher values produce more detailed responses.
- */
- textVerbosity: import_zod6.z.enum(["low", "medium", "high"]).optional(),
- /**
- * Cache key for prompt caching.
- */
- promptCacheKey: import_zod6.z.string().optional(),
- /**
- * Retention policy for prompt caching.
- * - 'in_memory': default, standard prompt-caching behavior
- * - '24h': extended prompt caching that keeps cached prefixes for up to 24 hours
- */
- promptCacheRetention: import_zod6.z.enum(["in_memory", "24h"]).optional(),
- /**
- * A stable identifier used to help detect users who violate usage policies.
- */
- safetyIdentifier: import_zod6.z.string().optional(),
- /**
- * Overrides the system message mode for this model.
- * - 'system': use the 'system' role (default for most models)
- * - 'developer': use the 'developer' role (used by reasoning models)
- * - 'remove': remove system messages entirely
- */
- systemMessageMode: import_zod6.z.enum(["system", "developer", "remove"]).optional(),
- /**
- * Force this model to be treated as a reasoning model.
- * Useful for "stealth" reasoning models behind a custom baseURL.
- */
- forceReasoning: import_zod6.z.boolean().optional()
- })
- )
- );
- var aihubmixResponsesProviderOptionsSchema = (0, import_provider_utils6.lazySchema)(
- () => (0, import_provider_utils6.zodSchema)(
- import_zod6.z.object({
- /**
- * ID of an OpenAI conversation, used to continue the conversation.
- */
- conversation: import_zod6.z.string().nullish(),
- /**
- * Additional fields to include in the response.
- */
- include: import_zod6.z.array(
- import_zod6.z.enum([
- "reasoning.encrypted_content",
- "file_search_call.results",
- "message.output_text.logprobs"
- ])
- ).nullish(),
- /**
- * Instructions for the model.
- */
- instructions: import_zod6.z.string().nullish(),
- /**
- * Return the log probabilities of the tokens.
- */
- logprobs: import_zod6.z.union([import_zod6.z.boolean(), import_zod6.z.number().min(1).max(20)]).optional(),
- /**
- * Maximum total number of built-in tool calls.
- */
- maxToolCalls: import_zod6.z.number().nullish(),
- /**
- * Additional metadata associated with the generation.
- */
- metadata: import_zod6.z.any().nullish(),
- /**
- * Whether to use parallel tool calls. Defaults to true.
- */
- parallelToolCalls: import_zod6.z.boolean().nullish(),
- /**
- * ID of the previous response, used to continue the conversation.
- */
- previousResponseId: import_zod6.z.string().nullish(),
- /**
- * Prompt cache key.
- */
- promptCacheKey: import_zod6.z.string().nullish(),
- /**
- * Prompt cache retention policy.
- */
- promptCacheRetention: import_zod6.z.enum(["in_memory", "24h"]).nullish(),
- /**
- * Reasoning effort for reasoning models.
- */
- reasoningEffort: import_zod6.z.string().nullish(),
- /**
- * Controls reasoning summary output.
- */
- reasoningSummary: import_zod6.z.string().nullish(),
- /**
- * Identifier for safety monitoring.
- */
- safetyIdentifier: import_zod6.z.string().nullish(),
- /**
- * Service tier for the request.
- */
- serviceTier: import_zod6.z.enum(["auto", "flex", "priority", "default"]).nullish(),
- /**
- * Whether to store the generated content. Defaults to true.
- */
- store: import_zod6.z.boolean().nullish(),
- /**
- * Whether to use strict JSON schema validation.
- */
- strictJsonSchema: import_zod6.z.boolean().nullish(),
- /**
- * Controls the verbosity of the model's responses.
- */
- textVerbosity: import_zod6.z.enum(["low", "medium", "high"]).nullish(),
- /**
- * Output truncation control.
- */
- truncation: import_zod6.z.enum(["auto", "disabled"]).nullish(),
- /**
- * A unique identifier representing the end user.
- */
- user: import_zod6.z.string().nullish(),
- /**
- * System message mode.
- */
- systemMessageMode: import_zod6.z.enum(["system", "developer", "remove"]).optional(),
- /**
- * Force the model to be treated as a reasoning model.
- */
- forceReasoning: import_zod6.z.boolean().optional()
- })
- )
- );
  var AihubmixTranscriptionModel = class extends import_internal.OpenAITranscriptionModel {
  async doGenerate(options) {
  if (options.mediaType) {
@@ -535,29 +349,13 @@ var AihubmixTranscriptionModel = class extends import_internal.OpenAITranscripti
  return super.doGenerate(options);
  }
  };
- var AihubmixOpenAIChatLanguageModel = class _AihubmixOpenAIChatLanguageModel extends import_internal.OpenAIChatLanguageModel {
- constructor(modelId, settings) {
- super(modelId, {
- ...settings,
- fetch: _AihubmixOpenAIChatLanguageModel.createCustomFetch(settings.fetch)
- });
+ function transformRequestBody(body) {
+ if (body.tools && Array.isArray(body.tools) && body.tools.length === 0 && body.tool_choice) {
+ const { tool_choice, ...rest } = body;
+ return rest;
  }
- static createCustomFetch(originalFetch) {
- return async (url, options) => {
- if (options?.body) {
- try {
- const body = JSON.parse(options.body);
- if (body.tools && Array.isArray(body.tools) && body.tools.length === 0 && body.tool_choice) {
- delete body.tool_choice;
- options.body = JSON.stringify(body);
- }
- } catch (error) {
- }
- }
- return originalFetch ? originalFetch(url, options) : fetch(url, options);
- };
- }
- };
+ return body;
+ }
  function createAihubmix(options = {}) {
  const getHeaders = () => ({
  Authorization: `Bearer ${(0, import_provider_utils6.loadApiKey)({
@@ -583,15 +381,20 @@ function createAihubmix(options = {}) {
  const createChatModel = (deploymentName, settings = {}) => {
  const headers = getHeaders();
  if (deploymentName.startsWith("claude-")) {
- return new AihubmixOpenAIChatLanguageModel(deploymentName, {
+ return new import_internal2.AnthropicMessagesLanguageModel(deploymentName, {
  provider: "aihubmix.chat",
- url,
- headers: getHeaders,
- fetch: options.fetch
+ baseURL: url({ path: "", modelId: deploymentName }),
+ headers: {
+ ...headers,
+ "x-api-key": headers["Authorization"].split(" ")[1]
+ },
+ supportedUrls: () => ({
+ "image/*": [/^https?:\/\/.*$/]
+ })
  });
  }
  if ((deploymentName.startsWith("gemini") || deploymentName.startsWith("imagen")) && !deploymentName.endsWith("-nothink") && !deploymentName.endsWith("-search")) {
- return new import_internal2.GoogleGenerativeAILanguageModel(
+ return new import_internal3.GoogleGenerativeAILanguageModel(
  deploymentName,
  {
  provider: "aihubmix.chat",
@@ -605,45 +408,38 @@ function createAihubmix(options = {}) {
  }
  );
  }
- if (deploymentName === "gpt-5-pro" || deploymentName === "gpt-5-codex") {
- return new import_internal.OpenAIResponsesLanguageModel(deploymentName, {
- provider: "aihubmix.chat",
- url,
- headers: getHeaders,
- fetch: options.fetch,
- fileIdPrefixes: ["file-"]
- });
- }
- return new AihubmixOpenAIChatLanguageModel(deploymentName, {
+ return new import_openai_compatible.OpenAICompatibleChatLanguageModel(deploymentName, {
  provider: "aihubmix.chat",
  url,
  headers: getHeaders,
- fetch: options.fetch
+ fetch: options.fetch,
+ includeUsage: true,
+ supportsStructuredOutputs: true,
+ transformRequestBody
  });
  };
- const createCompletionModel = (modelId, settings = {}) => new import_internal.OpenAICompletionLanguageModel(modelId, {
+ const createCompletionModel = (modelId, settings = {}) => new import_openai_compatible.OpenAICompatibleCompletionLanguageModel(modelId, {
  provider: "aihubmix.completion",
  url,
  headers: getHeaders,
- fetch: options.fetch
+ fetch: options.fetch,
+ includeUsage: true
  });
  const createEmbeddingModel = (modelId, settings = {}) => {
- return new import_internal.OpenAIEmbeddingModel(modelId, {
+ return new import_openai_compatible.OpenAICompatibleEmbeddingModel(modelId, {
  provider: "aihubmix.embeddings",
- headers: getHeaders,
  url,
+ headers: getHeaders,
  fetch: options.fetch
  });
  };
  const createResponsesModel = (modelId) => new import_internal.OpenAIResponsesLanguageModel(modelId, {
  provider: "aihubmix.responses",
  url,
- headers: getHeaders,
- fetch: options.fetch,
- fileIdPrefixes: ["file-"]
+ headers: getHeaders
  });
  const createImageModel = (modelId, settings = {}) => {
- return new import_internal.OpenAIImageModel(modelId, {
+ return new import_openai_compatible.OpenAICompatibleImageModel(modelId, {
  provider: "aihubmix.image",
  url,
  headers: getHeaders,
@@ -694,8 +490,6 @@ var aihubmix = createAihubmix();
  // Annotate the CommonJS export names for ESM import in node:
  0 && (module.exports = {
  aihubmix,
- aihubmixChatProviderOptionsSchema,
- aihubmixResponsesProviderOptionsSchema,
  createAihubmix
  });
  //# sourceMappingURL=index.js.map
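The fetch-wrapping `AihubmixOpenAIChatLanguageModel` subclass was replaced by a `transformRequestBody` hook passed to `OpenAICompatibleChatLanguageModel`. The hook can be exercised in isolation; the sketch below restates the shipped logic (with an illustrative type annotation added): when `tools` is present but empty while `tool_choice` is still set, `tool_choice` is dropped, and every other body passes through untouched.

```typescript
// Illustrative shape for an OpenAI-compatible chat request body.
type RequestBody = { tools?: unknown[]; tool_choice?: unknown; [key: string]: unknown };

// Restatement of the transformRequestBody hook shipped in dist/index.js:
// some upstream endpoints reject `tool_choice` alongside an empty `tools`
// array, so the field is stripped in that case only.
function transformRequestBody(body: RequestBody): RequestBody {
  if (body.tools && Array.isArray(body.tools) && body.tools.length === 0 && body.tool_choice) {
    const { tool_choice, ...rest } = body;
    return rest;
  }
  return body;
}

// Empty tools: tool_choice is removed.
console.log(JSON.stringify(transformRequestBody({ model: "gpt-4o", tools: [], tool_choice: "auto" })));
// Non-empty tools: the body is returned unchanged.
console.log(JSON.stringify(transformRequestBody({ model: "gpt-4o", tools: [{ type: "function" }], tool_choice: "auto" })));
```

Compared with the removed subclass, which re-parsed and re-serialized `options.body` inside a custom fetch, the hook operates on the request object before serialization, so the 2.0.3 `JSON.parse`/`JSON.stringify` round-trip (and its silent `catch`) is no longer needed.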