@aihubmix/ai-sdk-provider 2.0.4 → 2.0.5
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +0 -30
- package/README.ja.md +12 -10
- package/README.md +11 -10
- package/README.zh.md +12 -10
- package/dist/index.d.mts +6 -55
- package/dist/index.d.ts +6 -55
- package/dist/index.js +19 -231
- package/dist/index.js.map +1 -1
- package/dist/index.mjs +25 -238
- package/dist/index.mjs.map +1 -1
- package/package.json +28 -20
- package/LICENSE +0 -201
package/CHANGELOG.md
CHANGED
@@ -1,35 +1,5 @@
 # @aihubmix/ai-sdk-provider
 
-## 2.0.2
-
-### Patch Changes
-
-- b6c5f35: - 将 Zod 降级到 ^3.25.76 以保证与 AI SDK 生态系统的稳定性。
-  - 移除调试日志。
-- 7a9e37e: - 添加 `aihubmixChatProviderOptionsSchema` 和 `aihubmixResponsesProviderOptionsSchema` 以支持所有 Vercel AI SDK OpenAI 提供商选项。
-  - 从主入口点导出选项类型和模式。
-  - 改进了 `gpt-5-pro` 和 `gpt-5-codex` 的模型配置。
-  - 通过更好的文件命名逻辑增强了 `AihubmixTranscriptionModel`。
-
-## 2.0.0
-
-### Major Changes
-
-- 2dc97a3: 升级到 AI SDK v6
-
-  - 升级所有 AI SDK 依赖到 v6 兼容版本
-  - Provider 接口从 V2 升级到 V3 (LanguageModelV3, ProviderV3 等)
-  - 工具工厂函数 API 更新 (createProviderToolFactory)
-  - 添加 specificationVersion 和 embeddingModel 属性
-  - 版本号升级到 1.0.0
-
-### Patch Changes
-
-- f3b2c86: - 添加 `aihubmixChatProviderOptionsSchema` 和 `aihubmixResponsesProviderOptionsSchema` 以支持所有 Vercel AI SDK OpenAI 提供商选项。
-  - 从主入口点导出选项类型和模式。
-  - 改进了 `gpt-5-pro` 和 `gpt-5-codex` 的模型配置。
-  - 通过更好的文件命名逻辑增强了 `AihubmixTranscriptionModel`。
-
 ## 1.0.1
 
 ### Patch Changes
package/README.ja.md
CHANGED
@@ -14,7 +14,17 @@ app-codeが内蔵されており、この方法でモデルをリクエストす
 [AI SDK](https://ai-sdk.dev/docs)用の **[Aihubmix provider](https://sdk.vercel.ai/providers/community-providers/aihubmix)**
 一つのゲートウェイ、無限のモデル;ワンストップリクエスト:OpenAI、Claude、Gemini、DeepSeek、Qwen、そして500以上のAIモデル。
 
-
+## AI SDK v6 用セットアップ
+
+```bash
+npm i @aihubmix/ai-sdk-provider
+```
+
+### (レガシー)AI SDK v5 用セットアップ
+
+```bash
+npm i @aihubmix/ai-sdk-provider@0.0.6
+```
 
 ## サポートされている機能
 
@@ -30,14 +40,6 @@ Aihubmix providerは以下のAI機能をサポートしています:
 - **転写**:音声からテキストへの変換
 - **ツール**:ウェブ検索およびその他のツール
 
-## セットアップ
-
-Aihubmix providerは`@aihubmix/ai-sdk-provider`モジュールで利用可能です。[@aihubmix/ai-sdk-provider](https://www.npmjs.com/package/@aihubmix/ai-sdk-provider)でインストールできます
-
-```bash
-npm i @aihubmix/ai-sdk-provider
-```
-
 ## Provider インスタンス
 
 `@aihubmix/ai-sdk-provider`からデフォルトのproviderインスタンス`aihubmix`をインポートできます:
@@ -299,7 +301,7 @@ const { text } = await generateText({
 
 ## 追加リソース
 
-- [Aihubmix Provider リポジトリ](https://github.com/
+- [Aihubmix Provider リポジトリ](https://github.com/AIhubmix/ai-sdk-provider)
 - [Aihubmix ドキュメント](https://docs.aihubmix.com/en)
 - [Aihubmix ダッシュボード](https://aihubmix.com)
 - [Aihubmix ビジネス協力](mailto:business@aihubmix.com)
package/README.md
CHANGED
@@ -14,8 +14,17 @@ Built-in app-code; using this method to request all models offers a 10% discount
 The **[Aihubmix provider](https://sdk.vercel.ai/providers/community-providers/aihubmix)** for the [AI SDK](https://ai-sdk.dev/docs)
 One Gateway, Infinite Models;one-stop request: OpenAI, Claude, Gemini, DeepSeek, Qwen, and over 500 AI models.
 
-
+## Setup for AI SDK v6
 
+```bash
+npm i @aihubmix/ai-sdk-provider
+```
+
+### (LEGACY) Setup for AI SDK v5
+
+```bash
+npm i @aihubmix/ai-sdk-provider@0.0.6
+```
 
 ## Supported Features
 
@@ -32,14 +41,6 @@ The Aihubmix provider supports the following AI features:
 - **Tools**: Web search and other tools
 
 
-## Setup
-
-The Aihubmix provider is available in the `@aihubmix/ai-sdk-provider` module. You can install it with [@aihubmix/ai-sdk-provider](https://www.npmjs.com/package/@aihubmix/ai-sdk-provider)
-
-```bash
-npm i @aihubmix/ai-sdk-provider
-```
-
 ## Provider Instance
 
 You can import the default provider instance `aihubmix` from `@aihubmix/ai-sdk-provider`:
@@ -302,7 +303,7 @@ const { text } = await generateText({
 
 ## Additional Resources
 
-- [Aihubmix Provider Repository](https://github.com/
+- [Aihubmix Provider Repository](https://github.com/AIhubmix/ai-sdk-provider)
 - [Aihubmix Documentation](https://docs.aihubmix.com/en)
 - [Aihubmix Dashboard](https://aihubmix.com)
 - [Aihubmix Cooperation](mailto:business@aihubmix.com)
package/README.zh.md
CHANGED
@@ -14,7 +14,17 @@
 **[Aihubmix provider](https://sdk.vercel.ai/providers/community-providers/aihubmix)** 适用于 [AI SDK](https://ai-sdk.dev/docs)
 一个网关,无限模型;一站式请求:OpenAI、Claude、Gemini、DeepSeek、Qwen 以及超过 500 个 AI 模型。
 
-
+## 安装 AI SDK v6 版本
+
+```bash
+npm i @aihubmix/ai-sdk-provider
+```
+
+### (旧版)安装 AI SDK v5 版本
+
+```bash
+npm i @aihubmix/ai-sdk-provider@0.0.6
+```
 
 ## 支持的功能
 
@@ -30,14 +40,6 @@ Aihubmix provider 支持以下 AI 功能:
 - **转录**:语音转文本转换
 - **工具**:网络搜索和其他工具
 
-## 安装
-
-Aihubmix 在 `@aihubmix/ai-sdk-provider` 模块中可用。您可以通过 [@aihubmix/ai-sdk-provider](https://www.npmjs.com/package/@aihubmix/ai-sdk-provider) 安装它
-
-```bash
-npm i @aihubmix/ai-sdk-provider
-```
-
 ## Provider 实例
 
 您可以从 `@aihubmix/ai-sdk-provider` 导入默认的 provider 实例 `aihubmix`:
@@ -299,7 +301,7 @@ const { text } = await generateText({
 
 ## 附加资源
 
-- [Aihubmix Provider 仓库](https://github.com/
+- [Aihubmix Provider 仓库](https://github.com/AIhubmix/ai-sdk-provider)
 - [Aihubmix 文档](https://docs.aihubmix.com/en)
 - [Aihubmix 控制台](https://aihubmix.com)
 - [Aihubmix 商务合作](mailto:business@aihubmix.com)
package/dist/index.d.mts
CHANGED
@@ -1,6 +1,6 @@
-import * as _ai_sdk_provider_utils from '@ai-sdk/provider-utils';
-import { InferSchema, FetchFunction } from '@ai-sdk/provider-utils';
 import { ProviderV3, LanguageModelV3, EmbeddingModelV3, ImageModelV3, TranscriptionModelV3, SpeechModelV3 } from '@ai-sdk/provider';
+import * as _ai_sdk_provider_utils from '@ai-sdk/provider-utils';
+import { FetchFunction } from '@ai-sdk/provider-utils';
 
 declare const webSearchToolFactory: _ai_sdk_provider_utils.ProviderToolFactory<{}, {
     /**
@@ -198,58 +198,9 @@ declare const aihubmixTools: {
     webSearch: (args?: Parameters<typeof webSearchToolFactory>[0]) => _ai_sdk_provider_utils.Tool<{}, unknown>;
 };
 
-
-
-
-*/
-declare const aihubmixChatProviderOptionsSchema: _ai_sdk_provider_utils.LazySchema<{
-    logitBias?: Record<string, number> | undefined;
-    logprobs?: number | boolean | undefined;
-    parallelToolCalls?: boolean | undefined;
-    user?: string | undefined;
-    reasoningEffort?: "low" | "high" | "medium" | "none" | "minimal" | "xhigh" | undefined;
-    maxCompletionTokens?: number | undefined;
-    store?: boolean | undefined;
-    metadata?: Record<string, string> | undefined;
-    prediction?: Record<string, any> | undefined;
-    serviceTier?: "auto" | "flex" | "priority" | "default" | undefined;
-    strictJsonSchema?: boolean | undefined;
-    textVerbosity?: "low" | "high" | "medium" | undefined;
-    promptCacheKey?: string | undefined;
-    promptCacheRetention?: "in_memory" | "24h" | undefined;
-    safetyIdentifier?: string | undefined;
-    systemMessageMode?: "system" | "developer" | "remove" | undefined;
-    forceReasoning?: boolean | undefined;
-}>;
-/**
- * OpenAI Responses API 的 Provider Options Schema
- */
-declare const aihubmixResponsesProviderOptionsSchema: _ai_sdk_provider_utils.LazySchema<{
-    logprobs?: number | boolean | undefined;
-    parallelToolCalls?: boolean | null | undefined;
-    user?: string | null | undefined;
-    reasoningEffort?: string | null | undefined;
-    store?: boolean | null | undefined;
-    metadata?: any;
-    serviceTier?: "auto" | "flex" | "priority" | "default" | null | undefined;
-    strictJsonSchema?: boolean | null | undefined;
-    textVerbosity?: "low" | "high" | "medium" | null | undefined;
-    promptCacheKey?: string | null | undefined;
-    promptCacheRetention?: "in_memory" | "24h" | null | undefined;
-    safetyIdentifier?: string | null | undefined;
-    systemMessageMode?: "system" | "developer" | "remove" | undefined;
-    forceReasoning?: boolean | undefined;
-    conversation?: string | null | undefined;
-    include?: ("reasoning.encrypted_content" | "file_search_call.results" | "message.output_text.logprobs")[] | null | undefined;
-    instructions?: string | null | undefined;
-    maxToolCalls?: number | null | undefined;
-    previousResponseId?: string | null | undefined;
-    reasoningSummary?: string | null | undefined;
-    truncation?: "auto" | "disabled" | null | undefined;
-}>;
-type AihubmixChatProviderOptions = InferSchema<typeof aihubmixChatProviderOptionsSchema>;
-type AihubmixResponsesProviderOptions = InferSchema<typeof aihubmixResponsesProviderOptionsSchema>;
-type OpenAIProviderSettings = AihubmixChatProviderOptions;
+interface OpenAIProviderSettings {
+    [key: string]: unknown;
+}
 interface AihubmixProvider extends ProviderV3 {
     (deploymentId: string, settings?: OpenAIProviderSettings): LanguageModelV3;
     readonly specificationVersion: 'v3';
@@ -276,4 +227,4 @@ interface AihubmixProviderSettings {
 declare function createAihubmix(options?: AihubmixProviderSettings): AihubmixProvider;
 declare const aihubmix: AihubmixProvider;
 
-export { type
+export { type AihubmixProvider, type AihubmixProviderSettings, aihubmix, createAihubmix };
package/dist/index.d.ts
CHANGED
@@ -1,6 +1,6 @@
-import * as _ai_sdk_provider_utils from '@ai-sdk/provider-utils';
-import { InferSchema, FetchFunction } from '@ai-sdk/provider-utils';
 import { ProviderV3, LanguageModelV3, EmbeddingModelV3, ImageModelV3, TranscriptionModelV3, SpeechModelV3 } from '@ai-sdk/provider';
+import * as _ai_sdk_provider_utils from '@ai-sdk/provider-utils';
+import { FetchFunction } from '@ai-sdk/provider-utils';
 
 declare const webSearchToolFactory: _ai_sdk_provider_utils.ProviderToolFactory<{}, {
     /**
@@ -198,58 +198,9 @@ declare const aihubmixTools: {
     webSearch: (args?: Parameters<typeof webSearchToolFactory>[0]) => _ai_sdk_provider_utils.Tool<{}, unknown>;
 };
 
-
-
-
-*/
-declare const aihubmixChatProviderOptionsSchema: _ai_sdk_provider_utils.LazySchema<{
-    logitBias?: Record<string, number> | undefined;
-    logprobs?: number | boolean | undefined;
-    parallelToolCalls?: boolean | undefined;
-    user?: string | undefined;
-    reasoningEffort?: "low" | "high" | "medium" | "none" | "minimal" | "xhigh" | undefined;
-    maxCompletionTokens?: number | undefined;
-    store?: boolean | undefined;
-    metadata?: Record<string, string> | undefined;
-    prediction?: Record<string, any> | undefined;
-    serviceTier?: "auto" | "flex" | "priority" | "default" | undefined;
-    strictJsonSchema?: boolean | undefined;
-    textVerbosity?: "low" | "high" | "medium" | undefined;
-    promptCacheKey?: string | undefined;
-    promptCacheRetention?: "in_memory" | "24h" | undefined;
-    safetyIdentifier?: string | undefined;
-    systemMessageMode?: "system" | "developer" | "remove" | undefined;
-    forceReasoning?: boolean | undefined;
-}>;
-/**
- * OpenAI Responses API 的 Provider Options Schema
- */
-declare const aihubmixResponsesProviderOptionsSchema: _ai_sdk_provider_utils.LazySchema<{
-    logprobs?: number | boolean | undefined;
-    parallelToolCalls?: boolean | null | undefined;
-    user?: string | null | undefined;
-    reasoningEffort?: string | null | undefined;
-    store?: boolean | null | undefined;
-    metadata?: any;
-    serviceTier?: "auto" | "flex" | "priority" | "default" | null | undefined;
-    strictJsonSchema?: boolean | null | undefined;
-    textVerbosity?: "low" | "high" | "medium" | null | undefined;
-    promptCacheKey?: string | null | undefined;
-    promptCacheRetention?: "in_memory" | "24h" | null | undefined;
-    safetyIdentifier?: string | null | undefined;
-    systemMessageMode?: "system" | "developer" | "remove" | undefined;
-    forceReasoning?: boolean | undefined;
-    conversation?: string | null | undefined;
-    include?: ("reasoning.encrypted_content" | "file_search_call.results" | "message.output_text.logprobs")[] | null | undefined;
-    instructions?: string | null | undefined;
-    maxToolCalls?: number | null | undefined;
-    previousResponseId?: string | null | undefined;
-    reasoningSummary?: string | null | undefined;
-    truncation?: "auto" | "disabled" | null | undefined;
-}>;
-type AihubmixChatProviderOptions = InferSchema<typeof aihubmixChatProviderOptionsSchema>;
-type AihubmixResponsesProviderOptions = InferSchema<typeof aihubmixResponsesProviderOptionsSchema>;
-type OpenAIProviderSettings = AihubmixChatProviderOptions;
+interface OpenAIProviderSettings {
+    [key: string]: unknown;
+}
 interface AihubmixProvider extends ProviderV3 {
     (deploymentId: string, settings?: OpenAIProviderSettings): LanguageModelV3;
     readonly specificationVersion: 'v3';
@@ -276,4 +227,4 @@ interface AihubmixProviderSettings {
 declare function createAihubmix(options?: AihubmixProviderSettings): AihubmixProvider;
 declare const aihubmix: AihubmixProvider;
 
-export { type
+export { type AihubmixProvider, type AihubmixProviderSettings, aihubmix, createAihubmix };
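The typings change above replaces the schema-derived `OpenAIProviderSettings` alias with an open index signature, so per-option typing disappears from the public `.d.ts`. A minimal TypeScript sketch of the practical consequence (the `pickString` helper is hypothetical, not part of the package):

```typescript
// Shape shipped in 2.0.5: any key type-checks, every value is `unknown`.
interface OpenAIProviderSettings {
  [key: string]: unknown;
}

// Against the 2.0.4 schema-derived type, a misspelled option was a compile
// error; with the index signature it compiles and is passed through as-is.
const settings: OpenAIProviderSettings = {
  reasoningEffort: "high",
  textVerbosity: "low",
};

// Hypothetical helper: values now come back as `unknown` and must be narrowed.
function pickString(s: OpenAIProviderSettings, key: string): string | undefined {
  const v = s[key];
  return typeof v === "string" ? v : undefined;
}

console.log(pickString(settings, "reasoningEffort")); // "high"
```

Callers that relied on the removed `AihubmixChatProviderOptions` type for autocompletion now have to keep option names in sync manually.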
package/dist/index.js
CHANGED
|
@@ -21,18 +21,16 @@ var __toCommonJS = (mod) => __copyProps(__defProp({}, "__esModule", { value: tru
 var index_exports = {};
 __export(index_exports, {
   aihubmix: () => aihubmix,
-  aihubmixChatProviderOptionsSchema: () => aihubmixChatProviderOptionsSchema,
-  aihubmixResponsesProviderOptionsSchema: () => aihubmixResponsesProviderOptionsSchema,
   createAihubmix: () => createAihubmix
 });
 module.exports = __toCommonJS(index_exports);
 
 // src/aihubmix-provider.ts
+var import_openai_compatible = require("@ai-sdk/openai-compatible");
 var import_internal = require("@ai-sdk/openai/internal");
 var import_internal2 = require("@ai-sdk/anthropic/internal");
 var import_internal3 = require("@ai-sdk/google/internal");
 var import_provider_utils6 = require("@ai-sdk/provider-utils");
-var import_zod6 = require("zod");
 
 // src/tool/code-interpreter.ts
 var import_provider_utils = require("@ai-sdk/provider-utils");
@@ -299,191 +297,6 @@ var aihubmixTools = {
 };
 
 // src/aihubmix-provider.ts
-var aihubmixChatProviderOptionsSchema = (0, import_provider_utils6.lazySchema)(
-  () => (0, import_provider_utils6.zodSchema)(
-    import_zod6.z.object({
-      /**
-       * 修改指定 token 出现在生成内容中的概率
-       * 接受一个 JSON 对象,将 token ID(字符串形式)映射到 -100 到 100 之间的偏差值
-       */
-      logitBias: import_zod6.z.record(import_zod6.z.string(), import_zod6.z.number()).optional(),
-      /**
-       * 返回 token 的对数概率
-       * 设置为 true 返回生成 token 的对数概率
-       * 设置为数字返回前 n 个 token 的对数概率
-       */
-      logprobs: import_zod6.z.union([import_zod6.z.boolean(), import_zod6.z.number()]).optional(),
-      /**
-       * 是否启用并行工具调用,默认为 true
-       */
-      parallelToolCalls: import_zod6.z.boolean().optional(),
-      /**
-       * 代表终端用户的唯一标识符,帮助 OpenAI 监控和检测滥用行为
-       */
-      user: import_zod6.z.string().optional(),
-      /**
-       * 推理模型的推理强度,默认为 `medium`
-       */
-      reasoningEffort: import_zod6.z.enum(["none", "minimal", "low", "medium", "high", "xhigh"]).optional(),
-      /**
-       * 生成的最大完成 token 数,适用于推理模型
-       */
-      maxCompletionTokens: import_zod6.z.number().optional(),
-      /**
-       * 是否在 Responses API 中启用持久化
-       */
-      store: import_zod6.z.boolean().optional(),
-      /**
-       * 与请求关联的元数据
-       */
-      metadata: import_zod6.z.record(import_zod6.z.string().max(64), import_zod6.z.string().max(512)).optional(),
-      /**
-       * 预测模式的参数
-       */
-      prediction: import_zod6.z.record(import_zod6.z.string(), import_zod6.z.any()).optional(),
-      /**
-       * 请求的服务层级
-       * - 'auto': 默认服务层级
-       * - 'flex': 50% 更便宜但延迟更高(仅限 o3, o4-mini, gpt-5)
-       * - 'priority': 更快处理(需要企业版)
-       * - 'default': 标准定价和性能
-       */
-      serviceTier: import_zod6.z.enum(["auto", "flex", "priority", "default"]).optional(),
-      /**
-       * 是否使用严格的 JSON schema 验证
-       * @default true
-       */
-      strictJsonSchema: import_zod6.z.boolean().optional(),
-      /**
-       * 控制模型响应的详细程度
-       * 较低的值会产生更简洁的响应,较高的值会产生更详细的响应
-       */
-      textVerbosity: import_zod6.z.enum(["low", "medium", "high"]).optional(),
-      /**
-       * 提示缓存的缓存键
-       */
-      promptCacheKey: import_zod6.z.string().optional(),
-      /**
-       * 提示缓存的保留策略
-       * - 'in_memory': 默认,标准提示缓存行为
-       * - '24h': 扩展提示缓存,保持缓存前缀最多 24 小时
-       */
-      promptCacheRetention: import_zod6.z.enum(["in_memory", "24h"]).optional(),
-      /**
-       * 用于帮助检测违反使用政策用户的稳定标识符
-       */
-      safetyIdentifier: import_zod6.z.string().optional(),
-      /**
-       * 覆盖此模型的系统消息模式
-       * - 'system': 使用 'system' 角色(大多数模型的默认值)
-       * - 'developer': 使用 'developer' 角色(推理模型使用)
-       * - 'remove': 完全移除系统消息
-       */
-      systemMessageMode: import_zod6.z.enum(["system", "developer", "remove"]).optional(),
-      /**
-       * 强制将此模型视为推理模型
-       * 适用于自定义 baseURL 的"隐形"推理模型
-       */
-      forceReasoning: import_zod6.z.boolean().optional()
-    })
-  )
-);
-var aihubmixResponsesProviderOptionsSchema = (0, import_provider_utils6.lazySchema)(
-  () => (0, import_provider_utils6.zodSchema)(
-    import_zod6.z.object({
-      /**
-       * OpenAI 对话的 ID,用于继续对话
-       */
-      conversation: import_zod6.z.string().nullish(),
-      /**
-       * 响应中包含的额外字段
-       */
-      include: import_zod6.z.array(
-        import_zod6.z.enum([
-          "reasoning.encrypted_content",
-          "file_search_call.results",
-          "message.output_text.logprobs"
-        ])
-      ).nullish(),
-      /**
-       * 模型的指令
-       */
-      instructions: import_zod6.z.string().nullish(),
-      /**
-       * 返回 token 的对数概率
-       */
-      logprobs: import_zod6.z.union([import_zod6.z.boolean(), import_zod6.z.number().min(1).max(20)]).optional(),
-      /**
-       * 内置工具调用的最大总数
-       */
-      maxToolCalls: import_zod6.z.number().nullish(),
-      /**
-       * 与生成关联的额外元数据
-       */
-      metadata: import_zod6.z.any().nullish(),
-      /**
-       * 是否使用并行工具调用,默认为 true
-       */
-      parallelToolCalls: import_zod6.z.boolean().nullish(),
-      /**
-       * 上一个响应的 ID,用于继续对话
-       */
-      previousResponseId: import_zod6.z.string().nullish(),
-      /**
-       * 提示缓存键
-       */
-      promptCacheKey: import_zod6.z.string().nullish(),
-      /**
-       * 提示缓存保留策略
-       */
-      promptCacheRetention: import_zod6.z.enum(["in_memory", "24h"]).nullish(),
-      /**
-       * 推理模型的推理强度
-       */
-      reasoningEffort: import_zod6.z.string().nullish(),
-      /**
-       * 控制推理摘要输出
-       */
-      reasoningSummary: import_zod6.z.string().nullish(),
-      /**
-       * 安全监控的标识符
-       */
-      safetyIdentifier: import_zod6.z.string().nullish(),
-      /**
-       * 请求的服务层级
-       */
-      serviceTier: import_zod6.z.enum(["auto", "flex", "priority", "default"]).nullish(),
-      /**
-       * 是否存储生成内容,默认为 true
-       */
-      store: import_zod6.z.boolean().nullish(),
-      /**
-       * 是否使用严格的 JSON schema 验证
-       */
-      strictJsonSchema: import_zod6.z.boolean().nullish(),
-      /**
-       * 控制模型响应的详细程度
-       */
-      textVerbosity: import_zod6.z.enum(["low", "medium", "high"]).nullish(),
-      /**
-       * 输出截断控制
-       */
-      truncation: import_zod6.z.enum(["auto", "disabled"]).nullish(),
-      /**
-       * 代表终端用户的唯一标识符
-       */
-      user: import_zod6.z.string().nullish(),
-      /**
-       * 系统消息模式
-       */
-      systemMessageMode: import_zod6.z.enum(["system", "developer", "remove"]).optional(),
-      /**
-       * 强制将模型视为推理模型
-       */
-      forceReasoning: import_zod6.z.boolean().optional()
-    })
-  )
-);
 var AihubmixTranscriptionModel = class extends import_internal.OpenAITranscriptionModel {
   async doGenerate(options) {
     if (options.mediaType) {
@@ -536,29 +349,13 @@ var AihubmixTranscriptionModel = class extends import_internal.OpenAITranscripti
     return super.doGenerate(options);
   }
 };
-
-
-
-
-        fetch: _AihubmixOpenAIChatLanguageModel.createCustomFetch(settings.fetch)
-      });
+function transformRequestBody(body) {
+  if (body.tools && Array.isArray(body.tools) && body.tools.length === 0 && body.tool_choice) {
+    const { tool_choice, ...rest } = body;
+    return rest;
   }
-
-
-    if (options?.body) {
-      try {
-        const body = JSON.parse(options.body);
-        if (body.tools && Array.isArray(body.tools) && body.tools.length === 0 && body.tool_choice) {
-          delete body.tool_choice;
-          options.body = JSON.stringify(body);
-        }
-      } catch (error) {
-      }
-    }
-    return originalFetch ? originalFetch(url, options) : fetch(url, options);
-  };
-}
-};
+  return body;
+}
 function createAihubmix(options = {}) {
   const getHeaders = () => ({
     Authorization: `Bearer ${(0, import_provider_utils6.loadApiKey)({
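The hunk above swaps the old custom-`fetch` wrapper, which re-parsed `options.body` with `JSON.parse` on every request, for a `transformRequestBody` hook that works on the already-structured body: when `tools` is present but empty, `tool_choice` is dropped. The function is small enough to run in isolation; this sketch copies its logic verbatim and exercises both branches:

```javascript
// As shipped in 2.0.5: strip tool_choice when the tools array is empty,
// since sending tool_choice without any tools is rejected by some upstreams.
function transformRequestBody(body) {
  if (body.tools && Array.isArray(body.tools) && body.tools.length === 0 && body.tool_choice) {
    const { tool_choice, ...rest } = body;
    return rest;
  }
  return body;
}

// Empty tools array: tool_choice is removed, other fields survive.
const cleaned = transformRequestBody({ model: "gpt-4o", tools: [], tool_choice: "auto" });
console.log("tool_choice" in cleaned); // false

// Non-empty tools array: the body passes through untouched.
const kept = transformRequestBody({ model: "gpt-4o", tools: [{ type: "function" }], tool_choice: "auto" });
console.log(kept.tool_choice); // "auto"
```

Returning a new object via rest-spread (instead of the old in-place `delete`) also avoids mutating the caller's body.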
@@ -611,45 +408,38 @@ function createAihubmix(options = {}) {
       }
     );
   }
-
-      return new import_internal.OpenAIResponsesLanguageModel(deploymentName, {
-        provider: "aihubmix.chat",
-        url,
-        headers: getHeaders,
-        fetch: options.fetch,
-        fileIdPrefixes: ["file-"]
-      });
-    }
-    return new AihubmixOpenAIChatLanguageModel(deploymentName, {
+    return new import_openai_compatible.OpenAICompatibleChatLanguageModel(deploymentName, {
       provider: "aihubmix.chat",
       url,
       headers: getHeaders,
-      fetch: options.fetch
+      fetch: options.fetch,
+      includeUsage: true,
+      supportsStructuredOutputs: true,
+      transformRequestBody
     });
   };
-  const createCompletionModel = (modelId, settings = {}) => new
+  const createCompletionModel = (modelId, settings = {}) => new import_openai_compatible.OpenAICompatibleCompletionLanguageModel(modelId, {
     provider: "aihubmix.completion",
     url,
     headers: getHeaders,
-    fetch: options.fetch
+    fetch: options.fetch,
+    includeUsage: true
   });
   const createEmbeddingModel = (modelId, settings = {}) => {
-    return new
+    return new import_openai_compatible.OpenAICompatibleEmbeddingModel(modelId, {
      provider: "aihubmix.embeddings",
-      headers: getHeaders,
      url,
+      headers: getHeaders,
      fetch: options.fetch
    });
  };
  const createResponsesModel = (modelId) => new import_internal.OpenAIResponsesLanguageModel(modelId, {
    provider: "aihubmix.responses",
    url,
-    headers: getHeaders
-    fetch: options.fetch,
-    fileIdPrefixes: ["file-"]
+    headers: getHeaders
  });
  const createImageModel = (modelId, settings = {}) => {
-    return new
+    return new import_openai_compatible.OpenAICompatibleImageModel(modelId, {
      provider: "aihubmix.image",
      url,
      headers: getHeaders,
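The factory functions above all share the same `url`/`getHeaders`/`fetch` wiring and differ only in the `provider` label and per-kind flags such as `includeUsage`. A reduced sketch of that pattern (the names, the placeholder base URL, and the plain-object "models" are illustrative stand-ins, not the package's real classes or endpoint):

```javascript
// Illustrative stand-in for the createAihubmix wiring: one shared config,
// per-kind overrides. loadApiKey and the real model classes are omitted.
function createProviderConfig({ apiKey, fetch: customFetch } = {}) {
  // Headers are built lazily, as in the diff, so the key is resolved per call.
  const getHeaders = () => ({ Authorization: `Bearer ${apiKey}` });
  const base = {
    url: ({ path }) => `https://aihubmix.example/v1${path}`, // placeholder URL
    headers: getHeaders,
    fetch: customFetch,
  };
  return {
    chat: (modelId) => ({
      provider: "aihubmix.chat",
      modelId,
      ...base,
      includeUsage: true,
      supportsStructuredOutputs: true,
    }),
    embedding: (modelId) => ({ provider: "aihubmix.embeddings", modelId, ...base }),
  };
}

const cfg = createProviderConfig({ apiKey: "sk-test" });
console.log(cfg.chat("gpt-4o").provider); // "aihubmix.chat"
console.log(cfg.chat("gpt-4o").headers().Authorization); // "Bearer sk-test"
```

Centralizing `base` is what lets the 2.0.5 rewrite swap individual model classes (e.g. chat to `OpenAICompatibleChatLanguageModel`) without touching the shared auth or URL logic.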
@@ -700,8 +490,6 @@ var aihubmix = createAihubmix();
 // Annotate the CommonJS export names for ESM import in node:
 0 && (module.exports = {
   aihubmix,
-  aihubmixChatProviderOptionsSchema,
-  aihubmixResponsesProviderOptionsSchema,
   createAihubmix
 });
 //# sourceMappingURL=index.js.map