ai-sdk-provider-env 0.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +275 -0
- package/dist/index.cjs +359 -0
- package/dist/index.cjs.map +1 -0
- package/dist/index.d.cts +180 -0
- package/dist/index.d.cts.map +1 -0
- package/dist/index.d.mts +180 -0
- package/dist/index.d.mts.map +1 -0
- package/dist/index.mjs +334 -0
- package/dist/index.mjs.map +1 -0
- package/package.json +79 -0
package/LICENSE
ADDED
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2026
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
package/README.md
ADDED
@@ -0,0 +1,275 @@
+> [中文](./README_zh.md)
+
+# ai-sdk-provider-env
+
+A dynamic, environment-variable-driven provider for [Vercel AI SDK](https://sdk.vercel.ai/). Resolves AI provider configuration from env var conventions at runtime, so you can switch models without touching code.
+
+[](https://www.npmjs.com/package/ai-sdk-provider-env)
+[](./LICENSE)
+
+## Motivation
+
+Using multiple AI providers with Vercel AI SDK means importing each SDK, configuring API keys and base URLs, and wiring everything together — per provider, per project. Switching providers requires code changes.
+
+`ai-sdk-provider-env` eliminates this boilerplate. Define provider configurations through environment variables and resolve them at runtime. Add a new provider by setting env vars, switch models by changing a string — no code changes needed.
+
+## Features
+
+- Resolve provider config (base URL, API key, compatibility mode) from environment variables automatically
+- Built-in presets for popular providers, so you only need to set an API key
+- Supports OpenAI, Anthropic, and any OpenAI-compatible API
+- Implements `ProviderV3`, plugs directly into `createProviderRegistry`
+- Provider instances are cached, no redundant initialization
+- Fully customizable: custom fetch, env-based headers, custom separator, code-based configs
+
+## Installation
+
+```bash
+pnpm add ai-sdk-provider-env
+```
+
+Install provider SDKs as needed:
+
+```bash
+pnpm add @ai-sdk/openai             # for OpenAI and OpenAI-compatible
+pnpm add @ai-sdk/anthropic          # for Anthropic
+pnpm add @ai-sdk/openai-compatible  # for generic OpenAI-compatible APIs
+```
+
+## Quick Start
+
+```ts
+import { createProviderRegistry, generateText } from 'ai'
+import { envProvider } from 'ai-sdk-provider-env'
+
+const registry = createProviderRegistry({
+  env: envProvider(),
+})
+
+// Use a preset: only API_KEY is required
+// OPENAI_PRESET=openai
+// OPENAI_API_KEY=sk-xxx
+const model = registry.languageModel('env:openai/gpt-4o')
+
+const { text } = await generateText({ model, prompt: 'Hello!' })
+```
+
+Any env var prefix is a config set. Two endpoints? Two prefixes, zero code changes:
+
+```bash
+# .env
+FAST_BASE_URL=https://fast-api.example.com/v1
+FAST_API_KEY=key-fast
+
+SMART_BASE_URL=https://smart-api.example.com/v1
+SMART_API_KEY=key-smart
+```
+
+```ts
+const draft = await generateText({
+  model: registry.languageModel('env:fast/llama-3-8b'),
+  prompt: 'Write a story',
+})
+
+const review = await generateText({
+  model: registry.languageModel('env:smart/gpt-4o'),
+  prompt: `Review this: ${draft.text}`,
+})
+```
+
+## Environment Variable Convention
+
+The model ID format is `{configSet}/{modelId}`. The config set name maps to an env var prefix (uppercased).
+
+With the default separator `_`, a config set reads these variables (`[MYAI]` = your config set name, uppercased):
+
+| Variable | Required | Description |
+|---|---|---|
+| `[MYAI]_API_KEY` | Yes | API key |
+| `[MYAI]_BASE_URL` | Yes (unless preset is set) | API base URL |
+| `[MYAI]_PRESET` | No | Built-in preset name (e.g. `openai`) |
+| `[MYAI]_COMPATIBLE` | No | Compatibility mode (default: `openai`) |
+| `[MYAI]_HEADERS` | No | Custom HTTP headers (JSON format) |
+
+When `PRESET` is set, `BASE_URL` and `COMPATIBLE` become optional and fall back to the preset's values.
+
+**Compatibility modes:**
+
+| Value | Behavior |
+|---|---|
+| `openai` | Uses `@ai-sdk/openai` (default) |
+| `anthropic` | Uses `@ai-sdk/anthropic` |
+| any other string | Uses `@ai-sdk/openai-compatible` with that string as the provider name |
+
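+For instance, a self-hosted OpenAI-compatible endpoint can be registered under any provider name of your choosing (the URL, key, and name below are placeholders, not values shipped with this package):
+
+```bash
+# .env (hypothetical self-hosted endpoint)
+LOCAL_BASE_URL=http://localhost:8000/v1
+LOCAL_API_KEY=placeholder-key
+# Any value other than "openai"/"anthropic" routes to @ai-sdk/openai-compatible,
+# with this string used as the provider name
+LOCAL_COMPATIBLE=local-llm
+```
+
+Models from this config set would then be addressed as `env:local/{modelId}` when mounted under the `env` registry key.
+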
+## Built-in Presets
+
+| Preset name | Base URL | Compatible |
+|---|---|---|
+| `openai` | `https://api.openai.com/v1` | `openai` |
+| `anthropic` | `https://api.anthropic.com` | `anthropic` |
+| `deepseek` | `https://api.deepseek.com` | `openai` |
+| `zhipu` | `https://open.bigmodel.cn/api/paas/v4` | `openai` |
+| `groq` | `https://api.groq.com/openai/v1` | `openai` |
+| `together` | `https://api.together.xyz/v1` | `openai` |
+| `fireworks` | `https://api.fireworks.ai/inference/v1` | `openai` |
+| `mistral` | `https://api.mistral.ai/v1` | `openai` |
+| `moonshot` | `https://api.moonshot.cn/v1` | `openai` |
+| `perplexity` | `https://api.perplexity.ai` | `openai` |
+| `openrouter` | `https://openrouter.ai/api/v1` | `openai` |
+| `siliconflow` | `https://api.siliconflow.cn/v1` | `openai` |
+
+## API Reference
+
+### `envProvider(options?)`
+
+Returns a `ProviderV3` instance.
+
+```ts
+import { envProvider } from 'ai-sdk-provider-env'
+
+const provider = envProvider(options)
+```
+
+**Options** (`EnvProviderOptions`):
+
+| Option | Type | Default | Description |
+|---|---|---|---|
+| `separator` | `string` | `'_'` | Separator between the prefix and the variable name |
+| `configs` | `Record<string, ConfigSetEntry>` | `undefined` | Explicit config sets (takes precedence over env vars) |
+| `defaults` | `EnvProviderDefaults` | `undefined` | Global defaults applied to all providers (can be overridden per config set) |
+
+**`EnvProviderDefaults`:**
+
+| Option | Type | Default | Description |
+|---|---|---|---|
+| `fetch` | `typeof globalThis.fetch` | `undefined` | Custom fetch implementation passed to all created providers |
+| `headers` | `Record<string, string>` | `undefined` | Default HTTP headers for all providers (overridden by config-set headers) |
+
+**`ConfigSetEntry`:**
+
+```ts
+interface ConfigSetEntry {
+  apiKey: string
+  preset?: string
+  baseURL?: string
+  compatible?: string // 'openai' | 'anthropic' | any string (default: 'openai')
+  headers?: Record<string, string>
+}
+```
+
+**Model ID format:**
+
+```
+{configSet}/{modelId}
+```
+
+Examples: `openai/gpt-4o`, `anthropic/claude-sonnet-4-20250514`, `myapi/some-model`.
+
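+The config set is split off at the first `/` only, mirroring the `parseModelId` function in the bundled `dist/index.cjs`, so model IDs that themselves contain slashes (common with OpenRouter-style IDs) pass through intact. A standalone sketch:
+
+```ts
+// Split "{configSet}/{modelId}" at the FIRST slash; the remainder is the model ID.
+function parseModelId(modelId: string): { configSet: string; model: string } {
+  const slashIndex = modelId.indexOf('/')
+  if (slashIndex === -1) {
+    throw new Error(`Invalid model ID "${modelId}". Expected "{configSet}/{modelId}"`)
+  }
+  return {
+    configSet: modelId.slice(0, slashIndex),
+    model: modelId.slice(slashIndex + 1),
+  }
+}
+
+console.log(parseModelId('openrouter/meta-llama/llama-3-70b'))
+// → { configSet: 'openrouter', model: 'meta-llama/llama-3-70b' }
+```
+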
+## Advanced Usage
+
+### Custom separator
+
+If single underscores conflict with your naming scheme, use double underscores or any other string:
+
+```ts
+const provider = envProvider({ separator: '__' })
+
+// Now reads: OPENAI__BASE_URL, OPENAI__API_KEY, OPENAI__PRESET, OPENAI__COMPATIBLE
+```
+
+### Code-based configs
+
+Skip env vars entirely and pass config directly. This takes the highest precedence:
+
+```ts
+const provider = envProvider({
+  configs: {
+    openai: {
+      baseURL: 'https://api.openai.com/v1',
+      apiKey: process.env.OPENAI_KEY!,
+      compatible: 'openai',
+    },
+    claude: {
+      baseURL: 'https://api.anthropic.com',
+      apiKey: process.env.ANTHROPIC_KEY!,
+      compatible: 'anthropic',
+    },
+    deepseek: {
+      preset: 'deepseek',
+      apiKey: process.env.DEEPSEEK_KEY!,
+    },
+  },
+})
+
+const model = provider.languageModel('openai/gpt-4o')
+```
+
+### Custom fetch
+
+Pass a custom fetch implementation to all providers. Useful for proxies, logging, or test mocks:
+
+```ts
+const provider = envProvider({ defaults: { fetch: myCustomFetch } })
+```
+
+### Default headers
+
+Set HTTP headers that apply to all providers. Per-config-set headers (from env vars or code configs) override defaults with the same key:
+
+```ts
+const provider = envProvider({
+  defaults: {
+    headers: { 'X-App-Name': 'my-app', 'X-Request-Source': 'server' },
+  },
+})
+```
+
+### Custom headers via env vars
+
+Set per-config-set HTTP headers using the `HEADERS` env var. The value must be valid JSON:
+
+```bash
+OPENAI_HEADERS={"X-Custom":"value","X-Request-Source":"my-app"}
+```
+
+These headers are merged into every request made by that config set's provider. When combined with `defaults.headers`, config-set headers take precedence for the same key.
+
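+The merge is an ordinary object spread with config-set headers applied last, matching the `{ ...defaultHeaders, ...headers }` merge in the bundled `dist/index.cjs` (header values here are illustrative):
+
+```ts
+// defaults.headers, as passed to envProvider({ defaults: { headers: ... } })
+const defaults = { 'X-App-Name': 'my-app', 'X-Request-Source': 'server' }
+// per-config-set headers, e.g. parsed from OPENAI_HEADERS
+const configSetHeaders = { 'X-Request-Source': 'my-app' }
+
+// Config-set headers win for the same key; other defaults pass through.
+const merged = { ...defaults, ...configSetHeaders }
+console.log(merged)
+// → { 'X-App-Name': 'my-app', 'X-Request-Source': 'my-app' }
+```
+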
+### Using with `createProviderRegistry`
+
+`envProvider()` implements `ProviderV3`, so it works directly with `createProviderRegistry`:
+
+```ts
+import { createProviderRegistry, generateText } from 'ai'
+import { envProvider } from 'ai-sdk-provider-env'
+
+const registry = createProviderRegistry({
+  env: envProvider(),
+})
+
+// Language model
+const model = registry.languageModel('env:openai/gpt-4o')
+
+// Embedding model
+const embedder = registry.embeddingModel('env:openai/text-embedding-3-small')
+
+// Image model
+const imageModel = registry.imageModel('env:openai/dall-e-3')
+
+const { text } = await generateText({
+  model,
+  prompt: 'Hello!',
+})
+```
+
+The model ID format inside the registry is `{registryKey}:{configSet}/{modelId}`. With the setup above, `env:openai/gpt-4o` means config set `openai`, model `gpt-4o`.
+
+You can also mount multiple providers side by side:
+
+```ts
+import { createOpenAI } from '@ai-sdk/openai'
+
+const registry = createProviderRegistry({
+  env: envProvider(),
+  openai: createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
+})
+```
package/dist/index.cjs
ADDED
@@ -0,0 +1,359 @@
+Object.defineProperty(exports, Symbol.toStringTag, { value: 'Module' });
+//#region \0rolldown/runtime.js
+var __create = Object.create;
+var __defProp = Object.defineProperty;
+var __getOwnPropDesc = Object.getOwnPropertyDescriptor;
+var __getOwnPropNames = Object.getOwnPropertyNames;
+var __getProtoOf = Object.getPrototypeOf;
+var __hasOwnProp = Object.prototype.hasOwnProperty;
+var __copyProps = (to, from, except, desc) => {
+  if (from && typeof from === "object" || typeof from === "function") {
+    for (var keys = __getOwnPropNames(from), i = 0, n = keys.length, key; i < n; i++) {
+      key = keys[i];
+      if (!__hasOwnProp.call(to, key) && key !== except) {
+        __defProp(to, key, {
+          get: ((k) => from[k]).bind(null, key),
+          enumerable: !(desc = __getOwnPropDesc(from, key)) || desc.enumerable
+        });
+      }
+    }
+  }
+  return to;
+};
+var __toESM = (mod, isNodeMode, target) => (target = mod != null ? __create(__getProtoOf(mod)) : {}, __copyProps(isNodeMode || !mod || !mod.__esModule ? __defProp(target, "default", {
+  value: mod,
+  enumerable: true
+}) : target, mod));
+
+//#endregion
+let node_process = require("node:process");
+node_process = __toESM(node_process);
+let _ai_sdk_provider = require("@ai-sdk/provider");
+
+//#region src/factories.ts
+/**
+ * Create an OpenAI provider.
+ *
+ * Dynamically requires `@ai-sdk/openai`, so it only needs to be installed when actually used.
+ */
+function createOpenAIProvider(opts) {
+  try {
+    const { createOpenAI } = require("@ai-sdk/openai");
+    return createOpenAI(opts);
+  } catch {
+    throw new Error("[ai-sdk-provider-env] openai compatibility mode requires @ai-sdk/openai. Run: npm install @ai-sdk/openai");
+  }
+}
+/**
+ * Create an Anthropic provider.
+ *
+ * Dynamically requires `@ai-sdk/anthropic`, so it only needs to be installed when actually used.
+ */
+function createAnthropicProvider(opts) {
+  try {
+    const { createAnthropic } = require("@ai-sdk/anthropic");
+    return createAnthropic(opts);
+  } catch {
+    throw new Error("[ai-sdk-provider-env] anthropic compatibility mode requires @ai-sdk/anthropic. Run: npm install @ai-sdk/anthropic");
+  }
+}
+/**
+ * Create an OpenAI Compatible provider.
+ *
+ * Dynamically requires `@ai-sdk/openai-compatible`, so it only needs to be installed when actually used.
+ */
+function createOpenAICompatibleProvider(opts) {
+  try {
+    const { createOpenAICompatible } = require("@ai-sdk/openai-compatible");
+    return createOpenAICompatible(opts);
+  } catch {
+    throw new Error("[ai-sdk-provider-env] openai-compatible mode requires @ai-sdk/openai-compatible. Run: npm install @ai-sdk/openai-compatible");
+  }
+}
+
+//#endregion
+//#region src/presets.ts
+/**
+ * Built-in preset configurations for common providers.
+ *
+ * When using a preset, only `{PREFIX}_PRESET` and `{PREFIX}_API_KEY`
+ * are required; `BASE_URL` and `COMPATIBLE` are provided by the preset.
+ */
+const builtinPresets = {
+  openai: {
+    baseURL: "https://api.openai.com/v1",
+    compatible: "openai"
+  },
+  anthropic: {
+    baseURL: "https://api.anthropic.com",
+    compatible: "anthropic"
+  },
+  deepseek: {
+    baseURL: "https://api.deepseek.com",
+    compatible: "openai"
+  },
+  zhipu: {
+    baseURL: "https://open.bigmodel.cn/api/paas/v4",
+    compatible: "openai"
+  },
+  groq: {
+    baseURL: "https://api.groq.com/openai/v1",
+    compatible: "openai"
+  },
+  together: {
+    baseURL: "https://api.together.xyz/v1",
+    compatible: "openai"
+  },
+  fireworks: {
+    baseURL: "https://api.fireworks.ai/inference/v1",
+    compatible: "openai"
+  },
+  mistral: {
+    baseURL: "https://api.mistral.ai/v1",
+    compatible: "openai"
+  },
+  moonshot: {
+    baseURL: "https://api.moonshot.cn/v1",
+    compatible: "openai"
+  },
+  perplexity: {
+    baseURL: "https://api.perplexity.ai",
+    compatible: "openai"
+  },
+  openrouter: {
+    baseURL: "https://openrouter.ai/api/v1",
+    compatible: "openai"
+  },
+  siliconflow: {
+    baseURL: "https://api.siliconflow.cn/v1",
+    compatible: "openai"
+  }
+};
+
+//#endregion
+//#region src/env-provider.ts
+/**
+ * Default factories that delegate to the real implementations in `factories.ts`.
+ */
+const defaultFactories = {
+  createOpenAI: createOpenAIProvider,
+  createAnthropic: createAnthropicProvider,
+  createOpenAICompatible: createOpenAICompatibleProvider
+};
+/**
+ * Testable core implementation that accepts injected provider factories.
+ *
+ * In tests, call this function directly with fake factories
+ * to avoid module mocking entirely.
+ */
+function createEnvProvider(factories, options = {}) {
+  const separator = options.separator ?? "_";
+  const defaultFetch = options.defaults?.fetch;
+  const defaultHeaders = options.defaults?.headers;
+  const cache = /* @__PURE__ */ new Map();
+  /**
+   * Resolve baseURL and compatible from a preset name.
+   */
+  function resolvePreset(presetName) {
+    const preset = builtinPresets[presetName];
+    if (!preset) {
+      const available = Object.keys(builtinPresets).join(", ");
+      throw new Error(`[ai-sdk-provider-env] Unknown preset "${presetName}". Available presets: ${available}`);
+    }
+    return {
+      baseURL: preset.baseURL,
+      compatible: preset.compatible ?? "openai"
+    };
+  }
+  /**
+   * Resolve config set configuration from explicit configs, presets, or environment variables.
+   */
+  function resolveConfig(configSet) {
+    if (options.configs?.[configSet]) {
+      const config = options.configs[configSet];
+      if (config.preset) {
+        const preset = resolvePreset(config.preset);
+        return {
+          baseURL: config.baseURL ?? preset.baseURL,
+          apiKey: config.apiKey,
+          compatible: config.compatible ?? preset.compatible,
+          ...config.headers && { headers: config.headers }
+        };
+      }
+      if (!config.baseURL) throw new Error(`[ai-sdk-provider-env] Missing baseURL in config for "${configSet}" (or set preset to use a built-in preset)`);
+      return {
+        baseURL: config.baseURL,
+        apiKey: config.apiKey,
+        compatible: config.compatible ?? "openai",
+        ...config.headers && { headers: config.headers }
+      };
+    }
+    const prefix = configSet.toUpperCase();
+    const env = (key) => node_process.default.env[`${prefix}${separator}${key}`];
+    const apiKey = env("API_KEY");
+    if (!apiKey) throw new Error(`[ai-sdk-provider-env] Missing env var ${prefix}${separator}API_KEY`);
+    const headersRaw = env("HEADERS");
+    let headers;
+    if (headersRaw) try {
+      headers = JSON.parse(headersRaw);
+    } catch {
+      throw new Error(`[ai-sdk-provider-env] Invalid JSON in ${prefix}${separator}HEADERS: ${headersRaw}`);
+    }
+    const presetName = env("PRESET");
+    if (presetName) {
+      const preset = resolvePreset(presetName);
+      return {
+        baseURL: env("BASE_URL") ?? preset.baseURL,
+        apiKey,
+        compatible: env("COMPATIBLE") ?? preset.compatible,
+        ...headers && { headers }
+      };
+    }
+    const baseURL = env("BASE_URL");
+    if (!baseURL) throw new Error(`[ai-sdk-provider-env] Missing env var ${prefix}${separator}BASE_URL (or set ${prefix}${separator}PRESET to use a preset)`);
+    return {
+      baseURL,
+      apiKey,
+      compatible: env("COMPATIBLE") ?? "openai",
+      ...headers && { headers }
+    };
+  }
+  /**
+   * Create the underlying provider based on the compatibility mode.
+   */
+  function createUnderlying(config) {
+    const { baseURL, apiKey, compatible, headers } = config;
+    const mergedHeaders = defaultHeaders || headers ? {
+      ...defaultHeaders,
+      ...headers
+    } : void 0;
+    const baseOpts = {
+      baseURL,
+      apiKey,
+      ...mergedHeaders && { headers: mergedHeaders },
+      ...defaultFetch && { fetch: defaultFetch }
+    };
+    switch (compatible) {
+      case "openai": return factories.createOpenAI(baseOpts);
+      case "anthropic": return factories.createAnthropic(baseOpts);
+      default: return factories.createOpenAICompatible({
+        name: compatible,
+        ...baseOpts
+      });
+    }
+  }
+  /**
+   * Get or create a cached provider for the given config set.
+   */
+  function getProvider(configSet) {
+    const key = configSet.toUpperCase();
+    const cached = cache.get(key);
+    if (cached) return cached;
+    const provider = createUnderlying(resolveConfig(configSet));
+    cache.set(key, provider);
+    return provider;
+  }
+  /**
+   * Parse a model ID. The first `/` separates the config set name from the actual model ID.
+   */
+  function parseModelId(modelId) {
+    const slashIndex = modelId.indexOf("/");
+    if (slashIndex === -1) throw new Error(`[ai-sdk-provider-env] Invalid model ID "${modelId}". Expected format: "{configSet}/{modelId}", e.g. "zhipu/glm-4"`);
+    return {
+      configSet: modelId.slice(0, slashIndex),
+      model: modelId.slice(slashIndex + 1)
+    };
+  }
+  return {
+    specificationVersion: "v3",
+    languageModel(modelId) {
+      const { configSet, model } = parseModelId(modelId);
+      return getProvider(configSet).languageModel(model);
+    },
+    embeddingModel(modelId) {
+      const { configSet, model } = parseModelId(modelId);
+      return getProvider(configSet).embeddingModel(model);
+    },
+    imageModel(modelId) {
+      const { configSet, model } = parseModelId(modelId);
+      return getProvider(configSet).imageModel(model);
+    },
+    textEmbeddingModel(modelId) {
+      const { configSet, model } = parseModelId(modelId);
+      const provider = getProvider(configSet);
+      if (!provider.textEmbeddingModel) throw new _ai_sdk_provider.NoSuchModelError({
+        modelId,
+        modelType: "embeddingModel"
+      });
+      return provider.textEmbeddingModel(model);
+    },
+    transcriptionModel(modelId) {
+      const { configSet, model } = parseModelId(modelId);
+      const provider = getProvider(configSet);
+      if (!provider.transcriptionModel) throw new _ai_sdk_provider.NoSuchModelError({
+        modelId,
+        modelType: "transcriptionModel"
+      });
+      return provider.transcriptionModel(model);
+    },
+    speechModel(modelId) {
+      const { configSet, model } = parseModelId(modelId);
+      const provider = getProvider(configSet);
+      if (!provider.speechModel) throw new _ai_sdk_provider.NoSuchModelError({
+        modelId,
+        modelType: "speechModel"
+      });
+      return provider.speechModel(model);
+    },
+    rerankingModel(modelId) {
+      const { configSet, model } = parseModelId(modelId);
+      const provider = getProvider(configSet);
+      if (!provider.rerankingModel) throw new _ai_sdk_provider.NoSuchModelError({
+        modelId,
+        modelType: "rerankingModel"
+      });
+      return provider.rerankingModel(model);
+    }
+  };
+}
+/**
+ * Create a dynamic, environment-variable-driven AI SDK provider.
+ *
+ * Automatically resolves provider configurations from env var naming conventions,
+ * with built-in preset support for quick setup.
+ *
+ * Env var convention (using config set `ZHIPU` with default separator `_` as example):
+ * - `ZHIPU_PRESET` — use a built-in preset (BASE_URL and COMPATIBLE become optional)
+ * - `ZHIPU_BASE_URL` — API base URL
+ * - `ZHIPU_API_KEY` — API key (required)
+ * - `ZHIPU_COMPATIBLE` — compatibility mode (defaults to `'openai'`)
+ * - `ZHIPU_HEADERS` — custom HTTP headers (JSON format)
+ *
+ * @example
+ * ```ts
+ * import { createProviderRegistry } from 'ai'
+ * import { envProvider } from 'ai-sdk-provider-env'
+ *
+ * const registry = createProviderRegistry({
+ *   env: envProvider(),
+ * })
+ *
+ * // Use a preset (only API_KEY is required)
+ * // DEEPSEEK_PRESET=deepseek
+ * // DEEPSEEK_API_KEY=sk-xxx
+ * const model = registry.languageModel('env:deepseek/deepseek-chat')
+ *
+ * // Specify all parameters manually
+ * // MYAPI_BASE_URL=https://api.example.com/v1
+ * // MYAPI_API_KEY=xxx
+ * const model2 = registry.languageModel('env:myapi/some-model')
+ * ```
+ */
+function envProvider(options = {}) {
+  return createEnvProvider(defaultFactories, options);
+}
+
+//#endregion
+exports.builtinPresets = builtinPresets;
+exports.envProvider = envProvider;
+//# sourceMappingURL=index.cjs.map