@auto-engineer/ai-gateway 0.14.0 → 0.16.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.turbo/turbo-build.log +5 -6
- package/CHANGELOG.md +8 -0
- package/README.md +148 -291
- package/dist/tsconfig.tsbuildinfo +1 -1
- package/package.json +1 -1
package/.turbo/turbo-build.log
CHANGED
@@ -1,6 +1,5 @@
-
-
->
-
-
-Fixed ESM imports in dist/
+
+> @auto-engineer/ai-gateway@0.16.0 build /home/runner/work/auto-engineer/auto-engineer/packages/ai-gateway
+> tsc && tsx ../../scripts/fix-esm-imports.ts
+
+Fixed ESM imports in dist/
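The build step above runs `tsc` and then `scripts/fix-esm-imports.ts`, which reports "Fixed ESM imports in dist/". That script is not included in this diff; the following is a hypothetical sketch of what such a post-compile step typically does, since Node's ESM loader requires explicit file extensions on relative specifiers. The function name `addJsExtensions` and the regex are assumptions, not the package's actual implementation.

```typescript
// Hypothetical sketch (the real scripts/fix-esm-imports.ts is not shown in this
// diff): rewrite extensionless relative import specifiers in compiled output so
// Node's ESM loader can resolve them, e.g. './config' -> './config.js'.
function addJsExtensions(source: string): string {
  return source.replace(
    /(from\s+['"])(\.\.?\/[^'"]+?)(['"])/g,
    (match, pre, spec, post) =>
      // Leave specifiers that already carry an extension untouched.
      /\.[a-z]+$/.test(spec) ? match : `${pre}${spec}.js${post}`,
  );
}

// Example: a compiled file in dist/ before and after the rewrite.
const before = `import { getDefaultModel } from './config';`;
console.log(addJsExtensions(before));
// → import { getDefaultModel } from './config.js';
```

Bare specifiers such as `'node:fs'` or package names are deliberately left alone; only `./` and `../` paths are rewritten.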
package/CHANGELOG.md
CHANGED
package/README.md
CHANGED
@@ -1,365 +1,222 @@
 # @auto-engineer/ai-gateway
 
-AI
+Unified AI provider abstraction layer with multi-provider support and MCP tool integration.
 
-
-
-This is a plugin for the Auto Engineer CLI. Install both the CLI and this plugin:
-
-```bash
-npm install -g @auto-engineer/cli
-npm install @auto-engineer/ai-gateway
-```
-
-## Configuration
-
-Add this plugin to your `auto.config.ts`:
-
-```typescript
-export default {
-  plugins: [
-    '@auto-engineer/ai-gateway',
-    // ... other plugins
-  ],
-};
-```
-
-### Environment Variables
-
-Configure AI providers by setting environment variables in a `.env` file or your environment:
-
-```bash
-# At least one of these is required
-OPENAI_API_KEY=your-openai-key
-ANTHROPIC_API_KEY=your-anthropic-key
-GEMINI_API_KEY=your-google-key
-XAI_API_KEY=your-xai-key
-
-# Custom Provider Configuration (optional)
-# Use this to connect to any OpenAI-compatible API endpoint
-CUSTOM_PROVIDER_NAME=litellm
-CUSTOM_PROVIDER_BASE_URL=https://api.litellm.ai
-CUSTOM_PROVIDER_API_KEY=your-custom-api-key
-CUSTOM_PROVIDER_DEFAULT_MODEL=claude-3-sonnet
-
-# Optional: Set default provider and model
-DEFAULT_AI_PROVIDER=openai
-DEFAULT_AI_MODEL=gpt-4o-mini
-```
-
-## Commands
-
-This plugin provides the following commands:
-
-- `ai:generate-text` - Generate text using AI models
-- `ai:stream-text` - Stream text output from AI models
-- `ai:generate-structured` - Generate structured data with schema validation
-- `ai:stream-structured` - Stream structured data with schema validation
-
-## What does this plugin do?
-
-The AI Gateway plugin provides a unified interface for interacting with multiple AI providers (OpenAI, Anthropic, Google, XAI, and Custom providers) and integrates with the Auto Engineer ecosystem for AI-driven code generation and tool execution. It supports text generation, structured data generation, and streaming capabilities with robust error handling and debugging.
+---
 
-
+## Purpose
 
-
+Without `@auto-engineer/ai-gateway`, you would have to manage multiple AI provider SDKs, handle provider-specific authentication, and implement your own streaming and structured data generation logic.
 
-
+This package provides a unified interface for OpenAI, Anthropic, Google, XAI, and custom OpenAI-compatible providers. Built on the Vercel AI SDK with support for text generation, structured data, streaming, and MCP tools.
 
-
-- Automatic provider selection based on environment configuration
-- Fallback to available providers if default is not configured
-- Configurable default models per provider
+---
 
-
-
-The AI Gateway now supports custom providers, allowing you to connect to any OpenAI-compatible API endpoint. This is particularly useful for:
-
-- **LiteLLM Proxy**: Access 100+ AI models through a single interface
-- **Local AI models**: Connect to locally hosted models (Ollama, local OpenAI servers)
-- **Corporate AI endpoints**: Use company-hosted AI services
-- **Custom AI proxies**: Route through custom authentication or processing layers
-
-#### Configuration
-
-To configure a custom provider, set all four environment variables:
-
-```bash
-CUSTOM_PROVIDER_NAME=your-provider-name
-CUSTOM_PROVIDER_BASE_URL=https://your-api-endpoint.com
-CUSTOM_PROVIDER_API_KEY=your-api-key
-CUSTOM_PROVIDER_DEFAULT_MODEL=your-default-model
-```
-
-#### Common Use Cases
-
-**LiteLLM Proxy:**
-
-```bash
-CUSTOM_PROVIDER_NAME=litellm
-CUSTOM_PROVIDER_BASE_URL=https://api.litellm.ai
-CUSTOM_PROVIDER_API_KEY=sk-litellm-your-key
-CUSTOM_PROVIDER_DEFAULT_MODEL=claude-3-sonnet
-```
-
-**Local Ollama:**
-
-```bash
-CUSTOM_PROVIDER_NAME=ollama
-CUSTOM_PROVIDER_BASE_URL=http://localhost:11434/v1
-CUSTOM_PROVIDER_API_KEY=ollama
-CUSTOM_PROVIDER_DEFAULT_MODEL=llama3.1:8b
-```
-
-**Azure OpenAI:**
+## Installation
 
 ```bash
-
-CUSTOM_PROVIDER_BASE_URL=https://your-resource.openai.azure.com/openai/deployments
-CUSTOM_PROVIDER_API_KEY=your-azure-key
-CUSTOM_PROVIDER_DEFAULT_MODEL=gpt-4
+pnpm add @auto-engineer/ai-gateway
 ```
 
-
-
-### Text Generation
-
-- Generate text with customizable parameters (temperature, max tokens)
-- Supports both synchronous and streaming text generation
-- Integrates with registered tools for enhanced functionality
-- Image-based text generation for supported providers (OpenAI, XAI)
+## Quick Start
 
-
-
-- Generates structured data with Zod schema validation
-- Retry logic for schema validation failures
-- Enhanced error prompts for iterative refinement
-- Streaming support for partial object updates
-
-### Tool Integration
-
-- Registers and executes custom tools via the Model Context Protocol (MCP) server
-- Supports batch tool registration
-- Validates tool inputs with Zod schemas
-- Integrates tools with AI-driven workflows
-
-### Debugging Support
-
-Comprehensive debug logging with namespaces:
-
-- `ai-gateway`: General operations
-- `ai-gateway:call`: AI call operations
-- `ai-gateway:provider`: Provider selection and initialization
-- `ai-gateway:error`: Error handling
-- `ai-gateway:stream`: Streaming operations
-- `ai-gateway:result`: Result processing
+```typescript
+import { generateTextWithAI, AIProvider } from '@auto-engineer/ai-gateway';
 
-
+const text = await generateTextWithAI('Explain quantum computing', {
+  provider: AIProvider.Anthropic,
+  temperature: 0.7,
+  maxTokens: 1000,
+});
 
-
-
+console.log(text);
+// → "Quantum computing is a type of computation..."
 ```
 
-
+---
 
-##
+## How-to Guides
 
-###
+### Generate Text
 
 ```typescript
 import { generateTextWithAI } from '@auto-engineer/ai-gateway';
 
-const
-  provider: 'openai',
-  model: 'gpt-4o-mini',
-  temperature: 0.7,
-  maxTokens: 500,
-});
-
-console.log(result);
+const text = await generateTextWithAI('Write a haiku about coding');
 ```
 
-###
+### Stream Text
 
 ```typescript
-import {
-
-const result = await generateTextStreamingWithAI('Explain quantum computing', {
-  provider: 'anthropic',
-  model: 'claude-sonnet-4-20250514',
-  streamCallback: (token) => process.stdout.write(token),
-});
+import { streamTextWithAI } from '@auto-engineer/ai-gateway';
 
-
+for await (const chunk of streamTextWithAI('Tell me a story')) {
+  process.stdout.write(chunk);
+}
 ```
 
-###
+### Generate Structured Data
 
 ```typescript
 import { generateStructuredDataWithAI, z } from '@auto-engineer/ai-gateway';
 
-const
+const TodoSchema = z.object({
   title: z.string(),
-
+  priority: z.enum(['low', 'medium', 'high']),
   completed: z.boolean(),
 });
 
-const
-
-
-  schemaName: 'TodoItem',
-  schemaDescription: 'A todo item with title, description, and completion status',
+const todo = await generateStructuredDataWithAI('Create a todo for code review', {
+  schema: TodoSchema,
+  schemaName: 'Todo',
 });
+```
+
+### Use with Images
 
-
+```typescript
+import { generateTextWithImageAI, AIProvider } from '@auto-engineer/ai-gateway';
+
+const description = await generateTextWithImageAI(
+  'Describe this image',
+  imageBase64String,
+  { provider: AIProvider.OpenAI }
+);
 ```
 
-###
+### Register MCP Tools
 
 ```typescript
-import { registerTool, startServer } from '@auto-engineer/ai-gateway';
-import { z } from 'zod';
+import { registerTool, startServer, z } from '@auto-engineer/ai-gateway';
 
-registerTool(
+registerTool<{ name: string }>(
   'greet',
   {
     title: 'Greeting Tool',
-    description: 'Greets users
-    inputSchema: {
-      name: z.string().min(1, 'Name is required'),
-      language: z.enum(['en', 'es', 'fr', 'de']).optional().default('en'),
-    },
-  },
-  async ({ name, language = 'en' }) => {
-    const greetings = {
-      en: `Hello, ${name}!`,
-      es: `¡Hola, ${name}!`,
-      fr: `Bonjour, ${name}!`,
-      de: `Hallo, ${name}!`,
-    };
-    return { content: [{ type: 'text', text: greetings[language] }] };
+    description: 'Greets users',
+    inputSchema: { name: z.string() },
   },
+  async ({ name }) => ({
+    content: [{ type: 'text', text: `Hello, ${name}!` }],
+  })
 );
 
 await startServer();
 ```
 
-
-
-Customize behavior through `auto.config.ts`:
-
-```typescript
-export default {
-  plugins: [
-    [
-      '@auto-engineer/ai-gateway',
-      {
-        // Default AI provider
-        defaultProvider: 'openai',
-
-        // Default model per provider
-        defaultModels: {
-          openai: 'gpt-4o-mini',
-          anthropic: 'claude-sonnet-4-20250514',
-          google: 'gemini-2.5-pro',
-          xai: 'grok-4',
-        },
-
-        // Generation parameters
-        temperature: 0.7,
-        maxTokens: 1000,
-
-        // Tool integration
-        includeToolsByDefault: true,
-      },
-    ],
-  ],
-};
-```
+---
 
-##
+## API Reference
 
-
+### Package Exports
 
-
-
-
-
-
+```typescript
+import {
+  generateTextWithAI,
+  generateTextStreamingWithAI,
+  streamTextWithAI,
+  generateTextWithImageAI,
+  generateTextWithToolsAI,
+  generateStructuredDataWithAI,
+  streamStructuredDataWithAI,
+  getAvailableProviders,
+  getDefaultAIProvider,
+  getDefaultModel,
+  AIProvider,
+  z,
+} from '@auto-engineer/ai-gateway';
 
-
+import { createAIContext, generateText } from '@auto-engineer/ai-gateway/core';
 
+import { registerTool, startServer, mcpServer } from '@auto-engineer/ai-gateway/node';
 ```
-ai-gateway/
-├── src/
-│   ├── config.ts       # AI provider configuration
-│   ├── index.ts        # Main API and provider logic
-│   ├── mcp-server.ts   # Model Context Protocol server for tool management
-│   └── example-use.ts  # Example tool implementations
-├── DEBUG.md            # Debugging instructions
-├── CHANGELOG.md        # Version history
-├── package.json
-└── tsconfig.json
-```
-
-## Quality Assurance
-
-- **Type Safety**: Full TypeScript support with Zod schema validation
-- **Error Handling**: Comprehensive error detection and logging
-- **Testing**: Unit tests with Vitest for core functionality
-- **Linting**: ESLint and Prettier for code quality
-- **Debugging**: Detailed logging with `debug` library
-
-## Advanced Features
 
-###
+### Entry Points
 
-
-
--
+| Entry Point | Import Path | Description |
+|-------------|-------------|-------------|
+| Main | `@auto-engineer/ai-gateway` | Node.js wrappers with global context |
+| Core | `@auto-engineer/ai-gateway/core` | Pure functions requiring explicit context |
+| Node | `@auto-engineer/ai-gateway/node` | Full Node.js API including MCP server |
 
-###
+### Functions
 
-
-- Partial object streaming for structured data
-- Efficient chunk handling for large responses
+#### `generateTextWithAI(prompt: string, options?: AIOptions): Promise<string>`
 
-
+Generate text from a prompt.
 
-
-- Environment-based configuration
-- Support for provider-specific error handling
+#### `streamTextWithAI(prompt: string, options?: AIOptions): AsyncGenerator<string>`
 
-
+Stream text generation as an async generator.
 
-
-2. Configure environment variables for your AI providers
-3. Add the plugin to `auto.config.ts`
-4. Use the provided commands or import functions for AI operations
+#### `generateStructuredDataWithAI<T>(prompt: string, options: StructuredAIOptions<T>): Promise<T>`
 
-
+Generate structured data validated against a Zod schema.
 
-
-# Install dependencies
-npm install @auto-engineer/ai-gateway
-
-# Generate text
-auto ai:generate-text --prompt="Write a story" --provider=openai
-
-# Generate structured data
-auto ai:generate-structured --prompt="Create a user profile" --schema=userSchema.json
-```
-
-## Debugging
+### AIProvider
 
-
-
-
-
+```typescript
+enum AIProvider {
+  OpenAI = 'openai',
+  Anthropic = 'anthropic',
+  Google = 'google',
+  XAI = 'xai',
+  Custom = 'custom',
+}
 ```
 
-
+### AIOptions
 
-
-
-
+```typescript
+interface AIOptions {
+  provider?: AIProvider;
+  model?: string;
+  temperature?: number;
+  maxTokens?: number;
+  streamCallback?: (token: string) => void;
+  includeTools?: boolean;
+}
+```
+
+---
+
+## Architecture
+
+```
+src/
+├── index.ts
+├── core/
+│   ├── context.ts
+│   ├── generators.ts
+│   ├── types.ts
+│   └── providers/
+└── node/
+    ├── wrappers.ts
+    ├── config.ts
+    └── mcp-server.ts
+```
+
+The following diagram shows the provider abstraction:
+
+```mermaid
+flowchart LR
+  A[Application] --> B[ai-gateway]
+  B --> C[OpenAI]
+  B --> D[Anthropic]
+  B --> E[Google]
+  B --> F[XAI]
+  B --> G[Custom]
+```
+
+*Flow: Application calls ai-gateway, which routes to the configured provider.*
+
+### Dependencies
+
+| Package | Usage |
+|---------|-------|
+| `ai` | Vercel AI SDK core |
+| `@ai-sdk/anthropic` | Anthropic provider |
+| `@ai-sdk/openai` | OpenAI provider |
+| `@ai-sdk/google` | Google Gemini provider |
+| `@ai-sdk/xai` | xAI Grok provider |
+| `zod` | Schema validation |
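Both README versions describe automatic provider selection from environment variables, with fallback to whichever provider has an API key configured. As a rough illustration of that documented behavior, here is a standalone sketch; `pickProvider`, `availableProviders`, and `KEY_VARS` are hypothetical names, not the package's actual internals.

```typescript
// Standalone sketch of the documented selection rule: prefer DEFAULT_AI_PROVIDER
// when its key is set, otherwise fall back to the first provider whose API key
// is present. Not the package's real implementation.
enum AIProvider {
  OpenAI = 'openai',
  Anthropic = 'anthropic',
  Google = 'google',
  XAI = 'xai',
}

const KEY_VARS: Record<AIProvider, string> = {
  [AIProvider.OpenAI]: 'OPENAI_API_KEY',
  [AIProvider.Anthropic]: 'ANTHROPIC_API_KEY',
  [AIProvider.Google]: 'GEMINI_API_KEY',
  [AIProvider.XAI]: 'XAI_API_KEY',
};

function availableProviders(env: Record<string, string | undefined>): AIProvider[] {
  // A provider counts as available when its API-key variable is non-empty.
  return (Object.keys(KEY_VARS) as AIProvider[]).filter((p) => Boolean(env[KEY_VARS[p]]));
}

function pickProvider(env: Record<string, string | undefined>): AIProvider {
  const available = availableProviders(env);
  if (available.length === 0) {
    throw new Error('No AI provider configured: set at least one *_API_KEY');
  }
  const preferred = env.DEFAULT_AI_PROVIDER as AIProvider | undefined;
  return preferred !== undefined && available.includes(preferred) ? preferred : available[0];
}

console.log(pickProvider({ ANTHROPIC_API_KEY: 'sk-test' }));
// → anthropic
```

With several keys set and no `DEFAULT_AI_PROVIDER`, the first configured provider wins; setting `DEFAULT_AI_PROVIDER` overrides that order as long as the matching key is present.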