@draht/ai 2026.3.2-2
- package/README.md +1185 -0
- package/dist/api-registry.d.ts +20 -0
- package/dist/api-registry.d.ts.map +1 -0
- package/dist/api-registry.js +44 -0
- package/dist/api-registry.js.map +1 -0
- package/dist/cli.d.ts +3 -0
- package/dist/cli.d.ts.map +1 -0
- package/dist/cli.js +116 -0
- package/dist/cli.js.map +1 -0
- package/dist/env-api-keys.d.ts +9 -0
- package/dist/env-api-keys.d.ts.map +1 -0
- package/dist/env-api-keys.js +99 -0
- package/dist/env-api-keys.js.map +1 -0
- package/dist/index.d.ts +22 -0
- package/dist/index.d.ts.map +1 -0
- package/dist/index.js +21 -0
- package/dist/index.js.map +1 -0
- package/dist/models.d.ts +24 -0
- package/dist/models.d.ts.map +1 -0
- package/dist/models.generated.d.ts +13133 -0
- package/dist/models.generated.d.ts.map +1 -0
- package/dist/models.generated.js +12939 -0
- package/dist/models.generated.js.map +1 -0
- package/dist/models.js +55 -0
- package/dist/models.js.map +1 -0
- package/dist/providers/amazon-bedrock.d.ts +15 -0
- package/dist/providers/amazon-bedrock.d.ts.map +1 -0
- package/dist/providers/amazon-bedrock.js +585 -0
- package/dist/providers/amazon-bedrock.js.map +1 -0
- package/dist/providers/anthropic.d.ts +33 -0
- package/dist/providers/anthropic.d.ts.map +1 -0
- package/dist/providers/anthropic.js +729 -0
- package/dist/providers/anthropic.js.map +1 -0
- package/dist/providers/azure-openai-responses.d.ts +15 -0
- package/dist/providers/azure-openai-responses.d.ts.map +1 -0
- package/dist/providers/azure-openai-responses.js +184 -0
- package/dist/providers/azure-openai-responses.js.map +1 -0
- package/dist/providers/github-copilot-headers.d.ts +8 -0
- package/dist/providers/github-copilot-headers.d.ts.map +1 -0
- package/dist/providers/github-copilot-headers.js +29 -0
- package/dist/providers/github-copilot-headers.js.map +1 -0
- package/dist/providers/google-gemini-cli.d.ts +74 -0
- package/dist/providers/google-gemini-cli.d.ts.map +1 -0
- package/dist/providers/google-gemini-cli.js +735 -0
- package/dist/providers/google-gemini-cli.js.map +1 -0
- package/dist/providers/google-shared.d.ts +65 -0
- package/dist/providers/google-shared.d.ts.map +1 -0
- package/dist/providers/google-shared.js +306 -0
- package/dist/providers/google-shared.js.map +1 -0
- package/dist/providers/google-vertex.d.ts +15 -0
- package/dist/providers/google-vertex.d.ts.map +1 -0
- package/dist/providers/google-vertex.js +371 -0
- package/dist/providers/google-vertex.js.map +1 -0
- package/dist/providers/google.d.ts +13 -0
- package/dist/providers/google.d.ts.map +1 -0
- package/dist/providers/google.js +352 -0
- package/dist/providers/google.js.map +1 -0
- package/dist/providers/openai-codex-responses.d.ts +9 -0
- package/dist/providers/openai-codex-responses.d.ts.map +1 -0
- package/dist/providers/openai-codex-responses.js +699 -0
- package/dist/providers/openai-codex-responses.js.map +1 -0
- package/dist/providers/openai-completions.d.ts +15 -0
- package/dist/providers/openai-completions.d.ts.map +1 -0
- package/dist/providers/openai-completions.js +712 -0
- package/dist/providers/openai-completions.js.map +1 -0
- package/dist/providers/openai-responses-shared.d.ts +17 -0
- package/dist/providers/openai-responses-shared.d.ts.map +1 -0
- package/dist/providers/openai-responses-shared.js +427 -0
- package/dist/providers/openai-responses-shared.js.map +1 -0
- package/dist/providers/openai-responses.d.ts +13 -0
- package/dist/providers/openai-responses.d.ts.map +1 -0
- package/dist/providers/openai-responses.js +198 -0
- package/dist/providers/openai-responses.js.map +1 -0
- package/dist/providers/register-builtins.d.ts +3 -0
- package/dist/providers/register-builtins.d.ts.map +1 -0
- package/dist/providers/register-builtins.js +63 -0
- package/dist/providers/register-builtins.js.map +1 -0
- package/dist/providers/simple-options.d.ts +8 -0
- package/dist/providers/simple-options.d.ts.map +1 -0
- package/dist/providers/simple-options.js +35 -0
- package/dist/providers/simple-options.js.map +1 -0
- package/dist/providers/transform-messages.d.ts +8 -0
- package/dist/providers/transform-messages.d.ts.map +1 -0
- package/dist/providers/transform-messages.js +155 -0
- package/dist/providers/transform-messages.js.map +1 -0
- package/dist/stream.d.ts +9 -0
- package/dist/stream.d.ts.map +1 -0
- package/dist/stream.js +28 -0
- package/dist/stream.js.map +1 -0
- package/dist/types.d.ts +279 -0
- package/dist/types.d.ts.map +1 -0
- package/dist/types.js +2 -0
- package/dist/types.js.map +1 -0
- package/dist/utils/event-stream.d.ts +21 -0
- package/dist/utils/event-stream.d.ts.map +1 -0
- package/dist/utils/event-stream.js +81 -0
- package/dist/utils/event-stream.js.map +1 -0
- package/dist/utils/http-proxy.d.ts +2 -0
- package/dist/utils/http-proxy.d.ts.map +1 -0
- package/dist/utils/http-proxy.js +15 -0
- package/dist/utils/http-proxy.js.map +1 -0
- package/dist/utils/json-parse.d.ts +9 -0
- package/dist/utils/json-parse.d.ts.map +1 -0
- package/dist/utils/json-parse.js +29 -0
- package/dist/utils/json-parse.js.map +1 -0
- package/dist/utils/oauth/anthropic.d.ts +17 -0
- package/dist/utils/oauth/anthropic.d.ts.map +1 -0
- package/dist/utils/oauth/anthropic.js +104 -0
- package/dist/utils/oauth/anthropic.js.map +1 -0
- package/dist/utils/oauth/github-copilot.d.ts +30 -0
- package/dist/utils/oauth/github-copilot.d.ts.map +1 -0
- package/dist/utils/oauth/github-copilot.js +281 -0
- package/dist/utils/oauth/github-copilot.js.map +1 -0
- package/dist/utils/oauth/google-antigravity.d.ts +26 -0
- package/dist/utils/oauth/google-antigravity.d.ts.map +1 -0
- package/dist/utils/oauth/google-antigravity.js +373 -0
- package/dist/utils/oauth/google-antigravity.js.map +1 -0
- package/dist/utils/oauth/google-gemini-cli.d.ts +26 -0
- package/dist/utils/oauth/google-gemini-cli.d.ts.map +1 -0
- package/dist/utils/oauth/google-gemini-cli.js +478 -0
- package/dist/utils/oauth/google-gemini-cli.js.map +1 -0
- package/dist/utils/oauth/index.d.ts +62 -0
- package/dist/utils/oauth/index.d.ts.map +1 -0
- package/dist/utils/oauth/index.js +133 -0
- package/dist/utils/oauth/index.js.map +1 -0
- package/dist/utils/oauth/openai-codex.d.ts +34 -0
- package/dist/utils/oauth/openai-codex.d.ts.map +1 -0
- package/dist/utils/oauth/openai-codex.js +380 -0
- package/dist/utils/oauth/openai-codex.js.map +1 -0
- package/dist/utils/oauth/pkce.d.ts +13 -0
- package/dist/utils/oauth/pkce.d.ts.map +1 -0
- package/dist/utils/oauth/pkce.js +31 -0
- package/dist/utils/oauth/pkce.js.map +1 -0
- package/dist/utils/oauth/types.d.ts +47 -0
- package/dist/utils/oauth/types.d.ts.map +1 -0
- package/dist/utils/oauth/types.js +2 -0
- package/dist/utils/oauth/types.js.map +1 -0
- package/dist/utils/overflow.d.ts +52 -0
- package/dist/utils/overflow.d.ts.map +1 -0
- package/dist/utils/overflow.js +115 -0
- package/dist/utils/overflow.js.map +1 -0
- package/dist/utils/sanitize-unicode.d.ts +22 -0
- package/dist/utils/sanitize-unicode.d.ts.map +1 -0
- package/dist/utils/sanitize-unicode.js +26 -0
- package/dist/utils/sanitize-unicode.js.map +1 -0
- package/dist/utils/typebox-helpers.d.ts +17 -0
- package/dist/utils/typebox-helpers.d.ts.map +1 -0
- package/dist/utils/typebox-helpers.js +21 -0
- package/dist/utils/typebox-helpers.js.map +1 -0
- package/dist/utils/validation.d.ts +18 -0
- package/dist/utils/validation.d.ts.map +1 -0
- package/dist/utils/validation.js +72 -0
- package/dist/utils/validation.js.map +1 -0
- package/package.json +67 -0
package/README.md
ADDED
@@ -0,0 +1,1185 @@
# @mariozechner/pi-ai

Unified LLM API with automatic model discovery, provider configuration, token and cost tracking, and simple context persistence and hand-off to other models mid-session.

**Note**: This library only includes models that support tool calling (function calling), as this is essential for agentic workflows.

## Table of Contents

- [Supported Providers](#supported-providers)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Tools](#tools)
  - [Defining Tools](#defining-tools)
  - [Handling Tool Calls](#handling-tool-calls)
  - [Streaming Tool Calls with Partial JSON](#streaming-tool-calls-with-partial-json)
  - [Validating Tool Arguments](#validating-tool-arguments)
  - [Complete Event Reference](#complete-event-reference)
- [Image Input](#image-input)
- [Thinking/Reasoning](#thinkingreasoning)
  - [Unified Interface](#unified-interface-streamsimplecompletesimple)
  - [Provider-Specific Options](#provider-specific-options-streamcomplete)
  - [Streaming Thinking Content](#streaming-thinking-content)
- [Stop Reasons](#stop-reasons)
- [Error Handling](#error-handling)
  - [Aborting Requests](#aborting-requests)
  - [Continuing After Abort](#continuing-after-abort)
- [APIs, Models, and Providers](#apis-models-and-providers)
  - [Providers and Models](#providers-and-models)
  - [Querying Providers and Models](#querying-providers-and-models)
  - [Custom Models](#custom-models)
  - [OpenAI Compatibility Settings](#openai-compatibility-settings)
  - [Type Safety](#type-safety)
- [Cross-Provider Handoffs](#cross-provider-handoffs)
- [Context Serialization](#context-serialization)
- [Browser Usage](#browser-usage)
- [Environment Variables](#environment-variables-nodejs-only)
  - [Checking Environment Variables](#checking-environment-variables)
- [OAuth Providers](#oauth-providers)
  - [Vertex AI (ADC)](#vertex-ai-adc)
  - [CLI Login](#cli-login)
  - [Programmatic OAuth](#programmatic-oauth)
  - [Login Flow Example](#login-flow-example)
  - [Using OAuth Tokens](#using-oauth-tokens)
  - [Provider Notes](#provider-notes)
- [License](#license)

## Supported Providers

- **OpenAI**
- **Azure OpenAI (Responses)**
- **OpenAI Codex** (ChatGPT Plus/Pro subscription, requires OAuth, see below)
- **Anthropic**
- **Google**
- **Vertex AI** (Gemini via Vertex AI)
- **Mistral**
- **Groq**
- **Cerebras**
- **xAI**
- **OpenRouter**
- **Vercel AI Gateway**
- **MiniMax**
- **GitHub Copilot** (requires OAuth, see below)
- **Google Gemini CLI** (requires OAuth, see below)
- **Antigravity** (requires OAuth, see below)
- **Amazon Bedrock**
- **Kimi For Coding** (Moonshot AI, uses Anthropic-compatible API)
- **Any OpenAI-compatible API**: Ollama, vLLM, LM Studio, etc.

## Installation

```bash
npm install @mariozechner/pi-ai
```

TypeBox exports are re-exported from `@mariozechner/pi-ai`: `Type`, `Static`, and `TSchema`.

## Quick Start

```typescript
import { Type, getModel, stream, complete, Context, Tool, StringEnum } from '@mariozechner/pi-ai';

// Fully typed with auto-complete support for both providers and models
const model = getModel('openai', 'gpt-4o-mini');

// Define tools with TypeBox schemas for type safety and validation
const tools: Tool[] = [{
  name: 'get_time',
  description: 'Get the current time',
  parameters: Type.Object({
    timezone: Type.Optional(Type.String({ description: 'Optional timezone (e.g., America/New_York)' }))
  })
}];

// Build a conversation context (easily serializable and transferable between models)
const context: Context = {
  systemPrompt: 'You are a helpful assistant.',
  messages: [{ role: 'user', content: 'What time is it?' }],
  tools
};

// Option 1: Streaming with all event types
const s = stream(model, context);

for await (const event of s) {
  switch (event.type) {
    case 'start':
      console.log(`Starting with ${event.partial.model}`);
      break;
    case 'text_start':
      console.log('\n[Text started]');
      break;
    case 'text_delta':
      process.stdout.write(event.delta);
      break;
    case 'text_end':
      console.log('\n[Text ended]');
      break;
    case 'thinking_start':
      console.log('[Model is thinking...]');
      break;
    case 'thinking_delta':
      process.stdout.write(event.delta);
      break;
    case 'thinking_end':
      console.log('[Thinking complete]');
      break;
    case 'toolcall_start':
      console.log(`\n[Tool call started: index ${event.contentIndex}]`);
      break;
    case 'toolcall_delta': {
      // Partial tool arguments are being streamed
      const partialCall = event.partial.content[event.contentIndex];
      if (partialCall.type === 'toolCall') {
        console.log(`[Streaming args for ${partialCall.name}]`);
      }
      break;
    }
    case 'toolcall_end':
      console.log(`\nTool called: ${event.toolCall.name}`);
      console.log(`Arguments: ${JSON.stringify(event.toolCall.arguments)}`);
      break;
    case 'done':
      console.log(`\nFinished: ${event.reason}`);
      break;
    case 'error':
      console.error(`Error: ${event.error}`);
      break;
  }
}

// Get the final message after streaming, add it to the context
const finalMessage = await s.result();
context.messages.push(finalMessage);

// Handle tool calls if any
const toolCalls = finalMessage.content.filter(b => b.type === 'toolCall');
for (const call of toolCalls) {
  // Execute the tool
  const result = call.name === 'get_time'
    ? new Date().toLocaleString('en-US', {
        timeZone: call.arguments.timezone || 'UTC',
        dateStyle: 'full',
        timeStyle: 'long'
      })
    : 'Unknown tool';

  // Add tool result to context (supports text and images)
  context.messages.push({
    role: 'toolResult',
    toolCallId: call.id,
    toolName: call.name,
    content: [{ type: 'text', text: result }],
    isError: false,
    timestamp: Date.now()
  });
}

// Continue if there were tool calls
if (toolCalls.length > 0) {
  const continuation = await complete(model, context);
  context.messages.push(continuation);
  console.log('After tool execution:', continuation.content);
}

console.log(`Total tokens: ${finalMessage.usage.input} in, ${finalMessage.usage.output} out`);
console.log(`Cost: $${finalMessage.usage.cost.total.toFixed(4)}`);

// Option 2: Get complete response without streaming
const response = await complete(model, context);

for (const block of response.content) {
  if (block.type === 'text') {
    console.log(block.text);
  } else if (block.type === 'toolCall') {
    console.log(`Tool: ${block.name}(${JSON.stringify(block.arguments)})`);
  }
}
```

## Tools

Tools enable LLMs to interact with external systems. This library uses TypeBox schemas for type-safe tool definitions with automatic validation using AJV. TypeBox schemas can be serialized and deserialized as plain JSON, making them ideal for distributed systems.

### Defining Tools

```typescript
import { Type, Tool, StringEnum } from '@mariozechner/pi-ai';

// Define tool parameters with TypeBox
const weatherTool: Tool = {
  name: 'get_weather',
  description: 'Get current weather for a location',
  parameters: Type.Object({
    location: Type.String({ description: 'City name or coordinates' }),
    units: StringEnum(['celsius', 'fahrenheit'], { default: 'celsius' })
  })
};

// Note: For Google API compatibility, use StringEnum helper instead of Type.Enum
// Type.Enum generates anyOf/const patterns that Google doesn't support

const bookMeetingTool: Tool = {
  name: 'book_meeting',
  description: 'Schedule a meeting',
  parameters: Type.Object({
    title: Type.String({ minLength: 1 }),
    startTime: Type.String({ format: 'date-time' }),
    endTime: Type.String({ format: 'date-time' }),
    attendees: Type.Array(Type.String({ format: 'email' }), { minItems: 1 })
  })
};
```

### Handling Tool Calls

Tool results use content blocks and can include both text and images:

```typescript
import { readFileSync } from 'fs';

const context: Context = {
  messages: [{ role: 'user', content: 'What is the weather in London?' }],
  tools: [weatherTool]
};

const response = await complete(model, context);

// Check for tool calls in the response
for (const block of response.content) {
  if (block.type === 'toolCall') {
    // Execute your tool with the arguments
    // See "Validating Tool Arguments" section for validation
    const result = await executeWeatherApi(block.arguments);

    // Add tool result with text content
    context.messages.push({
      role: 'toolResult',
      toolCallId: block.id,
      toolName: block.name,
      content: [{ type: 'text', text: JSON.stringify(result) }],
      isError: false,
      timestamp: Date.now()
    });
  }
}

// Tool results can also include images (for vision-capable models)
const imageBuffer = readFileSync('chart.png');
context.messages.push({
  role: 'toolResult',
  toolCallId: 'tool_xyz',
  toolName: 'generate_chart',
  content: [
    { type: 'text', text: 'Generated chart showing temperature trends' },
    { type: 'image', data: imageBuffer.toString('base64'), mimeType: 'image/png' }
  ],
  isError: false,
  timestamp: Date.now()
});
```

### Streaming Tool Calls with Partial JSON

During streaming, tool call arguments are progressively parsed as they arrive. This enables real-time UI updates before the complete arguments are available:

```typescript
const s = stream(model, context);

for await (const event of s) {
  if (event.type === 'toolcall_delta') {
    const toolCall = event.partial.content[event.contentIndex];

    // toolCall.arguments contains partially parsed JSON during streaming
    // This allows for progressive UI updates
    if (toolCall.type === 'toolCall' && toolCall.arguments) {
      // BE DEFENSIVE: arguments may be incomplete
      // Example: Show file path being written even before content is complete
      if (toolCall.name === 'write_file' && toolCall.arguments.path) {
        console.log(`Writing to: ${toolCall.arguments.path}`);

        // Content might be partial or missing
        if (toolCall.arguments.content) {
          console.log(`Content preview: ${toolCall.arguments.content.substring(0, 100)}...`);
        }
      }
    }
  }

  if (event.type === 'toolcall_end') {
    // Here toolCall.arguments is complete (but not yet validated)
    const toolCall = event.toolCall;
    console.log(`Tool completed: ${toolCall.name}`, toolCall.arguments);
  }
}
```

**Important notes about partial tool arguments:**

- During `toolcall_delta` events, `arguments` contains the best-effort parse of partial JSON
- Fields may be missing or incomplete - always check for existence before use
- String values may be truncated mid-word
- Arrays may be incomplete
- Nested objects may be partially populated
- At minimum, `arguments` will be an empty object `{}`, never `undefined`
- The Google provider does not support function call streaming. Instead, you will receive a single `toolcall_delta` event with the full arguments.

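The defensive pattern these notes call for can be sketched without the library. In this illustrative snippet, `WriteFileArgs` and `previewArgs` are hypothetical stand-ins for a mid-stream parse state, not types or functions exported by the package:

```typescript
// Hypothetical snapshot of tool arguments mid-stream: `path` may have arrived while
// `content` has not. This mimics the partial-parse states described above.
interface WriteFileArgs {
  path?: string;
  content?: string;
}

function previewArgs(args: WriteFileArgs): string {
  // Always check for existence: any field may still be missing or truncated.
  if (args.path === undefined) return '(no path yet)';
  const preview = args.content !== undefined ? args.content.slice(0, 100) : '(content not yet streamed)';
  return `${args.path}: ${preview}`;
}

console.log(previewArgs({ path: '/tmp/out.txt' }));
console.log(previewArgs({ path: '/tmp/out.txt', content: 'hello' }));
```

The same existence checks apply to array elements and nested objects, which may also be partially populated.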
### Validating Tool Arguments

When using `agentLoop`, tool arguments are automatically validated against your TypeBox schemas before execution. If validation fails, the error is returned to the model as a tool result, allowing it to retry.

When implementing your own tool execution loop with `stream()` or `complete()`, use `validateToolCall` to validate arguments before passing them to your tools:

```typescript
import { stream, validateToolCall, Tool } from '@mariozechner/pi-ai';

const tools: Tool[] = [weatherTool, calculatorTool];
const s = stream(model, { messages, tools });

for await (const event of s) {
  if (event.type === 'toolcall_end') {
    const toolCall = event.toolCall;

    try {
      // Validate arguments against the tool's schema (throws on invalid args)
      const validatedArgs = validateToolCall(tools, toolCall);
      const result = await executeMyTool(toolCall.name, validatedArgs);
      // ... add tool result to context
    } catch (error) {
      // Validation failed - return error as tool result so model can retry
      context.messages.push({
        role: 'toolResult',
        toolCallId: toolCall.id,
        toolName: toolCall.name,
        content: [{ type: 'text', text: error.message }],
        isError: true,
        timestamp: Date.now()
      });
    }
  }
}
```

### Complete Event Reference

All streaming events emitted during assistant message generation:

| Event Type | Description | Key Properties |
|------------|-------------|----------------|
| `start` | Stream begins | `partial`: Initial assistant message structure |
| `text_start` | Text block starts | `contentIndex`: Position in content array |
| `text_delta` | Text chunk received | `delta`: New text, `contentIndex`: Position |
| `text_end` | Text block complete | `content`: Full text, `contentIndex`: Position |
| `thinking_start` | Thinking block starts | `contentIndex`: Position in content array |
| `thinking_delta` | Thinking chunk received | `delta`: New text, `contentIndex`: Position |
| `thinking_end` | Thinking block complete | `content`: Full thinking, `contentIndex`: Position |
| `toolcall_start` | Tool call begins | `contentIndex`: Position in content array |
| `toolcall_delta` | Tool arguments streaming | `delta`: JSON chunk, `partial.content[contentIndex].arguments`: Partial parsed args |
| `toolcall_end` | Tool call complete | `toolCall`: Complete validated tool call with `id`, `name`, `arguments` |
| `done` | Stream complete | `reason`: Stop reason ("stop", "length", "toolUse"), `message`: Final assistant message |
| `error` | Error occurred | `reason`: Error type ("error" or "aborted"), `error`: AssistantMessage with partial content |

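A renderer typically folds the delta events above into complete blocks. The sketch below uses simplified stand-in event shapes (not the library's actual types) to show the accumulation pattern:

```typescript
// Simplified stand-ins for the streaming events in the table above (illustrative only).
type StreamEvent =
  | { type: 'text_delta'; delta: string; contentIndex: number }
  | { type: 'done'; reason: 'stop' | 'length' | 'toolUse' };

// Concatenate text_delta chunks into the full text, as a UI would while rendering.
function accumulateText(events: StreamEvent[]): string {
  let text = '';
  for (const event of events) {
    if (event.type === 'text_delta') text += event.delta;
  }
  return text;
}

const sample: StreamEvent[] = [
  { type: 'text_delta', delta: 'Hello', contentIndex: 0 },
  { type: 'text_delta', delta: ', world!', contentIndex: 0 },
  { type: 'done', reason: 'stop' },
];
console.log(accumulateText(sample)); // "Hello, world!"
```

Thinking and tool-call deltas follow the same shape, keyed by `contentIndex` into the partial message's content array.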
## Image Input

Models with vision capabilities can process images. You can check if a model supports images via the `input` property. If you pass images to a non-vision model, they are silently ignored.

```typescript
import { readFileSync } from 'fs';
import { getModel, complete } from '@mariozechner/pi-ai';

const model = getModel('openai', 'gpt-4o-mini');

// Check if model supports images
if (model.input.includes('image')) {
  console.log('Model supports vision');
}

const imageBuffer = readFileSync('image.png');
const base64Image = imageBuffer.toString('base64');

const response = await complete(model, {
  messages: [{
    role: 'user',
    content: [
      { type: 'text', text: 'What is in this image?' },
      { type: 'image', data: base64Image, mimeType: 'image/png' }
    ]
  }]
});

// Access the response
for (const block of response.content) {
  if (block.type === 'text') {
    console.log(block.text);
  }
}
```

## Thinking/Reasoning

Many models support thinking/reasoning capabilities where they can show their internal thought process. You can check if a model supports reasoning via the `reasoning` property. If you pass reasoning options to a non-reasoning model, they are silently ignored.

### Unified Interface (streamSimple/completeSimple)

```typescript
import { getModel, streamSimple, completeSimple } from '@mariozechner/pi-ai';

// Many models across providers support thinking/reasoning
const model = getModel('anthropic', 'claude-sonnet-4-20250514');
// or getModel('openai', 'gpt-5-mini');
// or getModel('google', 'gemini-2.5-flash');
// or getModel('xai', 'grok-code-fast-1');
// or getModel('groq', 'openai/gpt-oss-20b');
// or getModel('cerebras', 'gpt-oss-120b');
// or getModel('openrouter', 'z-ai/glm-4.5v');

// Check if model supports reasoning
if (model.reasoning) {
  console.log('Model supports reasoning/thinking');
}

// Use the simplified reasoning option
const response = await completeSimple(model, {
  messages: [{ role: 'user', content: 'Solve: 2x + 5 = 13' }]
}, {
  reasoning: 'medium' // 'minimal' | 'low' | 'medium' | 'high' | 'xhigh' (xhigh maps to high on non-OpenAI providers)
});

// Access thinking and text blocks
for (const block of response.content) {
  if (block.type === 'thinking') {
    console.log('Thinking:', block.thinking);
  } else if (block.type === 'text') {
    console.log('Response:', block.text);
  }
}
```

### Provider-Specific Options (stream/complete)

For fine-grained control, use the provider-specific options:

```typescript
import { getModel, complete } from '@mariozechner/pi-ai';

// OpenAI Reasoning (o1, o3, gpt-5)
const openaiModel = getModel('openai', 'gpt-5-mini');
await complete(openaiModel, context, {
  reasoningEffort: 'medium',
  reasoningSummary: 'detailed' // OpenAI Responses API only
});

// Anthropic Thinking (Claude Sonnet 4)
const anthropicModel = getModel('anthropic', 'claude-sonnet-4-20250514');
await complete(anthropicModel, context, {
  thinkingEnabled: true,
  thinkingBudgetTokens: 8192 // Optional token limit
});

// Google Gemini Thinking
const googleModel = getModel('google', 'gemini-2.5-flash');
await complete(googleModel, context, {
  thinking: {
    enabled: true,
    budgetTokens: 8192 // -1 for dynamic, 0 to disable
  }
});
```

### Streaming Thinking Content

When streaming, thinking content is delivered through specific events:

```typescript
const s = streamSimple(model, context, { reasoning: 'high' });

for await (const event of s) {
  switch (event.type) {
    case 'thinking_start':
      console.log('[Model started thinking]');
      break;
    case 'thinking_delta':
      process.stdout.write(event.delta); // Stream thinking content
      break;
    case 'thinking_end':
      console.log('\n[Thinking complete]');
      break;
  }
}
```

## Stop Reasons

Every `AssistantMessage` includes a `stopReason` field that indicates how the generation ended:

- `"stop"` - Normal completion, the model finished its response
- `"length"` - Output hit the maximum token limit
- `"toolUse"` - Model is calling tools and expects tool results
- `"error"` - An error occurred during generation
- `"aborted"` - Request was cancelled via abort signal

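A driver loop usually branches on these values. The sketch below declares a local union mirroring the list above (illustrative only, not the library's exported type) and maps each reason to the loop's next step:

```typescript
// Local mirror of the stopReason values listed above (not imported from the library).
type StopReason = 'stop' | 'length' | 'toolUse' | 'error' | 'aborted';

// Decide what a driver loop should do next, based on how generation ended.
function nextAction(reason: StopReason): 'finish' | 'runTools' | 'report' {
  switch (reason) {
    case 'stop':
    case 'length':
      return 'finish';
    case 'toolUse':
      return 'runTools'; // execute tools, append toolResult messages, request a continuation
    case 'error':
    case 'aborted':
      return 'report'; // surface errorMessage and any partial content
  }
}

console.log(nextAction('toolUse')); // "runTools"
```

Exhaustive switches like this let TypeScript flag any stop reason your loop forgets to handle.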
## Error Handling

When a request ends with an error (including aborts and tool call validation errors), the streaming API emits an error event:

```typescript
// In streaming
for await (const event of stream) {
  if (event.type === 'error') {
    // event.reason is either "error" or "aborted"
    // event.error is the AssistantMessage with partial content
    console.error(`Error (${event.reason}):`, event.error.errorMessage);
    console.log('Partial content:', event.error.content);
  }
}

// The final message will have the error details
const message = await stream.result();
if (message.stopReason === 'error' || message.stopReason === 'aborted') {
  console.error('Request failed:', message.errorMessage);
  // message.content contains any partial content received before the error
  // message.usage contains partial token counts and costs
}
```

### Aborting Requests

The abort signal allows you to cancel in-progress requests. Aborted requests have `stopReason === 'aborted'`:

```typescript
import { getModel, stream } from '@mariozechner/pi-ai';

const model = getModel('openai', 'gpt-4o-mini');
const controller = new AbortController();

// Abort after 2 seconds
setTimeout(() => controller.abort(), 2000);

const s = stream(model, {
  messages: [{ role: 'user', content: 'Write a long story' }]
}, {
  signal: controller.signal
});

for await (const event of s) {
  if (event.type === 'text_delta') {
    process.stdout.write(event.delta);
  } else if (event.type === 'error') {
    // event.reason tells you if it was "error" or "aborted"
    console.log(`${event.reason === 'aborted' ? 'Aborted' : 'Error'}:`, event.error.errorMessage);
  }
}

// Get results (may be partial if aborted)
const response = await s.result();
if (response.stopReason === 'aborted') {
  console.log('Request was aborted:', response.errorMessage);
  console.log('Partial content received:', response.content);
  console.log('Tokens used:', response.usage);
}
```

|
|
580
|
+
### Continuing After Abort
|
|
581
|
+
|
|
582
|
+
Aborted messages can be added to the conversation context and continued in subsequent requests:
|
|
583
|
+
|
|
584
|
+
```typescript
|
|
585
|
+
const context = {
|
|
586
|
+
messages: [
|
|
587
|
+
{ role: 'user', content: 'Explain quantum computing in detail' }
|
|
588
|
+
]
|
|
589
|
+
};
|
|
590
|
+
|
|
591
|
+
// First request gets aborted after 2 seconds
|
|
592
|
+
const controller1 = new AbortController();
|
|
593
|
+
setTimeout(() => controller1.abort(), 2000);
|
|
594
|
+
|
|
595
|
+
const partial = await complete(model, context, { signal: controller1.signal });
|
|
596
|
+
|
|
597
|
+
// Add the partial response to context
|
|
598
|
+
context.messages.push(partial);
|
|
599
|
+
context.messages.push({ role: 'user', content: 'Please continue' });
|
|
600
|
+
|
|
601
|
+
// Continue the conversation
|
|
602
|
+
const continuation = await complete(model, context);
|
|
603
|
+
```
|
|
604
|
+
|
|
605
|
+
### Debugging Provider Payloads
|
|
606
|
+
|
|
607
|
+
Use the `onPayload` callback to inspect the request payload sent to the provider. This is useful for debugging request formatting issues or provider validation errors.
|
|
608
|
+
|
|
609
|
+
```typescript
|
|
610
|
+
const response = await complete(model, context, {
|
|
611
|
+
onPayload: (payload) => {
|
|
612
|
+
console.log('Provider payload:', JSON.stringify(payload, null, 2));
|
|
613
|
+
}
|
|
614
|
+
});
|
|
615
|
+
```
|
|
616
|
+
|
|
617
|
+
The callback is supported by `stream`, `complete`, `streamSimple`, and `completeSimple`.
|
|
618
|
+
|
|
619
|
+
## APIs, Models, and Providers
|
|
620
|
+
|
|
621
|
+
The library uses a registry of API implementations. Built-in APIs include:
|
|
622
|
+
|
|
623
|
+
- **`anthropic-messages`**: Anthropic Messages API (`streamAnthropic`, `AnthropicOptions`)
|
|
624
|
+
- **`google-generative-ai`**: Google Generative AI API (`streamGoogle`, `GoogleOptions`)
|
|
625
|
+
- **`google-gemini-cli`**: Google Cloud Code Assist API (`streamGoogleGeminiCli`, `GoogleGeminiCliOptions`)
|
|
626
|
+
- **`google-vertex`**: Google Vertex AI API (`streamGoogleVertex`, `GoogleVertexOptions`)
|
|
627
|
+
- **`openai-completions`**: OpenAI Chat Completions API (`streamOpenAICompletions`, `OpenAICompletionsOptions`)
|
|
628
|
+
- **`openai-responses`**: OpenAI Responses API (`streamOpenAIResponses`, `OpenAIResponsesOptions`)
|
|
629
|
+
- **`openai-codex-responses`**: OpenAI Codex Responses API (`streamOpenAICodexResponses`, `OpenAICodexResponsesOptions`)
|
|
630
|
+
- **`azure-openai-responses`**: Azure OpenAI Responses API (`streamAzureOpenAIResponses`, `AzureOpenAIResponsesOptions`)
|
|
631
|
+
- **`bedrock-converse-stream`**: Amazon Bedrock Converse API (`streamBedrock`, `BedrockOptions`)
|
|
632
|
+
|
|
633
|
+
### Providers and Models
|
|
634
|
+
|
|
635
|
+
A **provider** offers models through a specific API. For example:
|
|
636
|
+
- **Anthropic** models use the `anthropic-messages` API
|
|
637
|
+
- **Google** models use the `google-generative-ai` API
|
|
638
|
+
- **OpenAI** models use the `openai-responses` API
|
|
639
|
+
- **Mistral, xAI, Cerebras, Groq, etc.** models use the `openai-completions` API (OpenAI-compatible)
|
|
640
|
+
|
|
641
|
+
### Querying Providers and Models
|
|
642
|
+
|
|
643
|
+
```typescript
|
|
644
|
+
import { getProviders, getModels, getModel } from '@mariozechner/pi-ai';
|
|
645
|
+
|
|
646
|
+
// Get all available providers
|
|
647
|
+
const providers = getProviders();
|
|
648
|
+
console.log(providers); // ['openai', 'anthropic', 'google', 'xai', 'groq', ...]
|
|
649
|
+
|
|
650
|
+
// Get all models from a provider (fully typed)
|
|
651
|
+
const anthropicModels = getModels('anthropic');
|
|
652
|
+
for (const model of anthropicModels) {
|
|
653
|
+
console.log(`${model.id}: ${model.name}`);
|
|
654
|
+
console.log(` API: ${model.api}`); // 'anthropic-messages'
|
|
655
|
+
console.log(` Context: ${model.contextWindow} tokens`);
|
|
656
|
+
console.log(` Vision: ${model.input.includes('image')}`);
|
|
657
|
+
console.log(` Reasoning: ${model.reasoning}`);
|
|
658
|
+
}
|
|
659
|
+
|
|
660
|
+
// Get a specific model (both provider and model ID are auto-completed in IDEs)
|
|
661
|
+
const model = getModel('openai', 'gpt-4o-mini');
|
|
662
|
+
console.log(`Using ${model.name} via ${model.api} API`);
|
|
663
|
+
```
|
|
664
|
+
|
|
665
|
+
### Custom Models
|
|
666
|
+
|
|
667
|
+
You can create custom models for local inference servers or custom endpoints:
|
|
668
|
+
|
|
669
|
+
```typescript
|
|
670
|
+
import { Model, stream } from '@mariozechner/pi-ai';
|
|
671
|
+
|
|
672
|
+
// Example: Ollama using OpenAI-compatible API
|
|
673
|
+
const ollamaModel: Model<'openai-completions'> = {
|
|
674
|
+
id: 'llama-3.1-8b',
|
|
675
|
+
name: 'Llama 3.1 8B (Ollama)',
|
|
676
|
+
api: 'openai-completions',
|
|
677
|
+
provider: 'ollama',
|
|
678
|
+
baseUrl: 'http://localhost:11434/v1',
|
|
679
|
+
reasoning: false,
|
|
680
|
+
input: ['text'],
|
|
681
|
+
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
|
|
682
|
+
contextWindow: 128000,
|
|
683
|
+
maxTokens: 32000
|
|
684
|
+
};
|
|
685
|
+
|
|
686
|
+
// Example: LiteLLM proxy with explicit compat settings
|
|
687
|
+
const litellmModel: Model<'openai-completions'> = {
|
|
688
|
+
id: 'gpt-4o',
|
|
689
|
+
name: 'GPT-4o (via LiteLLM)',
|
|
690
|
+
api: 'openai-completions',
|
|
691
|
+
provider: 'litellm',
|
|
692
|
+
baseUrl: 'http://localhost:4000/v1',
|
|
693
|
+
reasoning: false,
|
|
694
|
+
input: ['text', 'image'],
|
|
695
|
+
cost: { input: 2.5, output: 10, cacheRead: 0, cacheWrite: 0 },
|
|
696
|
+
contextWindow: 128000,
|
|
697
|
+
maxTokens: 16384,
|
|
698
|
+
compat: {
|
|
699
|
+
supportsStore: false, // LiteLLM doesn't support the store field
|
|
700
|
+
}
|
|
701
|
+
};
|
|
702
|
+
|
|
703
|
+
// Example: Custom endpoint with headers (bypassing Cloudflare bot detection)
|
|
704
|
+
const proxyModel: Model<'anthropic-messages'> = {
|
|
705
|
+
id: 'claude-sonnet-4',
|
|
706
|
+
name: 'Claude Sonnet 4 (Proxied)',
|
|
707
|
+
api: 'anthropic-messages',
|
|
708
|
+
provider: 'custom-proxy',
|
|
709
|
+
baseUrl: 'https://proxy.example.com/v1',
|
|
710
|
+
reasoning: true,
|
|
711
|
+
input: ['text', 'image'],
|
|
712
|
+
cost: { input: 3, output: 15, cacheRead: 0.3, cacheWrite: 3.75 },
|
|
713
|
+
contextWindow: 200000,
|
|
714
|
+
maxTokens: 8192,
|
|
715
|
+
headers: {
|
|
716
|
+
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36',
|
|
717
|
+
'X-Custom-Auth': 'bearer-token-here'
|
|
718
|
+
}
|
|
719
|
+
};
|
|
720
|
+
|
|
721
|
+
// Use the custom model
|
|
722
|
+
const response = await stream(ollamaModel, context, {
|
|
723
|
+
apiKey: 'dummy' // Ollama doesn't need a real key
|
|
724
|
+
});
|
|
725
|
+
```
|
|
726
|
+
|
|
727
|
+
### OpenAI Compatibility Settings
|
|
728
|
+
|
|
729
|
+
The `openai-completions` API is implemented by many providers with minor differences. By default, the library auto-detects compatibility settings based on `baseUrl` for known providers (Cerebras, xAI, Mistral, Chutes, etc.). For custom proxies or unknown endpoints, you can override these settings via the `compat` field. For `openai-responses` models, the compat field only supports Responses-specific flags.
|
|
730
|
+
|
|
731
|
+
```typescript
|
|
732
|
+
interface OpenAICompletionsCompat {
|
|
733
|
+
supportsStore?: boolean; // Whether provider supports the `store` field (default: true)
|
|
734
|
+
supportsDeveloperRole?: boolean; // Whether provider supports `developer` role vs `system` (default: true)
|
|
735
|
+
supportsReasoningEffort?: boolean; // Whether provider supports `reasoning_effort` (default: true)
|
|
736
|
+
supportsUsageInStreaming?: boolean; // Whether provider supports `stream_options: { include_usage: true }` (default: true)
|
|
737
|
+
supportsStrictMode?: boolean; // Whether provider supports `strict` in tool definitions (default: true)
|
|
738
|
+
maxTokensField?: 'max_completion_tokens' | 'max_tokens'; // Which field name to use (default: max_completion_tokens)
|
|
739
|
+
requiresToolResultName?: boolean; // Whether tool results require the `name` field (default: false)
|
|
740
|
+
requiresAssistantAfterToolResult?: boolean; // Whether tool results must be followed by an assistant message (default: false)
|
|
741
|
+
requiresThinkingAsText?: boolean; // Whether thinking blocks must be converted to text (default: false)
|
|
742
|
+
requiresMistralToolIds?: boolean; // Whether tool call IDs must be normalized to Mistral format (default: false)
|
|
743
|
+
thinkingFormat?: 'openai' | 'zai' | 'qwen'; // Format for reasoning param: 'openai' uses reasoning_effort, 'zai' uses thinking: { type: "enabled" }, 'qwen' uses enable_thinking: boolean (default: openai)
|
|
744
|
+
openRouterRouting?: OpenRouterRouting; // OpenRouter routing preferences (default: {})
|
|
745
|
+
vercelGatewayRouting?: VercelGatewayRouting; // Vercel AI Gateway routing preferences (default: {})
|
|
746
|
+
}
|
|
747
|
+
|
|
748
|
+
interface OpenAIResponsesCompat {
|
|
749
|
+
// Reserved for future use
|
|
750
|
+
}
|
|
751
|
+
```
|
|
752
|
+
|
|
753
|
+
If `compat` is not set, the library falls back to URL-based detection. If `compat` is partially set, unspecified fields use the detected defaults. This is useful for:
|
|
754
|
+
|
|
755
|
+
- **LiteLLM proxies**: May not support `store` field
|
|
756
|
+
- **Custom inference servers**: May use non-standard field names
|
|
757
|
+
- **Self-hosted endpoints**: May have different feature support
|
|
758
|
+
|
|
759
|
+
### Type Safety
|
|
760
|
+
|
|
761
|
+
Models are typed by their API, which keeps the model metadata accurate. Provider-specific option types are enforced when you call the provider functions directly. The generic `stream` and `complete` functions accept `StreamOptions` with additional provider fields.
|
|
762
|
+
|
|
763
|
+
```typescript
|
|
764
|
+
import { streamAnthropic, type AnthropicOptions } from '@mariozechner/pi-ai';
|
|
765
|
+
|
|
766
|
+
// TypeScript knows this is an Anthropic model
|
|
767
|
+
const claude = getModel('anthropic', 'claude-sonnet-4-20250514');
|
|
768
|
+
|
|
769
|
+
const options: AnthropicOptions = {
|
|
770
|
+
thinkingEnabled: true,
|
|
771
|
+
thinkingBudgetTokens: 2048
|
|
772
|
+
};
|
|
773
|
+
|
|
774
|
+
await streamAnthropic(claude, context, options);
|
|
775
|
+
```
|
|
776
|
+
|
|
777
|
+
## Cross-Provider Handoffs
|
|
778
|
+
|
|
779
|
+
The library supports seamless handoffs between different LLM providers within the same conversation. This allows you to switch models mid-conversation while preserving context, including thinking blocks, tool calls, and tool results.
|
|
780
|
+
|
|
781
|
+
### How It Works
|
|
782
|
+
|
|
783
|
+
When messages from one provider are sent to a different provider, the library automatically transforms them for compatibility:
|
|
784
|
+
|
|
785
|
+
- **User and tool result messages** are passed through unchanged
|
|
786
|
+
- **Assistant messages from the same provider/API** are preserved as-is
|
|
787
|
+
- **Assistant messages from different providers** have their thinking blocks converted to text with `<thinking>` tags
|
|
788
|
+
- **Tool calls and regular text** are preserved unchanged
|
|
789
|
+
|
|
790
|
+
### Example: Multi-Provider Conversation
|
|
791
|
+
|
|
792
|
+
```typescript
|
|
793
|
+
import { getModel, complete, Context } from '@mariozechner/pi-ai';
|
|
794
|
+
|
|
795
|
+
// Start with Claude
|
|
796
|
+
const claude = getModel('anthropic', 'claude-sonnet-4-20250514');
|
|
797
|
+
const context: Context = {
|
|
798
|
+
messages: []
|
|
799
|
+
};
|
|
800
|
+
|
|
801
|
+
context.messages.push({ role: 'user', content: 'What is 25 * 18?' });
|
|
802
|
+
const claudeResponse = await complete(claude, context, {
|
|
803
|
+
thinkingEnabled: true
|
|
804
|
+
});
|
|
805
|
+
context.messages.push(claudeResponse);
|
|
806
|
+
|
|
807
|
+
// Switch to GPT-5 - it will see Claude's thinking as <thinking> tagged text
|
|
808
|
+
const gpt5 = getModel('openai', 'gpt-5-mini');
|
|
809
|
+
context.messages.push({ role: 'user', content: 'Is that calculation correct?' });
|
|
810
|
+
const gptResponse = await complete(gpt5, context);
|
|
811
|
+
context.messages.push(gptResponse);
|
|
812
|
+
|
|
813
|
+
// Switch to Gemini
|
|
814
|
+
const gemini = getModel('google', 'gemini-2.5-flash');
|
|
815
|
+
context.messages.push({ role: 'user', content: 'What was the original question?' });
|
|
816
|
+
const geminiResponse = await complete(gemini, context);
|
|
817
|
+
```
|
|
818
|
+
|
|
819
|
+
### Provider Compatibility
|
|
820
|
+
|
|
821
|
+
All providers can handle messages from other providers, including:
|
|
822
|
+
- Text content
|
|
823
|
+
- Tool calls and tool results (including images in tool results)
|
|
824
|
+
- Thinking/reasoning blocks (transformed to tagged text for cross-provider compatibility)
|
|
825
|
+
- Aborted messages with partial content
|
|
826
|
+
|
|
827
|
+
This enables flexible workflows where you can:
|
|
828
|
+
- Start with a fast model for initial responses
|
|
829
|
+
- Switch to a more capable model for complex reasoning
|
|
830
|
+
- Use specialized models for specific tasks
|
|
831
|
+
- Maintain conversation continuity across provider outages
|
|
832
|
+
|
|
833
|
+
## Context Serialization
|
|
834
|
+
|
|
835
|
+
The `Context` object can be easily serialized and deserialized using standard JSON methods, making it simple to persist conversations, implement chat history, or transfer contexts between services:
|
|
836
|
+
|
|
837
|
+
```typescript
|
|
838
|
+
import { Context, getModel, complete } from '@mariozechner/pi-ai';
|
|
839
|
+
|
|
840
|
+
// Create and use a context
|
|
841
|
+
const context: Context = {
|
|
842
|
+
systemPrompt: 'You are a helpful assistant.',
|
|
843
|
+
messages: [
|
|
844
|
+
{ role: 'user', content: 'What is TypeScript?' }
|
|
845
|
+
]
|
|
846
|
+
};
|
|
847
|
+
|
|
848
|
+
const model = getModel('openai', 'gpt-4o-mini');
|
|
849
|
+
const response = await complete(model, context);
|
|
850
|
+
context.messages.push(response);
|
|
851
|
+
|
|
852
|
+
// Serialize the entire context
|
|
853
|
+
const serialized = JSON.stringify(context);
|
|
854
|
+
console.log('Serialized context size:', serialized.length, 'bytes');
|
|
855
|
+
|
|
856
|
+
// Save to database, localStorage, file, etc.
|
|
857
|
+
localStorage.setItem('conversation', serialized);
|
|
858
|
+
|
|
859
|
+
// Later: deserialize and continue the conversation
|
|
860
|
+
const restored: Context = JSON.parse(localStorage.getItem('conversation')!);
|
|
861
|
+
restored.messages.push({ role: 'user', content: 'Tell me more about its type system' });
|
|
862
|
+
|
|
863
|
+
// Continue with any model
|
|
864
|
+
const newModel = getModel('anthropic', 'claude-3-5-haiku-20241022');
|
|
865
|
+
const continuation = await complete(newModel, restored);
|
|
866
|
+
```
|
|
867
|
+
|
|
868
|
+
> **Note**: If the context contains images (encoded as base64 as shown in the Image Input section), those will also be serialized.
|
|
869
|
+
|
|
870
|
+
## Browser Usage
|
|
871
|
+
|
|
872
|
+
The library supports browser environments. You must pass the API key explicitly since environment variables are not available in browsers:
|
|
873
|
+
|
|
874
|
+
```typescript
|
|
875
|
+
import { getModel, complete } from '@mariozechner/pi-ai';
|
|
876
|
+
|
|
877
|
+
// API key must be passed explicitly in browser
|
|
878
|
+
const model = getModel('anthropic', 'claude-3-5-haiku-20241022');
|
|
879
|
+
|
|
880
|
+
const response = await complete(model, {
|
|
881
|
+
messages: [{ role: 'user', content: 'Hello!' }]
|
|
882
|
+
}, {
|
|
883
|
+
apiKey: 'your-api-key'
|
|
884
|
+
});
|
|
885
|
+
```
|
|
886
|
+
|
|
887
|
+
> **Security Warning**: Exposing API keys in frontend code is dangerous. Anyone can extract and abuse your keys. Only use this approach for internal tools or demos. For production applications, use a backend proxy that keeps your API keys secure.
|
|
888
|
+
|
|
889
|
+
### Environment Variables (Node.js only)
|
|
890
|
+
|
|
891
|
+
In Node.js environments, you can set environment variables to avoid passing API keys:
|
|
892
|
+
|
|
893
|
+
| Provider | Environment Variable(s) |
|
|
894
|
+
|----------|------------------------|
|
|
895
|
+
| OpenAI | `OPENAI_API_KEY` |
|
|
896
|
+
| Azure OpenAI | `AZURE_OPENAI_API_KEY` + `AZURE_OPENAI_BASE_URL` or `AZURE_OPENAI_RESOURCE_NAME` (optional `AZURE_OPENAI_API_VERSION`, `AZURE_OPENAI_DEPLOYMENT_NAME_MAP` like `model=deployment,model2=deployment2`) |
|
|
897
|
+
| Anthropic | `ANTHROPIC_API_KEY` or `ANTHROPIC_OAUTH_TOKEN` |
|
|
898
|
+
| Google | `GEMINI_API_KEY` |
|
|
899
|
+
| Vertex AI | `GOOGLE_CLOUD_PROJECT` (or `GCLOUD_PROJECT`) + `GOOGLE_CLOUD_LOCATION` + ADC |
|
|
900
|
+
| Mistral | `MISTRAL_API_KEY` |
|
|
901
|
+
| Groq | `GROQ_API_KEY` |
|
|
902
|
+
| Cerebras | `CEREBRAS_API_KEY` |
|
|
903
|
+
| xAI | `XAI_API_KEY` |
|
|
904
|
+
| OpenRouter | `OPENROUTER_API_KEY` |
|
|
905
|
+
| Vercel AI Gateway | `AI_GATEWAY_API_KEY` |
|
|
906
|
+
| zAI | `ZAI_API_KEY` |
|
|
907
|
+
| MiniMax | `MINIMAX_API_KEY` |
|
|
908
|
+
| Kimi For Coding | `KIMI_API_KEY` |
|
|
909
|
+
| GitHub Copilot | `COPILOT_GITHUB_TOKEN` or `GH_TOKEN` or `GITHUB_TOKEN` |
|
|
910
|
+
|
|
911
|
+
When set, the library automatically uses these keys:
|
|
912
|
+
|
|
913
|
+
```typescript
|
|
914
|
+
// Uses OPENAI_API_KEY from environment
|
|
915
|
+
const model = getModel('openai', 'gpt-4o-mini');
|
|
916
|
+
const response = await complete(model, context);
|
|
917
|
+
|
|
918
|
+
// Or override with explicit key
|
|
919
|
+
const response = await complete(model, context, {
|
|
920
|
+
apiKey: 'sk-different-key'
|
|
921
|
+
});
|
|
922
|
+
```
|
|
923
|
+
|
|
924
|
+
#### Antigravity Version Override
|
|
925
|
+
|
|
926
|
+
Set `PI_AI_ANTIGRAVITY_VERSION` to override the Antigravity User-Agent version when Google updates their requirements:
|
|
927
|
+
|
|
928
|
+
```bash
|
|
929
|
+
export PI_AI_ANTIGRAVITY_VERSION="1.23.0"
|
|
930
|
+
```
|
|
931
|
+
|
|
932
|
+
#### Cache Retention
|
|
933
|
+
|
|
934
|
+
Set `PI_CACHE_RETENTION=long` to extend prompt cache retention:
|
|
935
|
+
|
|
936
|
+
| Provider | Default | With `PI_CACHE_RETENTION=long` |
|
|
937
|
+
|----------|---------|-------------------------------|
|
|
938
|
+
| Anthropic | 5 minutes | 1 hour |
|
|
939
|
+
| OpenAI | in-memory | 24 hours |
|
|
940
|
+
|
|
941
|
+
This only affects direct API calls to `api.anthropic.com` and `api.openai.com`. Proxies and other providers are unaffected.
|
|
942
|
+
|
|
943
|
+
> **Note**: Extended cache retention may increase costs for Anthropic (cache writes are charged at a higher rate). OpenAI's 24h retention has no additional cost.
|
|
944
|
+
|
|
945
|
+
### Checking Environment Variables
|
|
946
|
+
|
|
947
|
+
```typescript
|
|
948
|
+
import { getEnvApiKey } from '@mariozechner/pi-ai';
|
|
949
|
+
|
|
950
|
+
// Check if an API key is set in environment variables
|
|
951
|
+
const key = getEnvApiKey('openai'); // checks OPENAI_API_KEY
|
|
952
|
+
```
|
|
953
|
+
|
|
954
|
+
## OAuth Providers
|
|
955
|
+
|
|
956
|
+
Several providers require OAuth authentication instead of static API keys:
|
|
957
|
+
|
|
958
|
+
- **Anthropic** (Claude Pro/Max subscription)
|
|
959
|
+
- **OpenAI Codex** (ChatGPT Plus/Pro subscription, access to GPT-5.x Codex models)
|
|
960
|
+
- **GitHub Copilot** (Copilot subscription)
|
|
961
|
+
- **Google Gemini CLI** (Gemini 2.0/2.5 via Google Cloud Code Assist; free tier or paid subscription)
|
|
962
|
+
- **Antigravity** (Free Gemini 3, Claude, GPT-OSS via Google Cloud)
|
|
963
|
+
|
|
964
|
+
For paid Cloud Code Assist subscriptions, set `GOOGLE_CLOUD_PROJECT` or `GOOGLE_CLOUD_PROJECT_ID` to your project ID.
|
|
965
|
+
|
|
966
|
+
### Vertex AI (ADC)
|
|
967
|
+
|
|
968
|
+
Vertex AI models use Application Default Credentials (ADC):
|
|
969
|
+
|
|
970
|
+
- **Local development**: Run `gcloud auth application-default login`
|
|
971
|
+
- **CI/Production**: Set `GOOGLE_APPLICATION_CREDENTIALS` to point to a service account JSON key file
|
|
972
|
+
|
|
973
|
+
Also set `GOOGLE_CLOUD_PROJECT` (or `GCLOUD_PROJECT`) and `GOOGLE_CLOUD_LOCATION`. You can also pass `project`/`location` in the call options.
|
|
974
|
+
|
|
975
|
+
Example:
|
|
976
|
+
|
|
977
|
+
```bash
|
|
978
|
+
# Local (uses your user credentials)
|
|
979
|
+
gcloud auth application-default login
|
|
980
|
+
export GOOGLE_CLOUD_PROJECT="my-project"
|
|
981
|
+
export GOOGLE_CLOUD_LOCATION="us-central1"
|
|
982
|
+
|
|
983
|
+
# CI/Production (service account key file)
|
|
984
|
+
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
|
|
985
|
+
```
|
|
986
|
+
|
|
987
|
+
```typescript
|
|
988
|
+
import { getModel, complete } from '@mariozechner/pi-ai';
|
|
989
|
+
|
|
990
|
+
(async () => {
|
|
991
|
+
const model = getModel('google-vertex', 'gemini-2.5-flash');
|
|
992
|
+
const response = await complete(model, {
|
|
993
|
+
messages: [{ role: 'user', content: 'Hello from Vertex AI' }]
|
|
994
|
+
});
|
|
995
|
+
|
|
996
|
+
for (const block of response.content) {
|
|
997
|
+
if (block.type === 'text') console.log(block.text);
|
|
998
|
+
}
|
|
999
|
+
})().catch(console.error);
|
|
1000
|
+
```
|
|
1001
|
+
|
|
1002
|
+
Official docs: [Application Default Credentials](https://cloud.google.com/docs/authentication/application-default-credentials)
|
|
1003
|
+
|
|
1004
|
+
### CLI Login
|
|
1005
|
+
|
|
1006
|
+
The quickest way to authenticate:
|
|
1007
|
+
|
|
1008
|
+
```bash
|
|
1009
|
+
npx @mariozechner/pi-ai login # interactive provider selection
|
|
1010
|
+
npx @mariozechner/pi-ai login anthropic # login to specific provider
|
|
1011
|
+
npx @mariozechner/pi-ai list # list available providers
|
|
1012
|
+
```
|
|
1013
|
+
|
|
1014
|
+
Credentials are saved to `auth.json` in the current directory.
|
|
1015
|
+
|
|
1016
|
+
### Programmatic OAuth
|
|
1017
|
+
|
|
1018
|
+
The library provides login and token refresh functions. Credential storage is the caller's responsibility.
|
|
1019
|
+
|
|
1020
|
+
```typescript
|
|
1021
|
+
import {
|
|
1022
|
+
// Login functions (return credentials, do not store)
|
|
1023
|
+
loginAnthropic,
|
|
1024
|
+
loginOpenAICodex,
|
|
1025
|
+
loginGitHubCopilot,
|
|
1026
|
+
loginGeminiCli,
|
|
1027
|
+
loginAntigravity,
|
|
1028
|
+
|
|
1029
|
+
// Token management
|
|
1030
|
+
refreshOAuthToken, // (provider, credentials) => new credentials
|
|
1031
|
+
getOAuthApiKey, // (provider, credentialsMap) => { newCredentials, apiKey } | null
|
|
1032
|
+
|
|
1033
|
+
// Types
|
|
1034
|
+
type OAuthProvider, // 'anthropic' | 'openai-codex' | 'github-copilot' | 'google-gemini-cli' | 'google-antigravity'
|
|
1035
|
+
type OAuthCredentials,
|
|
1036
|
+
} from '@mariozechner/pi-ai';
|
|
1037
|
+
```
|
|
1038
|
+
|
|
1039
|
+
### Login Flow Example
|
|
1040
|
+
|
|
1041
|
+
```typescript
|
|
1042
|
+
import { loginGitHubCopilot } from '@mariozechner/pi-ai';
|
|
1043
|
+
import { writeFileSync } from 'fs';
|
|
1044
|
+
|
|
1045
|
+
const credentials = await loginGitHubCopilot({
|
|
1046
|
+
onAuth: (url, instructions) => {
|
|
1047
|
+
console.log(`Open: ${url}`);
|
|
1048
|
+
if (instructions) console.log(instructions);
|
|
1049
|
+
},
|
|
1050
|
+
onPrompt: async (prompt) => {
|
|
1051
|
+
return await getUserInput(prompt.message);
|
|
1052
|
+
},
|
|
1053
|
+
onProgress: (message) => console.log(message)
|
|
1054
|
+
});
|
|
1055
|
+
|
|
1056
|
+
// Store credentials yourself
|
|
1057
|
+
const auth = { 'github-copilot': { type: 'oauth', ...credentials } };
|
|
1058
|
+
writeFileSync('auth.json', JSON.stringify(auth, null, 2));
|
|
1059
|
+
```
|
|
1060
|
+
|
|
1061
|
+
### Using OAuth Tokens
|
|
1062
|
+
|
|
1063
|
+
Use `getOAuthApiKey()` to get an API key, automatically refreshing if expired:
|
|
1064
|
+
|
|
1065
|
+
```typescript
|
|
1066
|
+
import { getModel, complete, getOAuthApiKey } from '@mariozechner/pi-ai';
|
|
1067
|
+
import { readFileSync, writeFileSync } from 'fs';
|
|
1068
|
+
|
|
1069
|
+
// Load your stored credentials
|
|
1070
|
+
const auth = JSON.parse(readFileSync('auth.json', 'utf-8'));
|
|
1071
|
+
|
|
1072
|
+
// Get API key (refreshes if expired)
|
|
1073
|
+
const result = await getOAuthApiKey('github-copilot', auth);
|
|
1074
|
+
if (!result) throw new Error('Not logged in');
|
|
1075
|
+
|
|
1076
|
+
// Save refreshed credentials
|
|
1077
|
+
auth['github-copilot'] = { type: 'oauth', ...result.newCredentials };
|
|
1078
|
+
writeFileSync('auth.json', JSON.stringify(auth, null, 2));
|
|
1079
|
+
|
|
1080
|
+
// Use the API key
|
|
1081
|
+
const model = getModel('github-copilot', 'gpt-4o');
|
|
1082
|
+
const response = await complete(model, {
|
|
1083
|
+
messages: [{ role: 'user', content: 'Hello!' }]
|
|
1084
|
+
}, { apiKey: result.apiKey });
|
|
1085
|
+
```
|
|
1086
|
+
|
|
1087
|
+
### Provider Notes
|
|
1088
|
+
|
|
1089
|
+
**OpenAI Codex**: Requires a ChatGPT Plus or Pro subscription. Provides access to GPT-5.x Codex models with extended context windows and reasoning capabilities. The library automatically handles session-based prompt caching when `sessionId` is provided in stream options. You can set `transport` in stream options to `"sse"`, `"websocket"`, or `"auto"` for Codex Responses transport selection. When using WebSocket with a `sessionId`, connections are reused per session and expire after 5 minutes of inactivity.
|
|
1090
|
+
|
|
1091
|
+
**Azure OpenAI (Responses)**: Uses the Responses API only. Set `AZURE_OPENAI_API_KEY` and either `AZURE_OPENAI_BASE_URL` or `AZURE_OPENAI_RESOURCE_NAME`. Use `AZURE_OPENAI_API_VERSION` (defaults to `v1`) to override the API version if needed. Deployment names are treated as model IDs by default, override with `azureDeploymentName` or `AZURE_OPENAI_DEPLOYMENT_NAME_MAP` using comma-separated `model-id=deployment` pairs (for example `gpt-4o-mini=my-deployment,gpt-4o=prod`). Legacy deployment-based URLs are intentionally unsupported.
|
|
1092
|
+
|
|
1093
|
+
**GitHub Copilot**: If you get "The requested model is not supported" error, enable the model manually in VS Code: open Copilot Chat, click the model selector, select the model (warning icon), and click "Enable".
|
|
1094
|
+
|
|
1095
|
+
**Google Gemini CLI / Antigravity**: These use Google Cloud OAuth. The `apiKey` returned by `getOAuthApiKey()` is a JSON string containing both the token and project ID, which the library handles automatically.
|
|
1096
|
+
|
|
1097
|
+
## Development
|
|
1098
|
+
|
|
1099
|
+
### Adding a New Provider
|
|
1100
|
+
|
|
1101
|
+
Adding a new LLM provider requires changes across multiple files. This checklist covers all necessary steps:
|
|
1102
|
+
|
|
1103
|
+
#### 1. Core Types (`src/types.ts`)
|
|
1104
|
+
|
|
1105
|
+
- Add the API identifier to `KnownApi` (for example `"bedrock-converse-stream"`)
|
|
1106
|
+
- Create an options interface extending `StreamOptions` (for example `BedrockOptions`)
|
|
1107
|
+
- Add the provider name to `KnownProvider` (for example `"amazon-bedrock"`)
|
|
1108
|
+
|
|
1109
|
+
#### 2. Provider Implementation (`src/providers/`)
|
|
1110
|
+
|
|
1111
|
+
Create a new provider file (for example `amazon-bedrock.ts`) that exports:
|
|
1112
|
+
|
|
1113
|
+
- `stream<Provider>()` function returning `AssistantMessageEventStream`
|
|
1114
|
+
- `streamSimple<Provider>()` for `SimpleStreamOptions` mapping
|
|
1115
|
+
- Provider-specific options interface
|
|
1116
|
+
- Message conversion functions to transform `Context` to provider format
|
|
1117
|
+
- Tool conversion if the provider supports tools
|
|
1118
|
+
- Response parsing to emit standardized events (`text`, `tool_call`, `thinking`, `usage`, `stop`)
|
|
1119
|
+
|
|
1120
|
+
#### 3. API Registry Integration (`src/providers/register-builtins.ts`)
|
|
1121
|
+
|
|
1122
|
+
- Register the API with `registerApiProvider()`
|
|
1123
|
+
- Add credential detection in `env-api-keys.ts` for the new provider
|
|
1124
|
+
- Ensure `streamSimple` handles auth lookup via `getEnvApiKey()` or provider-specific auth
|
|
1125
|
+
|
|
1126
|
+
#### 4. Model Generation (`scripts/generate-models.ts`)
|
|
1127
|
+
|
|
1128
|
+
- Add logic to fetch and parse models from the provider's source (e.g., models.dev API)
|
|
1129
|
+
- Map provider model data to the standardized `Model` interface
|
|
1130
|
+
- Handle provider-specific quirks (pricing format, capability flags, model ID transformations)
|
|
1131
|
+
|
|
1132
|
+
#### 5. Tests (`test/`)
|
|
1133
|
+
|
|
1134
|
+
Create or update test files to cover the new provider:
|
|
1135
|
+
|
|
1136
|
+
- `stream.test.ts` - Basic streaming and tool use
|
|
1137
|
+
- `tokens.test.ts` - Token usage reporting
|
|
1138
|
+
- `abort.test.ts` - Request cancellation
|
|
1139
|
+
- `empty.test.ts` - Empty message handling
|
|
1140
|
+
- `context-overflow.test.ts` - Context limit errors
|
|
1141
|
+
- `image-limits.test.ts` - Image support (if applicable)
|
|
1142
|
+
- `unicode-surrogate.test.ts` - Unicode handling
|
|
1143
|
+
- `tool-call-without-result.test.ts` - Orphaned tool calls
|
|
1144
|
+
- `image-tool-result.test.ts` - Images in tool results
|
|
1145
|
+
- `total-tokens.test.ts` - Token counting accuracy
|
|
1146
|
+
- `cross-provider-handoff.test.ts` - Cross-provider context replay
|
|
1147
|
+
|
|
1148
|
+
For `cross-provider-handoff.test.ts`, add at least one provider/model pair. If the provider exposes multiple model families (for example GPT and Claude), add at least one pair per family.
|
|
1149
|
+
|
|
1150
|
+
For providers with non-standard auth (AWS, Google Vertex), create a utility like `bedrock-utils.ts` with credential detection helpers.
|
|
1151
|
+
|
|
1152
|
+
#### 6. Coding Agent Integration (`../coding-agent/`)
|
|
1153
|
+
|
|
1154
|
+
Update `src/core/model-resolver.ts`:
|
|
1155
|
+
|
|
1156
|
+
- Add a default model ID for the provider in `DEFAULT_MODELS`
|
|
1157
|
+
|
|
1158
|
+
Update `src/cli/args.ts`:
|
|
1159
|
+
|
|
1160
|
+
- Add environment variable documentation in the help text
|
|
1161
|
+
|
|
1162
|
+
Update `README.md`:
|
|
1163
|
+
|
|
1164
|
+
- Add the provider to the providers section with setup instructions
|
|
1165
|
+
|
|
1166
|
+
#### 7. Documentation
|
|
1167
|
+
|
|
1168
|
+
Update `packages/ai/README.md`:
|
|
1169
|
+
|
|
1170
|
+
- Add to the Supported Providers table
|
|
1171
|
+
- Document any provider-specific options or authentication requirements
|
|
1172
|
+
- Add environment variable to the Environment Variables section
|
|
1173
|
+
|
|
1174
|
+
#### 8. Changelog
|
|
1175
|
+
|
|
1176
|
+
Add an entry to `packages/ai/CHANGELOG.md` under `## [Unreleased]`:
|
|
1177
|
+
|
|
1178
|
+
```markdown
|
|
1179
|
+
### Added
|
|
1180
|
+
- Added support for [Provider Name] provider ([#PR](link) by [@author](link))
|
|
1181
|
+
```
|
|
1182
|
+
|
|
1183
|
+
## License
|
|
1184
|
+
|
|
1185
|
+
MIT
|