ai-requests-adapter 2.2.0 → 3.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +218 -133
- package/dist/capabilities.d.ts +3 -2
- package/dist/capabilities.d.ts.map +1 -1
- package/dist/capabilities.js +228 -39
- package/dist/capabilities.js.map +1 -1
- package/dist/chat-style-transformer.d.ts +85 -63
- package/dist/chat-style-transformer.d.ts.map +1 -1
- package/dist/chat-style-transformer.js +203 -217
- package/dist/chat-style-transformer.js.map +1 -1
- package/dist/index.d.ts +16 -8
- package/dist/index.d.ts.map +1 -1
- package/dist/index.js +14 -6
- package/dist/index.js.map +1 -1
- package/package.json +1 -1
package/README.md
CHANGED
@@ -1,6 +1,44 @@
-#
+# ai-requests-adapter (v3.0)
 
-A
+A **provider/API transformer** that takes **one unified input shape** (OpenAI **Chat Completions–style** request) and **adapts** it into the **vendor + API-specific request payload** you actually send (OpenAI Responses vs Chat Completions, Anthropic Messages, Gemini generateContent, Groq OpenAI-compat, xAI OpenAI-compat, Moonshot Kimi OpenAI-compat, etc.).
+
+This package is designed to remove "model config broke again" problems by driving behavior from a **capabilities JSON registry**.
+
+---
+
+## Why v3.0
+
+**v3.0** makes one thing explicit:
+
+- Your **unified input is chat-completions style** (what you already have).
+- The library is an **adapter/transformer** (not a normalizer).
+- Provider/model behavior is defined in **metadata JSON**, not scattered conditionals.
+
+---
+
+## Unified Input Form (what you pass in)
+
+The unified input is intentionally **OpenAI Chat Completions–like**:
+
+```ts
+type UnifiedChatCompletionsLikeRequest = {
+  model: string;
+  messages: Array<{ role: "system"|"user"|"assistant"|"tool"; content: string }>;
+
+  max_tokens?: number;
+  temperature?: number;
+  top_p?: number;
+  frequency_penalty?: number;
+  presence_penalty?: number;
+  stop?: string[] | string;
+
+  // optional reasoning knob (used when target supports it)
+  reasoning_effort?: "none"|"minimal"|"low"|"medium"|"high";
+
+  // vendor-specific passthrough namespace
+  extra?: Record<string, any>;
+};
+```
 
 ## Installation
 
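The unified shape added in this hunk can be exercised standalone. The sketch below re-declares the type locally and adds a small validity check; `isValidUnifiedRequest` is my illustration, not part of the package.

```typescript
// Illustrative only: a unified request literal matching the shape above,
// plus a tiny runtime sanity check (not part of ai-requests-adapter).
type Role = "system" | "user" | "assistant" | "tool";

type UnifiedChatCompletionsLikeRequest = {
  model: string;
  messages: Array<{ role: Role; content: string }>;
  max_tokens?: number;
  temperature?: number;
  reasoning_effort?: "none" | "minimal" | "low" | "medium" | "high";
  extra?: Record<string, any>;
};

function isValidUnifiedRequest(r: UnifiedChatCompletionsLikeRequest): boolean {
  // A request needs a model name and at least one message with non-empty content.
  return (
    r.model.length > 0 &&
    r.messages.length > 0 &&
    r.messages.every((m) => m.content.length > 0)
  );
}

const req: UnifiedChatCompletionsLikeRequest = {
  model: "gpt-5.2",
  messages: [
    { role: "system", content: "You are helpful." },
    { role: "user", content: "Hello!" },
  ],
  max_tokens: 200,
  reasoning_effort: "medium",
};

console.log(isValidUnifiedRequest(req)); // true
```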
@@ -13,114 +51,100 @@ npm install ai-requests-adapter
 ```typescript
 import { transformToVendorRequest } from 'ai-requests-adapter';
 
-//
-const
+// Your unified request (chat-completions style)
+const request = {
   provider: 'openai',
   model: 'gpt-5.2',
   messages: [
     { role: 'system', content: 'You are helpful.' },
-    { role: 'user', content: 'Hello
+    { role: 'user', content: 'Hello!' }
   ],
   max_tokens: 200,
-
-
-});
-
-console.log(result.api); // "responses"
-console.log(result.payload); // Ready-to-send OpenAI Responses API payload
-console.log(result.warnings); // Any transformation warnings
-```
-
-## Core Concept
-
-**One unified input format** → **Transformer** → **Vendor + API specific payload**
+  reasoning_effort: 'medium'
+};
 
-
+// Transform to vendor-specific payload
+const { provider, apiVariant, request: vendorRequest, warnings } = transformToVendorRequest(
+  request,
+  { provider: 'openai' } // target override if needed
+);
 
-
-
-
-  model: string,
-  messages: [{ role: 'system'|'developer'|'user'|'assistant', content: string }, ...],
-  temperature?, top_p?, max_tokens?, frequency_penalty?, presence_penalty?, stop?, tools?, reasoning?
-}
+console.log(apiVariant); // "responses"
+console.log(vendorRequest); // Ready-to-send OpenAI Responses API payload
+console.log(warnings); // Any parameter filtering warnings
 ```
 
-The transformer outputs the exact payload needed for each vendor's API.
-
 ## API Reference
 
 ### Main Function
 
-#### `transformToVendorRequest(
+#### `transformToVendorRequest(unifiedRequest, target, registry?)`
 
-
+The core transformer function.
 
 ```typescript
 import { transformToVendorRequest } from 'ai-requests-adapter';
 
-const
+const { provider, apiVariant, request, warnings } = transformToVendorRequest(
+  unifiedRequest,  // UnifiedChatCompletionsLikeRequest
+  target,          // Target (provider + optional apiVariant)
+  registry?        // Optional custom registry
+);
 ```
 
 **Parameters:**
-- `
-- `
-
-**Returns:** `BuiltVendorRequest`
-- `provider: string` - The provider (e.g., 'openai')
-- `api: string` - The API endpoint (e.g., 'responses', 'chat_completions')
-- `payload: Record<string, any>` - Ready-to-send API payload
-- `warnings: string[]` - Any transformation warnings
-
-### Types
-
-#### `UnifiedChatRequest`
-
-The single input format your gateway accepts:
+- `unifiedRequest: UnifiedChatCompletionsLikeRequest` - Your chat-completions-style request
+- `target: Target` - Which provider/API to target
+- `registry?: Registry` - Custom capabilities registry (uses built-in by default)
 
+**Returns:**
 ```typescript
-
-provider: 'openai' | 'anthropic' |
-
-
-
-
-
-// Chat-style config (snake_case primary, camelCase accepted):
-temperature?: number;
-top_p?: number;
-max_tokens?: number;
-frequency_penalty?: number;
-presence_penalty?: number;
-stop?: string[] | string;
-tools?: unknown[];
+{
+  provider: ProviderKey;        // 'openai' | 'anthropic' | etc.
+  apiVariant: string;           // 'responses' | 'chat_completions' | 'messages' | etc.
+  request: Record<string, any>; // Ready-to-send vendor API payload
+  warnings?: string[];          // Parameter filtering warnings
+}
+```
 
-
-reasoning_effort?: 'none'|'minimal'|'low'|'medium'|'high';
-reasoningEffort?: 'none'|'minimal'|'low'|'medium'|'high';
+### Target Specification
 
-
-
+```typescript
+type Target = {
+  provider: ProviderKey;
+  apiVariant?: string; // If omitted, uses model default from registry
 };
+
+type ProviderKey = 'openai' | 'anthropic' | 'google' | 'groq' | 'xai' | 'moonshot_kimi';
 ```
 
-
+## Provider Examples
 
-
+### OpenAI (Responses API)
 
 ```typescript
-const
-
-
-
-
-
-
-
-
-
-
+const { request } = transformToVendorRequest(
+  {
+    model: 'gpt-5.2',
+    messages: [
+      { role: 'system', content: 'You are helpful.' },
+      { role: 'user', content: 'Hello' }
+    ],
+    max_tokens: 200,
+    reasoning_effort: 'medium',
+    temperature: 0.2 // Omitted: conditional on reasoning_effort='none'
+  },
+  { provider: 'openai' }
+);
+
+// Result: apiVariant = "responses"
+// request = {
+//   model: 'gpt-5.2',
+//   instructions: 'You are helpful.',
+//   input: [{ role: 'user', content: 'Hello' }],
+//   max_output_tokens: 200,
+//   reasoning: { effort: 'medium' }
+// }
 });
 
 console.log(result.api); // "responses"
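The `Target` type in the hunk above lets `apiVariant` be omitted, in which case the model's default from the registry applies. A minimal standalone sketch of that resolution; `modelDefaults` and `resolveApiVariant` are illustrative names, not the package's internals.

```typescript
// Sketch of apiVariant defaulting (illustrative, not ai-requests-adapter code):
// an explicit target wins, else the model's registered default, else a
// generic chat-completions variant.
type Target = { provider: string; apiVariant?: string };

const modelDefaults: Record<string, string> = {
  "gpt-5.2": "responses",
  "llama-3.3-70b-versatile": "openai_chat_completions",
};

function resolveApiVariant(target: Target, model: string): string {
  return target.apiVariant ?? modelDefaults[model] ?? "chat_completions";
}

console.log(resolveApiVariant({ provider: "openai" }, "gpt-5.2")); // "responses"
console.log(
  resolveApiVariant({ provider: "openai", apiVariant: "chat_completions" }, "gpt-5.2")
); // "chat_completions"
```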
@@ -168,16 +192,94 @@ console.log(result.payload);
 */
 ```
 
+### Anthropic (Messages API)
+
+```typescript
+const { request } = transformToVendorRequest(
+  {
+    model: 'claude-sonnet-4-5-20250929',
+    messages: [
+      { role: 'system', content: 'You are helpful.' },
+      { role: 'user', content: 'Hello' }
+    ],
+    max_tokens: 200,
+    temperature: 0.7
+  },
+  { provider: 'anthropic' }
+);
+
+// Result: apiVariant = "messages"
+// request = {
+//   system: 'You are helpful.',
+//   messages: [{ role: 'user', content: 'Hello' }],
+//   max_tokens: 200,
+//   temperature: 0.7
+// }
+```
+
+### Google Gemini (Native API)
+
+```typescript
+const { request } = transformToVendorRequest(
+  {
+    model: 'gemini-2.0-flash',
+    messages: [
+      { role: 'system', content: 'You are helpful.' },
+      { role: 'user', content: 'Hello' }
+    ],
+    max_tokens: 200,
+    temperature: 0.7
+  },
+  { provider: 'google' }
+);
+
+// Result: apiVariant = "gemini_generateContent"
+// request = {
+//   systemInstruction: { role: 'system', parts: [{ text: 'You are helpful.' }] },
+//   contents: [{ role: 'user', parts: [{ text: 'Hello' }] }],
+//   generationConfig: { maxOutputTokens: 200, temperature: 0.7 }
+// }
+```
+
+### Groq (OpenAI Compatible)
+
+```typescript
+const { request } = transformToVendorRequest(
+  {
+    model: 'llama-3.3-70b-versatile',
+    messages: [
+      { role: 'user', content: 'Hello' }
+    ],
+    max_tokens: 200,
+    temperature: 0.7
+  },
+  { provider: 'groq' }
+);
+
+// Result: apiVariant = "openai_chat_completions"
+// request = {
+//   model: 'llama-3.3-70b-versatile',
+//   messages: [{ role: 'user', content: 'Hello' }],
+//   max_tokens: 200,
+//   temperature: 0.7
+// }
+```
+
 ## Capabilities Registry
 
-
+Provider/model behavior is defined in JSON, not code. The registry handles:
+
+- **API variant selection** (Responses vs Chat Completions, etc.)
+- **Parameter mapping** (`max_tokens` → `max_output_tokens`)
+- **Capability filtering** (unsupported params are dropped)
+- **Conditional rules** (temperature only when reasoning_effort='none')
+- **Fallback resolution** (exact → pattern → provider defaults)
 
 ### Built-in Registry
 
 ```typescript
 import { capabilities } from 'ai-requests-adapter';
 
-// Access the default registry
 console.log(capabilities.providers.openai.models['gpt-5.2']);
 ```
 
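The registry-driven behavior the README lists (parameter mapping, capability filtering, warnings) can be sketched in a few lines. This is a simplified stand-in, not the package's actual implementation; `ModelCaps`, `adaptParams`, and the field names are assumptions for illustration.

```typescript
// Simplified sketch of registry-driven parameter adaptation (illustrative;
// not ai-requests-adapter's real code). A model's capability record says
// which unified params survive and what the token param is called.
type ModelCaps = {
  tokenParam: string;                // e.g. 'max_output_tokens' for Responses
  supports: Record<string, boolean>; // per-parameter support flags
};

function adaptParams(
  unified: Record<string, any>,
  caps: ModelCaps
): { payload: Record<string, any>; warnings: string[] } {
  const payload: Record<string, any> = {};
  const warnings: string[] = [];

  for (const [key, value] of Object.entries(unified)) {
    if (key === "max_tokens") {
      payload[caps.tokenParam] = value; // rename, e.g. max_tokens → max_output_tokens
    } else if (caps.supports[key]) {
      payload[key] = value;             // passes the capability filter
    } else {
      warnings.push(`dropped unsupported param: ${key}`);
    }
  }
  return { payload, warnings };
}

const { payload, warnings } = adaptParams(
  { max_tokens: 200, temperature: 0.2, frequency_penalty: 1 },
  { tokenParam: "max_output_tokens", supports: { temperature: true } }
);
console.log(JSON.stringify(payload));  // {"max_output_tokens":200,"temperature":0.2}
console.log(JSON.stringify(warnings)); // ["dropped unsupported param: frequency_penalty"]
```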
@@ -185,70 +287,42 @@ console.log(capabilities.providers.openai.models['gpt-5.2']);
 
 ```typescript
 import { transformToVendorRequest } from 'ai-requests-adapter';
+import { loadRegistry } from 'ai-requests-adapter';
 
-const
-
-  openai: {
-    schemaVersion: '2025-12-24',
-    defaults: {
-      tokenParamByApi: {
-        responses: 'max_output_tokens',
-        chat_completions: 'max_tokens'
-      }
-    },
-    models: {
-      'my-custom-model': {
-        api: 'responses',
-        reasoningEffort: { supported: true },
-        sampling: { temperature: { supported: true }, top_p: { supported: true } },
-        penalties: { frequency: false, presence: false },
-        stop: { supported: true }
-      }
-    }
-  }
-};
-
-const result = transformToVendorRequest(request, customCapabilities);
+const customRegistry = loadRegistry(); // Or build your own
+const result = transformToVendorRequest(unifiedRequest, target, customRegistry);
 ```
 
-##
+## Gateway Integration
 
-
-
-```typescript
-import { AIRequestsAdapter, createAIAdapter } from 'ai-requests-adapter';
-
-// Legacy usage (deprecated)
-const adapter = createAIAdapter();
-adapter.registerProvider('openai', openAIProvider);
-const response = await adapter.sendRequest(request);
-```
-
-## Integration Example
-
-Here's how to integrate with your gateway:
+Your gateway accepts the unified format and routes to vendor APIs:
 
 ```typescript
 import { transformToVendorRequest } from 'ai-requests-adapter';
 
-// Your gateway endpoint
 app.post('/api/chat', async (req, res) => {
   try {
-    // req.body is
-    const { provider,
+    // req.body is UnifiedChatCompletionsLikeRequest
+    const { provider, apiVariant, request, warnings } = transformToVendorRequest(
+      req.body,
+      { provider: req.body.provider || 'openai' }
+    );
 
-    // Route to
+    // Route to vendor SDK
     let response;
     switch (provider) {
       case 'openai':
-
-
-
-
-
+        response = apiVariant === 'responses'
+          ? await openai.responses.create(request)
+          : await openai.chat.completions.create(request);
+        break;
+      case 'anthropic':
+        response = await anthropic.messages.create(request);
+        break;
+      case 'google':
+        response = await google.generativeAI.generateContent(request);
         break;
-      //
+      // ... other providers
     }
 
     res.json({ response, warnings });
@@ -258,13 +332,24 @@ app.post('/api/chat', async (req, res) => {
 });
 ```
 
+## Provider Support Matrix
+
+| Provider | API Variants | Message Format | Token Param | Notes |
+|----------|-------------|----------------|-------------|--------|
+| **OpenAI** | `responses`, `chat_completions` | Standard + developer role merging | `max_output_tokens`, `max_tokens` | Reasoning effort, conditional sampling |
+| **Anthropic** | `messages` | System as top-level field | `max_tokens` | Mutual exclusion rules (Claude 4.5) |
+| **Google** | `gemini_generateContent`, `gemini_openai_compat` | Native contents format | `generationConfig.maxOutputTokens` | System as systemInstruction |
+| **Groq** | `openai_chat_completions` | Standard | `max_tokens` | OpenAI-compatible |
+| **xAI** | `openai_chat_completions` | Standard | `max_tokens` | OpenAI-compatible |
+| **Moonshot** | `openai_chat_completions` | Standard | `max_tokens` | OpenAI-compatible |
+
 ## Contributing
 
-
+Add new providers by:
 
-
-
-
+1. **Create provider config JSON** in the registry
+2. **Add message transformation logic** in `transformUnifiedRequest()`
+3. **Update API variants** with token/stop param mappings
 
 ## License
 
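The contributing steps in the README amount to adding a capability record for the new provider. A hypothetical entry, with field names borrowed from the removed v2 custom-capabilities example elsewhere in this diff; the v3 schema may differ.

```typescript
// Hypothetical registry entry for a new provider (illustrative only; field
// names follow the old v2 example, not necessarily the v3 schema).
const myProviderConfig = {
  schemaVersion: "2025-12-24",
  defaults: {
    tokenParamByApi: {
      chat_completions: "max_tokens", // how the unified max_tokens is emitted
    },
  },
  models: {
    "my-model-v1": {
      api: "chat_completions",
      sampling: { temperature: { supported: true }, top_p: { supported: true } },
      penalties: { frequency: false, presence: false },
      stop: { supported: true },
    },
  },
};

console.log(myProviderConfig.models["my-model-v1"].api); // "chat_completions"
```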
package/dist/capabilities.d.ts
CHANGED
@@ -1,3 +1,4 @@
-import type {
-export declare const capabilities:
+import type { Registry } from './chat-style-transformer';
+export declare const capabilities: Registry;
+export declare function loadRegistry(): Registry;
 //# sourceMappingURL=capabilities.d.ts.map

package/dist/capabilities.d.ts.map
CHANGED

@@ -1 +1 @@
-{"version":3,"file":"capabilities.d.ts","sourceRoot":"","sources":["../src/capabilities.ts"],"names":[],"mappings":"AAAA,OAAO,KAAK,EAAE,
+{"version":3,"file":"capabilities.d.ts","sourceRoot":"","sources":["../src/capabilities.ts"],"names":[],"mappings":"AAAA,OAAO,KAAK,EAAE,QAAQ,EAAE,MAAM,0BAA0B,CAAC;AAMzD,eAAO,MAAM,YAAY,EAAE,QAoP1B,CAAC;AAKF,wBAAgB,YAAY,IAAI,QAAQ,CAEvC"}