@revenium/openai 1.0.11 → 1.0.12
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.env.example +20 -0
- package/CHANGELOG.md +21 -47
- package/README.md +141 -690
- package/dist/cjs/core/config/loader.js +1 -1
- package/dist/cjs/core/config/loader.js.map +1 -1
- package/dist/cjs/core/tracking/api-client.js +1 -1
- package/dist/cjs/core/tracking/api-client.js.map +1 -1
- package/dist/cjs/index.js +2 -2
- package/dist/cjs/index.js.map +1 -1
- package/dist/cjs/utils/url-builder.js +32 -7
- package/dist/cjs/utils/url-builder.js.map +1 -1
- package/dist/esm/core/config/loader.js +1 -1
- package/dist/esm/core/config/loader.js.map +1 -1
- package/dist/esm/core/tracking/api-client.js +1 -1
- package/dist/esm/core/tracking/api-client.js.map +1 -1
- package/dist/esm/index.js +2 -2
- package/dist/esm/index.js.map +1 -1
- package/dist/esm/utils/url-builder.js +32 -7
- package/dist/esm/utils/url-builder.js.map +1 -1
- package/dist/types/index.d.ts +2 -2
- package/dist/types/types/index.d.ts +2 -2
- package/dist/types/types/index.d.ts.map +1 -1
- package/dist/types/utils/url-builder.d.ts +11 -3
- package/dist/types/utils/url-builder.d.ts.map +1 -1
- package/examples/README.md +250 -254
- package/examples/azure-basic.ts +25 -13
- package/examples/azure-responses-basic.ts +36 -7
- package/examples/azure-responses-streaming.ts +36 -7
- package/examples/azure-streaming.ts +40 -19
- package/examples/getting_started.ts +54 -0
- package/examples/openai-basic.ts +39 -17
- package/examples/openai-function-calling.ts +259 -0
- package/examples/openai-responses-basic.ts +36 -7
- package/examples/openai-responses-streaming.ts +36 -7
- package/examples/openai-streaming.ts +24 -13
- package/examples/openai-vision.ts +289 -0
- package/package.json +3 -9
- package/src/core/config/azure-config.ts +72 -0
- package/src/core/config/index.ts +23 -0
- package/src/core/config/loader.ts +66 -0
- package/src/core/config/manager.ts +94 -0
- package/src/core/config/validator.ts +89 -0
- package/src/core/providers/detector.ts +159 -0
- package/src/core/providers/index.ts +16 -0
- package/src/core/tracking/api-client.ts +78 -0
- package/src/core/tracking/index.ts +21 -0
- package/src/core/tracking/payload-builder.ts +132 -0
- package/src/core/tracking/usage-tracker.ts +189 -0
- package/src/core/wrapper/index.ts +9 -0
- package/src/core/wrapper/instance-patcher.ts +288 -0
- package/src/core/wrapper/request-handler.ts +423 -0
- package/src/core/wrapper/stream-wrapper.ts +100 -0
- package/src/index.ts +336 -0
- package/src/types/function-parameters.ts +251 -0
- package/src/types/index.ts +313 -0
- package/src/types/openai-augmentation.ts +233 -0
- package/src/types/responses-api.ts +308 -0
- package/src/utils/azure-model-resolver.ts +220 -0
- package/src/utils/constants.ts +21 -0
- package/src/utils/error-handler.ts +251 -0
- package/src/utils/metadata-builder.ts +219 -0
- package/src/utils/provider-detection.ts +257 -0
- package/src/utils/request-handler-factory.ts +285 -0
- package/src/utils/stop-reason-mapper.ts +74 -0
- package/src/utils/type-guards.ts +202 -0
- package/src/utils/url-builder.ts +68 -0
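The most visible functional change in this release is the metering base URL: the README diff below switches `REVENIUM_METERING_BASE_URL` from `https://api.revenium.io/meter` to `https://api.revenium.io`, and the `url-builder` files in the list above change accordingly (+32 -7). As a hypothetical illustration only (the `normalizeBaseUrl` helper below is not taken from the package source), a middleware that wants to accept both the legacy and the new configuration could normalize the value like this:

```typescript
// Hypothetical sketch -- not the package's actual implementation.
// Accept both "https://api.revenium.io/meter" (legacy) and
// "https://api.revenium.io" (current) by normalizing to the bare host URL.
export function normalizeBaseUrl(raw: string): string {
  // Drop any trailing slashes first, then a legacy "/meter" suffix,
  // so old and new configurations resolve to the same base URL.
  let url = raw.replace(/\/+$/, "");
  if (url.endsWith("/meter")) {
    url = url.slice(0, -"/meter".length);
  }
  return url;
}

// Legacy and current values normalize identically:
console.log(normalizeBaseUrl("https://api.revenium.io/meter")); // https://api.revenium.io
console.log(normalizeBaseUrl("https://api.revenium.io/"));      // https://api.revenium.io
```

This kind of normalization keeps existing `.env` files working across the URL change; whether the package actually does this internally is not shown in the diff.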
package/README.md CHANGED

@@ -20,467 +20,64 @@ A professional-grade Node.js middleware that seamlessly integrates with OpenAI a
 - **Fire-and-Forget** - Never blocks your application flow
 - **Zero Configuration** - Auto-initialization from environment variables
 
-## Package Migration
-
-This package has been renamed from `revenium-middleware-openai-node` to `@revenium/openai` for better organization and simpler naming.
-
-### Migration Steps
-
-If you're upgrading from the old package:
-
-```bash
-# Uninstall the old package
-npm uninstall revenium-middleware-openai-node
-
-# Install the new package
-npm install @revenium/openai
-```
-
-**Update your imports:**
-
-```typescript
-// Old import
-import { patchOpenAIInstance } from "revenium-middleware-openai-node";
-
-// New import
-import { patchOpenAIInstance } from "@revenium/openai";
-```
-
-All functionality remains exactly the same - only the package name has changed.
-
 ## Getting Started
 
-
-
-### Option 1: Create Project from Scratch
-
-Perfect for new projects. We'll guide you step-by-step from `mkdir` to running tests.
-[Go to Step-by-Step Guide](#option-1-create-project-from-scratch)
-
-### Option 2: Clone Our Repository
-
-Clone and run the repository with working examples.
-[Go to Repository Guide](#option-2-clone-our-repository)
-
-### Option 3: Add to Existing Project
-
-Already have a project? Just install and replace imports.
-[Go to Integration Guide](#option-3-existing-project-integration)
-
----
-
-## Option 1: Create Project from Scratch
-
-### Step 1: Create Project Directory
+### 1. Create Project Directory
 
 ```bash
-# Create and navigate to
+# Create project directory and navigate to it
 mkdir my-openai-project
 cd my-openai-project
 
 # Initialize npm project
 npm init -y
-```
-
-### Step 2: Install Dependencies
-
-```bash
-# Install the middleware and OpenAI SDK
-npm install @revenium/openai openai@^5.8.0 dotenv
 
-#
-npm install
+# Install packages
+npm install @revenium/openai openai dotenv tsx
+npm install --save-dev typescript @types/node
 ```
 
-###
+### 2. Configure Environment Variables
 
-Create a `.env` file
-
-```bash
-# Create .env file
-echo. > .env # On Windows (CMD)
-touch .env # On Mac/Linux
-# OR PowerShell
-New-Item -Path .env -ItemType File
-```
+Create a `.env` file:
 
-
+**NOTE: YOU MUST REPLACE THE PLACEHOLDERS WITH YOUR OWN API KEYS**
 
 ```env
-
-# Copy this file to .env and fill in your actual values
-
-# Required: Your Revenium API key (starts with hak_)
+REVENIUM_METERING_BASE_URL=https://api.revenium.io
 REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
-REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
-
-# Required: Your OpenAI API key (starts with sk-)
 OPENAI_API_KEY=sk_your_openai_api_key_here
-
-# Optional: Your Azure OpenAI configuration (for Azure testing)
-AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
-AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
-AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
-AZURE_OPENAI_API_VERSION=2024-12-01-preview
-
-# Optional: Enable debug logging
-REVENIUM_DEBUG=false
 ```
 
-
+### 3. Run Your First Example
 
-
-
-### Step 4: Protect Your API Keys
-
-**CRITICAL SECURITY**: Never commit your `.env` file to version control!
-
-Your `.env` file contains sensitive API keys that must be kept secret:
+Run the [getting started example](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/getting_started.ts):
 
 ```bash
-
-git check-ignore .env
-```
-
-If the command returns nothing, add `.env` to your `.gitignore`:
-
-```gitignore
-# Environment variables
-.env
-.env.*
-!.env.example
-```
-
-**Best Practice**: Use GitHub's standard Node.gitignore as a starting point:
-- Reference: https://github.com/github/gitignore/blob/main/Node.gitignore
-
-**Warning:** The following command will overwrite your current `.gitignore` file.
-To avoid losing custom rules, back up your file first or append instead:
-`curl https://raw.githubusercontent.com/github/gitignore/main/Node.gitignore >> .gitignore`
-
-**Note:** Appending may result in duplicate entries if your `.gitignore` already contains some of the patterns from Node.gitignore.
-Please review your `.gitignore` after appending and remove any duplicate lines as needed.
-
-This protects your OpenAI API key, Revenium API key, and any other secrets from being accidentally committed to your repository.
-
-### Step 5: Create Your First Test
-
-#### TypeScript Test
-
-Create `test-openai.ts`:
-
-```typescript
-import 'dotenv/config';
-import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
-import OpenAI from 'openai';
-
-async function testOpenAI() {
-  try {
-    // Initialize Revenium middleware
-    const initResult = initializeReveniumFromEnv();
-    if (!initResult.success) {
-      console.error('Failed to initialize Revenium:', initResult.message);
-      process.exit(1);
-    }
-
-    // Create and patch OpenAI instance
-    const openai = patchOpenAIInstance(new OpenAI());
-
-    const response = await openai.chat.completions.create({
-      model: 'gpt-4o-mini',
-      max_tokens: 100,
-      messages: [{ role: 'user', content: 'What is artificial intelligence?' }],
-      usageMetadata: {
-        subscriber: {
-          id: 'user-456',
-          email: 'user@demo-org.com',
-          credential: {
-            name: 'demo-api-key',
-            value: 'demo-key-123',
-          },
-        },
-        organizationId: 'demo-org-123',
-        productId: 'ai-assistant-v2',
-        taskType: 'educational-query',
-        agent: 'openai-basic-demo',
-        traceId: 'session-' + Date.now(),
-      },
-    });
-
-    const text = response.choices[0]?.message?.content || 'No response';
-    console.log('Response:', text);
-  } catch (error) {
-    console.error('Error:', error);
-  }
-}
-
-testOpenAI();
+npx tsx node_modules/@revenium/openai/examples/getting_started.ts
 ```
 
-
-
-Create `test-openai.js`:
-
-```javascript
-require('dotenv').config();
-const {
-  initializeReveniumFromEnv,
-  patchOpenAIInstance,
-} = require('@revenium/openai');
-const OpenAI = require('openai');
-
-async function testOpenAI() {
-  try {
-    // Initialize Revenium middleware
-    const initResult = initializeReveniumFromEnv();
-    if (!initResult.success) {
-      console.error('Failed to initialize Revenium:', initResult.message);
-      process.exit(1);
-    }
-
-    // Create and patch OpenAI instance
-    const openai = patchOpenAIInstance(new OpenAI());
-
-    const response = await openai.chat.completions.create({
-      model: 'gpt-4o-mini',
-      max_tokens: 100,
-      messages: [{ role: 'user', content: 'What is artificial intelligence?' }],
-      usageMetadata: {
-        subscriber: {
-          id: 'user-456',
-          email: 'user@demo-org.com',
-        },
-        organizationId: 'demo-org-123',
-        taskType: 'educational-query',
-      },
-    });
-
-    const text = response.choices[0]?.message?.content || 'No response';
-    console.log('Response:', text);
-  } catch (error) {
-    // Handle error appropriately
-  }
-}
-
-testOpenAI();
-```
-
-### Step 6: Add Package Scripts
-
-Update your `package.json`:
-
-```json
-{
-  "name": "my-openai-project",
-  "version": "1.0.0",
-  "type": "commonjs",
-  "scripts": {
-    "test-ts": "npx tsx test-openai.ts",
-    "test-js": "node test-openai.js"
-  },
-  "dependencies": {
-    "@revenium/openai": "^1.0.11",
-    "openai": "^5.8.0",
-    "dotenv": "^16.5.0"
-  }
-}
-```
-
-### Step 7: Run Your Tests
-
-```bash
-# Test TypeScript version
-npm run test-ts
-
-# Test JavaScript version
-npm run test-js
-```
-
-### Step 8: Project Structure
-
-Your project should now look like this:
-
-```
-my-openai-project/
-├── .env            # Environment variables
-├── .gitignore      # Git ignore file
-├── package.json    # Project configuration
-├── test-openai.ts  # TypeScript test
-└── test-openai.js  # JavaScript test
-```
-
-## Option 2: Clone Our Repository
-
-### Step 1: Clone the Repository
-
-```bash
-# Clone the repository
-git clone git@github.com:revenium/revenium-middleware-openai-node.git
-cd revenium-middleware-openai-node
-```
-
-### Step 2: Install Dependencies
-
-```bash
-# Install all dependencies
-npm install
-npm install @revenium/openai
-```
-
-### Step 3: Setup Environment Variables
-
-Create a `.env` file in the project root:
-
-```bash
-# Create .env file
-cp .env.example .env # If available, or create manually
-```
-
-Copy and paste the following into `.env`:
+Or with debug logging:
 
 ```bash
-#
-
-
-# Required: Your Revenium API key (starts with hak_)
-REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
-REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
-
-# Required: Your OpenAI API key (starts with sk-)
-OPENAI_API_KEY=sk_your_openai_api_key_here
-
-# Optional: Your Azure OpenAI configuration (for Azure testing)
-AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
-AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
-AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
-AZURE_OPENAI_API_VERSION=2024-12-01-preview
-
-# Optional: Enable debug logging
-REVENIUM_DEBUG=false
-```
-
-**IMPORTANT**: Ensure your `REVENIUM_METERING_API_KEY` matches your `REVENIUM_METERING_BASE_URL` environment. Mismatched credentials will cause authentication failures.
-
-### Step 4: Build the Project
-
-```bash
-# Build the middleware
-npm run build
-```
-
-### Step 5: Run the Examples
-
-The repository includes working example files:
-
-```bash
-# Run Chat Completions API examples (using npm scripts)
-npm run example:openai-basic
-npm run example:openai-streaming
-npm run example:azure-basic
-npm run example:azure-streaming
-
-# Run Responses API examples (available with OpenAI SDK 5.8+)
-npm run example:openai-responses-basic
-npm run example:openai-responses-streaming
-npm run example:azure-responses-basic
-npm run example:azure-responses-streaming
-
-# Or run examples directly with tsx
-npx tsx examples/openai-basic.ts
-npx tsx examples/openai-streaming.ts
-npx tsx examples/azure-basic.ts
-npx tsx examples/azure-streaming.ts
-npx tsx examples/openai-responses-basic.ts
-npx tsx examples/openai-responses-streaming.ts
-npx tsx examples/azure-responses-basic.ts
-npx tsx examples/azure-responses-streaming.ts
-```
-
-These examples demonstrate:
-
-- **Chat Completions API** - Traditional OpenAI chat completions and embeddings
-- **Responses API** - New OpenAI Responses API with enhanced capabilities
-- **Azure OpenAI** - Full Azure OpenAI integration with automatic detection
-- **Streaming Support** - Real-time response streaming with metadata tracking
-- **Optional Metadata** - Rich business context and user tracking
-- **Error Handling** - Robust error handling and debugging
-
-## Option 3: Existing Project Integration
-
-Already have a project? Just install and replace imports:
-
-### Step 1: Install the Package
-
-```bash
-npm install @revenium/openai
-```
-
-### Step 2: Update Your Imports
+# Linux/macOS
+REVENIUM_DEBUG=true npx tsx node_modules/@revenium/openai/examples/getting_started.ts
 
-
-
-```typescript
-import OpenAI from 'openai';
-
-const openai = new OpenAI();
-```
-
-**After:**
-
-```typescript
-import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
-import OpenAI from 'openai';
-
-// Initialize Revenium middleware
-initializeReveniumFromEnv();
-
-// Patch your OpenAI instance
-const openai = patchOpenAIInstance(new OpenAI());
+# Windows (PowerShell)
+$env:REVENIUM_DEBUG="true"; npx tsx node_modules/@revenium/openai/examples/getting_started.ts
 ```
 
-
-
-Add to your `.env` file:
-
-```env
-# Revenium OpenAI Middleware Configuration
+**For more examples and usage patterns, see [examples/README.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/README.md).**
 
-
-REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
-REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
-
-# Required: Your OpenAI API key (starts with sk-)
-OPENAI_API_KEY=sk_your_openai_api_key_here
-
-# Optional: Your Azure OpenAI configuration (for Azure testing)
-AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
-AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
-AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
-AZURE_OPENAI_API_VERSION=2024-12-01-preview
-
-# Optional: Enable debug logging
-REVENIUM_DEBUG=false
-```
-
-### Step 4: Optional - Add Metadata
+---
 
-
+## Requirements
 
-
-
-
-  model: 'gpt-4o-mini',
-  messages: [{ role: 'user', content: 'Hello!' }],
-  // Add optional metadata for better analytics
-  usageMetadata: {
-    subscriber: { id: 'user-123' },
-    organizationId: 'my-company',
-    taskType: 'chat',
-  },
-});
-```
+- Node.js 16+
+- OpenAI package v4.0+
+- TypeScript 5.0+ (for TypeScript projects)
 
-
+---
 
 ## What Gets Tracked
 
@@ -594,131 +191,54 @@ npm install openai@5.8.2
 - [OpenAI Responses API Documentation](https://platform.openai.com/docs/guides/migrate-to-responses)
 - [Azure OpenAI Responses API Documentation](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/responses)
 
-###
+### Working Examples
 
-
+Complete working examples are included with this package. Each example is fully documented and ready to run.
 
-
+#### Available Examples
 
-
-
-
+**OpenAI Chat Completions API:**
+- `openai-basic.ts` - Basic chat + embeddings with optional metadata
+- `openai-streaming.ts` - Streaming responses + batch embeddings
 
-
-
-
+**OpenAI Responses API (SDK 5.8+):**
+- `openai-responses-basic.ts` - New Responses API with string input
+- `openai-responses-streaming.ts` - Streaming with Responses API
 
-
-
-
-
-
-  usageMetadata: {
-    subscriber: { id: 'user-123', email: 'user@example.com' },
-    organizationId: 'org-456',
-    productId: 'quantum-explainer',
-    taskType: 'educational-content',
-  },
-});
+**Azure OpenAI:**
+- `azure-basic.ts` - Azure chat completions + embeddings
+- `azure-streaming.ts` - Azure streaming responses
+- `azure-responses-basic.ts` - Azure Responses API
+- `azure-responses-streaming.ts` - Azure streaming Responses API
 
-
-
+**Detailed Guide:**
+- `examples/README.md` - Complete setup guide with TypeScript and JavaScript patterns
 
-
+#### Running Examples
 
-
-
-
-
-
-
-  usageMetadata: {
-    subscriber: { id: 'user-123', email: 'user@example.com' },
-    organizationId: 'org-456',
-  },
-});
+**Installed via npm?**
+```bash
+# Try these in order:
+npx tsx node_modules/@revenium/openai/examples/openai-basic.ts
+npx tsx node_modules/@revenium/openai/examples/openai-streaming.ts
+npx tsx node_modules/@revenium/openai/examples/openai-responses-basic.ts
 
-
-
-}
+# View all examples:
+ls node_modules/@revenium/openai/examples/
 ```
 
-
-
-
-
-
-
-import OpenAI from 'openai';
-
-// Initialize and patch OpenAI instance
-initializeReveniumFromEnv();
-const openai = patchOpenAIInstance(new OpenAI());
-
-const response = await openai.chat.completions.create({
-  model: 'gpt-4',
-  messages: [{ role: 'user', content: 'Summarize this document' }],
-  // Add custom tracking metadata - all fields optional, no type casting needed!
-  usageMetadata: {
-    subscriber: {
-      id: 'user-12345',
-      email: 'john@acme-corp.com',
-    },
-    organizationId: 'acme-corp',
-    productId: 'document-ai',
-    taskType: 'document-summary',
-    agent: 'doc-summarizer-v2',
-    traceId: 'session-abc123',
-  },
-});
+**Cloned from GitHub?**
+```bash
+npm install
+npm run example:openai-basic
+npm run example:openai-streaming
+npm run example:openai-responses-basic
 
-
-
-  model: 'gpt-5',
-  input: 'Summarize this document',
-  // Same metadata structure - seamless compatibility!
-  usageMetadata: {
-    subscriber: {
-      id: 'user-12345',
-      email: 'john@acme-corp.com',
-    },
-    organizationId: 'acme-corp',
-    productId: 'document-ai',
-    taskType: 'document-summary',
-    agent: 'doc-summarizer-v2',
-    traceId: 'session-abc123',
-  },
-});
+# See all example scripts:
+npm run
 ```
 
-
-
-The middleware automatically handles streaming requests with seamless metadata:
-
-```typescript
-import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
-import OpenAI from 'openai';
-
-// Initialize and patch OpenAI instance
-initializeReveniumFromEnv();
-const openai = patchOpenAIInstance(new OpenAI());
-
-const stream = await openai.chat.completions.create({
-  model: 'gpt-4',
-  messages: [{ role: 'user', content: 'Tell me a story' }],
-  stream: true,
-  // Metadata works seamlessly with streaming - all fields optional!
-  usageMetadata: {
-    organizationId: 'story-app',
-    taskType: 'creative-writing',
-  },
-});
-
-for await (const chunk of stream) {
-  process.stdout.write(chunk.choices[0]?.delta?.content || '');
-}
-// Usage tracking happens automatically when stream completes
-```
+**Browse online:** [`examples/` directory on GitHub](https://github.com/revenium/revenium-middleware-openai-node/tree/HEAD/examples)
 
 ### Temporarily Disabling Tracking
 
@@ -752,46 +272,24 @@ patchOpenAI();
 
 ### Quick Start with Azure OpenAI
 
-
-# Set your Azure OpenAI environment variables
-export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
-export AZURE_OPENAI_API_KEY="your-azure-api-key"
-export AZURE_OPENAI_DEPLOYMENT="gpt-4o" # Your deployment name
-export AZURE_OPENAI_API_VERSION="2024-12-01-preview" # Optional, defaults to latest
-
-# Set your Revenium credentials
-export REVENIUM_METERING_API_KEY="hak_your_revenium_api_key"
-# export REVENIUM_METERING_BASE_URL="https://api.revenium.io/meter" # Optional: defaults to this URL
-```
+**Use case:** Automatic Azure OpenAI client detection with deployment name mapping and accurate usage tracking.
 
-
-
-
+See complete Azure examples:
+- `examples/azure-basic.ts` - Azure chat completions with environment variable setup
+- `examples/azure-streaming.ts` - Azure streaming responses
+- `examples/azure-responses-basic.ts` - Azure Responses API integration
 
-
-
-
-
-
-
-
-    apiKey: process.env.AZURE_OPENAI_API_KEY,
-    apiVersion: process.env.AZURE_OPENAI_API_VERSION,
-  })
-);
-
-// Your existing Azure OpenAI code works with seamless metadata
-const response = await azure.chat.completions.create({
-  model: 'gpt-4o', // Uses your deployment name
-  messages: [{ role: 'user', content: 'Hello from Azure!' }],
-  // Optional metadata with native TypeScript support
-  usageMetadata: {
-    organizationId: 'my-company',
-    taskType: 'azure-chat',
-  },
-});
+**Environment variables needed:**
+```bash
+# Azure OpenAI configuration
+AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
+AZURE_OPENAI_API_KEY="your-azure-api-key"
+AZURE_OPENAI_DEPLOYMENT="gpt-4o"
+AZURE_OPENAI_API_VERSION="2024-12-01-preview"
 
-
+# Revenium configuration
+REVENIUM_METERING_API_KEY="hak_your_revenium_api_key"
+REVENIUM_METERING_BASE_URL="https://api.revenium.io"
 ```
 
 ### Azure Features
 
@@ -828,107 +326,86 @@ The middleware automatically maps Azure deployment names to standard model names
|
|
|
828
326
|
|
|
829
327
|
## Advanced Usage
|
|
830
328
|
|
|
831
|
-
###
|
|
329
|
+
### Initialization Options
|
|
832
330
|
|
|
833
|
-
The middleware
|
|
331
|
+
The middleware supports three initialization patterns:
|
|
332
|
+
|
|
333
|
+
**Automatic (Recommended)** - Import and patch OpenAI instance:
|
|
834
334
|
|
|
835
335
|
```typescript
|
|
836
|
-
import {
|
|
336
|
+
import { patchOpenAIInstance } from '@revenium/openai';
|
|
837
337
|
import OpenAI from 'openai';
|
|
838
338
|
|
|
839
|
-
initializeReveniumFromEnv();
|
|
840
339
|
const openai = patchOpenAIInstance(new OpenAI());
|
|
340
|
+
// Tracking works automatically if env vars are set
|
|
341
|
+
```
|
|
841
342
|
|
|
842
|
-
|
|
843
|
-
|
|
844
|
-
|
|
845
|
-
|
|
846
|
-
|
|
847
|
-
usageMetadata: {
|
|
848
|
-
subscriber: { id: 'user-123', email: 'user@example.com' },
|
|
849
|
-
organizationId: 'story-app',
|
|
850
|
-
taskType: 'creative-writing',
|
|
851
|
-
traceId: 'session-' + Date.now(),
|
|
852
|
-
},
|
|
853
|
-
});
|
|
343
|
+
**Explicit** - Call `initializeReveniumFromEnv()` for error handling control:
|
|
344
|
+
|
|
345
|
+
```typescript
|
|
346
|
+
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
|
|
347
|
+
import OpenAI from 'openai';
|
|
854
348
|
|
|
855
|
-
|
|
856
|
-
|
|
349
|
+
const result = initializeReveniumFromEnv();
|
|
350
|
+
if (!result.success) {
|
|
351
|
+
console.error('Failed to initialize:', result.message);
|
|
352
|
+
process.exit(1);
|
|
857
353
|
}
|
|
858
|
-
|
|
354
|
+
|
|
355
|
+
const openai = patchOpenAIInstance(new OpenAI());
|
|
859
356
|
```
|
|
860
357
|
|
|
861
|
-
|
|
358
|
+
**Manual** - Use `configure()` to set all options programmatically (see Manual Configuration below).
|
|
862
359
|
|
|
863
|
-
|
|
360
|
+
For detailed examples of all initialization patterns, see [`examples/`](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/README.md).
|
|
864
361
|
|
|
-
-// Simple string input with metadata
-const response = await openai.responses.create({
-  model: 'gpt-5',
-  input: 'What is the capital of France?',
-  max_output_tokens: 150,
-  usageMetadata: {
-    subscriber: { id: 'user-123', email: 'user@example.com' },
-    organizationId: 'org-456',
-    productId: 'geography-tutor',
-    taskType: 'educational-query',
-  },
-});
+### Streaming Responses
 
-
-```
+Streaming is fully supported with real-time token tracking and time-to-first-token metrics. The middleware automatically tracks streaming responses without any additional configuration.
 
-
+See [`examples/openai-streaming.ts`](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/openai-streaming.ts) and [`examples/azure-streaming.ts`](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/azure-streaming.ts) for working streaming examples.
 
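The streaming docs added above describe the standard OpenAI SDK consumption pattern, which the middleware observes transparently. A minimal sketch of that pattern — the mock generator below is a stand-in (an assumption for runnability) for the async iterable returned by a patched client's `chat.completions.create({ ..., stream: true })`:

```typescript
// Sketch: accumulating text from a streamed completion. In real use the
// async iterable comes from a patched client; the middleware records
// tokens and time-to-first-token from the same chunks.
type Chunk = { choices: { delta: { content?: string } }[] };

// Stand-in stream so this sketch runs without API access.
async function* mockStream(): AsyncGenerator<Chunk> {
  for (const piece of ['Hello', ', ', 'world']) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}

async function collect(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    // Each chunk carries an incremental delta, not the full message.
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}

collect(mockStream()).then(text => console.log(text)); // prints "Hello, world"
```

No extra calls are needed for metering: the same iteration your application already performs is what the middleware measures.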
-
+### Custom Metadata Tracking
 
-
-import { AzureOpenAI } from 'openai';
+Add business context to track usage by organization, user, task type, or custom fields. Pass a `usageMetadata` object with any of these optional fields:
 
-
-
-
-
-
-
-
-
-
-
-
-
-
-  usageMetadata: {
-    organizationId: 'my-company',
-    taskType: 'azure-chat',
-    agent: 'azure-assistant',
-  },
-});
-```
+| Field | Description | Use Case |
+|-------|-------------|----------|
+| `traceId` | Unique identifier for session or conversation tracking | Link multiple API calls together for debugging, user session analytics, or distributed tracing across services |
+| `taskType` | Type of AI task being performed | Categorize usage by workload (e.g., "chat", "code-generation", "doc-summary") for cost analysis and optimization |
+| `subscriber.id` | Unique user identifier | Track individual user consumption for billing, rate limiting, or user analytics |
+| `subscriber.email` | User email address | Identify users for support, compliance, or usage reports |
+| `subscriber.credential.name` | Authentication credential name | Track which API key or service account made the request |
+| `subscriber.credential.value` | Authentication credential value | Associate usage with specific credentials for security auditing |
+| `organizationId` | Organization or company identifier | Multi-tenant cost allocation, usage quotas per organization |
+| `subscriptionId` | Subscription plan identifier | Track usage against subscription limits, identify plan upgrade opportunities |
+| `productId` | Your product or feature identifier | Attribute AI costs to specific features in your application (e.g., "chatbot", "email-assistant") |
+| `agent` | AI agent or bot identifier | Distinguish between multiple AI agents or automation workflows in your system |
+| `responseQualityScore` | Custom quality rating (0.0-1.0) | Track user satisfaction or automated quality metrics for model performance analysis |
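The table above flattens a nested object. A sketch of that shape as a TypeScript literal (the `UsageMetadata` interface name is illustrative only, not necessarily the type the package exports; field names come from the table):

```typescript
// Illustrative shape of the usageMetadata object described in the table.
// All fields are optional; the interface name is for this sketch only.
interface UsageMetadata {
  traceId?: string;
  taskType?: string;
  subscriber?: {
    id?: string;
    email?: string;
    credential?: { name?: string; value?: string };
  };
  organizationId?: string;
  subscriptionId?: string;
  productId?: string;
  agent?: string;
  responseQualityScore?: number; // 0.0-1.0
}

// Example: metadata for a multi-tenant chat feature (values are placeholders).
const usageMetadata: UsageMetadata = {
  traceId: `session-${Date.now()}`,
  taskType: 'chat',
  subscriber: { id: 'user-123', email: 'user@example.com' },
  organizationId: 'acme-corp',
  productId: 'chatbot',
  responseQualityScore: 0.9,
};
```

Note that `subscriber.id` and the two `credential` entries from the table nest under `subscriber`, rather than appearing as top-level dotted keys.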
 
-
+**Resources:**
+- [API Reference](https://revenium.readme.io/reference/meter_ai_completion) - Complete metadata field documentation
 
-
+### OpenAI Responses API
+**Use case:** Using OpenAI's new Responses API with string inputs and a simplified interface (SDK 5.8+).
 
-
-
-
-  input: 'Advanced text embedding with comprehensive tracking metadata',
-  usageMetadata: {
-    subscriber: { id: 'embedding-user-789', email: 'embeddings@company.com' },
-    organizationId: 'my-company',
-    taskType: 'document-embedding',
-    productId: 'search-engine',
-    traceId: `embed-${Date.now()}`,
-    agent: 'openai-embeddings-node',
-  },
-});
+See working examples:
+- `examples/openai-responses-basic.ts` - Basic Responses API usage
+- `examples/openai-responses-streaming.ts` - Streaming with the Responses API
 
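For orientation before opening those examples: the Responses API takes a string `input` rather than a `messages` array, and the middleware accepts `usageMetadata` alongside the normal request fields (a pattern visible in the code removed above). A sketch of the request shape, built as a plain object so it runs standalone — the model name and all values are placeholders:

```typescript
// Sketch of a Responses API request through a patched client.
// In real use: const response = await openai.responses.create(request);
const request = {
  model: 'gpt-4o', // placeholder; use any supported model
  input: 'What is the capital of France?', // string input, no messages array
  max_output_tokens: 150,
  usageMetadata: {
    subscriber: { id: 'user-123', email: 'user@example.com' },
    organizationId: 'org-456',
    taskType: 'educational-query',
  },
};

console.log(typeof request.input); // "string"
```

The same `usageMetadata` fields documented in the Custom Metadata Tracking section apply here unchanged.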
-
-
-
-
+### Azure OpenAI Integration
+**Use case:** Automatic Azure OpenAI detection with deployment name resolution and accurate pricing.
+
+See working examples:
+- `examples/azure-basic.ts` - Azure chat completions and embeddings
+- `examples/azure-responses-basic.ts` - Azure Responses API integration
+
+### Embeddings with Metadata
+**Use case:** Track embeddings usage for search engines, RAG systems, and document processing.
+
+Embeddings examples are included in:
+- `examples/openai-basic.ts` - Text embeddings with metadata
+- `examples/openai-streaming.ts` - Batch embeddings processing
 
 ### Manual Configuration
 
@@ -939,7 +416,7 @@ import { configure } from '@revenium/openai';
 
 configure({
   reveniumApiKey: 'hak_your_api_key',
-  reveniumBaseUrl: 'https://api.revenium.io
+  reveniumBaseUrl: 'https://api.revenium.io',
   apiTimeout: 5000,
   failSilent: true,
   maxRetries: 3,
@@ -954,7 +431,7 @@ configure({
 | ------------------------------ | -------- | ------------------------------- | ---------------------------------------------- |
 | `REVENIUM_METERING_API_KEY` | true | - | Your Revenium API key (starts with `hak_`) |
 | `OPENAI_API_KEY` | true | - | Your OpenAI API key (starts with `sk-`) |
-| `REVENIUM_METERING_BASE_URL` | false | `https://api.revenium.io
+| `REVENIUM_METERING_BASE_URL` | false | `https://api.revenium.io` | Revenium metering API base URL |
 | `REVENIUM_DEBUG` | false | `false` | Enable debug logging (`true`/`false`) |
 | `AZURE_OPENAI_ENDPOINT` | false | - | Azure OpenAI endpoint URL (for Azure testing) |
 | `AZURE_OPENAI_API_KEY` | false | - | Azure OpenAI API key (for Azure testing) |
@@ -963,14 +440,14 @@ configure({
 
 **Important Note about `REVENIUM_METERING_BASE_URL`:**
 
-- This variable is **optional** and defaults to the production URL (`https://api.revenium.io
+- This variable is **optional** and defaults to the production URL (`https://api.revenium.io`)
 - If you don't set it explicitly, the middleware will use the default production endpoint
 - However, you may see console warnings or errors if the middleware cannot determine the correct environment
 - **Best practice:** Always set this variable explicitly to match your environment:
 
 ```bash
 # Default production URL (recommended)
-REVENIUM_METERING_BASE_URL=https://api.revenium.io
+REVENIUM_METERING_BASE_URL=https://api.revenium.io
 ```
 
 - **Remember:** Your `REVENIUM_METERING_API_KEY` must match your base URL environment
@@ -1070,11 +547,11 @@ export REVENIUM_DEBUG=true
 ```bash
 # Correct - Key and URL from same environment
 REVENIUM_METERING_API_KEY=hak_your_api_key_here
-REVENIUM_METERING_BASE_URL=https://api.revenium.io
+REVENIUM_METERING_BASE_URL=https://api.revenium.io
 
 # Wrong - Key and URL from different environments
 REVENIUM_METERING_API_KEY=hak_wrong_environment_key
-REVENIUM_METERING_BASE_URL=https://api.revenium.io
+REVENIUM_METERING_BASE_URL=https://api.revenium.io
 ```
 
 #### 3. **TypeScript type errors**
@@ -1149,32 +626,12 @@ For detailed troubleshooting guides, visit [docs.revenium.io](https://docs.reven
 
 ## Supported Models
 
-
-
-| Model Family | Models | APIs Supported |
-| ----------------- | ---------------------------------------------------------------------------- | --------------------------- |
-| **GPT-4o** | `gpt-4o`, `gpt-4o-2024-11-20`, `gpt-4o-2024-08-06`, `gpt-4o-2024-05-13` | Chat Completions, Responses |
-| **GPT-4o Mini** | `gpt-4o-mini`, `gpt-4o-mini-2024-07-18` | Chat Completions, Responses |
-| **GPT-4 Turbo** | `gpt-4-turbo`, `gpt-4-turbo-2024-04-09`, `gpt-4-turbo-preview` | Chat Completions |
-| **GPT-4** | `gpt-4`, `gpt-4-0613`, `gpt-4-0314` | Chat Completions |
-| **GPT-3.5 Turbo** | `gpt-3.5-turbo`, `gpt-3.5-turbo-0125`, `gpt-3.5-turbo-1106` | Chat Completions |
-| **GPT-5** | `gpt-5` (when available) | Responses API |
-| **Embeddings** | `text-embedding-3-large`, `text-embedding-3-small`, `text-embedding-ada-002` | Embeddings |
-
-### Azure OpenAI Models
+This middleware works with all OpenAI chat completion and embedding models, including those available through Azure OpenAI.
 
-
+**For the current list of supported models, pricing, and capabilities:**
+- [Revenium AI Models API](https://revenium.readme.io/v2.0.0/reference/get_ai_model)
 
-
-| ------------------------ | ------------------------ | --------------------------- |
-| `gpt-4o-2024-11-20` | `gpt-4o` | Chat Completions, Responses |
-| `gpt4o-prod` | `gpt-4o` | Chat Completions, Responses |
-| `o4-mini` | `gpt-4o-mini` | Chat Completions, Responses |
-| `gpt-35-turbo-dev` | `gpt-3.5-turbo` | Chat Completions |
-| `text-embedding-3-large` | `text-embedding-3-large` | Embeddings |
-| `embedding-3-large` | `text-embedding-3-large` | Embeddings |
-
-**Note**: The middleware automatically maps Azure deployment names to standard model names for accurate pricing and analytics.
+Models are continuously updated as new versions are released by OpenAI and Azure OpenAI. The middleware automatically handles model detection and pricing for accurate usage tracking.
 
 ### API Support Matrix
 
@@ -1187,12 +644,6 @@ All OpenAI models are supported through Azure OpenAI with automatic deployment n
 | **Cost Calculation** | Yes | Yes | Yes |
 | **Token Counting** | Yes | Yes | Yes |
 
-## Requirements
-
-- Node.js 16+
-- OpenAI package v4.0+
-- TypeScript 5.0+ (for TypeScript projects)
-
 ## Documentation
 
 For detailed documentation, visit [docs.revenium.io](https://docs.revenium.io)