@revenium/openai 1.0.11 → 1.0.13

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (96)
  1. package/.env.example +20 -0
  2. package/CHANGELOG.md +47 -47
  3. package/README.md +121 -964
  4. package/dist/cjs/core/config/loader.js +1 -1
  5. package/dist/cjs/core/config/loader.js.map +1 -1
  6. package/dist/cjs/core/config/manager.js +2 -1
  7. package/dist/cjs/core/config/manager.js.map +1 -1
  8. package/dist/cjs/core/providers/detector.js +3 -3
  9. package/dist/cjs/core/providers/detector.js.map +1 -1
  10. package/dist/cjs/core/tracking/api-client.js +1 -1
  11. package/dist/cjs/core/tracking/api-client.js.map +1 -1
  12. package/dist/cjs/core/tracking/payload-builder.js +17 -12
  13. package/dist/cjs/core/tracking/payload-builder.js.map +1 -1
  14. package/dist/cjs/index.js +23 -2
  15. package/dist/cjs/index.js.map +1 -1
  16. package/dist/cjs/types/index.js.map +1 -1
  17. package/dist/cjs/utils/metadata-builder.js +12 -5
  18. package/dist/cjs/utils/metadata-builder.js.map +1 -1
  19. package/dist/cjs/utils/stop-reason-mapper.js +4 -0
  20. package/dist/cjs/utils/stop-reason-mapper.js.map +1 -1
  21. package/dist/cjs/utils/url-builder.js +32 -7
  22. package/dist/cjs/utils/url-builder.js.map +1 -1
  23. package/dist/esm/core/config/loader.js +1 -1
  24. package/dist/esm/core/config/loader.js.map +1 -1
  25. package/dist/esm/core/config/manager.js +2 -1
  26. package/dist/esm/core/config/manager.js.map +1 -1
  27. package/dist/esm/core/providers/detector.js +3 -3
  28. package/dist/esm/core/providers/detector.js.map +1 -1
  29. package/dist/esm/core/tracking/api-client.js +1 -1
  30. package/dist/esm/core/tracking/api-client.js.map +1 -1
  31. package/dist/esm/core/tracking/payload-builder.js +17 -12
  32. package/dist/esm/core/tracking/payload-builder.js.map +1 -1
  33. package/dist/esm/index.js +22 -2
  34. package/dist/esm/index.js.map +1 -1
  35. package/dist/esm/types/index.js.map +1 -1
  36. package/dist/esm/utils/metadata-builder.js +12 -5
  37. package/dist/esm/utils/metadata-builder.js.map +1 -1
  38. package/dist/esm/utils/stop-reason-mapper.js +4 -0
  39. package/dist/esm/utils/stop-reason-mapper.js.map +1 -1
  40. package/dist/esm/utils/url-builder.js +32 -7
  41. package/dist/esm/utils/url-builder.js.map +1 -1
  42. package/dist/types/core/config/manager.d.ts.map +1 -1
  43. package/dist/types/core/tracking/payload-builder.d.ts.map +1 -1
  44. package/dist/types/index.d.ts +23 -2
  45. package/dist/types/index.d.ts.map +1 -1
  46. package/dist/types/types/index.d.ts +9 -13
  47. package/dist/types/types/index.d.ts.map +1 -1
  48. package/dist/types/types/openai-augmentation.d.ts +1 -2
  49. package/dist/types/types/openai-augmentation.d.ts.map +1 -1
  50. package/dist/types/utils/metadata-builder.d.ts +2 -1
  51. package/dist/types/utils/metadata-builder.d.ts.map +1 -1
  52. package/dist/types/utils/stop-reason-mapper.d.ts.map +1 -1
  53. package/dist/types/utils/url-builder.d.ts +11 -3
  54. package/dist/types/utils/url-builder.d.ts.map +1 -1
  55. package/examples/README.md +213 -255
  56. package/examples/azure-basic.ts +26 -14
  57. package/examples/azure-responses-basic.ts +39 -10
  58. package/examples/azure-responses-streaming.ts +39 -10
  59. package/examples/azure-streaming.ts +41 -20
  60. package/examples/getting_started.ts +54 -0
  61. package/examples/openai-basic.ts +39 -17
  62. package/examples/openai-function-calling.ts +259 -0
  63. package/examples/openai-responses-basic.ts +38 -9
  64. package/examples/openai-responses-streaming.ts +38 -9
  65. package/examples/openai-streaming.ts +24 -13
  66. package/examples/openai-vision.ts +289 -0
  67. package/package.json +3 -9
  68. package/src/core/config/azure-config.ts +72 -0
  69. package/src/core/config/index.ts +23 -0
  70. package/src/core/config/loader.ts +66 -0
  71. package/src/core/config/manager.ts +95 -0
  72. package/src/core/config/validator.ts +89 -0
  73. package/src/core/providers/detector.ts +159 -0
  74. package/src/core/providers/index.ts +16 -0
  75. package/src/core/tracking/api-client.ts +78 -0
  76. package/src/core/tracking/index.ts +21 -0
  77. package/src/core/tracking/payload-builder.ts +137 -0
  78. package/src/core/tracking/usage-tracker.ts +189 -0
  79. package/src/core/wrapper/index.ts +9 -0
  80. package/src/core/wrapper/instance-patcher.ts +288 -0
  81. package/src/core/wrapper/request-handler.ts +423 -0
  82. package/src/core/wrapper/stream-wrapper.ts +100 -0
  83. package/src/index.ts +360 -0
  84. package/src/types/function-parameters.ts +251 -0
  85. package/src/types/index.ts +310 -0
  86. package/src/types/openai-augmentation.ts +232 -0
  87. package/src/types/responses-api.ts +308 -0
  88. package/src/utils/azure-model-resolver.ts +220 -0
  89. package/src/utils/constants.ts +21 -0
  90. package/src/utils/error-handler.ts +251 -0
  91. package/src/utils/metadata-builder.ts +228 -0
  92. package/src/utils/provider-detection.ts +257 -0
  93. package/src/utils/request-handler-factory.ts +285 -0
  94. package/src/utils/stop-reason-mapper.ts +78 -0
  95. package/src/utils/type-guards.ts +202 -0
  96. package/src/utils/url-builder.ts +68 -0
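The new `src/core/wrapper` files in this release (notably `instance-patcher.ts`) implement the behavior the README describes: `patchOpenAIInstance` wraps a client's request methods so metering is emitted after each call without blocking the caller ("fire-and-forget"). A minimal sketch of that patching technique follows; every name in it (`FakeClient`, `track`, `patchInstance`, `metered`) is hypothetical and does not depend on the real package.

```typescript
// Illustrative sketch only: how instance patching with fire-and-forget
// metering can work. The shipped implementation lives in
// package/src/core/wrapper/instance-patcher.ts and differs in detail.

interface Usage {
  model: string;
  totalTokens: number;
}

// Stand-in for an SDK client; `FakeClient` is a made-up name for the demo.
class FakeClient {
  async create(params: { model: string }): Promise<{ usage: Usage }> {
    return { usage: { model: params.model, totalTokens: 42 } };
  }
}

const metered: Usage[] = [];

// Fire-and-forget: queue the metering work without making the caller await it.
function track(usage: Usage): void {
  void Promise.resolve().then(() => {
    metered.push(usage);
  });
}

// Replace the method on this one instance with a wrapper that forwards
// the call unchanged, then reports usage out-of-band.
function patchInstance(client: FakeClient): FakeClient {
  const original = client.create.bind(client);
  client.create = async (params: { model: string }) => {
    const response = await original(params);
    track(response.usage); // never awaited; failures must not reach the caller
    return response;
  };
  return client;
}

async function demo(): Promise<number> {
  const client = patchInstance(new FakeClient());
  await client.create({ model: 'gpt-4o-mini' });
  // Yield once so the queued metering microtask has run.
  await new Promise((resolve) => setTimeout(resolve, 0));
  return metered.length;
}
```

The design choice mirrored here is patching a single instance rather than the SDK prototype, which is why the README's deleted "affects all OpenAI instances globally" caveat no longer applies to the instance-based API.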
package/README.md CHANGED
@@ -3,6 +3,7 @@
  [![npm version](https://img.shields.io/npm/v/@revenium/openai.svg)](https://www.npmjs.com/package/@revenium/openai)
  [![Node.js](https://img.shields.io/badge/Node.js-16%2B-green)](https://nodejs.org/)
  [![Documentation](https://img.shields.io/badge/docs-revenium.io-blue)](https://docs.revenium.io)
+ [![Website](https://img.shields.io/badge/website-revenium.ai-blue)](https://www.revenium.ai)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
 
  **Transparent TypeScript middleware for automatic Revenium usage tracking with OpenAI**
@@ -12,475 +13,72 @@ A professional-grade Node.js middleware that seamlessly integrates with OpenAI a
  ## Features
 
  - **Seamless Integration** - Native TypeScript support, no type casting required
- - **Optional Metadata** - Track users, organizations, and custom metadata (all fields optional)
- - **Dual API Support** - Chat Completions API + new Responses API (OpenAI SDK 5.8+)
+ - **Optional Metadata** - Track users, organizations, and business context (12 predefined fields, all optional)
+ - **Dual API Support** - Chat Completions API + Responses API
  - **Azure OpenAI Support** - Full Azure OpenAI integration with automatic detection
  - **Type Safety** - Complete TypeScript support with IntelliSense
  - **Streaming Support** - Handles regular and streaming requests seamlessly
  - **Fire-and-Forget** - Never blocks your application flow
  - **Zero Configuration** - Auto-initialization from environment variables
 
- ## Package Migration
-
- This package has been renamed from `revenium-middleware-openai-node` to `@revenium/openai` for better organization and simpler naming.
-
- ### Migration Steps
-
- If you're upgrading from the old package:
-
- ```bash
- # Uninstall the old package
- npm uninstall revenium-middleware-openai-node
-
- # Install the new package
- npm install @revenium/openai
- ```
-
- **Update your imports:**
-
- ```typescript
- // Old import
- import { patchOpenAIInstance } from "revenium-middleware-openai-node";
-
- // New import
- import { patchOpenAIInstance } from "@revenium/openai";
- ```
-
- All functionality remains exactly the same - only the package name has changed.
-
  ## Getting Started
 
- Choose your preferred approach to get started quickly:
-
- ### Option 1: Create Project from Scratch
-
- Perfect for new projects. We'll guide you step-by-step from `mkdir` to running tests.
- [Go to Step-by-Step Guide](#option-1-create-project-from-scratch)
-
- ### Option 2: Clone Our Repository
-
- Clone and run the repository with working examples.
- [Go to Repository Guide](#option-2-clone-our-repository)
-
- ### Option 3: Add to Existing Project
-
- Already have a project? Just install and replace imports.
- [Go to Integration Guide](#option-3-existing-project-integration)
-
- ---
-
- ## Option 1: Create Project from Scratch
-
- ### Step 1: Create Project Directory
+ ### 1. Create Project Directory
 
  ```bash
- # Create and navigate to your project
+ # Create project directory and navigate to it
  mkdir my-openai-project
  cd my-openai-project
 
  # Initialize npm project
  npm init -y
- ```
 
- ### Step 2: Install Dependencies
-
- ```bash
- # Install the middleware and OpenAI SDK
- npm install @revenium/openai openai@^5.8.0 dotenv
-
- # For TypeScript projects (optional)
- npm install -D typescript tsx @types/node
+ # Install packages
+ npm install @revenium/openai openai dotenv tsx
+ npm install --save-dev typescript @types/node
  ```
 
- ### Step 3: Setup Environment Variables
+ ### 2. Configure Environment Variables
 
- Create a `.env` file in your project root:
+ Create a `.env` file:
 
- ```bash
- # Create .env file
- echo. > .env # On Windows (CMD)
- touch .env # On Mac/Linux
- # OR PowerShell
- New-Item -Path .env -ItemType File
- ```
-
- Copy and paste the following into `.env`:
+ **NOTE: YOU MUST REPLACE THE PLACEHOLDERS WITH YOUR OWN API KEYS**
 
  ```env
- # Revenium OpenAI Middleware Configuration
- # Copy this file to .env and fill in your actual values
-
- # Required: Your Revenium API key (starts with hak_)
- REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
- REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
-
- # Required: Your OpenAI API key (starts with sk-)
- OPENAI_API_KEY=sk_your_openai_api_key_here
-
- # Optional: Your Azure OpenAI configuration (for Azure testing)
- AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
- AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
- AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
- AZURE_OPENAI_API_VERSION=2024-12-01-preview
-
- # Optional: Enable debug logging
- REVENIUM_DEBUG=false
- ```
-
130
- **NOTE**: Replace each `your_..._here` with your actual values.
131
-
132
- **IMPORTANT**: Ensure your `REVENIUM_METERING_API_KEY` matches your `REVENIUM_METERING_BASE_URL` environment. Mismatched credentials will cause authentication failures.
133
-
134
- ### Step 4: Protect Your API Keys
135
-
136
- **CRITICAL SECURITY**: Never commit your `.env` file to version control!
137
-
138
- Your `.env` file contains sensitive API keys that must be kept secret:
139
-
140
- ```bash
141
- # Verify .env is in your .gitignore
142
- git check-ignore .env
143
- ```
144
-
145
- If the command returns nothing, add `.env` to your `.gitignore`:
146
-
147
- ```gitignore
148
- # Environment variables
149
- .env
150
- .env.*
151
- !.env.example
152
- ```
153
-
154
- **Best Practice**: Use GitHub's standard Node.gitignore as a starting point:
155
- - Reference: https://github.com/github/gitignore/blob/main/Node.gitignore
156
-
157
- **Warning:** The following command will overwrite your current `.gitignore` file.
158
- To avoid losing custom rules, back up your file first or append instead:
159
- `curl https://raw.githubusercontent.com/github/gitignore/main/Node.gitignore >> .gitignore`
160
-
161
- **Note:** Appending may result in duplicate entries if your `.gitignore` already contains some of the patterns from Node.gitignore.
162
- Please review your `.gitignore` after appending and remove any duplicate lines as needed.
163
-
164
- This protects your OpenAI API key, Revenium API key, and any other secrets from being accidentally committed to your repository.
165
-
166
- ### Step 5: Create Your First Test
167
-
168
- #### TypeScript Test
169
-
170
- Create `test-openai.ts`:
171
-
172
- ```typescript
173
- import 'dotenv/config';
174
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
175
- import OpenAI from 'openai';
176
-
177
- async function testOpenAI() {
178
- try {
179
- // Initialize Revenium middleware
180
- const initResult = initializeReveniumFromEnv();
181
- if (!initResult.success) {
182
- console.error(' Failed to initialize Revenium:', initResult.message);
183
- process.exit(1);
184
- }
185
-
186
- // Create and patch OpenAI instance
187
- const openai = patchOpenAIInstance(new OpenAI());
188
-
189
- const response = await openai.chat.completions.create({
190
- model: 'gpt-4o-mini',
191
- max_tokens: 100,
192
- messages: [{ role: 'user', content: 'What is artificial intelligence?' }],
193
- usageMetadata: {
194
- subscriber: {
195
- id: 'user-456',
196
- email: 'user@demo-org.com',
197
- credential: {
198
- name: 'demo-api-key',
199
- value: 'demo-key-123',
200
- },
201
- },
202
- organizationId: 'demo-org-123',
203
- productId: 'ai-assistant-v2',
204
- taskType: 'educational-query',
205
- agent: 'openai-basic-demo',
206
- traceId: 'session-' + Date.now(),
207
- },
208
- });
209
-
210
- const text = response.choices[0]?.message?.content || 'No response';
211
- console.log('Response:', text);
212
- } catch (error) {
213
- console.error('Error:', error);
214
- }
215
- }
216
-
217
- testOpenAI();
218
- ```
219
-
220
- #### JavaScript Test
221
-
222
- Create `test-openai.js`:
223
-
224
- ```javascript
225
- require('dotenv').config();
226
- const {
227
- initializeReveniumFromEnv,
228
- patchOpenAIInstance,
229
- } = require('@revenium/openai');
230
- const OpenAI = require('openai');
231
-
232
- async function testOpenAI() {
233
- try {
234
- // Initialize Revenium middleware
235
- const initResult = initializeReveniumFromEnv();
236
- if (!initResult.success) {
237
- console.error(' Failed to initialize Revenium:', initResult.message);
238
- process.exit(1);
239
- }
240
-
241
- // Create and patch OpenAI instance
242
- const openai = patchOpenAIInstance(new OpenAI());
243
-
244
- const response = await openai.chat.completions.create({
245
- model: 'gpt-4o-mini',
246
- max_tokens: 100,
247
- messages: [{ role: 'user', content: 'What is artificial intelligence?' }],
248
- usageMetadata: {
249
- subscriber: {
250
- id: 'user-456',
251
- email: 'user@demo-org.com',
252
- },
253
- organizationId: 'demo-org-123',
254
- taskType: 'educational-query',
255
- },
256
- });
257
-
258
- const text = response.choices[0]?.message?.content || 'No response';
259
- console.log('Response:', text);
260
- } catch (error) {
261
- // Handle error appropriately
262
- }
263
- }
264
-
265
- testOpenAI();
266
- ```
267
-
268
- ### Step 6: Add Package Scripts
269
-
270
- Update your `package.json`:
271
-
272
- ```json
273
- {
274
- "name": "my-openai-project",
275
- "version": "1.0.0",
276
- "type": "commonjs",
277
- "scripts": {
278
- "test-ts": "npx tsx test-openai.ts",
279
- "test-js": "node test-openai.js"
280
- },
281
- "dependencies": {
282
- "@revenium/openai": "^1.0.11",
283
- "openai": "^5.8.0",
284
- "dotenv": "^16.5.0"
285
- }
286
- }
287
- ```
288
-
289
- ### Step 7: Run Your Tests
290
-
291
- ```bash
292
- # Test TypeScript version
293
- npm run test-ts
294
-
295
- # Test JavaScript version
296
- npm run test-js
297
- ```
298
-
299
- ### Step 8: Project Structure
300
-
301
- Your project should now look like this:
302
-
303
- ```
304
- my-openai-project/
305
- ├── .env # Environment variables
306
- ├── .gitignore # Git ignore file
307
- ├── package.json # Project configuration
308
- ├── test-openai.ts # TypeScript test
309
- └── test-openai.js # JavaScript test
310
- ```
311
-
312
- ## Option 2: Clone Our Repository
313
-
314
- ### Step 1: Clone the Repository
315
-
316
- ```bash
317
- # Clone the repository
318
- git clone git@github.com:revenium/revenium-middleware-openai-node.git
319
- cd revenium-middleware-openai-node
320
- ```
321
-
322
- ### Step 2: Install Dependencies
323
-
324
- ```bash
325
- # Install all dependencies
326
- npm install
327
- npm install @revenium/openai
328
- ```
329
-
330
- ### Step 3: Setup Environment Variables
331
-
332
- Create a `.env` file in the project root:
333
-
334
- ```bash
335
- # Create .env file
336
- cp .env.example .env # If available, or create manually
337
- ```
338
-
339
- Copy and paste the following into `.env`:
340
-
341
- ```bash
342
- # Revenium OpenAI Middleware Configuration
343
- # Copy this file to .env and fill in your actual values
344
-
345
- # Required: Your Revenium API key (starts with hak_)
48
+ REVENIUM_METERING_BASE_URL=https://api.revenium.ai
346
49
  REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
347
- REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
348
-
349
- # Required: Your OpenAI API key (starts with sk-)
350
50
  OPENAI_API_KEY=sk_your_openai_api_key_here
351
-
352
- # Optional: Your Azure OpenAI configuration (for Azure testing)
353
- AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
354
- AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
355
- AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
356
- AZURE_OPENAI_API_VERSION=2024-12-01-preview
357
-
358
- # Optional: Enable debug logging
359
- REVENIUM_DEBUG=false
360
51
  ```
361
52
 
362
- **IMPORTANT**: Ensure your `REVENIUM_METERING_API_KEY` matches your `REVENIUM_METERING_BASE_URL` environment. Mismatched credentials will cause authentication failures.
53
+ ### 3. Run Your First Example
363
54
 
364
- ### Step 4: Build the Project
55
+ Run the [getting started example](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/getting_started.ts):
365
56
 
366
57
  ```bash
367
- # Build the middleware
368
- npm run build
58
+ npx tsx node_modules/@revenium/openai/examples/getting_started.ts
369
59
  ```
370
60
 
371
- ### Step 5: Run the Examples
372
-
373
- The repository includes working example files:
61
+ Or with debug logging:
374
62
 
375
63
  ```bash
376
- # Run Chat Completions API examples (using npm scripts)
377
- npm run example:openai-basic
378
- npm run example:openai-streaming
379
- npm run example:azure-basic
380
- npm run example:azure-streaming
381
-
382
- # Run Responses API examples (available with OpenAI SDK 5.8+)
383
- npm run example:openai-responses-basic
384
- npm run example:openai-responses-streaming
385
- npm run example:azure-responses-basic
386
- npm run example:azure-responses-streaming
387
-
388
- # Or run examples directly with tsx
389
- npx tsx examples/openai-basic.ts
390
- npx tsx examples/openai-streaming.ts
391
- npx tsx examples/azure-basic.ts
392
- npx tsx examples/azure-streaming.ts
393
- npx tsx examples/openai-responses-basic.ts
394
- npx tsx examples/openai-responses-streaming.ts
395
- npx tsx examples/azure-responses-basic.ts
396
- npx tsx examples/azure-responses-streaming.ts
397
- ```
398
-
399
- These examples demonstrate:
400
-
401
- - **Chat Completions API** - Traditional OpenAI chat completions and embeddings
402
- - **Responses API** - New OpenAI Responses API with enhanced capabilities
403
- - **Azure OpenAI** - Full Azure OpenAI integration with automatic detection
404
- - **Streaming Support** - Real-time response streaming with metadata tracking
405
- - **Optional Metadata** - Rich business context and user tracking
406
- - **Error Handling** - Robust error handling and debugging
407
-
408
- ## Option 3: Existing Project Integration
64
+ # Linux/macOS
65
+ REVENIUM_DEBUG=true npx tsx node_modules/@revenium/openai/examples/getting_started.ts
409
66
 
- Already have a project? Just install and replace imports:
-
- ### Step 1: Install the Package
-
- ```bash
- npm install @revenium/openai
+ # Windows (PowerShell)
+ $env:REVENIUM_DEBUG="true"; npx tsx node_modules/@revenium/openai/examples/getting_started.ts
  ```
 
- ### Step 2: Update Your Imports
-
- **Before:**
-
- ```typescript
- import OpenAI from 'openai';
-
- const openai = new OpenAI();
- ```
+ **For more examples and usage patterns, see [examples/README.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/README.md).**
 
- **After:**
-
- ```typescript
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
- import OpenAI from 'openai';
-
- // Initialize Revenium middleware
- initializeReveniumFromEnv();
-
- // Patch your OpenAI instance
- const openai = patchOpenAIInstance(new OpenAI());
- ```
-
- ### Step 3: Add Environment Variables
-
- Add to your `.env` file:
-
- ```env
- # Revenium OpenAI Middleware Configuration
-
- # Required: Your Revenium API key (starts with hak_)
- REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
- REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
-
- # Required: Your OpenAI API key (starts with sk-)
- OPENAI_API_KEY=sk_your_openai_api_key_here
-
- # Optional: Your Azure OpenAI configuration (for Azure testing)
- AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
- AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
- AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
- AZURE_OPENAI_API_VERSION=2024-12-01-preview
-
- # Optional: Enable debug logging
- REVENIUM_DEBUG=false
- ```
-
- ### Step 4: Optional - Add Metadata
+ ---
 
- Enhance your existing calls with optional metadata:
+ ## Requirements
 
- ```typescript
- // Your existing code works unchanged
- const response = await openai.chat.completions.create({
-   model: 'gpt-4o-mini',
-   messages: [{ role: 'user', content: 'Hello!' }],
-   // Add optional metadata for better analytics
-   usageMetadata: {
-     subscriber: { id: 'user-123' },
-     organizationId: 'my-company',
-     taskType: 'chat',
-   },
- });
- ```
+ - Node.js 16+
+ - OpenAI package v5.0.0 or later
+ - TypeScript 5.0+ (for TypeScript projects)
 
- **That's it!** Your existing OpenAI code now automatically tracks usage to Revenium.
+ ---
 
  ## What Gets Tracked
 
@@ -498,7 +96,7 @@ The middleware automatically captures comprehensive usage data:
  - **User Tracking** - Subscriber ID, email, credentials
  - **Organization Data** - Organization ID, subscription ID, product ID
  - **Task Classification** - Task type, agent identifier, trace ID
- - **Quality Metrics** - Response quality scores, custom metadata
+ - **Quality Metrics** - Response quality scores, task identifiers
 
  ### **Technical Details**
 
@@ -507,443 +105,104 @@ The middleware automatically captures comprehensive usage data:
  - **Error Tracking** - Failed requests, error types, retry attempts
  - **Environment Info** - Development vs production usage
 
- ## OpenAI Responses API Support
-
- This middleware includes **full support** for OpenAI's new Responses API, which is designed to replace the traditional Chat Completions API with enhanced capabilities for agent-like applications.
-
- ### What is the Responses API?
-
- The Responses API is OpenAI's new stateful API that:
-
- - Uses `input` instead of `messages` parameter for simplified interaction
- - Provides unified experience combining chat completions and assistants capabilities
- - Supports advanced features like background tasks, function calling, and code interpreter
- - Offers better streaming and real-time response generation
- - Works with GPT-5 and other advanced models
-
- ### API Comparison
-
- **Traditional Chat Completions:**
-
- ```javascript
- const response = await openai.chat.completions.create({
-   model: 'gpt-4o',
-   messages: [{ role: 'user', content: 'Hello' }],
- });
- ```
-
- **New Responses API:**
-
- ```javascript
- const response = await openai.responses.create({
-   model: 'gpt-5',
-   input: 'Hello', // Simplified input parameter
- });
- ```
-
- ### Key Differences
-
- | Feature                | Chat Completions             | Responses API                       |
- | ---------------------- | ---------------------------- | ----------------------------------- |
- | **Input Format**       | `messages: [...]`            | `input: "string"` or `input: [...]` |
- | **Models**             | GPT-4, GPT-4o, etc.          | GPT-5, GPT-4o, etc.                 |
- | **Response Structure** | `choices[0].message.content` | `output_text`                       |
- | **Stateful**           | No                           | Yes (with `store: true`)            |
- | **Advanced Features**  | Limited                      | Built-in tools, reasoning, etc.     |
- | **Temperature**        | Supported                    | Not supported with GPT-5            |
-
- ### Requirements & Installation
-
- **OpenAI SDK Version:**
-
- - **Minimum:** `5.8.0` (when Responses API was officially released)
- - **Recommended:** `5.8.2` or later (tested and verified)
- - **Current:** `6.2.0` (latest available)
-
- **Installation:**
-
- ```bash
- # Install latest version with Responses API support
- npm install openai@^5.8.0
-
- # Or install specific tested version
- npm install openai@5.8.2
- ```
-
- ### Current Status
-
- **The Responses API is officially available in OpenAI SDK 5.8+**
-
- **Official Release:**
-
- - Released by OpenAI in SDK version 5.8.0
- - Fully documented in official OpenAI documentation
- - Production-ready with GPT-5 and other supported models
- - Complete middleware support with Revenium integration
-
- **Middleware Features:**
-
- - Full Responses API support (streaming & non-streaming)
- - Seamless metadata tracking identical to Chat Completions
- - Type-safe TypeScript integration
- - Complete token tracking including reasoning tokens
- - Azure OpenAI compatibility
-
- **References:**
-
- - [OpenAI Responses API Documentation](https://platform.openai.com/docs/guides/migrate-to-responses)
- - [Azure OpenAI Responses API Documentation](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/responses)
-
- ### Responses API Examples
-
- The middleware includes comprehensive examples for the new Responses API:
-
- **Basic Usage:**
-
- ```typescript
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
- import OpenAI from 'openai';
-
- // Initialize and patch OpenAI instance
- initializeReveniumFromEnv();
- const openai = patchOpenAIInstance(new OpenAI());
+ ## Advanced Usage
 
- // Simple string input
- const response = await openai.responses.create({
-   model: 'gpt-5',
-   input: 'What is the capital of France?',
-   max_output_tokens: 150,
-   usageMetadata: {
-     subscriber: { id: 'user-123', email: 'user@example.com' },
-     organizationId: 'org-456',
-     productId: 'quantum-explainer',
-     taskType: 'educational-content',
-   },
- });
+ ### Initialization Options
 
- console.log(response.output_text); // "Paris."
- ```
+ The middleware supports three initialization patterns:
 
- **Streaming Example:**
+ **Automatic (Recommended)** - Import and patch OpenAI instance:
 
  ```typescript
- const stream = await openai.responses.create({
-   model: 'gpt-5',
-   input: 'Write a short story about AI',
-   stream: true,
-   max_output_tokens: 500,
-   usageMetadata: {
-     subscriber: { id: 'user-123', email: 'user@example.com' },
-     organizationId: 'org-456',
-   },
- });
-
- for await (const chunk of stream) {
-   process.stdout.write(chunk.delta?.content || '');
- }
- ```
-
- ### Adding Custom Metadata
-
- Track users, organizations, and custom data with seamless TypeScript integration:
-
- ```typescript
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
+ import { patchOpenAIInstance } from '@revenium/openai';
  import OpenAI from 'openai';
 
- // Initialize and patch OpenAI instance
- initializeReveniumFromEnv();
  const openai = patchOpenAIInstance(new OpenAI());
-
- const response = await openai.chat.completions.create({
-   model: 'gpt-4',
-   messages: [{ role: 'user', content: 'Summarize this document' }],
-   // Add custom tracking metadata - all fields optional, no type casting needed!
-   usageMetadata: {
-     subscriber: {
-       id: 'user-12345',
-       email: 'john@acme-corp.com',
-     },
-     organizationId: 'acme-corp',
-     productId: 'document-ai',
-     taskType: 'document-summary',
-     agent: 'doc-summarizer-v2',
-     traceId: 'session-abc123',
-   },
- });
-
- // Same metadata works with Responses API
- const responsesResult = await openai.responses.create({
-   model: 'gpt-5',
-   input: 'Summarize this document',
-   // Same metadata structure - seamless compatibility!
-   usageMetadata: {
-     subscriber: {
-       id: 'user-12345',
-       email: 'john@acme-corp.com',
-     },
-     organizationId: 'acme-corp',
-     productId: 'document-ai',
-     taskType: 'document-summary',
-     agent: 'doc-summarizer-v2',
-     traceId: 'session-abc123',
-   },
- });
+ // Tracking works automatically if env vars are set
  ```
 
694
- ### Streaming Support
695
-
696
- The middleware automatically handles streaming requests with seamless metadata:
124
+ **Explicit** - Call `initializeReveniumFromEnv()` for error handling control:
697
125
 
698
126
  ```typescript
699
127
  import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
700
128
  import OpenAI from 'openai';
701
129
 
702
- // Initialize and patch OpenAI instance
703
- initializeReveniumFromEnv();
704
- const openai = patchOpenAIInstance(new OpenAI());
705
-
706
- const stream = await openai.chat.completions.create({
707
- model: 'gpt-4',
708
- messages: [{ role: 'user', content: 'Tell me a story' }],
709
- stream: true,
710
- // Metadata works seamlessly with streaming - all fields optional!
711
- usageMetadata: {
712
- organizationId: 'story-app',
713
- taskType: 'creative-writing',
714
- },
715
- });
716
-
717
- for await (const chunk of stream) {
718
- process.stdout.write(chunk.choices[0]?.delta?.content || '');
130
+ const result = initializeReveniumFromEnv();
131
+ if (!result.success) {
132
+ console.error('Failed to initialize:', result.message);
133
+ process.exit(1);
719
134
  }
- // Usage tracking happens automatically when stream completes
- ```
-
- ### Temporarily Disabling Tracking
-
- If you need to disable Revenium tracking temporarily, you can unpatch the OpenAI client:
-
- ```javascript
- import { unpatchOpenAI, patchOpenAI } from '@revenium/openai-middleware';
-
- // Disable tracking
- unpatchOpenAI();
-
- // Your OpenAI calls now bypass Revenium tracking
- await openai.chat.completions.create({...});
-
- // Re-enable tracking
- patchOpenAI();
- ```
-
- **Common use cases:**
-
- - **Debugging**: Isolate whether issues are caused by the middleware
- - **Testing**: Compare behavior with/without tracking
- - **Conditional tracking**: Enable/disable based on environment
- - **Troubleshooting**: Temporary bypass during incident response
-
- **Note**: This affects all OpenAI instances globally since we patch the prototype methods.
-
- ## Azure OpenAI Integration
-
- **Azure OpenAI support** The middleware automatically detects Azure OpenAI clients and provides accurate usage tracking and cost calculation.
-
- ### Quick Start with Azure OpenAI
-
- ```bash
- # Set your Azure OpenAI environment variables
- export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
- export AZURE_OPENAI_API_KEY="your-azure-api-key"
- export AZURE_OPENAI_DEPLOYMENT="gpt-4o" # Your deployment name
- export AZURE_OPENAI_API_VERSION="2024-12-01-preview" # Optional, defaults to latest
-
- # Set your Revenium credentials
- export REVENIUM_METERING_API_KEY="hak_your_revenium_api_key"
- # export REVENIUM_METERING_BASE_URL="https://api.revenium.io/meter" # Optional: defaults to this URL
- ```
-
- ```typescript
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
- import { AzureOpenAI } from 'openai';
-
- // Initialize Revenium middleware
- initializeReveniumFromEnv();
-
- // Create and patch Azure OpenAI client
- const azure = patchOpenAIInstance(
-   new AzureOpenAI({
-     endpoint: process.env.AZURE_OPENAI_ENDPOINT,
-     apiKey: process.env.AZURE_OPENAI_API_KEY,
-     apiVersion: process.env.AZURE_OPENAI_API_VERSION,
-   })
- );
-
- // Your existing Azure OpenAI code works with seamless metadata
- const response = await azure.chat.completions.create({
-   model: 'gpt-4o', // Uses your deployment name
-   messages: [{ role: 'user', content: 'Hello from Azure!' }],
-   // Optional metadata with native TypeScript support
-   usageMetadata: {
-     organizationId: 'my-company',
-     taskType: 'azure-chat',
-   },
- });
-
- console.log(response.choices[0].message.content);
- ```
 
- ### Azure Features
-
- - **Automatic Detection**: Detects Azure OpenAI clients automatically
- - **Model Name Resolution**: Maps Azure deployment names to standard model names for accurate pricing
- - **Provider Metadata**: Correctly tags requests with `provider: "Azure"` and `modelSource: "OPENAI"`
- - **Deployment Support**: Works with any Azure deployment name (simple or complex)
- - **Endpoint Flexibility**: Supports all Azure OpenAI endpoint formats
- - **Zero Code Changes**: Existing Azure OpenAI code works without modification
-
- ### Azure Environment Variables
-
- | Variable | Required | Description | Example |
- | -------------------------- | -------- | ---------------------------------------------- | ------------------------------------ |
- | `AZURE_OPENAI_ENDPOINT` | Yes | Your Azure OpenAI endpoint URL | `https://acme.openai.azure.com/` |
- | `AZURE_OPENAI_API_KEY` | Yes | Your Azure OpenAI API key | `abc123...` |
- | `AZURE_OPENAI_DEPLOYMENT` | No | Default deployment name | `gpt-4o` or `text-embedding-3-large` |
- | `AZURE_OPENAI_API_VERSION` | No | API version (defaults to `2024-12-01-preview`) | `2024-12-01-preview` |
-
- ### Azure Model Name Resolution
-
- The middleware automatically maps Azure deployment names to standard model names for accurate pricing:
-
- ```typescript
- // Azure deployment names → Standard model names for pricing
- "gpt-4o-2024-11-20" → "gpt-4o"
- "gpt4o-prod" → "gpt-4o"
- "o4-mini" → "gpt-4o-mini"
- "gpt-35-turbo-dev" → "gpt-3.5-turbo"
- "text-embedding-3-large" → "text-embedding-3-large" // Direct match
- "embedding-3-large" → "text-embedding-3-large"
+ const openai = patchOpenAIInstance(new OpenAI());
  ```
 
- ## Advanced Usage
+ **Manual** - Use `configure()` to set all options programmatically (see Manual Configuration below).
 
- ### Streaming with Metadata
+ For detailed examples of all initialization patterns, see [`examples/`](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/README.md).
 
- The middleware seamlessly handles streaming requests with full metadata support:
+ ### Streaming Responses
 
- ```typescript
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
- import OpenAI from 'openai';
+ Streaming is fully supported with real-time token tracking. The middleware automatically tracks streaming responses without any additional configuration.
 
- initializeReveniumFromEnv();
- const openai = patchOpenAIInstance(new OpenAI());
+ See [`examples/openai-streaming.ts`](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/openai-streaming.ts) and [`examples/azure-streaming.ts`](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/azure-streaming.ts) for working streaming examples.
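The streaming examples all use the standard `for await` consumption pattern. A minimal runnable sketch of that pattern, with a stubbed async generator standing in for the real `stream: true` API call (so it runs offline, no API key required):

```typescript
// Sketch of the streaming consumption pattern used by the examples.
// The stream is stubbed here; in real code it comes from
// openai.chat.completions.create({ ..., stream: true }).
type Chunk = { choices: Array<{ delta: { content?: string } }> };

async function* fakeStream(): AsyncGenerator<Chunk> {
  for (const piece of ['Once ', 'upon ', 'a ', 'time.']) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}

async function collect(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    // Each chunk carries an incremental delta; append it as it arrives
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text; // with the real client, usage tracking fires when the stream completes
}

collect(fakeStream()).then((text) => console.log(text)); // prints "Once upon a time."
```

With a patched client the loop body is identical; only the source of the stream changes.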
 
- // Chat Completions API streaming
- const stream = await openai.chat.completions.create({
-   model: 'gpt-4o-mini',
-   messages: [{ role: 'user', content: 'Tell me a story' }],
-   stream: true,
-   usageMetadata: {
-     subscriber: { id: 'user-123', email: 'user@example.com' },
-     organizationId: 'story-app',
-     taskType: 'creative-writing',
-     traceId: 'session-' + Date.now(),
-   },
- });
+ ### Custom Metadata Tracking
 
- for await (const chunk of stream) {
-   process.stdout.write(chunk.choices[0]?.delta?.content || '');
- }
- // Usage tracking happens automatically when stream completes
- ```
+ Add business context to track usage by organization, user, task type, or custom identifiers. Pass a `usageMetadata` object with any of these optional fields:
 
- ### Responses API with Metadata
+ | Field | Description | Use Case |
+ |-------|-------------|----------|
+ | `traceId` | Unique identifier for session or conversation tracking | Link multiple API calls together for debugging, user session analytics, or distributed tracing across services |
+ | `taskType` | Type of AI task being performed | Categorize usage by workload (e.g., "chat", "code-generation", "doc-summary") for cost analysis and optimization |
+ | `subscriber.id` | Unique user identifier | Track individual user consumption for billing, rate limiting, or user analytics |
+ | `subscriber.email` | User email address | Identify users for support, compliance, or usage reports |
+ | `subscriber.credential.name` | Authentication credential name | Track which API key or service account made the request |
+ | `subscriber.credential.value` | Authentication credential value | Associate usage with specific credentials for security auditing |
+ | `organizationId` | Organization or company identifier | Multi-tenant cost allocation, usage quotas per organization |
+ | `subscriptionId` | Subscription plan identifier | Track usage against subscription limits, identify plan upgrade opportunities |
+ | `productId` | Your product or feature identifier | Attribute AI costs to specific features in your application (e.g., "chatbot", "email-assistant") |
+ | `agent` | AI agent or bot identifier | Distinguish between multiple AI agents or automation workflows in your system |
+ | `responseQualityScore` | Custom quality rating (0.0-1.0) | Track user satisfaction or automated quality metrics for model performance analysis |
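For illustration, a populated `usageMetadata` object combining the fields in the table might look like the sketch below; every value is a placeholder, and the `UsageMetadata` shape mirrors the documented fields (all optional):

```typescript
// Illustrative usageMetadata payload; all fields optional, all values placeholders.
interface UsageMetadata {
  traceId?: string;                // session or conversation ID
  taskType?: string;               // e.g. "chat", "doc-summary"
  subscriber?: {
    id?: string;
    email?: string;
    credential?: { name?: string; value?: string };
  };
  organizationId?: string;
  subscriptionId?: string;
  productId?: string;
  agent?: string;
  responseQualityScore?: number;   // 0.0-1.0
}

const usageMetadata: UsageMetadata = {
  traceId: `session-${Date.now()}`,
  taskType: 'chat',
  subscriber: { id: 'user-123', email: 'user@example.com' },
  organizationId: 'acme-corp',
  productId: 'chatbot',
  responseQualityScore: 0.9,
};

console.log(usageMetadata.organizationId);
```

The object is passed as the `usageMetadata` property on any `create(...)` call made through a patched client.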
 
- Full support for OpenAI's new Responses API:
+ **Resources:**
+ - [API Reference](https://revenium.readme.io/reference/meter_ai_completion) - Complete metadata field documentation
 
- ```typescript
- // Simple string input with metadata
- const response = await openai.responses.create({
-   model: 'gpt-5',
-   input: 'What is the capital of France?',
-   max_output_tokens: 150,
-   usageMetadata: {
-     subscriber: { id: 'user-123', email: 'user@example.com' },
-     organizationId: 'org-456',
-     productId: 'geography-tutor',
-     taskType: 'educational-query',
-   },
- });
+ ### OpenAI Responses API
+ **Use case:** Using OpenAI's Responses API with string inputs and a simplified interface.
 
- console.log(response.output_text); // "Paris."
- ```
+ See working examples:
+ - `examples/openai-responses-basic.ts` - Basic Responses API usage
+ - `examples/openai-responses-streaming.ts` - Streaming with Responses API
 
  ### Azure OpenAI Integration
+ **Use case:** Automatic Azure OpenAI detection with deployment name resolution and accurate pricing.
 
- Automatic Azure OpenAI detection with seamless metadata:
-
- ```typescript
- import { AzureOpenAI } from 'openai';
-
- // Create and patch Azure OpenAI client
- const azure = patchOpenAIInstance(
-   new AzureOpenAI({
-     endpoint: process.env.AZURE_OPENAI_ENDPOINT,
-     apiKey: process.env.AZURE_OPENAI_API_KEY,
-     apiVersion: process.env.AZURE_OPENAI_API_VERSION,
-   })
- );
-
- // Your existing Azure OpenAI code works with seamless metadata
- const response = await azure.chat.completions.create({
-   model: 'gpt-4o', // Uses your deployment name
-   messages: [{ role: 'user', content: 'Hello from Azure!' }],
-   usageMetadata: {
-     organizationId: 'my-company',
-     taskType: 'azure-chat',
-     agent: 'azure-assistant',
-   },
- });
- ```
+ See working examples:
+ - `examples/azure-basic.ts` - Azure chat completions and embeddings
+ - `examples/azure-responses-basic.ts` - Azure Responses API integration
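To illustrate the deployment-name resolution mentioned above, here is a hypothetical sketch (not the middleware's actual implementation); the alias mappings are taken from the resolution examples documented in the previous version of this README:

```typescript
// Hypothetical sketch of Azure deployment-name resolution; the real logic
// is internal to the middleware. Aliases follow the previously documented
// mapping examples.
const DEPLOYMENT_ALIASES: Record<string, string> = {
  'gpt4o-prod': 'gpt-4o',
  'o4-mini': 'gpt-4o-mini',
  'gpt-35-turbo-dev': 'gpt-3.5-turbo',
  'embedding-3-large': 'text-embedding-3-large',
};

function resolveDeployment(name: string): string {
  if (DEPLOYMENT_ALIASES[name]) return DEPLOYMENT_ALIASES[name];
  // Strip a trailing date suffix, e.g. "gpt-4o-2024-11-20" -> "gpt-4o";
  // names like "text-embedding-3-large" pass through unchanged.
  return name.replace(/-\d{4}-\d{2}-\d{2}$/, '');
}

console.log(resolveDeployment('gpt-4o-2024-11-20')); // prints "gpt-4o"
```

The practical point is that pricing is computed against the resolved standard model name, whatever your deployment is called.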
 
  ### Embeddings with Metadata
+ **Use case:** Track embeddings usage for search engines, RAG systems, and document processing.
 
- Track embeddings usage with optional metadata:
-
- ```typescript
- const embedding = await openai.embeddings.create({
-   model: 'text-embedding-3-small',
-   input: 'Advanced text embedding with comprehensive tracking metadata',
-   usageMetadata: {
-     subscriber: { id: 'embedding-user-789', email: 'embeddings@company.com' },
-     organizationId: 'my-company',
-     taskType: 'document-embedding',
-     productId: 'search-engine',
-     traceId: `embed-${Date.now()}`,
-     agent: 'openai-embeddings-node',
-   },
- });
-
- console.log('Model:', embedding.model);
- console.log('Usage:', embedding.usage);
- console.log('Embedding dimensions:', embedding.data[0]?.embedding.length);
- ```
+ Embeddings examples are included in:
+ - `examples/openai-basic.ts` - Text embeddings with metadata
+ - `examples/openai-streaming.ts` - Batch embeddings processing
 
  ### Manual Configuration
 
  For advanced use cases, configure the middleware manually:
 
  ```typescript
- import { configure } from '@revenium/openai';
+ import { configure, patchOpenAIInstance } from '@revenium/openai';
+ import OpenAI from 'openai';
 
  configure({
    reveniumApiKey: 'hak_your_api_key',
-   reveniumBaseUrl: 'https://api.revenium.io/meter',
-   apiTimeout: 5000,
-   failSilent: true,
-   maxRetries: 3,
+   reveniumBaseUrl: 'https://api.revenium.ai',
+   debug: true,
  });
+
+ const openai = patchOpenAIInstance(new OpenAI());
  ```
 
  ## Configuration Options
 
@@ -954,7 +213,7 @@ configure({
  | ------------------------------ | -------- | ------------------------------- | ---------------------------------------------- |
  | `REVENIUM_METERING_API_KEY` | true | - | Your Revenium API key (starts with `hak_`) |
  | `OPENAI_API_KEY` | true | - | Your OpenAI API key (starts with `sk-`) |
- | `REVENIUM_METERING_BASE_URL` | false | `https://api.revenium.io/meter` | Revenium metering API base URL |
+ | `REVENIUM_METERING_BASE_URL` | false | `https://api.revenium.ai` | Revenium metering API base URL |
  | `REVENIUM_DEBUG` | false | `false` | Enable debug logging (`true`/`false`) |
  | `AZURE_OPENAI_ENDPOINT` | false | - | Azure OpenAI endpoint URL (for Azure testing) |
  | `AZURE_OPENAI_API_KEY` | false | - | Azure OpenAI API key (for Azure testing) |
@@ -963,65 +222,34 @@ configure({
 
  **Important Note about `REVENIUM_METERING_BASE_URL`:**
 
- - This variable is **optional** and defaults to the production URL (`https://api.revenium.io/meter`)
+ - This variable is **optional** and defaults to the production URL (`https://api.revenium.ai`)
  - If you don't set it explicitly, the middleware will use the default production endpoint
  - However, you may see console warnings or errors if the middleware cannot determine the correct environment
  - **Best practice:** Always set this variable explicitly to match your environment:
 
  ```bash
  # Default production URL (recommended)
- REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
+ REVENIUM_METERING_BASE_URL=https://api.revenium.ai
  ```
 
  - **Remember:** Your `REVENIUM_METERING_API_KEY` must match your base URL environment
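Putting the variables above together, a minimal shell setup looks like the following (all keys are placeholders):

```bash
# Placeholder keys - substitute your real values
export REVENIUM_METERING_API_KEY="hak_your_api_key"
export OPENAI_API_KEY="sk-your_openai_key"
export REVENIUM_METERING_BASE_URL="https://api.revenium.ai"  # optional; this is the default
export REVENIUM_DEBUG=true                                   # optional; verbose logging
```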
 
- ### Usage Metadata Options
-
- All metadata fields are optional and help provide better analytics:
-
- ```typescript
- interface UsageMetadata {
-   traceId?: string; // Session or conversation ID
-   taskType?: string; // Type of AI task (e.g., "chat", "summary")
-   subscriber?: {
-     // User information (nested structure)
-     id?: string; // User ID from your system
-     email?: string; // User's email address
-     credential?: {
-       // User credentials
-       name?: string; // Credential name
-       value?: string; // Credential value
-     };
-   };
-   organizationId?: string; // Organization/company ID
-   subscriptionId?: string; // Billing plan ID
-   productId?: string; // Your product/feature ID
-   agent?: string; // AI agent identifier
-   responseQualityScore?: number; // Quality score (0-1)
- }
- ```
-
  ## Included Examples
 
- The package includes 8 comprehensive example files in your installation:
-
- **OpenAI Examples:**
- - **openai-basic.ts**: Basic chat completions with metadata tracking
- - **openai-streaming.ts**: Streaming responses with real-time output
- - **openai-responses-basic.ts**: New Responses API usage (OpenAI SDK 5.8+)
- - **openai-responses-streaming.ts**: Streaming with Responses API
-
- **Azure OpenAI Examples:**
- - **azure-basic.ts**: Azure OpenAI chat completions
- - **azure-streaming.ts**: Azure streaming responses
- - **azure-responses-basic.ts**: Azure Responses API
- - **azure-responses-streaming.ts**: Azure streaming Responses API
+ The package includes comprehensive example files covering:
 
- **For npm users:** Examples are installed in `node_modules/@revenium/openai/examples/`
+ - **Getting Started** - Simple entry point with all metadata fields documented
+ - **Chat Completions** - Basic and streaming usage patterns
+ - **Responses API** - OpenAI's new API with simplified interface
+ - **Azure OpenAI** - Automatic Azure detection and integration
+ - **Embeddings** - Text embedding generation with tracking
 
- **For GitHub users:** Examples are in the repository's `examples/` directory
+ Run the getting started example:
+ ```bash
+ npx tsx node_modules/@revenium/openai/examples/getting_started.ts
+ ```
 
- For detailed setup instructions and usage patterns, see [examples/README.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/README.md).
+ For complete example documentation, setup instructions, and all available examples, see [examples/README.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/README.md).
 
  ## How It Works
 
@@ -1040,141 +268,76 @@ The middleware never blocks your application - if Revenium tracking fails, your
 
  ### Common Issues
 
- #### 1. **No tracking data in dashboard**
+ #### No tracking data appears
 
- **Symptoms**: OpenAI calls work but no data appears in Revenium dashboard
-
- **Solution**: Enable debug logging to check middleware status:
+ Ensure environment variables are set and enable debug logging:
 
  ```bash
+ export REVENIUM_METERING_API_KEY="hak_your_key"
+ export OPENAI_API_KEY="sk_your_key"
  export REVENIUM_DEBUG=true
  ```
 
- **Expected output for successful tracking**:
-
- ```bash
+ Look for these log messages:
+ ```
  [Revenium Debug] OpenAI chat.completions.create intercepted
  [Revenium Debug] Revenium tracking successful
-
- # For Responses API:
- [Revenium Debug] OpenAI responses.create intercepted
- [Revenium Debug] Revenium tracking successful
  ```
 
- #### 2. **Environment mismatch errors**
-
- **Symptoms**: Authentication errors or 401/403 responses
-
- **Solution**: Ensure your API key matches your base URL environment:
-
- ```bash
- # Correct - Key and URL from same environment
- REVENIUM_METERING_API_KEY=hak_your_api_key_here
- REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
-
- # Wrong - Key and URL from different environments
- REVENIUM_METERING_API_KEY=hak_wrong_environment_key
- REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
- ```
-
- #### 3. **TypeScript type errors**
-
- **Symptoms**: TypeScript errors about `usageMetadata` property
+ #### TypeScript errors with usageMetadata
 
- **Solution**: Ensure you're importing the middleware before OpenAI:
+ Import the middleware before OpenAI to enable type augmentation:
 
  ```typescript
- // Correct order
  import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
  import OpenAI from 'openai';
-
- // Wrong order
- import OpenAI from 'openai';
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
  ```
 
- #### 4. **Azure OpenAI not working**
+ #### Azure OpenAI not tracking
 
- **Symptoms**: Azure OpenAI calls not being tracked
-
- **Solution**: Ensure you're using `patchOpenAIInstance()` with your Azure client:
+ Ensure you patch the Azure client:
 
  ```typescript
  import { AzureOpenAI } from 'openai';
  import { patchOpenAIInstance } from '@revenium/openai';
 
- // Correct
  const azure = patchOpenAIInstance(new AzureOpenAI({...}));
-
- // Wrong - not patched
- const azure = new AzureOpenAI({...});
- ```
-
- #### 5. **Responses API not available**
-
- **Symptoms**: `openai.responses.create` is undefined
-
- **Solution**: Upgrade to OpenAI SDK 5.8+ for Responses API support:
-
- ```bash
- npm install openai@^5.8.0
  ```
 
  ### Debug Mode
 
- Enable comprehensive debug logging:
+ Enable detailed logging:
 
  ```bash
  export REVENIUM_DEBUG=true
  ```
 
- This will show:
-
- - Middleware initialization status
- - Request interception confirmations
- - Metadata extraction details
- - Tracking success/failure messages
- - Error details and stack traces
-
  ### Getting Help
 
- If you're still experiencing issues:
+ If issues persist:
 
- 1. **Check the logs** with `REVENIUM_DEBUG=true`
- 2. **Verify environment variables** are set correctly
- 3. **Test with minimal example** from our documentation
- 4. **Contact support** with debug logs and error details
-
- For detailed troubleshooting guides, visit [docs.revenium.io](https://docs.revenium.io)
+ 1. Check logs with `REVENIUM_DEBUG=true`
+ 2. Verify environment variables are set
+ 3. Test with `examples/getting_started.ts`
+ 4. Contact support@revenium.io with debug logs
 
  ## Supported Models
 
- ### OpenAI Models
+ This middleware works with any OpenAI model. Examples in this package include:
 
- | Model Family | Models | APIs Supported |
- | ----------------- | ---------------------------------------------------------------------------- | --------------------------- |
- | **GPT-4o** | `gpt-4o`, `gpt-4o-2024-11-20`, `gpt-4o-2024-08-06`, `gpt-4o-2024-05-13` | Chat Completions, Responses |
- | **GPT-4o Mini** | `gpt-4o-mini`, `gpt-4o-mini-2024-07-18` | Chat Completions, Responses |
- | **GPT-4 Turbo** | `gpt-4-turbo`, `gpt-4-turbo-2024-04-09`, `gpt-4-turbo-preview` | Chat Completions |
- | **GPT-4** | `gpt-4`, `gpt-4-0613`, `gpt-4-0314` | Chat Completions |
- | **GPT-3.5 Turbo** | `gpt-3.5-turbo`, `gpt-3.5-turbo-0125`, `gpt-3.5-turbo-1106` | Chat Completions |
- | **GPT-5** | `gpt-5` (when available) | Responses API |
- | **Embeddings** | `text-embedding-3-large`, `text-embedding-3-small`, `text-embedding-ada-002` | Embeddings |
+ **Chat Completions:**
+ - `gpt-4o-mini`, `gpt-4o` (GPT-4 family)
+ - `gpt-5`, `gpt-5-mini`, `gpt-5-nano` (GPT-5 family)
 
- ### Azure OpenAI Models
+ **Embeddings:**
+ - `text-embedding-3-small`, `text-embedding-3-large`
 
- All OpenAI models are supported through Azure OpenAI with automatic deployment name resolution:
+ **Azure OpenAI:**
+ - Works with any Azure deployment (deployment names automatically resolved)
 
- | Azure Deployment | Resolved Model | API Support |
- | ------------------------ | ------------------------ | --------------------------- |
- | `gpt-4o-2024-11-20` | `gpt-4o` | Chat Completions, Responses |
- | `gpt4o-prod` | `gpt-4o` | Chat Completions, Responses |
- | `o4-mini` | `gpt-4o-mini` | Chat Completions, Responses |
- | `gpt-35-turbo-dev` | `gpt-3.5-turbo` | Chat Completions |
- | `text-embedding-3-large` | `text-embedding-3-large` | Embeddings |
- | `embedding-3-large` | `text-embedding-3-large` | Embeddings |
+ For the complete model list and latest specifications, see the [OpenAI Models Documentation](https://platform.openai.com/docs/models).
 
- **Note**: The middleware automatically maps Azure deployment names to standard model names for accurate pricing and analytics.
+ For cost tracking across providers, see the [Revenium Model Catalog](https://revenium.readme.io/v2.0.0/reference/get_ai_model).
 
  ### API Support Matrix
 
@@ -1187,12 +350,6 @@ All OpenAI models are supported through Azure OpenAI with automatic deployment n
  | **Cost Calculation** | Yes | Yes | Yes |
  | **Token Counting** | Yes | Yes | Yes |
 
- ## Requirements
-
- - Node.js 16+
- - OpenAI package v4.0+
- - TypeScript 5.0+ (for TypeScript projects)
-
  ## Documentation
 
  For detailed documentation, visit [docs.revenium.io](https://docs.revenium.io)