@revenium/openai 1.0.10 → 1.0.12

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (70)
  1. package/.env.example +20 -0
  2. package/CHANGELOG.md +52 -0
  3. package/LICENSE +21 -21
  4. package/README.md +682 -1152
  5. package/dist/cjs/core/config/loader.js +1 -1
  6. package/dist/cjs/core/config/loader.js.map +1 -1
  7. package/dist/cjs/core/tracking/api-client.js +1 -1
  8. package/dist/cjs/core/tracking/api-client.js.map +1 -1
  9. package/dist/cjs/index.js +4 -4
  10. package/dist/cjs/index.js.map +1 -1
  11. package/dist/cjs/types/openai-augmentation.js +1 -1
  12. package/dist/cjs/utils/url-builder.js +32 -7
  13. package/dist/cjs/utils/url-builder.js.map +1 -1
  14. package/dist/esm/core/config/loader.js +1 -1
  15. package/dist/esm/core/config/loader.js.map +1 -1
  16. package/dist/esm/core/tracking/api-client.js +1 -1
  17. package/dist/esm/core/tracking/api-client.js.map +1 -1
  18. package/dist/esm/index.js +4 -4
  19. package/dist/esm/index.js.map +1 -1
  20. package/dist/esm/types/openai-augmentation.js +1 -1
  21. package/dist/esm/utils/url-builder.js +32 -7
  22. package/dist/esm/utils/url-builder.js.map +1 -1
  23. package/dist/types/index.d.ts +4 -4
  24. package/dist/types/types/index.d.ts +2 -2
  25. package/dist/types/types/index.d.ts.map +1 -1
  26. package/dist/types/types/openai-augmentation.d.ts +1 -1
  27. package/dist/types/utils/url-builder.d.ts +11 -3
  28. package/dist/types/utils/url-builder.d.ts.map +1 -1
  29. package/examples/README.md +357 -0
  30. package/examples/azure-basic.ts +206 -0
  31. package/examples/azure-responses-basic.ts +233 -0
  32. package/examples/azure-responses-streaming.ts +255 -0
  33. package/examples/azure-streaming.ts +209 -0
  34. package/examples/getting_started.ts +54 -0
  35. package/examples/openai-basic.ts +147 -0
  36. package/examples/openai-function-calling.ts +259 -0
  37. package/examples/openai-responses-basic.ts +212 -0
  38. package/examples/openai-responses-streaming.ts +232 -0
  39. package/examples/openai-streaming.ts +172 -0
  40. package/examples/openai-vision.ts +289 -0
  41. package/package.json +81 -84
  42. package/src/core/config/azure-config.ts +72 -0
  43. package/src/core/config/index.ts +23 -0
  44. package/src/core/config/loader.ts +66 -0
  45. package/src/core/config/manager.ts +94 -0
  46. package/src/core/config/validator.ts +89 -0
  47. package/src/core/providers/detector.ts +159 -0
  48. package/src/core/providers/index.ts +16 -0
  49. package/src/core/tracking/api-client.ts +78 -0
  50. package/src/core/tracking/index.ts +21 -0
  51. package/src/core/tracking/payload-builder.ts +132 -0
  52. package/src/core/tracking/usage-tracker.ts +189 -0
  53. package/src/core/wrapper/index.ts +9 -0
  54. package/src/core/wrapper/instance-patcher.ts +288 -0
  55. package/src/core/wrapper/request-handler.ts +423 -0
  56. package/src/core/wrapper/stream-wrapper.ts +100 -0
  57. package/src/index.ts +336 -0
  58. package/src/types/function-parameters.ts +251 -0
  59. package/src/types/index.ts +313 -0
  60. package/src/types/openai-augmentation.ts +233 -0
  61. package/src/types/responses-api.ts +308 -0
  62. package/src/utils/azure-model-resolver.ts +220 -0
  63. package/src/utils/constants.ts +21 -0
  64. package/src/utils/error-handler.ts +251 -0
  65. package/src/utils/metadata-builder.ts +219 -0
  66. package/src/utils/provider-detection.ts +257 -0
  67. package/src/utils/request-handler-factory.ts +285 -0
  68. package/src/utils/stop-reason-mapper.ts +74 -0
  69. package/src/utils/type-guards.ts +202 -0
  70. package/src/utils/url-builder.ts +68 -0
package/README.md CHANGED
@@ -1,1152 +1,682 @@
1
- # 🚀 Revenium OpenAI Middleware for Node.js
2
-
3
- [![npm version](https://img.shields.io/npm/v/@revenium/openai.svg)](https://www.npmjs.com/package/@revenium/openai)
4
- [![Node.js](https://img.shields.io/badge/Node.js-16%2B-green)](https://nodejs.org/)
5
- [![Documentation](https://img.shields.io/badge/docs-revenium.io-blue)](https://docs.revenium.io)
6
- [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
7
-
8
- > **📦 Package Renamed**: This package has been renamed from `revenium-middleware-openai-node` to `@revenium/openai` for better organization and simpler naming. Please update your dependencies accordingly.
9
-
10
- **Transparent TypeScript middleware for automatic Revenium usage tracking with OpenAI**
11
-
12
- A professional-grade Node.js middleware that integrates seamlessly with OpenAI and Azure OpenAI to provide automatic usage tracking, billing analytics, and comprehensive metadata collection. It features native TypeScript support with zero type casting required, and supports both the traditional Chat Completions API and the new Responses API.
13
-
14
- ## Features
15
-
16
- - 🔄 **Seamless Integration** - Native TypeScript support, no type casting required
17
- - 📊 **Optional Metadata** - Track users, organizations, and custom metadata (all fields optional)
18
- - 🎯 **Dual API Support** - Chat Completions API + new Responses API (OpenAI SDK 5.8+)
19
- - ☁️ **Azure OpenAI Support** - Full Azure OpenAI integration with automatic detection
20
- - 🛡️ **Type Safety** - Complete TypeScript support with IntelliSense
21
- - 🌊 **Streaming Support** - Handles regular and streaming requests seamlessly
22
- - ⚡ **Fire-and-Forget** - Never blocks your application flow
23
- - 🔧 **Zero Configuration** - Auto-initialization from environment variables
24
-
25
- ## 🚀 Getting Started
26
-
27
- Choose your preferred approach to get started quickly:
28
-
29
- ### Option 1: Create Project from Scratch
30
-
31
- Perfect for new projects. We'll guide you step-by-step from `mkdir` to running tests.
32
- [👉 Go to Step-by-Step Guide](#option-1-create-project-from-scratch)
33
-
34
- ### Option 2: Clone Our Repository
35
-
36
- Clone and run the repository with working examples.
37
- [👉 Go to Repository Guide](#option-2-clone-our-repository)
38
-
39
- ### Option 3: Add to Existing Project
40
-
41
- Already have a project? Just install and replace imports.
42
- [👉 Go to Integration Guide](#option-3-existing-project-integration)
43
-
44
- ---
45
-
46
- ## Option 1: Create Project from Scratch
47
-
48
- ### Step 1: Create Project Directory
49
-
50
- ```bash
51
- # Create and navigate to your project
52
- mkdir my-openai-project
53
- cd my-openai-project
54
-
55
- # Initialize npm project
56
- npm init -y
57
- ```
58
-
59
- ### Step 2: Install Dependencies
60
-
61
- ```bash
62
- # Install the middleware and OpenAI SDK
63
- npm install @revenium/openai openai@^5.8.0 dotenv
64
-
65
- # For TypeScript projects (optional)
66
- npm install -D typescript tsx @types/node
67
- ```
68
-
69
- ### Step 3: Setup Environment Variables
70
-
71
- Create a `.env` file in your project root:
72
-
73
- ```bash
74
- # Create .env file
75
- echo. > .env # On Windows (CMD)
76
- touch .env # On Mac/Linux
77
- # OR PowerShell
78
- New-Item -Path .env -ItemType File
79
- ```
80
-
81
- Copy and paste the following into `.env`:
82
-
83
- ```env
84
- # Revenium OpenAI Middleware Configuration
85
- # Copy this file to .env and fill in your actual values
86
-
87
- # Required: Your Revenium API key (starts with hak_)
88
- REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
89
- REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
90
-
91
- # Required: Your OpenAI API key (starts with sk-)
92
- OPENAI_API_KEY=sk-your_openai_api_key_here
93
-
94
- # Optional: Your Azure OpenAI configuration (for Azure testing)
95
- AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
96
- AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
97
- AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
98
- AZURE_OPENAI_API_VERSION=2024-12-01-preview
99
-
100
- # Optional: Enable debug logging
101
- REVENIUM_DEBUG=false
102
- ```
103
-
104
- **💡 NOTE**: Replace each `your_..._here` with your actual values.
105
-
106
- **⚠️ IMPORTANT - Environment Matching**:
107
-
108
- - If using QA environment URL `"https://api.qa.hcapp.io/meter"`, ensure your `REVENIUM_METERING_API_KEY` is from the **QA environment**
109
- - If using Production environment URL `"https://api.revenium.io/meter"`, ensure your `REVENIUM_METERING_API_KEY` is from the **Production environment**
110
- - **Mismatched environments will cause authentication failures**
111
-
112
- ### Step 4: Create Your First Test
113
-
114
- #### TypeScript Test
115
-
116
- Create `test-openai.ts`:
117
-
118
- ```typescript
119
- import 'dotenv/config';
120
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
121
- import OpenAI from 'openai';
122
-
123
- async function testOpenAI() {
124
- try {
125
- // Initialize Revenium middleware
126
- const initResult = initializeReveniumFromEnv();
127
- if (!initResult.success) {
128
- console.error('❌ Failed to initialize Revenium:', initResult.message);
129
- process.exit(1);
130
- }
131
-
132
- // Create and patch OpenAI instance
133
- const openai = patchOpenAIInstance(new OpenAI());
134
-
135
- const response = await openai.chat.completions.create({
136
- model: 'gpt-4o-mini',
137
- max_tokens: 100,
138
- messages: [{ role: 'user', content: 'What is artificial intelligence?' }],
139
- usageMetadata: {
140
- subscriber: {
141
- id: 'user-456',
142
- email: 'user@demo-org.com',
143
- credential: {
144
- name: 'demo-api-key',
145
- value: 'demo-key-123',
146
- },
147
- },
148
- organizationId: 'demo-org-123',
149
- productId: 'ai-assistant-v2',
150
- taskType: 'educational-query',
151
- agent: 'openai-basic-demo',
152
- traceId: 'session-' + Date.now(),
153
- },
154
- });
155
-
156
- const text = response.choices[0]?.message?.content || 'No response';
157
- console.log('Response:', text);
158
- } catch (error) {
159
- console.error('Error:', error);
160
- }
161
- }
162
-
163
- testOpenAI();
164
- ```
165
-
166
- #### JavaScript Test
167
-
168
- Create `test-openai.js`:
169
-
170
- ```javascript
171
- require('dotenv').config();
172
- const {
173
- initializeReveniumFromEnv,
174
- patchOpenAIInstance,
175
- } = require('@revenium/openai');
176
- const OpenAI = require('openai');
177
-
178
- async function testOpenAI() {
179
- try {
180
- // Initialize Revenium middleware
181
- const initResult = initializeReveniumFromEnv();
182
- if (!initResult.success) {
183
- console.error('❌ Failed to initialize Revenium:', initResult.message);
184
- process.exit(1);
185
- }
186
-
187
- // Create and patch OpenAI instance
188
- const openai = patchOpenAIInstance(new OpenAI());
189
-
190
- const response = await openai.chat.completions.create({
191
- model: 'gpt-4o-mini',
192
- max_tokens: 100,
193
- messages: [{ role: 'user', content: 'What is artificial intelligence?' }],
194
- usageMetadata: {
195
- subscriber: {
196
- id: 'user-456',
197
- email: 'user@demo-org.com',
198
- },
199
- organizationId: 'demo-org-123',
200
- taskType: 'educational-query',
201
- },
202
- });
203
-
204
- const text = response.choices[0]?.message?.content || 'No response';
205
- console.log('Response:', text);
206
- } catch (error) {
207
- console.error('Error:', error);
208
- }
209
- }
210
-
211
- testOpenAI();
212
- ```
213
-
214
- ### Step 5: Add Package Scripts
215
-
216
- Update your `package.json`:
217
-
218
- ```json
219
- {
220
- "name": "my-openai-project",
221
- "version": "1.0.0",
222
- "type": "commonjs",
223
- "scripts": {
224
- "test-ts": "npx tsx test-openai.ts",
225
- "test-js": "node test-openai.js"
226
- },
227
- "dependencies": {
228
- "@revenium/openai": "^1.0.7",
229
- "openai": "^5.8.0",
230
- "dotenv": "^16.5.0"
231
- }
232
- }
233
- ```
234
-
235
- ### Step 6: Run Your Tests
236
-
237
- ```bash
238
- # Test TypeScript version
239
- npm run test-ts
240
-
241
- # Test JavaScript version
242
- npm run test-js
243
- ```
244
-
245
- ### Step 7: Project Structure
246
-
247
- Your project should now look like this:
248
-
249
- ```
250
- my-openai-project/
251
- ├── .env # Environment variables
252
- ├── .gitignore # Git ignore file
253
- ├── package.json # Project configuration
254
- ├── test-openai.ts # TypeScript test
255
- └── test-openai.js # JavaScript test
256
- ```
257
-
258
- ## Option 2: Clone Our Repository
259
-
260
- ### Step 1: Clone the Repository
261
-
262
- ```bash
263
- # Clone the repository
264
- git clone git@github.com:revenium/revenium-middleware-openai-node.git
265
- cd revenium-middleware-openai-node
266
- ```
267
-
268
- ### Step 2: Install Dependencies
269
-
270
- ```bash
271
- # Install all dependencies
272
- npm install
273
- ```
274
-
275
- ### Step 3: Setup Environment Variables
276
-
277
- Create a `.env` file in the project root:
278
-
279
- ```bash
280
- # Create .env file
281
- cp .env.example .env # If available, or create manually
282
- ```
283
-
284
- Copy and paste the following into `.env`:
285
-
286
- ```bash
287
- # Revenium OpenAI Middleware Configuration
288
- # Copy this file to .env and fill in your actual values
289
-
290
- # Required: Your Revenium API key (starts with hak_)
291
- REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
292
- REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
293
-
294
- # Required: Your OpenAI API key (starts with sk-)
295
- OPENAI_API_KEY=sk-your_openai_api_key_here
296
-
297
- # Optional: Your Azure OpenAI configuration (for Azure testing)
298
- AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
299
- AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
300
- AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
301
- AZURE_OPENAI_API_VERSION=2024-12-01-preview
302
-
303
- # Optional: Enable debug logging
304
- REVENIUM_DEBUG=false
305
- ```
306
-
307
- **⚠️ IMPORTANT - Environment Matching**:
308
-
309
- - If using QA environment URL `"https://api.qa.hcapp.io/meter"`, ensure your `REVENIUM_METERING_API_KEY` is from the **QA environment**
310
- - If using Production environment URL `"https://api.revenium.io/meter"`, ensure your `REVENIUM_METERING_API_KEY` is from the **Production environment**
311
- - **Mismatched environments will cause authentication failures**
312
-
313
- ### Step 4: Build the Project
314
-
315
- ```bash
316
- # Build the middleware
317
- npm run build
318
- ```
319
-
320
- ### Step 5: Run the Examples
321
-
322
- The repository includes working example files:
323
-
324
- ```bash
325
- # Run Chat Completions API examples (using npm scripts)
326
- npm run example:openai-basic
327
- npm run example:openai-streaming
328
- npm run example:azure-basic
329
- npm run example:azure-streaming
330
-
331
- # Run Responses API examples (available with OpenAI SDK 5.8+)
332
- npm run example:openai-responses-basic
333
- npm run example:openai-responses-streaming
334
- npm run example:azure-responses-basic
335
- npm run example:azure-responses-streaming
336
-
337
- # Or run examples directly with tsx
338
- npx tsx examples/openai-basic.ts
339
- npx tsx examples/openai-streaming.ts
340
- npx tsx examples/azure-basic.ts
341
- npx tsx examples/azure-streaming.ts
342
- npx tsx examples/openai-responses-basic.ts
343
- npx tsx examples/openai-responses-streaming.ts
344
- npx tsx examples/azure-responses-basic.ts
345
- npx tsx examples/azure-responses-streaming.ts
346
- ```
347
-
348
- These examples demonstrate:
349
-
350
- - **Chat Completions API** - Traditional OpenAI chat completions and embeddings
351
- - **Responses API** - New OpenAI Responses API with enhanced capabilities
352
- - **Azure OpenAI** - Full Azure OpenAI integration with automatic detection
353
- - **Streaming Support** - Real-time response streaming with metadata tracking
354
- - **Optional Metadata** - Rich business context and user tracking
355
- - **Error Handling** - Robust error handling and debugging
356
-
357
- ## Option 3: Existing Project Integration
358
-
359
- Already have a project? Just install and replace imports:
360
-
361
- ### Step 1: Install the Package
362
-
363
- ```bash
364
- npm install @revenium/openai
365
- ```
366
-
367
- ### Step 2: Update Your Imports
368
-
369
- **Before:**
370
-
371
- ```typescript
372
- import OpenAI from 'openai';
373
-
374
- const openai = new OpenAI();
375
- ```
376
-
377
- **After:**
378
-
379
- ```typescript
380
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
381
- import OpenAI from 'openai';
382
-
383
- // Initialize Revenium middleware
384
- initializeReveniumFromEnv();
385
-
386
- // Patch your OpenAI instance
387
- const openai = patchOpenAIInstance(new OpenAI());
388
- ```
389
-
390
- ### Step 3: Add Environment Variables
391
-
392
- Add to your `.env` file:
393
-
394
- ```env
395
- # Revenium OpenAI Middleware Configuration
396
-
397
- # Required: Your Revenium API key (starts with hak_)
398
- REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
399
- REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
400
-
401
- # Required: Your OpenAI API key (starts with sk-)
402
- OPENAI_API_KEY=sk-your_openai_api_key_here
403
-
404
- # Optional: Your Azure OpenAI configuration (for Azure testing)
405
- AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
406
- AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
407
- AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
408
- AZURE_OPENAI_API_VERSION=2024-12-01-preview
409
-
410
- # Optional: Enable debug logging
411
- REVENIUM_DEBUG=false
412
- ```
413
-
414
- ### Step 4: Optional - Add Metadata
415
-
416
- Enhance your existing calls with optional metadata:
417
-
418
- ```typescript
419
- // Your existing code works unchanged
420
- const response = await openai.chat.completions.create({
421
- model: 'gpt-4o-mini',
422
- messages: [{ role: 'user', content: 'Hello!' }],
423
- // Add optional metadata for better analytics
424
- usageMetadata: {
425
- subscriber: { id: 'user-123' },
426
- organizationId: 'my-company',
427
- taskType: 'chat',
428
- },
429
- });
430
- ```
431
-
432
- **✅ That's it!** Your existing OpenAI code now automatically tracks usage to Revenium.
433
-
434
- ## 📊 What Gets Tracked
435
-
436
- The middleware automatically captures comprehensive usage data:
437
-
438
- ### **🔢 Usage Metrics**
439
-
440
- - **Token Counts** - Input tokens, output tokens, total tokens
441
- - **Model Information** - Model name, provider (OpenAI/Azure), API version
442
- - **Request Timing** - Request duration, response time
443
- - **Cost Calculation** - Estimated costs based on current pricing
444
-
445
- ### **🏷️ Business Context (Optional)**
446
-
447
- - **User Tracking** - Subscriber ID, email, credentials
448
- - **Organization Data** - Organization ID, subscription ID, product ID
449
- - **Task Classification** - Task type, agent identifier, trace ID
450
- - **Quality Metrics** - Response quality scores, custom metadata
451
-
452
- ### **🔧 Technical Details**
453
-
454
- - **API Endpoints** - Chat completions, embeddings, responses API
455
- - **Request Types** - Streaming vs non-streaming
456
- - **Error Tracking** - Failed requests, error types, retry attempts
457
- - **Environment Info** - Development vs production usage
458
-
459
- ## OpenAI Responses API Support
460
-
461
- This middleware includes **full support** for OpenAI's new Responses API, which is designed to replace the traditional Chat Completions API with enhanced capabilities for agent-like applications.
462
-
463
- ### What is the Responses API?
464
-
465
- The Responses API is OpenAI's new stateful API that:
466
-
467
- - Uses an `input` parameter instead of `messages` for simplified interaction
468
- - Provides unified experience combining chat completions and assistants capabilities
469
- - Supports advanced features like background tasks, function calling, and code interpreter
470
- - Offers better streaming and real-time response generation
471
- - Works with GPT-5 and other advanced models
472
-
473
- ### API Comparison
474
-
475
- **Traditional Chat Completions:**
476
-
477
- ```javascript
478
- const response = await openai.chat.completions.create({
479
- model: 'gpt-4o',
480
- messages: [{ role: 'user', content: 'Hello' }],
481
- });
482
- ```
483
-
484
- **New Responses API:**
485
-
486
- ```javascript
487
- const response = await openai.responses.create({
488
- model: 'gpt-5',
489
- input: 'Hello', // Simplified input parameter
490
- });
491
- ```
492
-
493
- ### Key Differences
494
-
495
- | Feature | Chat Completions | Responses API |
496
- | ---------------------- | ---------------------------- | ----------------------------------- |
497
- | **Input Format** | `messages: [...]` | `input: "string"` or `input: [...]` |
498
- | **Models** | GPT-4, GPT-4o, etc. | GPT-5, GPT-4o, etc. |
499
- | **Response Structure** | `choices[0].message.content` | `output_text` |
500
- | **Stateful** | No | Yes (with `store: true`) |
501
- | **Advanced Features** | Limited | Built-in tools, reasoning, etc. |
502
- | **Temperature** | Supported | Not supported with GPT-5 |
503
-
504
- ### Requirements & Installation
505
-
506
- **OpenAI SDK Version:**
507
-
508
- - **Minimum:** `5.8.0` (when Responses API was officially released)
509
- - **Recommended:** `5.8.2` or later (tested and verified)
510
- - **Current:** `6.2.0` (latest available at the time of writing)
511
-
512
- **Installation:**
513
-
514
- ```bash
515
- # Install latest version with Responses API support
516
- npm install openai@^5.8.0
517
-
518
- # Or install specific tested version
519
- npm install openai@5.8.2
520
- ```
521
-
522
- ### Current Status
523
-
524
- **The Responses API is officially available in OpenAI SDK 5.8+**
525
-
526
- **Official Release:**
527
-
528
- - ✅ Released by OpenAI in SDK version 5.8.0
529
- - ✅ Fully documented in official OpenAI documentation
530
- - ✅ Production-ready with GPT-5 and other supported models
531
- - ✅ Complete middleware support with Revenium integration
532
-
533
- **Middleware Features:**
534
-
535
- - ✅ Full Responses API support (streaming & non-streaming)
536
- - ✅ Seamless metadata tracking identical to Chat Completions
537
- - ✅ Type-safe TypeScript integration
538
- - ✅ Complete token tracking including reasoning tokens
539
- - ✅ Azure OpenAI compatibility
540
-
541
- **References:**
542
-
543
- - [OpenAI Responses API Documentation](https://platform.openai.com/docs/guides/migrate-to-responses)
544
- - [Azure OpenAI Responses API Documentation](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/responses)
545
-
546
- ### Responses API Examples
547
-
548
- The middleware includes comprehensive examples for the new Responses API:
549
-
550
- **Basic Usage:**
551
-
552
- ```typescript
553
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
554
- import OpenAI from 'openai';
555
-
556
- // Initialize and patch OpenAI instance
557
- initializeReveniumFromEnv();
558
- const openai = patchOpenAIInstance(new OpenAI());
559
-
560
- // Simple string input
561
- const response = await openai.responses.create({
562
- model: 'gpt-5',
563
- input: 'What is the capital of France?',
564
- max_output_tokens: 150,
565
- usageMetadata: {
566
- subscriber: { id: 'user-123', email: 'user@example.com' },
567
- organizationId: 'org-456',
568
- productId: 'quantum-explainer',
569
- taskType: 'educational-content',
570
- },
571
- });
572
-
573
- console.log(response.output_text); // "Paris."
574
- ```
575
-
576
- **Streaming Example:**
577
-
578
- ```typescript
579
- const stream = await openai.responses.create({
580
- model: 'gpt-5',
581
- input: 'Write a short story about AI',
582
- stream: true,
583
- max_output_tokens: 500,
584
- usageMetadata: {
585
- subscriber: { id: 'user-123', email: 'user@example.com' },
586
- organizationId: 'org-456',
587
- },
588
- });
589
-
590
- for await (const chunk of stream) {
591
- process.stdout.write(chunk.delta?.content || '');
592
- }
593
- ```
594
-
595
- ### Adding Custom Metadata
596
-
597
- Track users, organizations, and custom data with seamless TypeScript integration:
598
-
599
- ```typescript
600
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
601
- import OpenAI from 'openai';
602
-
603
- // Initialize and patch OpenAI instance
604
- initializeReveniumFromEnv();
605
- const openai = patchOpenAIInstance(new OpenAI());
606
-
607
- const response = await openai.chat.completions.create({
608
- model: 'gpt-4',
609
- messages: [{ role: 'user', content: 'Summarize this document' }],
610
- // Add custom tracking metadata - all fields optional, no type casting needed!
611
- usageMetadata: {
612
- subscriber: {
613
- id: 'user-12345',
614
- email: 'john@acme-corp.com',
615
- },
616
- organizationId: 'acme-corp',
617
- productId: 'document-ai',
618
- taskType: 'document-summary',
619
- agent: 'doc-summarizer-v2',
620
- traceId: 'session-abc123',
621
- },
622
- });
623
-
624
- // Same metadata works with Responses API
625
- const responsesResult = await openai.responses.create({
626
- model: 'gpt-5',
627
- input: 'Summarize this document',
628
- // Same metadata structure - seamless compatibility!
629
- usageMetadata: {
630
- subscriber: {
631
- id: 'user-12345',
632
- email: 'john@acme-corp.com',
633
- },
634
- organizationId: 'acme-corp',
635
- productId: 'document-ai',
636
- taskType: 'document-summary',
637
- agent: 'doc-summarizer-v2',
638
- traceId: 'session-abc123',
639
- },
640
- });
641
- ```
642
-
643
- ### Streaming Support
644
-
645
- The middleware automatically handles streaming requests with seamless metadata:
646
-
647
- ```typescript
648
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
649
- import OpenAI from 'openai';
650
-
651
- // Initialize and patch OpenAI instance
652
- initializeReveniumFromEnv();
653
- const openai = patchOpenAIInstance(new OpenAI());
654
-
655
- const stream = await openai.chat.completions.create({
656
- model: 'gpt-4',
657
- messages: [{ role: 'user', content: 'Tell me a story' }],
658
- stream: true,
659
- // Metadata works seamlessly with streaming - all fields optional!
660
- usageMetadata: {
661
- organizationId: 'story-app',
662
- taskType: 'creative-writing',
663
- },
664
- });
665
-
666
- for await (const chunk of stream) {
667
- process.stdout.write(chunk.choices[0]?.delta?.content || '');
668
- }
669
- // Usage tracking happens automatically when stream completes
670
- ```
671
-
672
- ### Temporarily Disabling Tracking
673
-
674
- If you need to disable Revenium tracking temporarily, you can unpatch the OpenAI client:
675
-
676
- ```javascript
677
- import { unpatchOpenAI, patchOpenAI } from '@revenium/openai';
678
-
679
- // Disable tracking
680
- unpatchOpenAI();
681
-
682
- // Your OpenAI calls now bypass Revenium tracking
683
- await openai.chat.completions.create({...});
684
-
685
- // Re-enable tracking
686
- patchOpenAI();
687
- ```
688
-
689
- **Common use cases:**
690
-
691
- - **Debugging**: Isolate whether issues are caused by the middleware
692
- - **Testing**: Compare behavior with/without tracking
693
- - **Conditional tracking**: Enable/disable based on environment
694
- - **Troubleshooting**: Temporary bypass during incident response
695
-
696
- **Note**: This affects all OpenAI instances globally since we patch the prototype methods.
697
-
698
- ## Azure OpenAI Integration
699
-
700
- **Azure OpenAI support**: The middleware automatically detects Azure OpenAI clients and provides accurate usage tracking and cost calculation.
701
-
702
- ### Quick Start with Azure OpenAI
703
-
704
- ```bash
705
- # Set your Azure OpenAI environment variables
706
- export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
707
- export AZURE_OPENAI_API_KEY="your-azure-api-key"
708
- export AZURE_OPENAI_DEPLOYMENT="gpt-4o" # Your deployment name
709
- export AZURE_OPENAI_API_VERSION="2024-12-01-preview" # Optional, defaults to latest
710
-
711
- # Set your Revenium credentials
712
- export REVENIUM_METERING_API_KEY="hak_your_revenium_api_key"
713
- # export REVENIUM_METERING_BASE_URL="https://api.revenium.io/meter" # Optional: defaults to this URL
714
- ```
715
-
716
- ```typescript
717
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
718
- import { AzureOpenAI } from 'openai';
719
-
720
- // Initialize Revenium middleware
721
- initializeReveniumFromEnv();
722
-
723
- // Create and patch Azure OpenAI client
724
- const azure = patchOpenAIInstance(
725
- new AzureOpenAI({
726
- endpoint: process.env.AZURE_OPENAI_ENDPOINT,
727
- apiKey: process.env.AZURE_OPENAI_API_KEY,
728
- apiVersion: process.env.AZURE_OPENAI_API_VERSION,
729
- })
730
- );
731
-
732
- // Your existing Azure OpenAI code works with seamless metadata
733
- const response = await azure.chat.completions.create({
734
- model: 'gpt-4o', // Uses your deployment name
735
- messages: [{ role: 'user', content: 'Hello from Azure!' }],
736
- // Optional metadata with native TypeScript support
737
- usageMetadata: {
738
- organizationId: 'my-company',
739
- taskType: 'azure-chat',
740
- },
741
- });
742
-
743
- console.log(response.choices[0].message.content);
744
- ```
745
-
746
- ### Azure Features
747
-
748
- - **Automatic Detection**: Detects Azure OpenAI clients automatically
749
- - **Model Name Resolution**: Maps Azure deployment names to standard model names for accurate pricing
750
- - **Provider Metadata**: Correctly tags requests with `provider: "Azure"` and `modelSource: "OPENAI"`
751
- - **Deployment Support**: Works with any Azure deployment name (simple or complex)
752
- - **Endpoint Flexibility**: Supports all Azure OpenAI endpoint formats
753
- - **Zero Code Changes**: Existing Azure OpenAI code works without modification
754
-
755
- ### Azure Environment Variables
756
-
757
- | Variable | Required | Description | Example |
758
- | -------------------------- | -------- | ---------------------------------------------- | ------------------------------------ |
759
- | `AZURE_OPENAI_ENDPOINT` | Yes | Your Azure OpenAI endpoint URL | `https://acme.openai.azure.com/` |
760
- | `AZURE_OPENAI_API_KEY` | Yes | Your Azure OpenAI API key | `abc123...` |
761
- | `AZURE_OPENAI_DEPLOYMENT` | No | Default deployment name | `gpt-4o` or `text-embedding-3-large` |
762
- | `AZURE_OPENAI_API_VERSION` | No | API version (defaults to `2024-12-01-preview`) | `2024-12-01-preview` |
763
-
764
- ### Azure Model Name Resolution
765
-
766
- The middleware automatically maps Azure deployment names to standard model names for accurate pricing:
767
-
768
- ```typescript
769
- // Azure deployment names → Standard model names for pricing
770
- "gpt-4o-2024-11-20" → "gpt-4o"
771
- "gpt4o-prod" → "gpt-4o"
772
- "o4-mini" → "gpt-4o-mini"
773
- "gpt-35-turbo-dev" → "gpt-3.5-turbo"
774
- "text-embedding-3-large" → "text-embedding-3-large" // Direct match
775
- "embedding-3-large" → "text-embedding-3-large"
776
- ```
777
-
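A hypothetical sketch of that normalization, covering only the examples above (the real resolver is `src/utils/azure-model-resolver.ts` and handles far more cases):

```typescript
// Map an Azure deployment name to a standard model id for pricing.
// Illustrative branching only; not the middleware's actual resolver.
function resolveAzureModel(deployment: string): string {
  const d = deployment.toLowerCase();
  if (d.includes('embedding')) return 'text-embedding-3-large';
  if (d.includes('35-turbo')) return 'gpt-3.5-turbo';
  if (d.includes('4o-mini') || d === 'o4-mini') return 'gpt-4o-mini';
  if (d.includes('4o')) return 'gpt-4o';
  return deployment; // fall through: assume the name is already a model id
}

console.log(resolveAzureModel('gpt-4o-2024-11-20')); // → gpt-4o
console.log(resolveAzureModel('gpt-35-turbo-dev')); // → gpt-3.5-turbo
console.log(resolveAzureModel('embedding-3-large')); // → text-embedding-3-large
```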
778
- ## 🔧 Advanced Usage
779
-
780
- ### Streaming with Metadata
781
-
782
- The middleware seamlessly handles streaming requests with full metadata support:
783
-
784
- ```typescript
785
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
786
- import OpenAI from 'openai';
787
-
788
- initializeReveniumFromEnv();
789
- const openai = patchOpenAIInstance(new OpenAI());
790
-
791
- // Chat Completions API streaming
792
- const stream = await openai.chat.completions.create({
793
- model: 'gpt-4o-mini',
794
- messages: [{ role: 'user', content: 'Tell me a story' }],
795
- stream: true,
796
- usageMetadata: {
797
- subscriber: { id: 'user-123', email: 'user@example.com' },
798
- organizationId: 'story-app',
799
- taskType: 'creative-writing',
800
- traceId: 'session-' + Date.now(),
801
- },
802
- });
803
-
804
- for await (const chunk of stream) {
805
- process.stdout.write(chunk.choices[0]?.delta?.content || '');
806
- }
807
- // Usage tracking happens automatically when stream completes
808
- ```
809
-
- ### Responses API with Metadata
-
- Full support for OpenAI's new Responses API:
-
- ```typescript
- // Simple string input with metadata
- const response = await openai.responses.create({
-   model: 'gpt-5',
-   input: 'What is the capital of France?',
-   max_output_tokens: 150,
-   usageMetadata: {
-     subscriber: { id: 'user-123', email: 'user@example.com' },
-     organizationId: 'org-456',
-     productId: 'geography-tutor',
-     taskType: 'educational-query',
-   },
- });
-
- console.log(response.output_text); // "Paris."
- ```
-
- ### Azure OpenAI Integration
-
- Automatic Azure OpenAI detection with seamless metadata:
-
- ```typescript
- import { AzureOpenAI } from 'openai';
-
- // Create and patch Azure OpenAI client
- const azure = patchOpenAIInstance(
-   new AzureOpenAI({
-     endpoint: process.env.AZURE_OPENAI_ENDPOINT,
-     apiKey: process.env.AZURE_OPENAI_API_KEY,
-     apiVersion: process.env.AZURE_OPENAI_API_VERSION,
-   })
- );
-
- // Your existing Azure OpenAI code works with seamless metadata
- const response = await azure.chat.completions.create({
-   model: 'gpt-4o', // Uses your deployment name
-   messages: [{ role: 'user', content: 'Hello from Azure!' }],
-   usageMetadata: {
-     organizationId: 'my-company',
-     taskType: 'azure-chat',
-     agent: 'azure-assistant',
-   },
- });
- ```
-
- ### Embeddings with Metadata
-
- Track embeddings usage with optional metadata:
-
- ```typescript
- const embedding = await openai.embeddings.create({
-   model: 'text-embedding-3-small',
-   input: 'Advanced text embedding with comprehensive tracking metadata',
-   usageMetadata: {
-     subscriber: { id: 'embedding-user-789', email: 'embeddings@company.com' },
-     organizationId: 'my-company',
-     taskType: 'document-embedding',
-     productId: 'search-engine',
-     traceId: `embed-${Date.now()}`,
-     agent: 'openai-embeddings-node',
-   },
- });
-
- console.log('Model:', embedding.model);
- console.log('Usage:', embedding.usage);
- console.log('Embedding dimensions:', embedding.data[0]?.embedding.length);
- ```
-
- ### Manual Configuration
-
- For advanced use cases, configure the middleware manually:
-
- ```typescript
- import { configure } from '@revenium/openai';
-
- configure({
-   reveniumApiKey: 'hak_your_api_key',
-   reveniumBaseUrl: 'https://api.revenium.io/meter',
-   apiTimeout: 5000,
-   failSilent: true,
-   maxRetries: 3,
- });
- ```
-
- ## 🛠️ Configuration Options
-
- ### Environment Variables
-
- | Variable | Required | Default | Description |
- | ------------------------------ | -------- | ------------------------------- | ---------------------------------------------- |
- | `REVENIUM_METERING_API_KEY` | ✅ | - | Your Revenium API key (starts with `hak_`) |
- | `OPENAI_API_KEY` | ✅ | - | Your OpenAI API key (starts with `sk-`) |
- | `REVENIUM_METERING_BASE_URL` | ❌ | `https://api.revenium.io/meter` | Revenium metering API base URL |
- | `REVENIUM_DEBUG` | ❌ | `false` | Enable debug logging (`true`/`false`) |
- | `AZURE_OPENAI_ENDPOINT` | ❌ | - | Azure OpenAI endpoint URL (for Azure testing) |
- | `AZURE_OPENAI_API_KEY` | ❌ | - | Azure OpenAI API key (for Azure testing) |
- | `AZURE_OPENAI_DEPLOYMENT` | ❌ | - | Azure OpenAI deployment name (for Azure) |
- | `AZURE_OPENAI_API_VERSION` | ❌ | `2024-12-01-preview` | Azure OpenAI API version (for Azure) |
-
- **⚠️ Important Note about `REVENIUM_METERING_BASE_URL`:**
-
- - This variable is **optional** and defaults to the production URL (`https://api.revenium.io/meter`)
- - If you don't set it explicitly, the middleware will use the default production endpoint
- - However, you may see console warnings or errors if the middleware cannot determine the correct environment
- - **Best practice:** Always set this variable explicitly to match your environment:
-
- ```bash
- # For Production
- REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
-
- # For QA/Testing
- REVENIUM_METERING_BASE_URL=https://api.qa.hcapp.io/meter
- ```
-
- - **Remember:** Your `REVENIUM_METERING_API_KEY` must match the environment of your base URL
-
- ### Usage Metadata Options
-
- All metadata fields are optional and help provide better analytics:
-
- ```typescript
- interface UsageMetadata {
-   traceId?: string; // Session or conversation ID
-   taskType?: string; // Type of AI task (e.g., "chat", "summary")
-   subscriber?: {
-     // User information (nested structure)
-     id?: string; // User ID from your system
-     email?: string; // User's email address
-     credential?: {
-       // User credentials
-       name?: string; // Credential name
-       value?: string; // Credential value
-     };
-   };
-   organizationId?: string; // Organization/company ID
-   subscriptionId?: string; // Billing plan ID
-   productId?: string; // Your product/feature ID
-   agent?: string; // AI agent identifier
-   responseQualityScore?: number; // Quality score (0-1)
- }
- ```
-
- ## How It Works
-
- 1. **Automatic Patching**: When imported, the middleware patches OpenAI's methods:
-    - `chat.completions.create` (Chat Completions API)
-    - `responses.create` (Responses API - when available)
-    - `embeddings.create` (Embeddings API)
- 2. **Request Interception**: All OpenAI requests are intercepted to extract metadata
- 3. **Usage Extraction**: Token counts, model info, and timing data are captured
- 4. **Async Tracking**: Usage data is sent to Revenium in the background (fire-and-forget)
- 5. **Transparent Response**: Original OpenAI responses are returned unchanged
-
- The middleware never blocks your application - if Revenium tracking fails, your OpenAI requests continue normally.
-
- ## 🔍 Troubleshooting
-
- ### Common Issues
-
- #### 1. **No tracking data in dashboard**
-
- **Symptoms**: OpenAI calls work but no data appears in Revenium dashboard
-
- **Solution**: Enable debug logging to check middleware status:
-
- ```bash
- export REVENIUM_DEBUG=true
- ```
-
- **Expected output for successful tracking**:
-
- ```bash
- [Revenium Debug] OpenAI chat.completions.create intercepted
- [Revenium Debug] Revenium tracking successful
-
- # For Responses API:
- [Revenium Debug] OpenAI responses.create intercepted
- [Revenium Debug] Revenium tracking successful
- ```
-
- #### 2. **Environment mismatch errors**
-
- **Symptoms**: Authentication errors or 401/403 responses
-
- **Solution**: Ensure your API key matches your base URL environment:
-
- ```bash
- # ✅ Correct - Production key with production URL
- REVENIUM_METERING_API_KEY=hak_prod_key_here
- REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
-
- # ✅ Correct - QA key with QA URL
- REVENIUM_METERING_API_KEY=hak_qa_key_here
- REVENIUM_METERING_BASE_URL=https://api.qa.hcapp.io/meter
-
- # ❌ Wrong - Production key with QA URL
- REVENIUM_METERING_API_KEY=hak_prod_key_here
- REVENIUM_METERING_BASE_URL=https://api.qa.hcapp.io/meter
- ```
-
- #### 3. **TypeScript type errors**
-
- **Symptoms**: TypeScript errors about `usageMetadata` property
-
- **Solution**: Ensure you're importing the middleware before OpenAI:
-
- ```typescript
- // ✅ Correct order
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
- import OpenAI from 'openai';
-
- // ❌ Wrong order
- import OpenAI from 'openai';
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
- ```
-
- #### 4. **Azure OpenAI not working**
-
- **Symptoms**: Azure OpenAI calls not being tracked
-
- **Solution**: Ensure you're using `patchOpenAIInstance()` with your Azure client:
-
- ```typescript
- import { AzureOpenAI } from 'openai';
- import { patchOpenAIInstance } from '@revenium/openai';
-
- // ✅ Correct
- const azure = patchOpenAIInstance(new AzureOpenAI({...}));
-
- // ❌ Wrong - not patched
- const azure = new AzureOpenAI({...});
- ```
-
- #### 5. **Responses API not available**
-
- **Symptoms**: `openai.responses.create` is undefined
-
- **Solution**: Upgrade to OpenAI SDK 5.8+ for Responses API support:
-
- ```bash
- npm install openai@^5.8.0
- ```
-
- ### Debug Mode
-
- Enable comprehensive debug logging:
-
- ```bash
- export REVENIUM_DEBUG=true
- ```
-
- This will show:
-
- - ✅ Middleware initialization status
- - ✅ Request interception confirmations
- - ✅ Metadata extraction details
- - ✅ Tracking success/failure messages
- - ✅ Error details and stack traces
-
- ### Getting Help
-
- If you're still experiencing issues:
-
- 1. **Check the logs** with `REVENIUM_DEBUG=true`
- 2. **Verify environment variables** are set correctly
- 3. **Test with minimal example** from our documentation
- 4. **Contact support** with debug logs and error details
-
- For detailed troubleshooting guides, visit [docs.revenium.io](https://docs.revenium.io)
-
- ## 🤖 Supported Models
-
- ### OpenAI Models
-
- | Model Family | Models | APIs Supported |
- | ----------------- | ---------------------------------------------------------------------------- | --------------------------- |
- | **GPT-4o** | `gpt-4o`, `gpt-4o-2024-11-20`, `gpt-4o-2024-08-06`, `gpt-4o-2024-05-13` | Chat Completions, Responses |
- | **GPT-4o Mini** | `gpt-4o-mini`, `gpt-4o-mini-2024-07-18` | Chat Completions, Responses |
- | **GPT-4 Turbo** | `gpt-4-turbo`, `gpt-4-turbo-2024-04-09`, `gpt-4-turbo-preview` | Chat Completions |
- | **GPT-4** | `gpt-4`, `gpt-4-0613`, `gpt-4-0314` | Chat Completions |
- | **GPT-3.5 Turbo** | `gpt-3.5-turbo`, `gpt-3.5-turbo-0125`, `gpt-3.5-turbo-1106` | Chat Completions |
- | **GPT-5** | `gpt-5` (when available) | Responses API |
- | **Embeddings** | `text-embedding-3-large`, `text-embedding-3-small`, `text-embedding-ada-002` | Embeddings |
-
- ### Azure OpenAI Models
-
- All OpenAI models are supported through Azure OpenAI with automatic deployment name resolution:
-
- | Azure Deployment | Resolved Model | API Support |
- | ------------------------ | ------------------------ | --------------------------- |
- | `gpt-4o-2024-11-20` | `gpt-4o` | Chat Completions, Responses |
- | `gpt4o-prod` | `gpt-4o` | Chat Completions, Responses |
- | `o4-mini` | `gpt-4o-mini` | Chat Completions, Responses |
- | `gpt-35-turbo-dev` | `gpt-3.5-turbo` | Chat Completions |
- | `text-embedding-3-large` | `text-embedding-3-large` | Embeddings |
- | `embedding-3-large` | `text-embedding-3-large` | Embeddings |
-
- **Note**: The middleware automatically maps Azure deployment names to standard model names for accurate pricing and analytics.
-
- ### API Support Matrix
-
- | Feature | Chat Completions API | Responses API | Embeddings API |
- | --------------------- | -------------------- | ------------- | -------------- |
- | **Basic Requests** | ✅ | ✅ | ✅ |
- | **Streaming** | ✅ | ✅ | ❌ |
- | **Metadata Tracking** | ✅ | ✅ | ✅ |
- | **Azure OpenAI** | ✅ | ✅ | ✅ |
- | **Cost Calculation** | ✅ | ✅ | ✅ |
- | **Token Counting** | ✅ | ✅ | ✅ |
-
- ## Requirements
-
- - Node.js 16+
- - OpenAI package v4.0+
- - TypeScript 5.0+ (for TypeScript projects)
-
- ## Documentation
-
- For detailed documentation, visit [docs.revenium.io](https://docs.revenium.io)
-
- ## Contributing
-
- See [CONTRIBUTING.md](https://github.com/revenium/revenium-middleware-openai-node/blob/main/CONTRIBUTING.md)
-
- ## Code of Conduct
-
- See [CODE_OF_CONDUCT.md](https://github.com/revenium/revenium-middleware-openai-node/blob/main/CODE_OF_CONDUCT.md)
-
- ## Security
-
- See [SECURITY.md](https://github.com/revenium/revenium-middleware-openai-node/blob/main/SECURITY.md)
-
- ## License
-
- This project is licensed under the MIT License - see the [LICENSE](https://github.com/revenium/revenium-middleware-openai-node/blob/main/LICENSE) file for details.
-
- ## Acknowledgments
-
- - Built by the Revenium team
+ # Revenium OpenAI Middleware for Node.js
+
+ [![npm version](https://img.shields.io/npm/v/@revenium/openai.svg)](https://www.npmjs.com/package/@revenium/openai)
+ [![Node.js](https://img.shields.io/badge/Node.js-16%2B-green)](https://nodejs.org/)
+ [![Documentation](https://img.shields.io/badge/docs-revenium.io-blue)](https://docs.revenium.io)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+
+ **Transparent TypeScript middleware for automatic Revenium usage tracking with OpenAI**
+
+ A professional-grade Node.js middleware that seamlessly integrates with OpenAI and Azure OpenAI to provide automatic usage tracking, billing analytics, and comprehensive metadata collection. It features native TypeScript support with zero type casting required and supports both the traditional Chat Completions API and the new Responses API.
+
+ ## Features
+
+ - **Seamless Integration** - Native TypeScript support, no type casting required
+ - **Optional Metadata** - Track users, organizations, and custom metadata (all fields optional)
+ - **Dual API Support** - Chat Completions API + new Responses API (OpenAI SDK 5.8+)
+ - **Azure OpenAI Support** - Full Azure OpenAI integration with automatic detection
+ - **Type Safety** - Complete TypeScript support with IntelliSense
+ - **Streaming Support** - Handles regular and streaming requests seamlessly
+ - **Fire-and-Forget** - Never blocks your application flow
+ - **Zero Configuration** - Auto-initialization from environment variables
+
+ ## Getting Started
+
+ ### 1. Create Project Directory
+
+ ```bash
+ # Create project directory and navigate to it
+ mkdir my-openai-project
+ cd my-openai-project
+
+ # Initialize npm project
+ npm init -y
+
+ # Install packages
+ npm install @revenium/openai openai dotenv tsx
+ npm install --save-dev typescript @types/node
+ ```
+
+ ### 2. Configure Environment Variables
+
+ Create a `.env` file:
+
+ **Note:** Replace the placeholders below with your own API keys.
+
+ ```env
+ REVENIUM_METERING_BASE_URL=https://api.revenium.io
+ REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
+ OPENAI_API_KEY=sk-your_openai_api_key_here
+ ```
+
+ ### 3. Run Your First Example
+
+ Run the [getting started example](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/getting_started.ts):
+
+ ```bash
+ npx tsx node_modules/@revenium/openai/examples/getting_started.ts
+ ```
+
+ Or with debug logging:
+
+ ```bash
+ # Linux/macOS
+ REVENIUM_DEBUG=true npx tsx node_modules/@revenium/openai/examples/getting_started.ts
+
+ # Windows (PowerShell)
+ $env:REVENIUM_DEBUG="true"; npx tsx node_modules/@revenium/openai/examples/getting_started.ts
+ ```
+
+ **For more examples and usage patterns, see [examples/README.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/README.md).**
+
+ ---
+
+ ## Requirements
+
+ - Node.js 16+
+ - OpenAI package v4.0+
+ - TypeScript 5.0+ (for TypeScript projects)
+
+ ---
+
+ ## What Gets Tracked
+
+ The middleware automatically captures comprehensive usage data:
+
+ ### **Usage Metrics**
+
+ - **Token Counts** - Input tokens, output tokens, total tokens
+ - **Model Information** - Model name, provider (OpenAI/Azure), API version
+ - **Request Timing** - Request duration, response time
+ - **Cost Calculation** - Estimated costs based on current pricing
+
+ ### **Business Context (Optional)**
+
+ - **User Tracking** - Subscriber ID, email, credentials
+ - **Organization Data** - Organization ID, subscription ID, product ID
+ - **Task Classification** - Task type, agent identifier, trace ID
+ - **Quality Metrics** - Response quality scores, custom metadata
+
+ ### **Technical Details**
+
+ - **API Endpoints** - Chat completions, embeddings, responses API
+ - **Request Types** - Streaming vs non-streaming
+ - **Error Tracking** - Failed requests, error types, retry attempts
+ - **Environment Info** - Development vs production usage
+
+ ## OpenAI Responses API Support
+
+ This middleware includes **full support** for OpenAI's new Responses API, which is designed to replace the traditional Chat Completions API with enhanced capabilities for agent-like applications.
+
+ ### What is the Responses API?
+
+ The Responses API is OpenAI's new stateful API that:
+
+ - Uses `input` instead of the `messages` parameter for simplified interaction
+ - Provides a unified experience combining chat completions and assistants capabilities
+ - Supports advanced features like background tasks, function calling, and code interpreter
+ - Offers better streaming and real-time response generation
+ - Works with GPT-5 and other advanced models
+
+ ### API Comparison
+
+ **Traditional Chat Completions:**
+
+ ```javascript
+ const response = await openai.chat.completions.create({
+   model: 'gpt-4o',
+   messages: [{ role: 'user', content: 'Hello' }],
+ });
+ ```
+
+ **New Responses API:**
+
+ ```javascript
+ const response = await openai.responses.create({
+   model: 'gpt-5',
+   input: 'Hello', // Simplified input parameter
+ });
+ ```
+
+ ### Key Differences
+
+ | Feature | Chat Completions | Responses API |
+ | ---------------------- | ---------------------------- | ----------------------------------- |
+ | **Input Format** | `messages: [...]` | `input: "string"` or `input: [...]` |
+ | **Models** | GPT-4, GPT-4o, etc. | GPT-5, GPT-4o, etc. |
+ | **Response Structure** | `choices[0].message.content` | `output_text` |
+ | **Stateful** | No | Yes (with `store: true`) |
+ | **Advanced Features** | Limited | Built-in tools, reasoning, etc. |
+ | **Temperature** | Supported | Not supported with GPT-5 |
+
+ ### Requirements & Installation
+
+ **OpenAI SDK Version:**
+
+ - **Minimum:** `5.8.0` (when the Responses API was officially released)
+ - **Recommended:** `5.8.2` or later (tested and verified)
+ - **Current:** `6.2.0` (latest available)
+
+ **Installation:**
+
+ ```bash
+ # Install latest version with Responses API support
+ npm install openai@^5.8.0
+
+ # Or install specific tested version
+ npm install openai@5.8.2
+ ```
+
+ ### Current Status
+
+ **The Responses API is officially available in OpenAI SDK 5.8+**
+
+ **Official Release:**
+
+ - Released by OpenAI in SDK version 5.8.0
+ - Fully documented in official OpenAI documentation
+ - Production-ready with GPT-5 and other supported models
+ - Complete middleware support with Revenium integration
+
+ **Middleware Features:**
+
+ - Full Responses API support (streaming & non-streaming)
+ - Seamless metadata tracking identical to Chat Completions
+ - Type-safe TypeScript integration
+ - Complete token tracking including reasoning tokens
+ - Azure OpenAI compatibility
+
+ **References:**
+
+ - [OpenAI Responses API Documentation](https://platform.openai.com/docs/guides/migrate-to-responses)
+ - [Azure OpenAI Responses API Documentation](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/responses)
+
+ ### Working Examples
+
+ Complete working examples are included with this package. Each example is fully documented and ready to run.
+
+ #### Available Examples
+
+ **OpenAI Chat Completions API:**
+ - `openai-basic.ts` - Basic chat + embeddings with optional metadata
+ - `openai-streaming.ts` - Streaming responses + batch embeddings
+
+ **OpenAI Responses API (SDK 5.8+):**
+ - `openai-responses-basic.ts` - New Responses API with string input
+ - `openai-responses-streaming.ts` - Streaming with Responses API
+
+ **Azure OpenAI:**
+ - `azure-basic.ts` - Azure chat completions + embeddings
+ - `azure-streaming.ts` - Azure streaming responses
+ - `azure-responses-basic.ts` - Azure Responses API
+ - `azure-responses-streaming.ts` - Azure streaming Responses API
+
+ **Detailed Guide:**
+ - `examples/README.md` - Complete setup guide with TypeScript and JavaScript patterns
+
+ #### Running Examples
+
+ **Installed via npm?**
+ ```bash
+ # Try these in order:
+ npx tsx node_modules/@revenium/openai/examples/openai-basic.ts
+ npx tsx node_modules/@revenium/openai/examples/openai-streaming.ts
+ npx tsx node_modules/@revenium/openai/examples/openai-responses-basic.ts
+
+ # View all examples:
+ ls node_modules/@revenium/openai/examples/
+ ```
+
+ **Cloned from GitHub?**
+ ```bash
+ npm install
+ npm run example:openai-basic
+ npm run example:openai-streaming
+ npm run example:openai-responses-basic
+
+ # See all example scripts:
+ npm run
+ ```
+
+ **Browse online:** [`examples/` directory on GitHub](https://github.com/revenium/revenium-middleware-openai-node/tree/HEAD/examples)
+
+ ### Temporarily Disabling Tracking
+
+ If you need to disable Revenium tracking temporarily, you can unpatch the OpenAI client:
+
+ ```javascript
+ import { unpatchOpenAI, patchOpenAI } from '@revenium/openai';
+
+ // Disable tracking
+ unpatchOpenAI();
+
+ // Your OpenAI calls now bypass Revenium tracking
+ await openai.chat.completions.create({...});
+
+ // Re-enable tracking
+ patchOpenAI();
+ ```
+
+ **Common use cases:**
+
+ - **Debugging**: Isolate whether issues are caused by the middleware
+ - **Testing**: Compare behavior with/without tracking
+ - **Conditional tracking**: Enable/disable based on environment
+ - **Troubleshooting**: Temporary bypass during incident response
+
+ **Note**: This affects all OpenAI instances globally since we patch the prototype methods.
+
+ ## Azure OpenAI Integration
+
+ **Azure OpenAI support:** The middleware automatically detects Azure OpenAI clients and provides accurate usage tracking and cost calculation.
+
+ ### Quick Start with Azure OpenAI
+
+ **Use case:** Automatic Azure OpenAI client detection with deployment name mapping and accurate usage tracking.
+
+ See complete Azure examples:
+ - `examples/azure-basic.ts` - Azure chat completions with environment variable setup
+ - `examples/azure-streaming.ts` - Azure streaming responses
+ - `examples/azure-responses-basic.ts` - Azure Responses API integration
+
+ **Environment variables needed:**
+ ```bash
+ # Azure OpenAI configuration
+ AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
+ AZURE_OPENAI_API_KEY="your-azure-api-key"
+ AZURE_OPENAI_DEPLOYMENT="gpt-4o"
+ AZURE_OPENAI_API_VERSION="2024-12-01-preview"
+
+ # Revenium configuration
+ REVENIUM_METERING_API_KEY="hak_your_revenium_api_key"
+ REVENIUM_METERING_BASE_URL="https://api.revenium.io"
+ ```
+
+ ### Azure Features
+
+ - **Automatic Detection**: Detects Azure OpenAI clients automatically
+ - **Model Name Resolution**: Maps Azure deployment names to standard model names for accurate pricing
+ - **Provider Metadata**: Correctly tags requests with `provider: "Azure"` and `modelSource: "OPENAI"`
+ - **Deployment Support**: Works with any Azure deployment name (simple or complex)
+ - **Endpoint Flexibility**: Supports all Azure OpenAI endpoint formats
+ - **Zero Code Changes**: Existing Azure OpenAI code works without modification
+
+ ### Azure Environment Variables
+
+ | Variable | Required | Description | Example |
+ | -------------------------- | -------- | ---------------------------------------------- | ------------------------------------ |
+ | `AZURE_OPENAI_ENDPOINT` | Yes | Your Azure OpenAI endpoint URL | `https://acme.openai.azure.com/` |
+ | `AZURE_OPENAI_API_KEY` | Yes | Your Azure OpenAI API key | `abc123...` |
+ | `AZURE_OPENAI_DEPLOYMENT` | No | Default deployment name | `gpt-4o` or `text-embedding-3-large` |
+ | `AZURE_OPENAI_API_VERSION` | No | API version (defaults to `2024-12-01-preview`) | `2024-12-01-preview` |
+
+ ### Azure Model Name Resolution
+
+ The middleware automatically maps Azure deployment names to standard model names for accurate pricing:
+
+ ```typescript
+ // Azure deployment names → Standard model names for pricing
+ "gpt-4o-2024-11-20" → "gpt-4o"
+ "gpt4o-prod" → "gpt-4o"
+ "o4-mini" → "gpt-4o-mini"
+ "gpt-35-turbo-dev" → "gpt-3.5-turbo"
+ "text-embedding-3-large" → "text-embedding-3-large" // Direct match
+ "embedding-3-large" → "text-embedding-3-large"
+ ```
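The mapping above can be pictured as a normalize-then-match lookup. The sketch below is purely illustrative: `resolveDeploymentModel`, the `KNOWN_MODELS` list, and the normalization rules are assumptions for explanation, not the middleware's actual implementation:

```typescript
// Illustrative sketch only; the real middleware's resolution logic may differ.
const KNOWN_MODELS = [
  'gpt-4o-mini',
  'gpt-4o',
  'gpt-3.5-turbo',
  'text-embedding-3-large',
  'text-embedding-3-small',
];

function resolveDeploymentModel(deployment: string): string {
  // Normalize common Azure naming quirks: "gpt4o" → "gpt-4o", "gpt-35" → "gpt-3.5",
  // and a bare "embedding-" prefix → "text-embedding-".
  const normalized = deployment
    .toLowerCase()
    .replace(/gpt4o/, 'gpt-4o')
    .replace(/gpt-35/, 'gpt-3.5')
    .replace(/^embedding-/, 'text-embedding-');

  // Longest known model name contained in the deployment wins, so a
  // hypothetical "gpt-4o-mini-dev" matches "gpt-4o-mini" rather than "gpt-4o".
  const match = KNOWN_MODELS.filter(m => normalized.includes(m)).sort(
    (a, b) => b.length - a.length
  )[0];
  return match ?? deployment; // fall back to the raw deployment name
}
```

With this rule set, `"gpt4o-prod"` resolves to `"gpt-4o"` and `"gpt-35-turbo-dev"` to `"gpt-3.5-turbo"`, matching the table above.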
+
+ ## Advanced Usage
+
+ ### Initialization Options
+
+ The middleware supports three initialization patterns:
+
+ **Automatic (Recommended)** - Import and patch OpenAI instance:
+
+ ```typescript
+ import { patchOpenAIInstance } from '@revenium/openai';
+ import OpenAI from 'openai';
+
+ const openai = patchOpenAIInstance(new OpenAI());
+ // Tracking works automatically if env vars are set
+ ```
+
+ **Explicit** - Call `initializeReveniumFromEnv()` for error handling control:
+
+ ```typescript
+ import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
+ import OpenAI from 'openai';
+
+ const result = initializeReveniumFromEnv();
+ if (!result.success) {
+   console.error('Failed to initialize:', result.message);
+   process.exit(1);
+ }
+
+ const openai = patchOpenAIInstance(new OpenAI());
+ ```
+
+ **Manual** - Use `configure()` to set all options programmatically (see Manual Configuration below).
+
+ For detailed examples of all initialization patterns, see [`examples/`](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/README.md).
+
+ ### Streaming Responses
+
+ Streaming is fully supported with real-time token tracking and time-to-first-token metrics. The middleware automatically tracks streaming responses without any additional configuration.
+
+ See [`examples/openai-streaming.ts`](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/openai-streaming.ts) and [`examples/azure-streaming.ts`](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/azure-streaming.ts) for working streaming examples.
+
+ ### Custom Metadata Tracking
+
+ Add business context to track usage by organization, user, task type, or custom fields. Pass a `usageMetadata` object with any of these optional fields:
+
+ | Field | Description | Use Case |
+ |-------|-------------|----------|
+ | `traceId` | Unique identifier for session or conversation tracking | Link multiple API calls together for debugging, user session analytics, or distributed tracing across services |
+ | `taskType` | Type of AI task being performed | Categorize usage by workload (e.g., "chat", "code-generation", "doc-summary") for cost analysis and optimization |
+ | `subscriber.id` | Unique user identifier | Track individual user consumption for billing, rate limiting, or user analytics |
+ | `subscriber.email` | User email address | Identify users for support, compliance, or usage reports |
+ | `subscriber.credential.name` | Authentication credential name | Track which API key or service account made the request |
+ | `subscriber.credential.value` | Authentication credential value | Associate usage with specific credentials for security auditing |
+ | `organizationId` | Organization or company identifier | Multi-tenant cost allocation, usage quotas per organization |
+ | `subscriptionId` | Subscription plan identifier | Track usage against subscription limits, identify plan upgrade opportunities |
+ | `productId` | Your product or feature identifier | Attribute AI costs to specific features in your application (e.g., "chatbot", "email-assistant") |
+ | `agent` | AI agent or bot identifier | Distinguish between multiple AI agents or automation workflows in your system |
+ | `responseQualityScore` | Custom quality rating (0.0-1.0) | Track user satisfaction or automated quality metrics for model performance analysis |
+
+ **Resources:**
+ - [API Reference](https://revenium.readme.io/reference/meter_ai_completion) - Complete metadata field documentation
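The fields above can be combined freely. A minimal sketch of assembling metadata before a call; the `UsageMetadata` shape below mirrors the interface documented in this README (trimmed to a few fields), and the `withTrace` helper and its values are hypothetical, not part of the package:

```typescript
// Trimmed mirror of the UsageMetadata interface documented in this README.
interface UsageMetadata {
  traceId?: string;
  taskType?: string;
  subscriber?: { id?: string; email?: string };
  organizationId?: string;
  productId?: string;
  responseQualityScore?: number;
}

// Hypothetical helper: stamp a per-session traceId onto partial metadata,
// so every call in one conversation shares the same trace.
function withTrace(meta: UsageMetadata, sessionId: string): UsageMetadata {
  return { ...meta, traceId: `session-${sessionId}` };
}

const meta = withTrace(
  { taskType: 'chat', subscriber: { id: 'user-123' }, organizationId: 'acme' },
  'abc'
);
// `meta` can now be passed as `usageMetadata` on any patched call.
```

Because every field is optional, starting with just `organizationId` and `taskType` and adding more fields later requires no other changes.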
+
+ ### OpenAI Responses API
+ **Use case:** Using OpenAI's new Responses API with string inputs and simplified interface (SDK 5.8+).
+
+ See working examples:
+ - `examples/openai-responses-basic.ts` - Basic Responses API usage
+ - `examples/openai-responses-streaming.ts` - Streaming with Responses API
+
+ ### Azure OpenAI Integration
+ **Use case:** Automatic Azure OpenAI detection with deployment name resolution and accurate pricing.
+
+ See working examples:
+ - `examples/azure-basic.ts` - Azure chat completions and embeddings
+ - `examples/azure-responses-basic.ts` - Azure Responses API integration
+
+ ### Embeddings with Metadata
+ **Use case:** Track embeddings usage for search engines, RAG systems, and document processing.
+
+ Embeddings examples are included in:
+ - `examples/openai-basic.ts` - Text embeddings with metadata
+ - `examples/openai-streaming.ts` - Batch embeddings processing
+
+ ### Manual Configuration
+
+ For advanced use cases, configure the middleware manually:
+
+ ```typescript
+ import { configure } from '@revenium/openai';
+
+ configure({
+   reveniumApiKey: 'hak_your_api_key',
+   reveniumBaseUrl: 'https://api.revenium.io',
+   apiTimeout: 5000,
+   failSilent: true,
+   maxRetries: 3,
+ });
+ ```
+
426
## Configuration Options

### Environment Variables

| Variable                     | Required | Default                   | Description                                   |
| ---------------------------- | -------- | ------------------------- | --------------------------------------------- |
| `REVENIUM_METERING_API_KEY`  | true     | -                         | Your Revenium API key (starts with `hak_`)    |
| `OPENAI_API_KEY`             | true     | -                         | Your OpenAI API key (starts with `sk-`)       |
| `REVENIUM_METERING_BASE_URL` | false    | `https://api.revenium.io` | Revenium metering API base URL                |
| `REVENIUM_DEBUG`             | false    | `false`                   | Enable debug logging (`true`/`false`)         |
| `AZURE_OPENAI_ENDPOINT`      | false    | -                         | Azure OpenAI endpoint URL (for Azure testing) |
| `AZURE_OPENAI_API_KEY`       | false    | -                         | Azure OpenAI API key (for Azure testing)      |
| `AZURE_OPENAI_DEPLOYMENT`    | false    | -                         | Azure OpenAI deployment name (for Azure)      |
| `AZURE_OPENAI_API_VERSION`   | false    | `2024-12-01-preview`      | Azure OpenAI API version (for Azure)          |

**Important Note about `REVENIUM_METERING_BASE_URL`:**

- This variable is **optional** and defaults to the production URL (`https://api.revenium.io`)
- If it is not set explicitly, the middleware falls back to the production endpoint, but you may see console warnings or errors if it cannot determine the correct environment
- **Best practice:** Always set this variable explicitly to match your environment:

```bash
# Default production URL (recommended)
REVENIUM_METERING_BASE_URL=https://api.revenium.io
```

- **Remember:** Your `REVENIUM_METERING_API_KEY` must match your base URL environment

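The resolution rules above can be sketched as a small loader. This is illustrative only: the helper name `loadReveniumEnv` and the config shape are assumptions, not part of the `@revenium/openai` API.

```typescript
// Sketch of the environment-variable resolution described above.
// Illustrative only: this helper is not exported by @revenium/openai.
interface ReveniumEnvConfig {
  meteringApiKey: string;
  meteringBaseUrl: string;
  debug: boolean;
}

function loadReveniumEnv(env: Record<string, string | undefined>): ReveniumEnvConfig {
  const meteringApiKey = env.REVENIUM_METERING_API_KEY;
  if (!meteringApiKey) {
    // Required: without a key, nothing can be metered.
    throw new Error('REVENIUM_METERING_API_KEY is required');
  }
  return {
    meteringApiKey,
    // Optional: falls back to the production endpoint when unset.
    meteringBaseUrl: env.REVENIUM_METERING_BASE_URL ?? 'https://api.revenium.io',
    // Debug logging is opt-in via the literal string "true".
    debug: env.REVENIUM_DEBUG === 'true',
  };
}
```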
### Usage Metadata Options

All metadata fields are optional and help provide better analytics:

```typescript
interface UsageMetadata {
  traceId?: string; // Session or conversation ID
  taskType?: string; // Type of AI task (e.g., "chat", "summary")
  subscriber?: {
    // User information (nested structure)
    id?: string; // User ID from your system
    email?: string; // User's email address
    credential?: {
      // User credentials
      name?: string; // Credential name
      value?: string; // Credential value
    };
  };
  organizationId?: string; // Organization/company ID
  subscriptionId?: string; // Billing plan ID
  productId?: string; // Your product/feature ID
  agent?: string; // AI agent identifier
  responseQualityScore?: number; // Quality score (0-1)
}
```

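For example, the metadata attached to a single chat request might look like this (every value below is a placeholder, not a real identifier):

```typescript
// Placeholder metadata for one request; all fields are optional and
// the values are illustrative only.
const usageMetadata = {
  traceId: 'conv-001', // groups the turns of one conversation
  taskType: 'chat',
  subscriber: {
    id: 'user-123',
    email: 'jane@example.com',
  },
  organizationId: 'org-acme',
  productId: 'support-assistant',
  responseQualityScore: 0.92, // must fall within 0.0-1.0
};
```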
## Included Examples

The package includes 8 comprehensive example files in your installation:

**OpenAI Examples:**

- **openai-basic.ts**: Basic chat completions with metadata tracking
- **openai-streaming.ts**: Streaming responses with real-time output
- **openai-responses-basic.ts**: New Responses API usage (OpenAI SDK 5.8+)
- **openai-responses-streaming.ts**: Streaming with Responses API

**Azure OpenAI Examples:**

- **azure-basic.ts**: Azure OpenAI chat completions
- **azure-streaming.ts**: Azure streaming responses
- **azure-responses-basic.ts**: Azure Responses API
- **azure-responses-streaming.ts**: Azure streaming Responses API

**For npm users:** Examples are installed in `node_modules/@revenium/openai/examples/`

**For GitHub users:** Examples are in the repository's `examples/` directory

For detailed setup instructions and usage patterns, see [examples/README.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/README.md).

## How It Works

1. **Automatic Patching**: When imported, the middleware patches OpenAI's methods:
   - `chat.completions.create` (Chat Completions API)
   - `responses.create` (Responses API - when available)
   - `embeddings.create` (Embeddings API)
2. **Request Interception**: All OpenAI requests are intercepted to extract metadata
3. **Usage Extraction**: Token counts, model info, and timing data are captured
4. **Async Tracking**: Usage data is sent to Revenium in the background (fire-and-forget)
5. **Transparent Response**: Original OpenAI responses are returned unchanged

The middleware never blocks your application - if Revenium tracking fails, your OpenAI requests continue normally.

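The fire-and-forget behavior in steps 4-5 can be sketched as a small wrapper. This is an illustration of the pattern, not the middleware's actual implementation; the helper name and signatures are assumptions.

```typescript
// Sketch of fire-and-forget tracking: the original call's result is
// returned unchanged, and a tracking failure is swallowed.
// Illustrative only -- not the middleware's implementation.
async function withTracking<T>(
  call: () => Promise<T>,
  extractUsage: (result: T) => Record<string, unknown>,
  track: (usage: Record<string, unknown>) => Promise<void>,
): Promise<T> {
  const result = await call(); // the original OpenAI request
  // Not awaited: tracking runs in the background and never blocks.
  track(extractUsage(result)).catch(() => {
    // Fail silently; real code could log this when REVENIUM_DEBUG=true.
  });
  return result; // response returned unchanged
}
```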
## Troubleshooting

### Common Issues

#### 1. **No tracking data in dashboard**

**Symptoms**: OpenAI calls work but no data appears in Revenium dashboard

**Solution**: Enable debug logging to check middleware status:

```bash
export REVENIUM_DEBUG=true
```

**Expected output for successful tracking**:

```bash
[Revenium Debug] OpenAI chat.completions.create intercepted
[Revenium Debug] Revenium tracking successful

# For Responses API:
[Revenium Debug] OpenAI responses.create intercepted
[Revenium Debug] Revenium tracking successful
```

#### 2. **Environment mismatch errors**

**Symptoms**: Authentication errors or 401/403 responses

**Solution**: Ensure your API key matches your base URL environment:

```bash
# Correct - key and URL belong to the same environment
REVENIUM_METERING_API_KEY=hak_your_api_key_here
REVENIUM_METERING_BASE_URL=https://api.revenium.io

# Wrong - key was issued for a different environment than the base URL
REVENIUM_METERING_API_KEY=hak_wrong_environment_key
REVENIUM_METERING_BASE_URL=https://api.revenium.io
```

#### 3. **TypeScript type errors**

**Symptoms**: TypeScript errors about `usageMetadata` property

**Solution**: Ensure you're importing the middleware before OpenAI:

```typescript
// Correct order
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
import OpenAI from 'openai';

// Wrong order
import OpenAI from 'openai';
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
```

#### 4. **Azure OpenAI not working**

**Symptoms**: Azure OpenAI calls not being tracked

**Solution**: Ensure you're using `patchOpenAIInstance()` with your Azure client:

```typescript
import { AzureOpenAI } from 'openai';
import { patchOpenAIInstance } from '@revenium/openai';

// Correct
const azure = patchOpenAIInstance(new AzureOpenAI({...}));

// Wrong - not patched
const azure = new AzureOpenAI({...});
```

#### 5. **Responses API not available**

**Symptoms**: `openai.responses.create` is undefined

**Solution**: Upgrade to OpenAI SDK 5.8+ for Responses API support:

```bash
npm install openai@^5.8.0
```

### Debug Mode

Enable comprehensive debug logging:

```bash
export REVENIUM_DEBUG=true
```

This will show:

- Middleware initialization status
- Request interception confirmations
- Metadata extraction details
- Tracking success/failure messages
- Error details and stack traces

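The gating this flag implies can be sketched in a few lines (the `debugLog` helper is illustrative and not exported by the package):

```typescript
// Illustrative debug-logging gate: messages are emitted only when
// REVENIUM_DEBUG is the literal string "true". Not part of the package API.
function debugLog(message: string, env: Record<string, string | undefined>): boolean {
  if (env.REVENIUM_DEBUG !== 'true') {
    return false; // debug disabled: stay silent
  }
  console.log(`[Revenium Debug] ${message}`);
  return true;
}
```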
### Getting Help

If you're still experiencing issues:

1. **Check the logs** with `REVENIUM_DEBUG=true`
2. **Verify environment variables** are set correctly
3. **Test with a minimal example** from our documentation
4. **Contact support** with debug logs and error details

For detailed troubleshooting guides, visit [docs.revenium.io](https://docs.revenium.io)

## Supported Models

This middleware works with all OpenAI chat completion and embedding models, including those available through Azure OpenAI.

**For the current list of supported models, pricing, and capabilities:**

- [Revenium AI Models API](https://revenium.readme.io/v2.0.0/reference/get_ai_model)

Models are continuously updated as new versions are released by OpenAI and Azure OpenAI. The middleware automatically handles model detection and pricing for accurate usage tracking.

### API Support Matrix

| Feature               | Chat Completions API | Responses API | Embeddings API |
| --------------------- | -------------------- | ------------- | -------------- |
| **Basic Requests**    | Yes                  | Yes           | Yes            |
| **Streaming**         | Yes                  | Yes           | No             |
| **Metadata Tracking** | Yes                  | Yes           | Yes            |
| **Azure OpenAI**      | Yes                  | Yes           | Yes            |
| **Cost Calculation**  | Yes                  | Yes           | Yes            |
| **Token Counting**    | Yes                  | Yes           | Yes            |

## Documentation

For detailed documentation, visit [docs.revenium.io](https://docs.revenium.io)

## Contributing

See [CONTRIBUTING.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/CONTRIBUTING.md)

## Code of Conduct

See [CODE_OF_CONDUCT.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/CODE_OF_CONDUCT.md)

## Security

See [SECURITY.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/SECURITY.md)

## License

This project is licensed under the MIT License - see the [LICENSE](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/LICENSE) file for details.

## Support

For issues, feature requests, or contributions:

- **GitHub Repository**: [revenium/revenium-middleware-openai-node](https://github.com/revenium/revenium-middleware-openai-node)
- **Issues**: [Report bugs or request features](https://github.com/revenium/revenium-middleware-openai-node/issues)
- **Documentation**: [docs.revenium.io](https://docs.revenium.io)
- **Contact**: Reach out to the Revenium team for additional support

## Development

For development and testing instructions, see [DEVELOPMENT.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/DEVELOPMENT.md).

---

**Built by Revenium**