@revenium/openai 1.0.10 → 1.0.11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,1152 +1,1231 @@
- # 🚀 Revenium OpenAI Middleware for Node.js
-
- [![npm version](https://img.shields.io/npm/v/@revenium/openai.svg)](https://www.npmjs.com/package/@revenium/openai)
- [![Node.js](https://img.shields.io/badge/Node.js-16%2B-green)](https://nodejs.org/)
- [![Documentation](https://img.shields.io/badge/docs-revenium.io-blue)](https://docs.revenium.io)
- [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
-
- > **📦 Package Renamed**: This package has been renamed from `revenium-middleware-openai-node` to `@revenium/openai` for better organization and simpler naming. Please update your dependencies accordingly.
-
- **Transparent TypeScript middleware for automatic Revenium usage tracking with OpenAI**
-
- A professional-grade Node.js middleware that seamlessly integrates with OpenAI and Azure OpenAI to provide automatic usage tracking, billing analytics, and comprehensive metadata collection. Features native TypeScript support with zero type casting required, and supports both the traditional Chat Completions API and the new Responses API.
-
- ## Features
-
- - 🔄 **Seamless Integration** - Native TypeScript support, no type casting required
- - 📊 **Optional Metadata** - Track users, organizations, and custom metadata (all fields optional)
- - 🎯 **Dual API Support** - Chat Completions API + new Responses API (OpenAI SDK 5.8+)
- - ☁️ **Azure OpenAI Support** - Full Azure OpenAI integration with automatic detection
- - 🛡️ **Type Safety** - Complete TypeScript support with IntelliSense
- - 🌊 **Streaming Support** - Handles regular and streaming requests seamlessly
- - ⚡ **Fire-and-Forget** - Never blocks your application flow
- - 🔧 **Zero Configuration** - Auto-initialization from environment variables
-
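The fire-and-forget bullet above means metering calls are dispatched without being awaited, so a slow or failing metering endpoint never delays the OpenAI response. A minimal sketch of the pattern, where `sendMeteringEvent` and its payload shape are illustrative assumptions rather than the middleware's actual internals:

```typescript
// Illustrative fire-and-forget dispatch: the caller never awaits the metering call.
type MeteringEvent = { model: string; totalTokens: number };

const sent: MeteringEvent[] = [];

// Hypothetical async sender; a real implementation would POST to the metering API.
async function sendMeteringEvent(event: MeteringEvent): Promise<void> {
  sent.push(event);
}

function trackUsage(event: MeteringEvent): void {
  // Kick off the request and swallow failures so the app flow is never blocked.
  void sendMeteringEvent(event).catch(() => {
    /* fail silently; optionally log when debug is enabled */
  });
}

trackUsage({ model: 'gpt-4o-mini', totalTokens: 123 });
console.log('returned immediately');
```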
- ## 🚀 Getting Started
-
- Choose your preferred approach to get started quickly:
-
- ### Option 1: Create Project from Scratch
-
- Perfect for new projects. We'll guide you step-by-step from `mkdir` to running tests.
- [👉 Go to Step-by-Step Guide](#option-1-create-project-from-scratch)
-
- ### Option 2: Clone Our Repository
-
- Clone and run the repository with working examples.
- [👉 Go to Repository Guide](#option-2-clone-our-repository)
-
- ### Option 3: Add to Existing Project
-
- Already have a project? Just install and replace imports.
- [👉 Go to Integration Guide](#option-3-existing-project-integration)
-
- ---
-
- ## Option 1: Create Project from Scratch
-
- ### Step 1: Create Project Directory
-
- ```bash
- # Create and navigate to your project
- mkdir my-openai-project
- cd my-openai-project
-
- # Initialize npm project
- npm init -y
- ```
-
- ### Step 2: Install Dependencies
-
- ```bash
- # Install the middleware and OpenAI SDK
- npm install @revenium/openai openai@^5.8.0 dotenv
-
- # For TypeScript projects (optional)
- npm install -D typescript tsx @types/node
- ```
-
- ### Step 3: Set Up Environment Variables
-
- Create a `.env` file in your project root:
-
- ```bash
- # Create .env file
- echo. > .env # On Windows (CMD)
- touch .env # On Mac/Linux
- # OR PowerShell
- New-Item -Path .env -ItemType File
- ```
-
- Copy and paste the following into `.env`:
-
- ```env
- # Revenium OpenAI Middleware Configuration
- # Copy this file to .env and fill in your actual values
-
- # Required: Your Revenium API key (starts with hak_)
- REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
- REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
-
- # Required: Your OpenAI API key (starts with sk-)
- OPENAI_API_KEY=sk-your_openai_api_key_here
-
- # Optional: Your Azure OpenAI configuration (for Azure testing)
- AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
- AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
- AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
- AZURE_OPENAI_API_VERSION=2024-12-01-preview
-
- # Optional: Enable debug logging
- REVENIUM_DEBUG=false
- ```
-
- **💡 NOTE**: Replace each `your_..._here` placeholder with your actual value.
-
- **⚠️ IMPORTANT - Environment Matching**:
-
- - If using QA environment URL `"https://api.qa.hcapp.io/meter"`, ensure your `REVENIUM_METERING_API_KEY` is from the **QA environment**
- - If using Production environment URL `"https://api.revenium.io/meter"`, ensure your `REVENIUM_METERING_API_KEY` is from the **Production environment**
- - **Mismatched environments will cause authentication failures**
-
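As a pre-flight sanity check, you can verify that the key prefixes match what the configuration above describes (`hak_` for Revenium, `sk-` for OpenAI). The `checkEnv` helper below is illustrative, not part of the middleware's API:

```typescript
// Illustrative pre-flight check for the environment variables described above.
function checkEnv(env: Record<string, string | undefined>): string[] {
  const problems: string[] = [];
  if (!env.REVENIUM_METERING_API_KEY?.startsWith('hak_')) {
    problems.push('REVENIUM_METERING_API_KEY should start with "hak_"');
  }
  if (!env.OPENAI_API_KEY?.startsWith('sk-')) {
    problems.push('OPENAI_API_KEY should start with "sk-"');
  }
  return problems;
}

console.log(checkEnv({ REVENIUM_METERING_API_KEY: 'hak_abc', OPENAI_API_KEY: 'sk-abc' })); // []
```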
- ### Step 4: Create Your First Test
-
- #### TypeScript Test
-
- Create `test-openai.ts`:
-
- ```typescript
- import 'dotenv/config';
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
- import OpenAI from 'openai';
-
- async function testOpenAI() {
-   try {
-     // Initialize Revenium middleware
-     const initResult = initializeReveniumFromEnv();
-     if (!initResult.success) {
-       console.error('❌ Failed to initialize Revenium:', initResult.message);
-       process.exit(1);
-     }
-
-     // Create and patch OpenAI instance
-     const openai = patchOpenAIInstance(new OpenAI());
-
-     const response = await openai.chat.completions.create({
-       model: 'gpt-4o-mini',
-       max_tokens: 100,
-       messages: [{ role: 'user', content: 'What is artificial intelligence?' }],
-       usageMetadata: {
-         subscriber: {
-           id: 'user-456',
-           email: 'user@demo-org.com',
-           credential: {
-             name: 'demo-api-key',
-             value: 'demo-key-123',
-           },
-         },
-         organizationId: 'demo-org-123',
-         productId: 'ai-assistant-v2',
-         taskType: 'educational-query',
-         agent: 'openai-basic-demo',
-         traceId: 'session-' + Date.now(),
-       },
-     });
-
-     const text = response.choices[0]?.message?.content || 'No response';
-     console.log('Response:', text);
-   } catch (error) {
-     console.error('Error:', error);
-   }
- }
-
- testOpenAI();
- ```
-
- #### JavaScript Test
-
- Create `test-openai.js`:
-
- ```javascript
- require('dotenv').config();
- const {
-   initializeReveniumFromEnv,
-   patchOpenAIInstance,
- } = require('@revenium/openai');
- const OpenAI = require('openai');
-
- async function testOpenAI() {
-   try {
-     // Initialize Revenium middleware
-     const initResult = initializeReveniumFromEnv();
-     if (!initResult.success) {
-       console.error('❌ Failed to initialize Revenium:', initResult.message);
-       process.exit(1);
-     }
-
-     // Create and patch OpenAI instance
-     const openai = patchOpenAIInstance(new OpenAI());
-
-     const response = await openai.chat.completions.create({
-       model: 'gpt-4o-mini',
-       max_tokens: 100,
-       messages: [{ role: 'user', content: 'What is artificial intelligence?' }],
-       usageMetadata: {
-         subscriber: {
-           id: 'user-456',
-           email: 'user@demo-org.com',
-         },
-         organizationId: 'demo-org-123',
-         taskType: 'educational-query',
-       },
-     });
-
-     const text = response.choices[0]?.message?.content || 'No response';
-     console.log('Response:', text);
-   } catch (error) {
-     console.error('Error:', error);
-   }
- }
-
- testOpenAI();
- ```
-
- ### Step 5: Add Package Scripts
-
- Update your `package.json`:
-
- ```json
- {
-   "name": "my-openai-project",
-   "version": "1.0.0",
-   "type": "commonjs",
-   "scripts": {
-     "test-ts": "npx tsx test-openai.ts",
-     "test-js": "node test-openai.js"
-   },
-   "dependencies": {
-     "@revenium/openai": "^1.0.11",
-     "openai": "^5.8.0",
-     "dotenv": "^16.5.0"
-   }
- }
- ```
-
- ### Step 6: Run Your Tests
-
- ```bash
- # Test TypeScript version
- npm run test-ts
-
- # Test JavaScript version
- npm run test-js
- ```
-
- ### Step 7: Project Structure
-
- Your project should now look like this:
-
- ```
- my-openai-project/
- ├── .env             # Environment variables
- ├── .gitignore       # Git ignore file
- ├── package.json     # Project configuration
- ├── test-openai.ts   # TypeScript test
- └── test-openai.js   # JavaScript test
- ```
- ## Option 2: Clone Our Repository
-
- ### Step 1: Clone the Repository
-
- ```bash
- # Clone the repository
- git clone git@github.com:revenium/revenium-middleware-openai-node.git
- cd revenium-middleware-openai-node
- ```
-
- ### Step 2: Install Dependencies
-
- ```bash
- # Install all dependencies
- npm install
- ```
-
- ### Step 3: Set Up Environment Variables
-
- Create a `.env` file in the project root:
-
- ```bash
- # Create .env file
- cp .env.example .env # If available, or create manually
- ```
-
- Copy and paste the following into `.env`:
-
- ```env
- # Revenium OpenAI Middleware Configuration
- # Copy this file to .env and fill in your actual values
-
- # Required: Your Revenium API key (starts with hak_)
- REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
- REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
-
- # Required: Your OpenAI API key (starts with sk-)
- OPENAI_API_KEY=sk-your_openai_api_key_here
-
- # Optional: Your Azure OpenAI configuration (for Azure testing)
- AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
- AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
- AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
- AZURE_OPENAI_API_VERSION=2024-12-01-preview
-
- # Optional: Enable debug logging
- REVENIUM_DEBUG=false
- ```
-
- **⚠️ IMPORTANT - Environment Matching**:
-
- - If using QA environment URL `"https://api.qa.hcapp.io/meter"`, ensure your `REVENIUM_METERING_API_KEY` is from the **QA environment**
- - If using Production environment URL `"https://api.revenium.io/meter"`, ensure your `REVENIUM_METERING_API_KEY` is from the **Production environment**
- - **Mismatched environments will cause authentication failures**
-
- ### Step 4: Build the Project
-
- ```bash
- # Build the middleware
- npm run build
- ```
-
- ### Step 5: Run the Examples
-
- The repository includes working example files:
-
- ```bash
- # Run Chat Completions API examples (using npm scripts)
- npm run example:openai-basic
- npm run example:openai-streaming
- npm run example:azure-basic
- npm run example:azure-streaming
-
- # Run Responses API examples (available with OpenAI SDK 5.8+)
- npm run example:openai-responses-basic
- npm run example:openai-responses-streaming
- npm run example:azure-responses-basic
- npm run example:azure-responses-streaming
-
- # Or run examples directly with tsx
- npx tsx examples/openai-basic.ts
- npx tsx examples/openai-streaming.ts
- npx tsx examples/azure-basic.ts
- npx tsx examples/azure-streaming.ts
- npx tsx examples/openai-responses-basic.ts
- npx tsx examples/openai-responses-streaming.ts
- npx tsx examples/azure-responses-basic.ts
- npx tsx examples/azure-responses-streaming.ts
- ```
-
- These examples demonstrate:
-
- - **Chat Completions API** - Traditional OpenAI chat completions and embeddings
- - **Responses API** - New OpenAI Responses API with enhanced capabilities
- - **Azure OpenAI** - Full Azure OpenAI integration with automatic detection
- - **Streaming Support** - Real-time response streaming with metadata tracking
- - **Optional Metadata** - Rich business context and user tracking
- - **Error Handling** - Robust error handling and debugging
-
- ## Option 3: Existing Project Integration
-
- Already have a project? Just install and replace imports:
-
- ### Step 1: Install the Package
-
- ```bash
- npm install @revenium/openai
- ```
-
- ### Step 2: Update Your Imports
-
- **Before:**
-
- ```typescript
- import OpenAI from 'openai';
-
- const openai = new OpenAI();
- ```
-
- **After:**
-
- ```typescript
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
- import OpenAI from 'openai';
-
- // Initialize Revenium middleware
- initializeReveniumFromEnv();
-
- // Patch your OpenAI instance
- const openai = patchOpenAIInstance(new OpenAI());
- ```
-
- ### Step 3: Add Environment Variables
-
- Add to your `.env` file:
-
- ```env
- # Revenium OpenAI Middleware Configuration
-
- # Required: Your Revenium API key (starts with hak_)
- REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
- REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
-
- # Required: Your OpenAI API key (starts with sk-)
- OPENAI_API_KEY=sk-your_openai_api_key_here
-
- # Optional: Your Azure OpenAI configuration (for Azure testing)
- AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
- AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
- AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
- AZURE_OPENAI_API_VERSION=2024-12-01-preview
-
- # Optional: Enable debug logging
- REVENIUM_DEBUG=false
- ```
-
- ### Step 4: Optional - Add Metadata
-
- Enhance your existing calls with optional metadata:
-
- ```typescript
- // Your existing code works unchanged
- const response = await openai.chat.completions.create({
-   model: 'gpt-4o-mini',
-   messages: [{ role: 'user', content: 'Hello!' }],
-   // Add optional metadata for better analytics
-   usageMetadata: {
-     subscriber: { id: 'user-123' },
-     organizationId: 'my-company',
-     taskType: 'chat',
-   },
- });
- ```
-
- **✅ That's it!** Your existing OpenAI code now automatically tracks usage to Revenium.
-
- ## 📊 What Gets Tracked
-
- The middleware automatically captures comprehensive usage data:
-
- ### **🔢 Usage Metrics**
-
- - **Token Counts** - Input tokens, output tokens, total tokens
- - **Model Information** - Model name, provider (OpenAI/Azure), API version
- - **Request Timing** - Request duration, response time
- - **Cost Calculation** - Estimated costs based on current pricing
-
- ### **🏷️ Business Context (Optional)**
-
- - **User Tracking** - Subscriber ID, email, credentials
- - **Organization Data** - Organization ID, subscription ID, product ID
- - **Task Classification** - Task type, agent identifier, trace ID
- - **Quality Metrics** - Response quality scores, custom metadata
-
- ### **🔧 Technical Details**
-
- - **API Endpoints** - Chat completions, embeddings, responses API
- - **Request Types** - Streaming vs non-streaming
- - **Error Tracking** - Failed requests, error types, retry attempts
- - **Environment Info** - Development vs production usage
-
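To illustrate the cost-calculation bullet above: token counts map to dollars via per-million-token rates. The rates and function below are placeholder assumptions for illustration, not Revenium's actual pricing table or implementation:

```typescript
// Hypothetical per-million-token USD rates; real pricing is maintained by Revenium.
const RATES: Record<string, { input: number; output: number }> = {
  'gpt-4o-mini': { input: 0.25, output: 0.5 },
};

// Estimate cost from the token counts reported on a response's usage object.
function estimateCostUSD(model: string, inputTokens: number, outputTokens: number): number {
  const rate = RATES[model];
  if (!rate) return 0; // unknown model: no estimate
  return (inputTokens * rate.input + outputTokens * rate.output) / 1_000_000;
}

console.log(estimateCostUSD('gpt-4o-mini', 1_000_000, 0)); // 0.25
```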
- ## OpenAI Responses API Support
-
- This middleware includes **full support** for OpenAI's new Responses API, which is designed to replace the traditional Chat Completions API with enhanced capabilities for agent-like applications.
-
- ### What is the Responses API?
-
- The Responses API is OpenAI's new stateful API that:
-
- - Uses an `input` parameter instead of `messages` for simplified interaction
- - Provides a unified experience combining chat completions and assistants capabilities
- - Supports advanced features like background tasks, function calling, and code interpreter
- - Offers better streaming and real-time response generation
- - Works with GPT-5 and other advanced models
-
- ### API Comparison
-
- **Traditional Chat Completions:**
-
- ```javascript
- const response = await openai.chat.completions.create({
-   model: 'gpt-4o',
-   messages: [{ role: 'user', content: 'Hello' }],
- });
- ```
-
- **New Responses API:**
-
- ```javascript
- const response = await openai.responses.create({
-   model: 'gpt-5',
-   input: 'Hello', // Simplified input parameter
- });
- ```
-
- ### Key Differences
-
- | Feature                | Chat Completions             | Responses API                       |
- | ---------------------- | ---------------------------- | ----------------------------------- |
- | **Input Format**       | `messages: [...]`            | `input: "string"` or `input: [...]` |
- | **Models**             | GPT-4, GPT-4o, etc.          | GPT-5, GPT-4o, etc.                 |
- | **Response Structure** | `choices[0].message.content` | `output_text`                       |
- | **Stateful**           | No                           | Yes (with `store: true`)            |
- | **Advanced Features**  | Limited                      | Built-in tools, reasoning, etc.     |
- | **Temperature**        | Supported                    | Not supported with GPT-5            |
-
- ### Requirements & Installation
-
- **OpenAI SDK Version:**
-
- - **Minimum:** `5.8.0` (when the Responses API was officially released)
- - **Recommended:** `5.8.2` or later (tested and verified)
- - **Current:** `6.2.0` (latest available)
-
- **Installation:**
-
- ```bash
- # Install latest version with Responses API support
- npm install openai@^5.8.0
-
- # Or install specific tested version
- npm install openai@5.8.2
- ```
-
- ### Current Status
-
- **The Responses API is officially available in OpenAI SDK 5.8+**
-
- **Official Release:**
-
- - ✅ Released by OpenAI in SDK version 5.8.0
- - ✅ Fully documented in official OpenAI documentation
- - ✅ Production-ready with GPT-5 and other supported models
- - ✅ Complete middleware support with Revenium integration
-
- **Middleware Features:**
-
- - ✅ Full Responses API support (streaming & non-streaming)
- - ✅ Seamless metadata tracking identical to Chat Completions
- - ✅ Type-safe TypeScript integration
- - ✅ Complete token tracking including reasoning tokens
- - ✅ Azure OpenAI compatibility
-
- **References:**
-
- - [OpenAI Responses API Documentation](https://platform.openai.com/docs/guides/migrate-to-responses)
- - [Azure OpenAI Responses API Documentation](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/responses)
-
- ### Responses API Examples
-
- The middleware includes comprehensive examples for the new Responses API:
-
- **Basic Usage:**
-
- ```typescript
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
- import OpenAI from 'openai';
-
- // Initialize and patch OpenAI instance
- initializeReveniumFromEnv();
- const openai = patchOpenAIInstance(new OpenAI());
-
- // Simple string input
- const response = await openai.responses.create({
-   model: 'gpt-5',
-   input: 'What is the capital of France?',
-   max_output_tokens: 150,
-   usageMetadata: {
-     subscriber: { id: 'user-123', email: 'user@example.com' },
-     organizationId: 'org-456',
-     productId: 'quantum-explainer',
-     taskType: 'educational-content',
-   },
- });
-
- console.log(response.output_text); // "Paris."
- ```
-
- **Streaming Example:**
-
- ```typescript
- const stream = await openai.responses.create({
-   model: 'gpt-5',
-   input: 'Write a short story about AI',
-   stream: true,
-   max_output_tokens: 500,
-   usageMetadata: {
-     subscriber: { id: 'user-123', email: 'user@example.com' },
-     organizationId: 'org-456',
-   },
- });
-
- for await (const event of stream) {
-   // Responses API streams typed events; text arrives on output_text delta events
-   if (event.type === 'response.output_text.delta') {
-     process.stdout.write(event.delta);
-   }
- }
- ```
-
- ### Adding Custom Metadata
-
- Track users, organizations, and custom data with seamless TypeScript integration:
-
- ```typescript
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
- import OpenAI from 'openai';
-
- // Initialize and patch OpenAI instance
- initializeReveniumFromEnv();
- const openai = patchOpenAIInstance(new OpenAI());
-
- const response = await openai.chat.completions.create({
-   model: 'gpt-4',
-   messages: [{ role: 'user', content: 'Summarize this document' }],
-   // Add custom tracking metadata - all fields optional, no type casting needed!
-   usageMetadata: {
-     subscriber: {
-       id: 'user-12345',
-       email: 'john@acme-corp.com',
-     },
-     organizationId: 'acme-corp',
-     productId: 'document-ai',
-     taskType: 'document-summary',
-     agent: 'doc-summarizer-v2',
-     traceId: 'session-abc123',
-   },
- });
-
- // Same metadata works with Responses API
- const responsesResult = await openai.responses.create({
-   model: 'gpt-5',
-   input: 'Summarize this document',
-   // Same metadata structure - seamless compatibility!
-   usageMetadata: {
-     subscriber: {
-       id: 'user-12345',
-       email: 'john@acme-corp.com',
-     },
-     organizationId: 'acme-corp',
-     productId: 'document-ai',
-     taskType: 'document-summary',
-     agent: 'doc-summarizer-v2',
-     traceId: 'session-abc123',
-   },
- });
- ```
-
- ### Streaming Support
-
- The middleware automatically handles streaming requests with seamless metadata:
-
- ```typescript
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
- import OpenAI from 'openai';
-
- // Initialize and patch OpenAI instance
- initializeReveniumFromEnv();
- const openai = patchOpenAIInstance(new OpenAI());
-
- const stream = await openai.chat.completions.create({
-   model: 'gpt-4',
-   messages: [{ role: 'user', content: 'Tell me a story' }],
-   stream: true,
-   // Metadata works seamlessly with streaming - all fields optional!
-   usageMetadata: {
-     organizationId: 'story-app',
-     taskType: 'creative-writing',
-   },
- });
-
- for await (const chunk of stream) {
-   process.stdout.write(chunk.choices[0]?.delta?.content || '');
- }
- // Usage tracking happens automatically when stream completes
- ```
-
- ### Temporarily Disabling Tracking
-
- If you need to disable Revenium tracking temporarily, you can unpatch the OpenAI client:
-
- ```javascript
- import { unpatchOpenAI, patchOpenAI } from '@revenium/openai';
-
- // Disable tracking
- unpatchOpenAI();
-
- // Your OpenAI calls now bypass Revenium tracking
- await openai.chat.completions.create({...});
-
- // Re-enable tracking
- patchOpenAI();
- ```
-
- **Common use cases:**
-
- - **Debugging**: Isolate whether issues are caused by the middleware
- - **Testing**: Compare behavior with/without tracking
- - **Conditional tracking**: Enable/disable based on environment
- - **Troubleshooting**: Temporary bypass during incident response
-
- **Note**: This affects all OpenAI instances globally since we patch the prototype methods.
-
- ## Azure OpenAI Integration
-
- **Azure OpenAI support**: The middleware automatically detects Azure OpenAI clients and provides accurate usage tracking and cost calculation.
-
- ### Quick Start with Azure OpenAI
-
- ```bash
- # Set your Azure OpenAI environment variables
- export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
- export AZURE_OPENAI_API_KEY="your-azure-api-key"
- export AZURE_OPENAI_DEPLOYMENT="gpt-4o" # Your deployment name
- export AZURE_OPENAI_API_VERSION="2024-12-01-preview" # Optional, defaults to latest
-
- # Set your Revenium credentials
- export REVENIUM_METERING_API_KEY="hak_your_revenium_api_key"
- # export REVENIUM_METERING_BASE_URL="https://api.revenium.io/meter" # Optional: defaults to this URL
- ```
-
- ```typescript
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
- import { AzureOpenAI } from 'openai';
-
- // Initialize Revenium middleware
- initializeReveniumFromEnv();
-
- // Create and patch Azure OpenAI client
- const azure = patchOpenAIInstance(
-   new AzureOpenAI({
-     endpoint: process.env.AZURE_OPENAI_ENDPOINT,
-     apiKey: process.env.AZURE_OPENAI_API_KEY,
-     apiVersion: process.env.AZURE_OPENAI_API_VERSION,
-   })
- );
-
- // Your existing Azure OpenAI code works with seamless metadata
- const response = await azure.chat.completions.create({
-   model: 'gpt-4o', // Uses your deployment name
-   messages: [{ role: 'user', content: 'Hello from Azure!' }],
-   // Optional metadata with native TypeScript support
-   usageMetadata: {
-     organizationId: 'my-company',
-     taskType: 'azure-chat',
-   },
- });
-
- console.log(response.choices[0].message.content);
- ```
-
- ### Azure Features
-
- - **Automatic Detection**: Detects Azure OpenAI clients automatically
- - **Model Name Resolution**: Maps Azure deployment names to standard model names for accurate pricing
- - **Provider Metadata**: Correctly tags requests with `provider: "Azure"` and `modelSource: "OPENAI"`
- - **Deployment Support**: Works with any Azure deployment name (simple or complex)
- - **Endpoint Flexibility**: Supports all Azure OpenAI endpoint formats
- - **Zero Code Changes**: Existing Azure OpenAI code works without modification
-
- ### Azure Environment Variables
-
- | Variable                   | Required | Description                                    | Example                              |
- | -------------------------- | -------- | ---------------------------------------------- | ------------------------------------ |
- | `AZURE_OPENAI_ENDPOINT`    | Yes      | Your Azure OpenAI endpoint URL                 | `https://acme.openai.azure.com/`     |
- | `AZURE_OPENAI_API_KEY`     | Yes      | Your Azure OpenAI API key                      | `abc123...`                          |
- | `AZURE_OPENAI_DEPLOYMENT`  | No       | Default deployment name                        | `gpt-4o` or `text-embedding-3-large` |
- | `AZURE_OPENAI_API_VERSION` | No       | API version (defaults to `2024-12-01-preview`) | `2024-12-01-preview`                 |
-
- ### Azure Model Name Resolution
-
- The middleware automatically maps Azure deployment names to standard model names for accurate pricing:
-
- ```typescript
- // Azure deployment name     →  standard model name used for pricing
- "gpt-4o-2024-11-20"          →  "gpt-4o"
- "gpt4o-prod"                 →  "gpt-4o"
- "o4-mini"                    →  "gpt-4o-mini"
- "gpt-35-turbo-dev"           →  "gpt-3.5-turbo"
- "text-embedding-3-large"     →  "text-embedding-3-large" // Direct match
- "embedding-3-large"          →  "text-embedding-3-large"
- ```
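A resolution step like the mapping above can be sketched as a small normalization function. The rules below are simplified assumptions for illustration, not the middleware's actual resolution logic:

```typescript
// Simplified sketch of deployment-name → standard-model-name resolution.
function resolveModelName(deployment: string): string {
  const d = deployment.toLowerCase();
  if (d.includes('embedding-3-large')) return 'text-embedding-3-large';
  if (d.includes('gpt-35-turbo') || d.includes('gpt-3.5-turbo')) return 'gpt-3.5-turbo';
  if (d.includes('4o-mini') || d.includes('o4-mini')) return 'gpt-4o-mini';
  if (d.includes('gpt-4o') || d.includes('gpt4o')) return 'gpt-4o';
  return d; // fall back to the deployment name itself
}

console.log(resolveModelName('gpt-4o-2024-11-20')); // gpt-4o
console.log(resolveModelName('gpt-35-turbo-dev')); // gpt-3.5-turbo
```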
-
- ## 🔧 Advanced Usage
-
- ### Streaming with Metadata
-
- The middleware seamlessly handles streaming requests with full metadata support:
-
- ```typescript
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
- import OpenAI from 'openai';
-
- initializeReveniumFromEnv();
- const openai = patchOpenAIInstance(new OpenAI());
-
- // Chat Completions API streaming
- const stream = await openai.chat.completions.create({
-   model: 'gpt-4o-mini',
-   messages: [{ role: 'user', content: 'Tell me a story' }],
-   stream: true,
-   usageMetadata: {
-     subscriber: { id: 'user-123', email: 'user@example.com' },
-     organizationId: 'story-app',
-     taskType: 'creative-writing',
-     traceId: 'session-' + Date.now(),
-   },
- });
-
- for await (const chunk of stream) {
-   process.stdout.write(chunk.choices[0]?.delta?.content || '');
- }
- // Usage tracking happens automatically when stream completes
- ```
-
- ### Responses API with Metadata
-
- Full support for OpenAI's new Responses API:
-
- ```typescript
- // Simple string input with metadata
- const response = await openai.responses.create({
-   model: 'gpt-5',
-   input: 'What is the capital of France?',
-   max_output_tokens: 150,
-   usageMetadata: {
-     subscriber: { id: 'user-123', email: 'user@example.com' },
-     organizationId: 'org-456',
-     productId: 'geography-tutor',
-     taskType: 'educational-query',
-   },
- });
-
- console.log(response.output_text); // "Paris."
- ```
-
- ### Azure OpenAI Integration
-
- Automatic Azure OpenAI detection with seamless metadata:
-
- ```typescript
- import { AzureOpenAI } from 'openai';
-
- // Create and patch Azure OpenAI client
- const azure = patchOpenAIInstance(
-   new AzureOpenAI({
-     endpoint: process.env.AZURE_OPENAI_ENDPOINT,
-     apiKey: process.env.AZURE_OPENAI_API_KEY,
-     apiVersion: process.env.AZURE_OPENAI_API_VERSION,
-   })
- );
-
- // Your existing Azure OpenAI code works with seamless metadata
- const response = await azure.chat.completions.create({
-   model: 'gpt-4o', // Uses your deployment name
-   messages: [{ role: 'user', content: 'Hello from Azure!' }],
-   usageMetadata: {
-     organizationId: 'my-company',
-     taskType: 'azure-chat',
-     agent: 'azure-assistant',
-   },
- });
- ```
-
- ### Embeddings with Metadata
-
- Track embeddings usage with optional metadata:
-
- ```typescript
- const embedding = await openai.embeddings.create({
-   model: 'text-embedding-3-small',
-   input: 'Advanced text embedding with comprehensive tracking metadata',
-   usageMetadata: {
-     subscriber: { id: 'embedding-user-789', email: 'embeddings@company.com' },
-     organizationId: 'my-company',
-     taskType: 'document-embedding',
-     productId: 'search-engine',
-     traceId: `embed-${Date.now()}`,
-     agent: 'openai-embeddings-node',
-   },
- });
-
- console.log('Model:', embedding.model);
- console.log('Usage:', embedding.usage);
- console.log('Embedding dimensions:', embedding.data[0]?.embedding.length);
- ```
-
- ### Manual Configuration
-
- For advanced use cases, configure the middleware manually:
-
- ```typescript
- import { configure } from '@revenium/openai';
-
- configure({
-   reveniumApiKey: 'hak_your_api_key',
-   reveniumBaseUrl: 'https://api.revenium.io/meter',
-   apiTimeout: 5000,
-   failSilent: true,
-   maxRetries: 3,
- });
- ```
-
- ## 🛠️ Configuration Options
-
- ### Environment Variables
-
- | Variable                     | Required | Default                         | Description                                    |
- | ---------------------------- | -------- | ------------------------------- | ---------------------------------------------- |
- | `REVENIUM_METERING_API_KEY`  | ✅       | -                               | Your Revenium API key (starts with `hak_`)     |
- | `OPENAI_API_KEY`             | ✅       | -                               | Your OpenAI API key (starts with `sk-`)        |
- | `REVENIUM_METERING_BASE_URL` | ❌       | `https://api.revenium.io/meter` | Revenium metering API base URL                 |
- | `REVENIUM_DEBUG`             | ❌       | `false`                         | Enable debug logging (`true`/`false`)          |
- | `AZURE_OPENAI_ENDPOINT`      | ❌       | -                               | Azure OpenAI endpoint URL (for Azure testing)  |
- | `AZURE_OPENAI_API_KEY`       | ❌       | -                               | Azure OpenAI API key (for Azure testing)       |
- | `AZURE_OPENAI_DEPLOYMENT`    | ❌       | -                               | Azure OpenAI deployment name (for Azure)       |
- | `AZURE_OPENAI_API_VERSION`   | ❌       | `2024-12-01-preview`            | Azure OpenAI API version (for Azure)           |
-
- **⚠️ Important Note about `REVENIUM_METERING_BASE_URL`:**
-
- - This variable is **optional** and defaults to the production URL (`https://api.revenium.io/meter`)
- - If you don't set it explicitly, the middleware will use the default production endpoint
- - However, you may see console warnings or errors if the middleware cannot determine the correct environment
- - **Best practice:** Always set this variable explicitly to match your environment:
-
- ```bash
- # For Production
- REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
-
- # For QA/Testing
- REVENIUM_METERING_BASE_URL=https://api.qa.hcapp.io/meter
- ```
-
- - **Remember:** Your `REVENIUM_METERING_API_KEY` must match the environment of your base URL
-
- ### Usage Metadata Options
-
- All metadata fields are optional and help provide better analytics:
-
- ```typescript
- interface UsageMetadata {
-   traceId?: string; // Session or conversation ID
-   taskType?: string; // Type of AI task (e.g., "chat", "summary")
-   subscriber?: {
-     // User information (nested structure)
-     id?: string; // User ID from your system
-     email?: string; // User's email address
-     credential?: {
-       // User credentials
-       name?: string; // Credential name
-       value?: string; // Credential value
-     };
-   };
-   organizationId?: string; // Organization/company ID
-   subscriptionId?: string; // Billing plan ID
-   productId?: string; // Your product/feature ID
-   agent?: string; // AI agent identifier
-   responseQualityScore?: number; // Quality score (0-1)
- }
- ```
955
-
956
- ## How It Works
957
-
958
- 1. **Automatic Patching**: When imported, the middleware patches OpenAI's methods:
959
- - `chat.completions.create` (Chat Completions API)
960
- - `responses.create` (Responses API - when available)
961
- - `embeddings.create` (Embeddings API)
962
- 2. **Request Interception**: All OpenAI requests are intercepted to extract metadata
963
- 3. **Usage Extraction**: Token counts, model info, and timing data are captured
964
- 4. **Async Tracking**: Usage data is sent to Revenium in the background (fire-and-forget)
965
- 5. **Transparent Response**: Original OpenAI responses are returned unchanged
966
-
967
- The middleware never blocks your application - if Revenium tracking fails, your OpenAI requests continue normally.
968
-
969
- ## 🔍 Troubleshooting
970
-
971
- ### Common Issues
972
-
973
- #### 1. **No tracking data in dashboard**
974
-
975
- **Symptoms**: OpenAI calls work but no data appears in Revenium dashboard
976
-
977
- **Solution**: Enable debug logging to check middleware status:
978
-
979
- ```bash
980
- export REVENIUM_DEBUG=true
981
- ```
982
-
983
- **Expected output for successful tracking**:
984
-
985
- ```bash
986
- [Revenium Debug] OpenAI chat.completions.create intercepted
987
- [Revenium Debug] Revenium tracking successful
988
-
989
- # For Responses API:
990
- [Revenium Debug] OpenAI responses.create intercepted
991
- [Revenium Debug] Revenium tracking successful
992
- ```
993
-
994
- #### 2. **Environment mismatch errors**
995
-
996
- **Symptoms**: Authentication errors or 401/403 responses
997
-
998
- **Solution**: Ensure your API key matches your base URL environment:
999
-
1000
- ```bash
1001
- # ✅ Correct - Production key with production URL
1002
- REVENIUM_METERING_API_KEY=hak_prod_key_here
1003
- REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
1004
-
1005
- # ✅ Correct - QA key with QA URL
1006
- REVENIUM_METERING_API_KEY=hak_qa_key_here
1007
- REVENIUM_METERING_BASE_URL=https://api.qa.hcapp.io/meter
1008
-
1009
- # Wrong - Production key with QA URL
1010
- REVENIUM_METERING_API_KEY=hak_prod_key_here
1011
- REVENIUM_METERING_BASE_URL=https://api.qa.hcapp.io/meter
1012
- ```
1013
-
1014
- #### 3. **TypeScript type errors**
1015
-
1016
- **Symptoms**: TypeScript errors about `usageMetadata` property
1017
-
1018
- **Solution**: Ensure you're importing the middleware before OpenAI:
1019
-
1020
- ```typescript
1021
- // ✅ Correct order
1022
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
1023
- import OpenAI from 'openai';
1024
-
1025
- // ❌ Wrong order
1026
- import OpenAI from 'openai';
1027
- import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
1028
- ```
1029
-
1030
- #### 4. **Azure OpenAI not working**
1031
-
1032
- **Symptoms**: Azure OpenAI calls not being tracked
1033
-
1034
- **Solution**: Ensure you're using `patchOpenAIInstance()` with your Azure client:
1035
-
1036
- ```typescript
1037
- import { AzureOpenAI } from 'openai';
1038
- import { patchOpenAIInstance } from '@revenium/openai';
1039
-
1040
- // ✅ Correct
1041
- const azure = patchOpenAIInstance(new AzureOpenAI({...}));
1042
-
1043
- // Wrong - not patched
1044
- const azure = new AzureOpenAI({...});
1045
- ```
1046
-
1047
- #### 5. **Responses API not available**
1048
-
1049
- **Symptoms**: `openai.responses.create` is undefined
1050
-
1051
- **Solution**: Upgrade to OpenAI SDK 5.8+ for Responses API support:
1052
-
1053
- ```bash
1054
- npm install openai@^5.8.0
1055
- ```
1056
-
1057
- ### Debug Mode
1058
-
1059
- Enable comprehensive debug logging:
1060
-
1061
- ```bash
1062
- export REVENIUM_DEBUG=true
1063
- ```
1064
-
1065
- This will show:
1066
-
1067
- - ✅ Middleware initialization status
1068
- - Request interception confirmations
1069
- - ✅ Metadata extraction details
1070
- - ✅ Tracking success/failure messages
1071
- - Error details and stack traces
1072
-
1073
- ### Getting Help
1074
-
1075
- If you're still experiencing issues:
1076
-
1077
- 1. **Check the logs** with `REVENIUM_DEBUG=true`
1078
- 2. **Verify environment variables** are set correctly
1079
- 3. **Test with minimal example** from our documentation
1080
- 4. **Contact support** with debug logs and error details
1081
-
1082
- For detailed troubleshooting guides, visit [docs.revenium.io](https://docs.revenium.io)
1083
-
1084
- ## 🤖 Supported Models
1085
-
1086
- ### OpenAI Models
1087
-
1088
- | Model Family | Models | APIs Supported |
1089
- | ----------------- | ---------------------------------------------------------------------------- | --------------------------- |
1090
- | **GPT-4o** | `gpt-4o`, `gpt-4o-2024-11-20`, `gpt-4o-2024-08-06`, `gpt-4o-2024-05-13` | Chat Completions, Responses |
1091
- | **GPT-4o Mini** | `gpt-4o-mini`, `gpt-4o-mini-2024-07-18` | Chat Completions, Responses |
1092
- | **GPT-4 Turbo** | `gpt-4-turbo`, `gpt-4-turbo-2024-04-09`, `gpt-4-turbo-preview` | Chat Completions |
1093
- | **GPT-4** | `gpt-4`, `gpt-4-0613`, `gpt-4-0314` | Chat Completions |
1094
- | **GPT-3.5 Turbo** | `gpt-3.5-turbo`, `gpt-3.5-turbo-0125`, `gpt-3.5-turbo-1106` | Chat Completions |
1095
- | **GPT-5** | `gpt-5` (when available) | Responses API |
1096
- | **Embeddings** | `text-embedding-3-large`, `text-embedding-3-small`, `text-embedding-ada-002` | Embeddings |
1097
-
1098
- ### Azure OpenAI Models
1099
-
1100
- All OpenAI models are supported through Azure OpenAI with automatic deployment name resolution:
1101
-
1102
- | Azure Deployment | Resolved Model | API Support |
1103
- | ------------------------ | ------------------------ | --------------------------- |
1104
- | `gpt-4o-2024-11-20` | `gpt-4o` | Chat Completions, Responses |
1105
- | `gpt4o-prod` | `gpt-4o` | Chat Completions, Responses |
1106
- | `o4-mini` | `gpt-4o-mini` | Chat Completions, Responses |
1107
- | `gpt-35-turbo-dev` | `gpt-3.5-turbo` | Chat Completions |
1108
- | `text-embedding-3-large` | `text-embedding-3-large` | Embeddings |
1109
- | `embedding-3-large` | `text-embedding-3-large` | Embeddings |
1110
-
1111
- **Note**: The middleware automatically maps Azure deployment names to standard model names for accurate pricing and analytics.
1112
-
1113
- ### API Support Matrix
1114
-
1115
- | Feature | Chat Completions API | Responses API | Embeddings API |
1116
- | --------------------- | -------------------- | ------------- | -------------- |
1117
- | **Basic Requests** | ✅ | ✅ | ✅ |
1118
- | **Streaming** | ✅ | ✅ | ❌ |
1119
- | **Metadata Tracking** | ✅ | ✅ | ✅ |
1120
- | **Azure OpenAI** | ✅ | ✅ | ✅ |
1121
- | **Cost Calculation** | ✅ | ✅ | ✅ |
1122
- | **Token Counting** | ✅ | ✅ | ✅ |
1123
-
1124
- ## Requirements
1125
-
1126
- - Node.js 16+
1127
- - OpenAI package v4.0+
1128
- - TypeScript 5.0+ (for TypeScript projects)
1129
-
1130
- ## Documentation
1131
-
1132
- For detailed documentation, visit [docs.revenium.io](https://docs.revenium.io)
1133
-
1134
- ## Contributing
1135
-
1136
- See [CONTRIBUTING.md](https://github.com/revenium/revenium-middleware-openai-node/blob/main/CONTRIBUTING.md)
1137
-
1138
- ## Code of Conduct
1139
-
1140
- See [CODE_OF_CONDUCT.md](https://github.com/revenium/revenium-middleware-openai-node/blob/main/CODE_OF_CONDUCT.md)
1141
-
1142
- ## Security
1143
-
1144
- See [SECURITY.md](https://github.com/revenium/revenium-middleware-openai-node/blob/main/SECURITY.md)
1145
-
1146
- ## License
1147
-
1148
- This project is licensed under the MIT License - see the [LICENSE](https://github.com/revenium/revenium-middleware-openai-node/blob/main/LICENSE) file for details.
1149
-
1150
- ## Acknowledgments
1151
-
1152
- - Built by the Revenium team
1
+ # Revenium OpenAI Middleware for Node.js
2
+
3
+ [![npm version](https://img.shields.io/npm/v/@revenium/openai.svg)](https://www.npmjs.com/package/@revenium/openai)
4
+ [![Node.js](https://img.shields.io/badge/Node.js-16%2B-green)](https://nodejs.org/)
5
+ [![Documentation](https://img.shields.io/badge/docs-revenium.io-blue)](https://docs.revenium.io)
6
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
7
+
8
+ **Transparent TypeScript middleware for automatic Revenium usage tracking with OpenAI**
9
+
10
+ A professional-grade Node.js middleware that seamlessly integrates with OpenAI and Azure OpenAI to provide automatic usage tracking, billing analytics, and comprehensive metadata collection. Features native TypeScript support with zero type casting required and supports both traditional Chat Completions API and the new Responses API.
11
+
12
+ ## Features
13
+
14
+ - **Seamless Integration** - Native TypeScript support, no type casting required
15
+ - **Optional Metadata** - Track users, organizations, and custom metadata (all fields optional)
16
+ - **Dual API Support** - Chat Completions API + new Responses API (OpenAI SDK 5.8+)
17
+ - **Azure OpenAI Support** - Full Azure OpenAI integration with automatic detection
18
+ - **Type Safety** - Complete TypeScript support with IntelliSense
19
+ - **Streaming Support** - Handles regular and streaming requests seamlessly
20
+ - **Fire-and-Forget** - Never blocks your application flow
21
+ - **Zero Configuration** - Auto-initialization from environment variables
22
+
23
+ ## Package Migration
24
+
25
+ This package has been renamed from `revenium-middleware-openai-node` to `@revenium/openai` for better organization and simpler naming.
26
+
27
+ ### Migration Steps
28
+
29
+ If you're upgrading from the old package:
30
+
31
+ ```bash
32
+ # Uninstall the old package
33
+ npm uninstall revenium-middleware-openai-node
34
+
35
+ # Install the new package
36
+ npm install @revenium/openai
37
+ ```
38
+
39
+ **Update your imports:**
40
+
41
+ ```typescript
42
+ // Old import
43
+ import { patchOpenAIInstance } from "revenium-middleware-openai-node";
44
+
45
+ // New import
46
+ import { patchOpenAIInstance } from "@revenium/openai";
47
+ ```
48
+
49
+ All functionality remains exactly the same - only the package name has changed.
50
+
51
+ ## Getting Started
52
+
53
+ Choose your preferred approach to get started quickly:
54
+
55
+ ### Option 1: Create Project from Scratch
56
+
57
+ Perfect for new projects. We'll guide you step-by-step from `mkdir` to running tests.
58
+ [Go to Step-by-Step Guide](#option-1-create-project-from-scratch)
59
+
60
+ ### Option 2: Clone Our Repository
61
+
62
+ Clone and run the repository with working examples.
63
+ [Go to Repository Guide](#option-2-clone-our-repository)
64
+
65
+ ### Option 3: Add to Existing Project
66
+
67
+ Already have a project? Just install and replace imports.
68
+ [Go to Integration Guide](#option-3-existing-project-integration)
69
+
70
+ ---
71
+
72
+ ## Option 1: Create Project from Scratch
73
+
74
+ ### Step 1: Create Project Directory
75
+
76
+ ```bash
77
+ # Create and navigate to your project
78
+ mkdir my-openai-project
79
+ cd my-openai-project
80
+
81
+ # Initialize npm project
82
+ npm init -y
83
+ ```
84
+
85
+ ### Step 2: Install Dependencies
86
+
87
+ ```bash
88
+ # Install the middleware and OpenAI SDK
89
+ npm install @revenium/openai openai@^5.8.0 dotenv
90
+
91
+ # For TypeScript projects (optional)
92
+ npm install -D typescript tsx @types/node
93
+ ```
94
+
95
+ ### Step 3: Setup Environment Variables
96
+
97
+ Create a `.env` file in your project root:
98
+
99
+ ```bash
100
+ # Create .env file
101
+ echo. > .env # On Windows (CMD)
102
+ touch .env # On Mac/Linux
103
+ # OR PowerShell
104
+ New-Item -Path .env -ItemType File
105
+ ```
106
+
107
+ Copy and paste the following into `.env`:
108
+
109
+ ```env
110
+ # Revenium OpenAI Middleware Configuration
111
+ # Copy this file to .env and fill in your actual values
112
+
113
+ # Required: Your Revenium API key (starts with hak_)
114
+ REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
115
+ REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
116
+
117
+ # Required: Your OpenAI API key (starts with sk-)
118
+ OPENAI_API_KEY=sk_your_openai_api_key_here
119
+
120
+ # Optional: Your Azure OpenAI configuration (for Azure testing)
121
+ AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
122
+ AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
123
+ AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
124
+ AZURE_OPENAI_API_VERSION=2024-12-01-preview
125
+
126
+ # Optional: Enable debug logging
127
+ REVENIUM_DEBUG=false
128
+ ```
129
+
130
+ **NOTE**: Replace each `your_..._here` with your actual values.
131
+
132
+ **IMPORTANT**: Ensure your `REVENIUM_METERING_API_KEY` matches your `REVENIUM_METERING_BASE_URL` environment. Mismatched credentials will cause authentication failures.
133
+
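Because a mismatched or malformed key only surfaces later as an authentication error, it can help to sanity-check the environment up front. The helper below is a hypothetical sketch (not part of the middleware API) that checks the two required variables against their documented prefixes before you initialize anything:

```typescript
// Hypothetical pre-flight check: verify the required keys look right
// before calling the middleware's initialization. Returns the names of
// any variables that are missing or malformed.
function missingEnvVars(env: Record<string, string | undefined>): string[] {
  const problems: string[] = [];
  if (!env.REVENIUM_METERING_API_KEY?.startsWith('hak_')) {
    problems.push('REVENIUM_METERING_API_KEY (should start with hak_)');
  }
  if (!env.OPENAI_API_KEY?.startsWith('sk-')) {
    problems.push('OPENAI_API_KEY (should start with sk-)');
  }
  return problems;
}
```

Calling it with `process.env` at startup and failing fast on a non-empty result gives a clearer error than a 401 from the metering API later.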
134
+ ### Step 4: Protect Your API Keys
135
+
136
+ **CRITICAL SECURITY**: Never commit your `.env` file to version control!
137
+
138
+ Your `.env` file contains sensitive API keys that must be kept secret:
139
+
140
+ ```bash
141
+ # Verify .env is in your .gitignore
142
+ git check-ignore .env
143
+ ```
144
+
145
+ If the command returns nothing, add `.env` to your `.gitignore`:
146
+
147
+ ```gitignore
148
+ # Environment variables
149
+ .env
150
+ .env.*
151
+ !.env.example
152
+ ```
153
+
154
+ **Best Practice**: Use GitHub's standard Node.gitignore as a starting point:
155
+ - Reference: https://github.com/github/gitignore/blob/main/Node.gitignore
156
+
157
+ **Warning:** Redirecting with a single `>` would overwrite your current `.gitignore` file.
158
+ To avoid losing custom rules, append with `>>` instead:
159
+ `curl https://raw.githubusercontent.com/github/gitignore/main/Node.gitignore >> .gitignore`
160
+
161
+ **Note:** Appending may result in duplicate entries if your `.gitignore` already contains some of the patterns from Node.gitignore.
162
+ Please review your `.gitignore` after appending and remove any duplicate lines as needed.
163
+
164
+ This protects your OpenAI API key, Revenium API key, and any other secrets from being accidentally committed to your repository.
165
+
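To avoid the duplicate-entry problem entirely, you can add just the three `.env` patterns idempotently — each line is appended only if it is not already present, so running the snippet twice is safe (this is a convenience sketch, not a required step):

```shell
# Append .env ignore patterns to .gitignore, skipping any already present.
touch .gitignore
for pattern in '.env' '.env.*' '!.env.example'; do
  grep -qxF "$pattern" .gitignore || echo "$pattern" >> .gitignore
done
```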
166
+ ### Step 5: Create Your First Test
167
+
168
+ #### TypeScript Test
169
+
170
+ Create `test-openai.ts`:
171
+
172
+ ```typescript
173
+ import 'dotenv/config';
174
+ import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
175
+ import OpenAI from 'openai';
176
+
177
+ async function testOpenAI() {
178
+ try {
179
+ // Initialize Revenium middleware
180
+ const initResult = initializeReveniumFromEnv();
181
+ if (!initResult.success) {
182
+ console.error('Failed to initialize Revenium:', initResult.message);
183
+ process.exit(1);
184
+ }
185
+
186
+ // Create and patch OpenAI instance
187
+ const openai = patchOpenAIInstance(new OpenAI());
188
+
189
+ const response = await openai.chat.completions.create({
190
+ model: 'gpt-4o-mini',
191
+ max_tokens: 100,
192
+ messages: [{ role: 'user', content: 'What is artificial intelligence?' }],
193
+ usageMetadata: {
194
+ subscriber: {
195
+ id: 'user-456',
196
+ email: 'user@demo-org.com',
197
+ credential: {
198
+ name: 'demo-api-key',
199
+ value: 'demo-key-123',
200
+ },
201
+ },
202
+ organizationId: 'demo-org-123',
203
+ productId: 'ai-assistant-v2',
204
+ taskType: 'educational-query',
205
+ agent: 'openai-basic-demo',
206
+ traceId: 'session-' + Date.now(),
207
+ },
208
+ });
209
+
210
+ const text = response.choices[0]?.message?.content || 'No response';
211
+ console.log('Response:', text);
212
+ } catch (error) {
213
+ console.error('Error:', error);
214
+ }
215
+ }
216
+
217
+ testOpenAI();
218
+ ```
219
+
220
+ #### JavaScript Test
221
+
222
+ Create `test-openai.js`:
223
+
224
+ ```javascript
225
+ require('dotenv').config();
226
+ const {
227
+ initializeReveniumFromEnv,
228
+ patchOpenAIInstance,
229
+ } = require('@revenium/openai');
230
+ const OpenAI = require('openai');
231
+
232
+ async function testOpenAI() {
233
+ try {
234
+ // Initialize Revenium middleware
235
+ const initResult = initializeReveniumFromEnv();
236
+ if (!initResult.success) {
237
+ console.error('Failed to initialize Revenium:', initResult.message);
238
+ process.exit(1);
239
+ }
240
+
241
+ // Create and patch OpenAI instance
242
+ const openai = patchOpenAIInstance(new OpenAI());
243
+
244
+ const response = await openai.chat.completions.create({
245
+ model: 'gpt-4o-mini',
246
+ max_tokens: 100,
247
+ messages: [{ role: 'user', content: 'What is artificial intelligence?' }],
248
+ usageMetadata: {
249
+ subscriber: {
250
+ id: 'user-456',
251
+ email: 'user@demo-org.com',
252
+ },
253
+ organizationId: 'demo-org-123',
254
+ taskType: 'educational-query',
255
+ },
256
+ });
257
+
258
+ const text = response.choices[0]?.message?.content || 'No response';
259
+ console.log('Response:', text);
260
+ } catch (error) {
261
+ console.error('Error:', error);
262
+ }
263
+ }
264
+
265
+ testOpenAI();
266
+ ```
267
+
268
+ ### Step 6: Add Package Scripts
269
+
270
+ Update your `package.json`:
271
+
272
+ ```json
273
+ {
274
+ "name": "my-openai-project",
275
+ "version": "1.0.0",
276
+ "type": "commonjs",
277
+ "scripts": {
278
+ "test-ts": "npx tsx test-openai.ts",
279
+ "test-js": "node test-openai.js"
280
+ },
281
+ "dependencies": {
282
+ "@revenium/openai": "^1.0.11",
283
+ "openai": "^5.8.0",
284
+ "dotenv": "^16.5.0"
285
+ }
286
+ }
287
+ ```
288
+
289
+ ### Step 7: Run Your Tests
290
+
291
+ ```bash
292
+ # Test TypeScript version
293
+ npm run test-ts
294
+
295
+ # Test JavaScript version
296
+ npm run test-js
297
+ ```
298
+
299
+ ### Step 8: Project Structure
300
+
301
+ Your project should now look like this:
302
+
303
+ ```
304
+ my-openai-project/
305
+ ├── .env # Environment variables
306
+ ├── .gitignore # Git ignore file
307
+ ├── package.json # Project configuration
308
+ ├── test-openai.ts # TypeScript test
309
+ └── test-openai.js # JavaScript test
310
+ ```
311
+
312
+ ## Option 2: Clone Our Repository
313
+
314
+ ### Step 1: Clone the Repository
315
+
316
+ ```bash
317
+ # Clone the repository
318
+ git clone git@github.com:revenium/revenium-middleware-openai-node.git
319
+ cd revenium-middleware-openai-node
320
+ ```
321
+
322
+ ### Step 2: Install Dependencies
323
+
324
+ ```bash
325
+ # Install all dependencies
326
+ npm install
327
+ npm install @revenium/openai
328
+ ```
329
+
330
+ ### Step 3: Setup Environment Variables
331
+
332
+ Create a `.env` file in the project root:
333
+
334
+ ```bash
335
+ # Create .env file
336
+ cp .env.example .env # If available, or create manually
337
+ ```
338
+
339
+ Copy and paste the following into `.env`:
340
+
341
+ ```bash
342
+ # Revenium OpenAI Middleware Configuration
343
+ # Copy this file to .env and fill in your actual values
344
+
345
+ # Required: Your Revenium API key (starts with hak_)
346
+ REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
347
+ REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
348
+
349
+ # Required: Your OpenAI API key (starts with sk-)
350
+ OPENAI_API_KEY=sk-your-openai-api-key-here
351
+
352
+ # Optional: Your Azure OpenAI configuration (for Azure testing)
353
+ AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
354
+ AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
355
+ AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
356
+ AZURE_OPENAI_API_VERSION=2024-12-01-preview
357
+
358
+ # Optional: Enable debug logging
359
+ REVENIUM_DEBUG=false
360
+ ```
361
+
362
+ **IMPORTANT**: Ensure your `REVENIUM_METERING_API_KEY` matches your `REVENIUM_METERING_BASE_URL` environment. Mismatched credentials will cause authentication failures.
363
+
364
+ ### Step 4: Build the Project
365
+
366
+ ```bash
367
+ # Build the middleware
368
+ npm run build
369
+ ```
370
+
371
+ ### Step 5: Run the Examples
372
+
373
+ The repository includes working example files:
374
+
375
+ ```bash
376
+ # Run Chat Completions API examples (using npm scripts)
377
+ npm run example:openai-basic
378
+ npm run example:openai-streaming
379
+ npm run example:azure-basic
380
+ npm run example:azure-streaming
381
+
382
+ # Run Responses API examples (available with OpenAI SDK 5.8+)
383
+ npm run example:openai-responses-basic
384
+ npm run example:openai-responses-streaming
385
+ npm run example:azure-responses-basic
386
+ npm run example:azure-responses-streaming
387
+
388
+ # Or run examples directly with tsx
389
+ npx tsx examples/openai-basic.ts
390
+ npx tsx examples/openai-streaming.ts
391
+ npx tsx examples/azure-basic.ts
392
+ npx tsx examples/azure-streaming.ts
393
+ npx tsx examples/openai-responses-basic.ts
394
+ npx tsx examples/openai-responses-streaming.ts
395
+ npx tsx examples/azure-responses-basic.ts
396
+ npx tsx examples/azure-responses-streaming.ts
397
+ ```
398
+
399
+ These examples demonstrate:
400
+
401
+ - **Chat Completions API** - Traditional OpenAI chat completions and embeddings
402
+ - **Responses API** - New OpenAI Responses API with enhanced capabilities
403
+ - **Azure OpenAI** - Full Azure OpenAI integration with automatic detection
404
+ - **Streaming Support** - Real-time response streaming with metadata tracking
405
+ - **Optional Metadata** - Rich business context and user tracking
406
+ - **Error Handling** - Robust error handling and debugging
407
+
408
+ ## Option 3: Existing Project Integration
409
+
410
+ Already have a project? Just install and replace imports:
411
+
412
+ ### Step 1: Install the Package
413
+
414
+ ```bash
415
+ npm install @revenium/openai
416
+ ```
417
+
418
+ ### Step 2: Update Your Imports
419
+
420
+ **Before:**
421
+
422
+ ```typescript
423
+ import OpenAI from 'openai';
424
+
425
+ const openai = new OpenAI();
426
+ ```
427
+
428
+ **After:**
429
+
430
+ ```typescript
431
+ import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
432
+ import OpenAI from 'openai';
433
+
434
+ // Initialize Revenium middleware
435
+ initializeReveniumFromEnv();
436
+
437
+ // Patch your OpenAI instance
438
+ const openai = patchOpenAIInstance(new OpenAI());
439
+ ```
440
+
441
+ ### Step 3: Add Environment Variables
442
+
443
+ Add to your `.env` file:
444
+
445
+ ```env
446
+ # Revenium OpenAI Middleware Configuration
447
+
448
+ # Required: Your Revenium API key (starts with hak_)
449
+ REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
450
+ REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
451
+
452
+ # Required: Your OpenAI API key (starts with sk-)
453
+ OPENAI_API_KEY=sk-your-openai-api-key-here
454
+
455
+ # Optional: Your Azure OpenAI configuration (for Azure testing)
456
+ AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
457
+ AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
458
+ AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
459
+ AZURE_OPENAI_API_VERSION=2024-12-01-preview
460
+
461
+ # Optional: Enable debug logging
462
+ REVENIUM_DEBUG=false
463
+ ```
464
+
465
+ ### Step 4: Optional - Add Metadata
466
+
467
+ Enhance your existing calls with optional metadata:
468
+
469
+ ```typescript
470
+ // Your existing code works unchanged
471
+ const response = await openai.chat.completions.create({
472
+ model: 'gpt-4o-mini',
473
+ messages: [{ role: 'user', content: 'Hello!' }],
474
+ // Add optional metadata for better analytics
475
+ usageMetadata: {
476
+ subscriber: { id: 'user-123' },
477
+ organizationId: 'my-company',
478
+ taskType: 'chat',
479
+ },
480
+ });
481
+ ```
482
+
483
+ **That's it!** Your existing OpenAI code now automatically tracks usage to Revenium.
484
+
485
+ ## What Gets Tracked
486
+
487
+ The middleware automatically captures comprehensive usage data:
488
+
489
+ ### **Usage Metrics**
490
+
491
+ - **Token Counts** - Input tokens, output tokens, total tokens
492
+ - **Model Information** - Model name, provider (OpenAI/Azure), API version
493
+ - **Request Timing** - Request duration, response time
494
+ - **Cost Calculation** - Estimated costs based on current pricing
495
+
496
+ ### **Business Context (Optional)**
497
+
498
+ - **User Tracking** - Subscriber ID, email, credentials
499
+ - **Organization Data** - Organization ID, subscription ID, product ID
500
+ - **Task Classification** - Task type, agent identifier, trace ID
501
+ - **Quality Metrics** - Response quality scores, custom metadata
502
+
503
+ ### **Technical Details**
504
+
505
+ - **API Endpoints** - Chat completions, embeddings, responses API
506
+ - **Request Types** - Streaming vs non-streaming
507
+ - **Error Tracking** - Failed requests, error types, retry attempts
508
+ - **Environment Info** - Development vs production usage
509
+
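Conceptually, one tracked request combines all three groups above into a single record. The exact payload the middleware sends is internal, so the field names below are illustrative assumptions, not the real wire format:

```typescript
// Illustrative sketch of a tracked usage record. Field names are
// assumptions for the purpose of this example, not the actual payload.
interface TrackedUsage {
  model: string;          // model or resolved Azure deployment
  inputTokens: number;    // usage metrics
  outputTokens: number;
  totalTokens: number;
  durationMs: number;     // request timing
  stream: boolean;        // technical detail: streaming vs non-streaming
  metadata?: { organizationId?: string; taskType?: string }; // business context
}

const record: TrackedUsage = {
  model: 'gpt-4o-mini',
  inputTokens: 12,
  outputTokens: 88,
  totalTokens: 100,
  durationMs: 420,
  stream: false,
  metadata: { organizationId: 'demo-org-123', taskType: 'chat' },
};
```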
510
+ ## OpenAI Responses API Support
511
+
512
+ This middleware includes **full support** for OpenAI's new Responses API, which is designed to replace the traditional Chat Completions API with enhanced capabilities for agent-like applications.
513
+
514
+ ### What is the Responses API?
515
+
516
+ The Responses API is OpenAI's new stateful API that:
517
+
518
+ - Uses `input` instead of `messages` parameter for simplified interaction
519
+ - Provides unified experience combining chat completions and assistants capabilities
520
+ - Supports advanced features like background tasks, function calling, and code interpreter
521
+ - Offers better streaming and real-time response generation
522
+ - Works with GPT-5 and other advanced models
523
+
524
+ ### API Comparison
525
+
526
+ **Traditional Chat Completions:**
527
+
528
+ ```javascript
529
+ const response = await openai.chat.completions.create({
530
+ model: 'gpt-4o',
531
+ messages: [{ role: 'user', content: 'Hello' }],
532
+ });
533
+ ```
534
+
535
+ **New Responses API:**
536
+
537
+ ```javascript
538
+ const response = await openai.responses.create({
539
+ model: 'gpt-5',
540
+ input: 'Hello', // Simplified input parameter
541
+ });
542
+ ```
543
+
544
+ ### Key Differences
545
+
546
+ | Feature | Chat Completions | Responses API |
547
+ | ---------------------- | ---------------------------- | ----------------------------------- |
548
+ | **Input Format** | `messages: [...]` | `input: "string"` or `input: [...]` |
549
+ | **Models** | GPT-4, GPT-4o, etc. | GPT-5, GPT-4o, etc. |
550
+ | **Response Structure** | `choices[0].message.content` | `output_text` |
551
+ | **Stateful** | No | Yes (with `store: true`) |
552
+ | **Advanced Features** | Limited | Built-in tools, reasoning, etc. |
553
+ | **Temperature** | Supported | Not supported with GPT-5 |
554
+
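When migrating a multi-turn conversation, one simple option is to flatten the existing `messages` array into a single `input` string. The helper below is a hypothetical migration aid (the Responses API also accepts structured input arrays, which preserve roles more faithfully):

```typescript
// Hypothetical migration helper: flatten a Chat Completions `messages`
// array into a single string suitable for the Responses API `input` field.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

function messagesToInput(messages: ChatMessage[]): string {
  // Keep role labels so multi-turn context survives the flattening.
  return messages.map((m) => `${m.role}: ${m.content}`).join('\n');
}
```

For example, `messagesToInput([{ role: 'user', content: 'Hello' }])` yields `'user: Hello'`.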
555
+ ### Requirements & Installation
556
+
557
+ **OpenAI SDK Version:**
558
+
559
+ - **Minimum:** `5.8.0` (when Responses API was officially released)
560
+ - **Recommended:** `5.8.2` or later (tested and verified)
561
+ - **Current:** `6.2.0` (latest available)
562
+
563
+ **Installation:**
564
+
565
+ ```bash
566
+ # Install latest version with Responses API support
567
+ npm install openai@^5.8.0
568
+
569
+ # Or install specific tested version
570
+ npm install openai@5.8.2
571
+ ```
572
+
573
+ ### Current Status
574
+
575
+ **The Responses API is officially available in OpenAI SDK 5.8+**
576
+
577
+ **Official Release:**
578
+
579
+ - Released by OpenAI in SDK version 5.8.0
580
+ - Fully documented in official OpenAI documentation
581
+ - Production-ready with GPT-5 and other supported models
582
+ - Complete middleware support with Revenium integration
583
+
584
+ **Middleware Features:**
585
+
586
+ - Full Responses API support (streaming & non-streaming)
587
+ - Seamless metadata tracking identical to Chat Completions
588
+ - Type-safe TypeScript integration
589
+ - Complete token tracking including reasoning tokens
590
+ - Azure OpenAI compatibility
591
+
592
+ **References:**
593
+
594
+ - [OpenAI Responses API Documentation](https://platform.openai.com/docs/guides/migrate-to-responses)
595
+ - [Azure OpenAI Responses API Documentation](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/responses)
596
+
597
+ ### Responses API Examples
598
+
599
+ The middleware includes comprehensive examples for the new Responses API:
600
+
601
+ **Basic Usage:**
602
+
603
+ ```typescript
604
+ import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
605
+ import OpenAI from 'openai';
606
+
607
+ // Initialize and patch OpenAI instance
608
+ initializeReveniumFromEnv();
609
+ const openai = patchOpenAIInstance(new OpenAI());
610
+
611
+ // Simple string input
612
+ const response = await openai.responses.create({
613
+ model: 'gpt-5',
614
+ input: 'What is the capital of France?',
615
+ max_output_tokens: 150,
616
+ usageMetadata: {
617
+ subscriber: { id: 'user-123', email: 'user@example.com' },
618
+ organizationId: 'org-456',
619
+ productId: 'quantum-explainer',
620
+ taskType: 'educational-content',
621
+ },
622
+ });
623
+
624
+ console.log(response.output_text); // "Paris."
625
+ ```
626
+
627
+ **Streaming Example:**
628
+
629
+ ```typescript
630
+ const stream = await openai.responses.create({
631
+ model: 'gpt-5',
632
+ input: 'Write a short story about AI',
633
+ stream: true,
634
+ max_output_tokens: 500,
635
+ usageMetadata: {
636
+ subscriber: { id: 'user-123', email: 'user@example.com' },
637
+ organizationId: 'org-456',
638
+ },
639
+ });
640
+
641
+ for await (const event of stream) {
642
+ if (event.type === 'response.output_text.delta') process.stdout.write(event.delta);
643
+ }
644
+ ```
645
+
646
### Adding Custom Metadata

Track users, organizations, and custom data with seamless TypeScript integration:

```typescript
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
import OpenAI from 'openai';

// Initialize and patch OpenAI instance
initializeReveniumFromEnv();
const openai = patchOpenAIInstance(new OpenAI());

const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Summarize this document' }],
  // Add custom tracking metadata - all fields optional, no type casting needed!
  usageMetadata: {
    subscriber: {
      id: 'user-12345',
      email: 'john@acme-corp.com',
    },
    organizationId: 'acme-corp',
    productId: 'document-ai',
    taskType: 'document-summary',
    agent: 'doc-summarizer-v2',
    traceId: 'session-abc123',
  },
});

// Same metadata works with Responses API
const responsesResult = await openai.responses.create({
  model: 'gpt-5',
  input: 'Summarize this document',
  // Same metadata structure - seamless compatibility!
  usageMetadata: {
    subscriber: {
      id: 'user-12345',
      email: 'john@acme-corp.com',
    },
    organizationId: 'acme-corp',
    productId: 'document-ai',
    taskType: 'document-summary',
    agent: 'doc-summarizer-v2',
    traceId: 'session-abc123',
  },
});
```

### Streaming Support

The middleware automatically handles streaming requests with seamless metadata:

```typescript
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
import OpenAI from 'openai';

// Initialize and patch OpenAI instance
initializeReveniumFromEnv();
const openai = patchOpenAIInstance(new OpenAI());

const stream = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
  // Metadata works seamlessly with streaming - all fields optional!
  usageMetadata: {
    organizationId: 'story-app',
    taskType: 'creative-writing',
  },
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
// Usage tracking happens automatically when stream completes
```

### Temporarily Disabling Tracking

If you need to disable Revenium tracking temporarily, you can unpatch the OpenAI client:

```javascript
import { unpatchOpenAI, patchOpenAI } from '@revenium/openai';

// Disable tracking
unpatchOpenAI();

// Your OpenAI calls now bypass Revenium tracking
await openai.chat.completions.create({...});

// Re-enable tracking
patchOpenAI();
```

**Common use cases:**

- **Debugging**: Isolate whether issues are caused by the middleware
- **Testing**: Compare behavior with and without tracking
- **Conditional tracking**: Enable or disable based on environment
- **Troubleshooting**: Temporary bypass during incident response

**Note**: This affects all OpenAI instances globally, since the middleware patches the prototype methods.

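For the conditional-tracking case, it helps to centralize the decision in one helper. A minimal sketch, assuming an application-level `REVENIUM_DISABLE_TRACKING` flag (hypothetical — not a variable this middleware reads):

```typescript
// Hypothetical helper: decide whether Revenium tracking should be active
// for the current process. REVENIUM_DISABLE_TRACKING is an app-level flag,
// not part of the middleware's configuration.
function trackingEnabled(env: Record<string, string | undefined>): boolean {
  if (env.REVENIUM_DISABLE_TRACKING === 'true') return false; // explicit opt-out
  return env.NODE_ENV !== 'test'; // skip metering in unit tests
}
```

At startup you would then call `unpatchOpenAI()` when `trackingEnabled(process.env)` returns `false`.
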
## Azure OpenAI Integration

The middleware automatically detects Azure OpenAI clients and provides accurate usage tracking and cost calculation.

### Quick Start with Azure OpenAI

```bash
# Set your Azure OpenAI environment variables
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_API_KEY="your-azure-api-key"
export AZURE_OPENAI_DEPLOYMENT="gpt-4o" # Your deployment name
export AZURE_OPENAI_API_VERSION="2024-12-01-preview" # Optional, defaults to latest

# Set your Revenium credentials
export REVENIUM_METERING_API_KEY="hak_your_revenium_api_key"
# export REVENIUM_METERING_BASE_URL="https://api.revenium.io/meter" # Optional: defaults to this URL
```

```typescript
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
import { AzureOpenAI } from 'openai';

// Initialize Revenium middleware
initializeReveniumFromEnv();

// Create and patch Azure OpenAI client
const azure = patchOpenAIInstance(
  new AzureOpenAI({
    endpoint: process.env.AZURE_OPENAI_ENDPOINT,
    apiKey: process.env.AZURE_OPENAI_API_KEY,
    apiVersion: process.env.AZURE_OPENAI_API_VERSION,
  })
);

// Your existing Azure OpenAI code works with seamless metadata
const response = await azure.chat.completions.create({
  model: 'gpt-4o', // Uses your deployment name
  messages: [{ role: 'user', content: 'Hello from Azure!' }],
  // Optional metadata with native TypeScript support
  usageMetadata: {
    organizationId: 'my-company',
    taskType: 'azure-chat',
  },
});

console.log(response.choices[0].message.content);
```

### Azure Features

- **Automatic Detection**: Detects Azure OpenAI clients automatically
- **Model Name Resolution**: Maps Azure deployment names to standard model names for accurate pricing
- **Provider Metadata**: Correctly tags requests with `provider: "Azure"` and `modelSource: "OPENAI"`
- **Deployment Support**: Works with any Azure deployment name (simple or complex)
- **Endpoint Flexibility**: Supports all Azure OpenAI endpoint formats
- **Zero Code Changes**: Existing Azure OpenAI code works without modification

### Azure Environment Variables

| Variable                   | Required | Description                                    | Example                              |
| -------------------------- | -------- | ---------------------------------------------- | ------------------------------------ |
| `AZURE_OPENAI_ENDPOINT`    | Yes      | Your Azure OpenAI endpoint URL                 | `https://acme.openai.azure.com/`     |
| `AZURE_OPENAI_API_KEY`     | Yes      | Your Azure OpenAI API key                      | `abc123...`                          |
| `AZURE_OPENAI_DEPLOYMENT`  | No       | Default deployment name                        | `gpt-4o` or `text-embedding-3-large` |
| `AZURE_OPENAI_API_VERSION` | No       | API version (defaults to `2024-12-01-preview`) | `2024-12-01-preview`                 |

### Azure Model Name Resolution

The middleware automatically maps Azure deployment names to standard model names for accurate pricing:

```
// Azure deployment names → Standard model names for pricing
"gpt-4o-2024-11-20"      → "gpt-4o"
"gpt4o-prod"             → "gpt-4o"
"o4-mini"                → "gpt-4o-mini"
"gpt-35-turbo-dev"       → "gpt-3.5-turbo"
"text-embedding-3-large" → "text-embedding-3-large" // Direct match
"embedding-3-large"      → "text-embedding-3-large"
```

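Conceptually, this resolution amounts to normalizing the deployment name. A rough sketch of the idea, assuming the mappings above (illustrative only — not the middleware's actual implementation):

```typescript
// Illustrative normalizer: map an Azure deployment name to a standard
// model name, falling back to the raw deployment name when unknown.
function resolveAzureModel(deployment: string): string {
  const d = deployment.toLowerCase();
  if (/^text-embedding-3-(large|small)$/.test(d)) return d; // direct match
  if (d.includes('embedding-3-large')) return 'text-embedding-3-large';
  if (d.includes('gpt-35-turbo') || d.includes('gpt-3.5')) return 'gpt-3.5-turbo';
  if (d.includes('4o-mini') || d.includes('o4-mini')) return 'gpt-4o-mini';
  if (d.includes('gpt-4o') || d.includes('gpt4o')) return 'gpt-4o';
  return deployment; // unknown deployment: keep as-is
}
```
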
## Advanced Usage

### Streaming with Metadata

The middleware seamlessly handles streaming requests with full metadata support:

```typescript
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
import OpenAI from 'openai';

initializeReveniumFromEnv();
const openai = patchOpenAIInstance(new OpenAI());

// Chat Completions API streaming
const stream = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
  usageMetadata: {
    subscriber: { id: 'user-123', email: 'user@example.com' },
    organizationId: 'story-app',
    taskType: 'creative-writing',
    traceId: 'session-' + Date.now(),
  },
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
// Usage tracking happens automatically when stream completes
```

### Responses API with Metadata

Full support for OpenAI's new Responses API:

```typescript
// Simple string input with metadata
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'What is the capital of France?',
  max_output_tokens: 150,
  usageMetadata: {
    subscriber: { id: 'user-123', email: 'user@example.com' },
    organizationId: 'org-456',
    productId: 'geography-tutor',
    taskType: 'educational-query',
  },
});

console.log(response.output_text); // "Paris."
```

### Azure OpenAI Integration

Automatic Azure OpenAI detection with seamless metadata:

```typescript
import { AzureOpenAI } from 'openai';

// Create and patch Azure OpenAI client
const azure = patchOpenAIInstance(
  new AzureOpenAI({
    endpoint: process.env.AZURE_OPENAI_ENDPOINT,
    apiKey: process.env.AZURE_OPENAI_API_KEY,
    apiVersion: process.env.AZURE_OPENAI_API_VERSION,
  })
);

// Your existing Azure OpenAI code works with seamless metadata
const response = await azure.chat.completions.create({
  model: 'gpt-4o', // Uses your deployment name
  messages: [{ role: 'user', content: 'Hello from Azure!' }],
  usageMetadata: {
    organizationId: 'my-company',
    taskType: 'azure-chat',
    agent: 'azure-assistant',
  },
});
```

### Embeddings with Metadata

Track embeddings usage with optional metadata:

```typescript
const embedding = await openai.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'Advanced text embedding with comprehensive tracking metadata',
  usageMetadata: {
    subscriber: { id: 'embedding-user-789', email: 'embeddings@company.com' },
    organizationId: 'my-company',
    taskType: 'document-embedding',
    productId: 'search-engine',
    traceId: `embed-${Date.now()}`,
    agent: 'openai-embeddings-node',
  },
});

console.log('Model:', embedding.model);
console.log('Usage:', embedding.usage);
console.log('Embedding dimensions:', embedding.data[0]?.embedding.length);
```

### Manual Configuration

For advanced use cases, configure the middleware manually:

```typescript
import { configure } from '@revenium/openai';

configure({
  reveniumApiKey: 'hak_your_api_key',
  reveniumBaseUrl: 'https://api.revenium.io/meter',
  apiTimeout: 5000,
  failSilent: true,
  maxRetries: 3,
});
```

## Configuration Options

### Environment Variables

| Variable                     | Required | Default                         | Description                                   |
| ---------------------------- | -------- | ------------------------------- | --------------------------------------------- |
| `REVENIUM_METERING_API_KEY`  | Yes      | -                               | Your Revenium API key (starts with `hak_`)    |
| `OPENAI_API_KEY`             | Yes      | -                               | Your OpenAI API key (starts with `sk-`)       |
| `REVENIUM_METERING_BASE_URL` | No       | `https://api.revenium.io/meter` | Revenium metering API base URL                |
| `REVENIUM_DEBUG`             | No       | `false`                         | Enable debug logging (`true`/`false`)         |
| `AZURE_OPENAI_ENDPOINT`      | No       | -                               | Azure OpenAI endpoint URL (for Azure testing) |
| `AZURE_OPENAI_API_KEY`       | No       | -                               | Azure OpenAI API key (for Azure testing)      |
| `AZURE_OPENAI_DEPLOYMENT`    | No       | -                               | Azure OpenAI deployment name (for Azure)      |
| `AZURE_OPENAI_API_VERSION`   | No       | `2024-12-01-preview`            | Azure OpenAI API version (for Azure)          |

**Important note about `REVENIUM_METERING_BASE_URL`:**

- This variable is **optional** and defaults to the production URL (`https://api.revenium.io/meter`)
- If you don't set it explicitly, the middleware uses the default production endpoint
- However, you may see console warnings or errors if the middleware cannot determine the correct environment
- **Best practice:** Always set this variable explicitly to match your environment:

  ```bash
  # Default production URL (recommended)
  REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
  ```

- **Remember:** Your `REVENIUM_METERING_API_KEY` must match your base URL environment

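Because a key/URL mismatch only surfaces at request time, it can be worth validating the configuration once at startup. A minimal sketch (`checkReveniumEnv` is a hypothetical application-side helper, not part of this package's API):

```typescript
// Hypothetical startup check: collect obvious configuration problems
// before the first OpenAI call is made.
function checkReveniumEnv(env: Record<string, string | undefined>): string[] {
  const problems: string[] = [];
  if (!env.REVENIUM_METERING_API_KEY?.startsWith('hak_')) {
    problems.push('REVENIUM_METERING_API_KEY must be set and start with "hak_"');
  }
  // Falls back to the documented production default
  const baseUrl = env.REVENIUM_METERING_BASE_URL ?? 'https://api.revenium.io/meter';
  if (!baseUrl.startsWith('https://')) {
    problems.push('REVENIUM_METERING_BASE_URL must be an https URL');
  }
  return problems;
}
```

Calling `checkReveniumEnv(process.env)` at boot and logging any returned problems turns a silent tracking failure into an explicit startup message.
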
### Usage Metadata Options

All metadata fields are optional and help provide better analytics:

```typescript
interface UsageMetadata {
  traceId?: string; // Session or conversation ID
  taskType?: string; // Type of AI task (e.g., "chat", "summary")
  subscriber?: {
    // User information (nested structure)
    id?: string; // User ID from your system
    email?: string; // User's email address
    credential?: {
      // User credentials
      name?: string; // Credential name
      value?: string; // Credential value
    };
  };
  organizationId?: string; // Organization/company ID
  subscriptionId?: string; // Billing plan ID
  productId?: string; // Your product/feature ID
  agent?: string; // AI agent identifier
  responseQualityScore?: number; // Quality score (0-1)
}
```

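Since every field is optional, a common pattern is to keep shared defaults (organization, agent) in one place and spread per-request fields over them. A minimal sketch, using an illustrative subset of the metadata shape:

```typescript
// Illustrative subset of the UsageMetadata shape above
interface Meta {
  organizationId?: string;
  agent?: string;
  taskType?: string;
  traceId?: string;
}

const defaults: Meta = { organizationId: 'acme-corp', agent: 'doc-bot' };

// Per-request fields win over shared defaults
function withDefaults(overrides: Meta = {}): Meta {
  return { ...defaults, ...overrides };
}
```

`withDefaults({ taskType: 'chat' })` then yields metadata carrying both the shared organization and the per-request task type.
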
## Included Examples

The package includes 8 comprehensive example files in your installation:

**OpenAI Examples:**

- **openai-basic.ts**: Basic chat completions with metadata tracking
- **openai-streaming.ts**: Streaming responses with real-time output
- **openai-responses-basic.ts**: New Responses API usage (OpenAI SDK 5.8+)
- **openai-responses-streaming.ts**: Streaming with Responses API

**Azure OpenAI Examples:**

- **azure-basic.ts**: Azure OpenAI chat completions
- **azure-streaming.ts**: Azure streaming responses
- **azure-responses-basic.ts**: Azure Responses API
- **azure-responses-streaming.ts**: Azure streaming Responses API

**For npm users:** Examples are installed in `node_modules/@revenium/openai/examples/`

**For GitHub users:** Examples are in the repository's `examples/` directory

For detailed setup instructions and usage patterns, see [examples/README.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/README.md).

## How It Works

1. **Automatic Patching**: When imported, the middleware patches OpenAI's methods:
   - `chat.completions.create` (Chat Completions API)
   - `responses.create` (Responses API - when available)
   - `embeddings.create` (Embeddings API)
2. **Request Interception**: All OpenAI requests are intercepted to extract metadata
3. **Usage Extraction**: Token counts, model info, and timing data are captured
4. **Async Tracking**: Usage data is sent to Revenium in the background (fire-and-forget)
5. **Transparent Response**: Original OpenAI responses are returned unchanged

The middleware never blocks your application - if Revenium tracking fails, your OpenAI requests continue normally.

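The interception in steps 2-5 boils down to wrapping each `create` method. A rough sketch of the pattern under those assumptions (illustrative, not the middleware's actual internals):

```typescript
type CreateFn = (params: Record<string, unknown>) => Promise<unknown>;

// Wrap a create() method so usage is reported in the background and
// failures in reporting never affect the caller's response.
function withTracking(
  create: CreateFn,
  report: (usage: unknown) => Promise<void>
): CreateFn {
  return async (params) => {
    const { usageMetadata, ...rest } = params; // strip the middleware-only field
    const response = await create(rest); // original OpenAI call
    report({ usageMetadata, response }).catch(() => {
      // fire-and-forget: swallow tracking errors, never block the app
    });
    return response; // returned unchanged
  };
}
```

The `.catch(() => {})` is what makes tracking fail-silent: the reporting promise is deliberately detached from the caller's control flow.
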
## Troubleshooting

### Common Issues

#### 1. **No tracking data in dashboard**

**Symptoms**: OpenAI calls work but no data appears in the Revenium dashboard

**Solution**: Enable debug logging to check middleware status:

```bash
export REVENIUM_DEBUG=true
```

**Expected output for successful tracking**:

```bash
[Revenium Debug] OpenAI chat.completions.create intercepted
[Revenium Debug] Revenium tracking successful

# For Responses API:
[Revenium Debug] OpenAI responses.create intercepted
[Revenium Debug] Revenium tracking successful
```

#### 2. **Environment mismatch errors**

**Symptoms**: Authentication errors or 401/403 responses

**Solution**: Ensure your API key matches your base URL environment:

```bash
# Correct - Key and URL from the same environment
REVENIUM_METERING_API_KEY=hak_your_api_key_here
REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter

# Wrong - Key and URL from different environments
REVENIUM_METERING_API_KEY=hak_wrong_environment_key
REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
```

#### 3. **TypeScript type errors**

**Symptoms**: TypeScript errors about the `usageMetadata` property

**Solution**: Ensure you're importing the middleware before OpenAI:

```typescript
// Correct order
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
import OpenAI from 'openai';

// Wrong order
import OpenAI from 'openai';
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
```

#### 4. **Azure OpenAI not working**

**Symptoms**: Azure OpenAI calls not being tracked

**Solution**: Ensure you're using `patchOpenAIInstance()` with your Azure client:

```typescript
import { AzureOpenAI } from 'openai';
import { patchOpenAIInstance } from '@revenium/openai';

// Correct
const azure = patchOpenAIInstance(new AzureOpenAI({...}));

// Wrong - not patched
const azure = new AzureOpenAI({...});
```

#### 5. **Responses API not available**

**Symptoms**: `openai.responses.create` is undefined

**Solution**: Upgrade to OpenAI SDK 5.8+ for Responses API support:

```bash
npm install openai@^5.8.0
```

### Debug Mode

Enable comprehensive debug logging:

```bash
export REVENIUM_DEBUG=true
```

This will show:

- Middleware initialization status
- Request interception confirmations
- Metadata extraction details
- Tracking success/failure messages
- Error details and stack traces

### Getting Help

If you're still experiencing issues:

1. **Check the logs** with `REVENIUM_DEBUG=true`
2. **Verify environment variables** are set correctly
3. **Test with a minimal example** from our documentation
4. **Contact support** with debug logs and error details

For detailed troubleshooting guides, visit [docs.revenium.io](https://docs.revenium.io).

## Supported Models

### OpenAI Models

| Model Family      | Models                                                                       | APIs Supported              |
| ----------------- | ---------------------------------------------------------------------------- | --------------------------- |
| **GPT-4o**        | `gpt-4o`, `gpt-4o-2024-11-20`, `gpt-4o-2024-08-06`, `gpt-4o-2024-05-13`      | Chat Completions, Responses |
| **GPT-4o Mini**   | `gpt-4o-mini`, `gpt-4o-mini-2024-07-18`                                      | Chat Completions, Responses |
| **GPT-4 Turbo**   | `gpt-4-turbo`, `gpt-4-turbo-2024-04-09`, `gpt-4-turbo-preview`               | Chat Completions            |
| **GPT-4**         | `gpt-4`, `gpt-4-0613`, `gpt-4-0314`                                          | Chat Completions            |
| **GPT-3.5 Turbo** | `gpt-3.5-turbo`, `gpt-3.5-turbo-0125`, `gpt-3.5-turbo-1106`                  | Chat Completions            |
| **GPT-5**         | `gpt-5` (when available)                                                     | Responses API               |
| **Embeddings**    | `text-embedding-3-large`, `text-embedding-3-small`, `text-embedding-ada-002` | Embeddings                  |

### Azure OpenAI Models

All OpenAI models are supported through Azure OpenAI with automatic deployment name resolution:

| Azure Deployment         | Resolved Model           | API Support                 |
| ------------------------ | ------------------------ | --------------------------- |
| `gpt-4o-2024-11-20`      | `gpt-4o`                 | Chat Completions, Responses |
| `gpt4o-prod`             | `gpt-4o`                 | Chat Completions, Responses |
| `o4-mini`                | `gpt-4o-mini`            | Chat Completions, Responses |
| `gpt-35-turbo-dev`       | `gpt-3.5-turbo`          | Chat Completions            |
| `text-embedding-3-large` | `text-embedding-3-large` | Embeddings                  |
| `embedding-3-large`      | `text-embedding-3-large` | Embeddings                  |

**Note**: The middleware automatically maps Azure deployment names to standard model names for accurate pricing and analytics.

### API Support Matrix

| Feature               | Chat Completions API | Responses API | Embeddings API |
| --------------------- | -------------------- | ------------- | -------------- |
| **Basic Requests**    | Yes                  | Yes           | Yes            |
| **Streaming**         | Yes                  | Yes           | No             |
| **Metadata Tracking** | Yes                  | Yes           | Yes            |
| **Azure OpenAI**      | Yes                  | Yes           | Yes            |
| **Cost Calculation**  | Yes                  | Yes           | Yes            |
| **Token Counting**    | Yes                  | Yes           | Yes            |

## Requirements

- Node.js 16+
- OpenAI package v4.0+
- TypeScript 5.0+ (for TypeScript projects)

## Documentation

For detailed documentation, visit [docs.revenium.io](https://docs.revenium.io).

## Contributing

See [CONTRIBUTING.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/CONTRIBUTING.md).

## Code of Conduct

See [CODE_OF_CONDUCT.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/CODE_OF_CONDUCT.md).

## Security

See [SECURITY.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/SECURITY.md).

## License

This project is licensed under the MIT License - see the [LICENSE](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/LICENSE) file for details.

## Support

For issues, feature requests, or contributions:

- **GitHub Repository**: [revenium/revenium-middleware-openai-node](https://github.com/revenium/revenium-middleware-openai-node)
- **Issues**: [Report bugs or request features](https://github.com/revenium/revenium-middleware-openai-node/issues)
- **Documentation**: [docs.revenium.io](https://docs.revenium.io)
- **Contact**: Reach out to the Revenium team for additional support

## Development

For development and testing instructions, see [DEVELOPMENT.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/DEVELOPMENT.md).

---

**Built by Revenium**