@revenium/openai 1.0.9 → 1.0.11
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +78 -0
- package/LICENSE +21 -21
- package/README.md +1231 -1152
- package/dist/cjs/index.js +2 -2
- package/dist/cjs/types/openai-augmentation.js +1 -1
- package/dist/esm/index.js +2 -2
- package/dist/esm/types/openai-augmentation.js +1 -1
- package/dist/types/index.d.ts +2 -2
- package/dist/types/types/openai-augmentation.d.ts +1 -1
- package/examples/README.md +361 -0
- package/examples/azure-basic.ts +194 -0
- package/examples/azure-responses-basic.ts +204 -0
- package/examples/azure-responses-streaming.ts +226 -0
- package/examples/azure-streaming.ts +188 -0
- package/examples/openai-basic.ts +125 -0
- package/examples/openai-responses-basic.ts +183 -0
- package/examples/openai-responses-streaming.ts +203 -0
- package/examples/openai-streaming.ts +161 -0
- package/package.json +87 -84
package/README.md
CHANGED
@@ -1,1152 +1,1231 @@
[Previous README removed (1,152 lines): badge links, setup steps, and code examples superseded by the new README below.]
# Revenium OpenAI Middleware for Node.js

[npm](https://www.npmjs.com/package/@revenium/openai)
[Node.js](https://nodejs.org/)
[Documentation](https://docs.revenium.io)
[MIT License](https://opensource.org/licenses/MIT)

**Transparent TypeScript middleware for automatic Revenium usage tracking with OpenAI**

A professional-grade Node.js middleware that seamlessly integrates with OpenAI and Azure OpenAI to provide automatic usage tracking, billing analytics, and comprehensive metadata collection. Features native TypeScript support with zero type casting required, and supports both the traditional Chat Completions API and the new Responses API.

## Features

- **Seamless Integration** - Native TypeScript support, no type casting required
- **Optional Metadata** - Track users, organizations, and custom metadata (all fields optional)
- **Dual API Support** - Chat Completions API + new Responses API (OpenAI SDK 5.8+)
- **Azure OpenAI Support** - Full Azure OpenAI integration with automatic detection
- **Type Safety** - Complete TypeScript support with IntelliSense
- **Streaming Support** - Handles regular and streaming requests seamlessly
- **Fire-and-Forget** - Never blocks your application flow
- **Zero Configuration** - Auto-initialization from environment variables

## Package Migration

This package has been renamed from `revenium-middleware-openai-node` to `@revenium/openai` for better organization and simpler naming.

### Migration Steps

If you're upgrading from the old package:

```bash
# Uninstall the old package
npm uninstall revenium-middleware-openai-node

# Install the new package
npm install @revenium/openai
```

**Update your imports:**

```typescript
// Old import
import { patchOpenAIInstance } from "revenium-middleware-openai-node";

// New import
import { patchOpenAIInstance } from "@revenium/openai";
```

All functionality remains exactly the same - only the package name has changed.

## Getting Started

Choose your preferred approach to get started quickly:

### Option 1: Create Project from Scratch

Perfect for new projects. We'll guide you step-by-step from `mkdir` to running tests.
[Go to Step-by-Step Guide](#option-1-create-project-from-scratch)

### Option 2: Clone Our Repository

Clone and run the repository with working examples.
[Go to Repository Guide](#option-2-clone-our-repository)

### Option 3: Add to Existing Project

Already have a project? Just install and replace imports.
[Go to Integration Guide](#option-3-existing-project-integration)

---

## Option 1: Create Project from Scratch

### Step 1: Create Project Directory

```bash
# Create and navigate to your project
mkdir my-openai-project
cd my-openai-project

# Initialize npm project
npm init -y
```

### Step 2: Install Dependencies

```bash
# Install the middleware and OpenAI SDK
npm install @revenium/openai openai@^5.8.0 dotenv

# For TypeScript projects (optional)
npm install -D typescript tsx @types/node
```

### Step 3: Setup Environment Variables

Create a `.env` file in your project root:

```bash
# Create .env file
echo. > .env   # On Windows (CMD)
touch .env     # On Mac/Linux
# OR PowerShell
New-Item -Path .env -ItemType File
```

Copy and paste the following into `.env`:

```env
# Revenium OpenAI Middleware Configuration
# Copy this file to .env and fill in your actual values

# Required: Your Revenium API key (starts with hak_)
REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter

# Required: Your OpenAI API key (starts with sk-)
OPENAI_API_KEY=sk-your_openai_api_key_here

# Optional: Your Azure OpenAI configuration (for Azure testing)
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
AZURE_OPENAI_API_VERSION=2024-12-01-preview

# Optional: Enable debug logging
REVENIUM_DEBUG=false
```

**NOTE**: Replace each `your_..._here` placeholder with your actual values.

**IMPORTANT**: Ensure your `REVENIUM_METERING_API_KEY` matches your `REVENIUM_METERING_BASE_URL` environment. Mismatched credentials will cause authentication failures.

### Step 4: Protect Your API Keys

**CRITICAL SECURITY**: Never commit your `.env` file to version control!

Your `.env` file contains sensitive API keys that must be kept secret:

```bash
# Verify .env is in your .gitignore
git check-ignore .env
```

If the command returns nothing, add `.env` to your `.gitignore`:

```gitignore
# Environment variables
.env
.env.*
!.env.example
```

**Best Practice**: Use GitHub's standard Node.gitignore as a starting point:
- Reference: https://github.com/github/gitignore/blob/main/Node.gitignore

**Warning:** The following command will overwrite your current `.gitignore` file.
To avoid losing custom rules, back up your file first or append instead:
`curl https://raw.githubusercontent.com/github/gitignore/main/Node.gitignore >> .gitignore`

**Note:** Appending may result in duplicate entries if your `.gitignore` already contains some of the patterns from Node.gitignore.
Please review your `.gitignore` after appending and remove any duplicate lines as needed.

This protects your OpenAI API key, Revenium API key, and any other secrets from being accidentally committed to your repository.

### Step 5: Create Your First Test

#### TypeScript Test

Create `test-openai.ts`:

```typescript
import 'dotenv/config';
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
import OpenAI from 'openai';

async function testOpenAI() {
  try {
    // Initialize Revenium middleware
    const initResult = initializeReveniumFromEnv();
    if (!initResult.success) {
      console.error('Failed to initialize Revenium:', initResult.message);
      process.exit(1);
    }

    // Create and patch OpenAI instance
    const openai = patchOpenAIInstance(new OpenAI());

    const response = await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      max_tokens: 100,
      messages: [{ role: 'user', content: 'What is artificial intelligence?' }],
      usageMetadata: {
        subscriber: {
          id: 'user-456',
          email: 'user@demo-org.com',
          credential: {
            name: 'demo-api-key',
            value: 'demo-key-123',
          },
        },
        organizationId: 'demo-org-123',
        productId: 'ai-assistant-v2',
        taskType: 'educational-query',
        agent: 'openai-basic-demo',
        traceId: 'session-' + Date.now(),
      },
    });

    const text = response.choices[0]?.message?.content || 'No response';
    console.log('Response:', text);
  } catch (error) {
    console.error('Error:', error);
  }
}

testOpenAI();
```

#### JavaScript Test

Create `test-openai.js`:

```javascript
require('dotenv').config();
const {
  initializeReveniumFromEnv,
  patchOpenAIInstance,
} = require('@revenium/openai');
const OpenAI = require('openai');

async function testOpenAI() {
  try {
    // Initialize Revenium middleware
    const initResult = initializeReveniumFromEnv();
    if (!initResult.success) {
      console.error('Failed to initialize Revenium:', initResult.message);
      process.exit(1);
    }

    // Create and patch OpenAI instance
    const openai = patchOpenAIInstance(new OpenAI());

    const response = await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      max_tokens: 100,
      messages: [{ role: 'user', content: 'What is artificial intelligence?' }],
      usageMetadata: {
        subscriber: {
          id: 'user-456',
          email: 'user@demo-org.com',
        },
        organizationId: 'demo-org-123',
        taskType: 'educational-query',
      },
    });

    const text = response.choices[0]?.message?.content || 'No response';
    console.log('Response:', text);
  } catch (error) {
    // Handle error appropriately
    console.error('Error:', error);
  }
}

testOpenAI();
```

### Step 6: Add Package Scripts

Update your `package.json`:

```json
{
  "name": "my-openai-project",
  "version": "1.0.0",
  "type": "commonjs",
  "scripts": {
    "test-ts": "npx tsx test-openai.ts",
    "test-js": "node test-openai.js"
  },
  "dependencies": {
    "@revenium/openai": "^1.0.11",
    "openai": "^5.8.0",
    "dotenv": "^16.5.0"
  }
}
```

### Step 7: Run Your Tests

```bash
# Test TypeScript version
npm run test-ts

# Test JavaScript version
npm run test-js
```

### Step 8: Project Structure

Your project should now look like this:

```
my-openai-project/
├── .env             # Environment variables
├── .gitignore       # Git ignore file
├── package.json     # Project configuration
├── test-openai.ts   # TypeScript test
└── test-openai.js   # JavaScript test
```

## Option 2: Clone Our Repository

### Step 1: Clone the Repository

```bash
# Clone the repository
git clone git@github.com:revenium/revenium-middleware-openai-node.git
cd revenium-middleware-openai-node
```

### Step 2: Install Dependencies

```bash
# Install all dependencies
npm install
npm install @revenium/openai
```

### Step 3: Setup Environment Variables

Create a `.env` file in the project root:

```bash
# Create .env file
cp .env.example .env   # If available, or create manually
```

Copy and paste the following into `.env`:

```bash
# Revenium OpenAI Middleware Configuration
# Copy this file to .env and fill in your actual values

# Required: Your Revenium API key (starts with hak_)
REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter

# Required: Your OpenAI API key (starts with sk-)
OPENAI_API_KEY=sk-your_openai_api_key_here

# Optional: Your Azure OpenAI configuration (for Azure testing)
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
AZURE_OPENAI_API_VERSION=2024-12-01-preview

# Optional: Enable debug logging
REVENIUM_DEBUG=false
```

**IMPORTANT**: Ensure your `REVENIUM_METERING_API_KEY` matches your `REVENIUM_METERING_BASE_URL` environment. Mismatched credentials will cause authentication failures.

### Step 4: Build the Project

```bash
# Build the middleware
npm run build
```

### Step 5: Run the Examples

The repository includes working example files:

```bash
# Run Chat Completions API examples (using npm scripts)
npm run example:openai-basic
npm run example:openai-streaming
npm run example:azure-basic
npm run example:azure-streaming

# Run Responses API examples (available with OpenAI SDK 5.8+)
npm run example:openai-responses-basic
npm run example:openai-responses-streaming
npm run example:azure-responses-basic
npm run example:azure-responses-streaming

# Or run examples directly with tsx
npx tsx examples/openai-basic.ts
npx tsx examples/openai-streaming.ts
npx tsx examples/azure-basic.ts
npx tsx examples/azure-streaming.ts
npx tsx examples/openai-responses-basic.ts
npx tsx examples/openai-responses-streaming.ts
npx tsx examples/azure-responses-basic.ts
npx tsx examples/azure-responses-streaming.ts
```

These examples demonstrate:

- **Chat Completions API** - Traditional OpenAI chat completions and embeddings
- **Responses API** - New OpenAI Responses API with enhanced capabilities
- **Azure OpenAI** - Full Azure OpenAI integration with automatic detection
- **Streaming Support** - Real-time response streaming with metadata tracking
- **Optional Metadata** - Rich business context and user tracking
- **Error Handling** - Robust error handling and debugging

## Option 3: Existing Project Integration

Already have a project? Just install and replace imports:

### Step 1: Install the Package

```bash
npm install @revenium/openai
```

### Step 2: Update Your Imports

**Before:**

```typescript
import OpenAI from 'openai';

const openai = new OpenAI();
```

**After:**

```typescript
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
import OpenAI from 'openai';

// Initialize Revenium middleware
initializeReveniumFromEnv();

// Patch your OpenAI instance
const openai = patchOpenAIInstance(new OpenAI());
```

### Step 3: Add Environment Variables

Add to your `.env` file:

```env
# Revenium OpenAI Middleware Configuration

# Required: Your Revenium API key (starts with hak_)
REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter

# Required: Your OpenAI API key (starts with sk-)
OPENAI_API_KEY=sk-your_openai_api_key_here

# Optional: Your Azure OpenAI configuration (for Azure testing)
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
AZURE_OPENAI_API_KEY=your-azure-openai-api-key-here
AZURE_OPENAI_DEPLOYMENT=your-deployment-name-here
AZURE_OPENAI_API_VERSION=2024-12-01-preview

# Optional: Enable debug logging
REVENIUM_DEBUG=false
```

### Step 4: Optional - Add Metadata

Enhance your existing calls with optional metadata:

```typescript
// Your existing code works unchanged
const response = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
  // Add optional metadata for better analytics
  usageMetadata: {
    subscriber: { id: 'user-123' },
    organizationId: 'my-company',
    taskType: 'chat',
  },
});
```

**That's it!** Your existing OpenAI code now automatically tracks usage to Revenium.

## What Gets Tracked

The middleware automatically captures comprehensive usage data:

### **Usage Metrics**

- **Token Counts** - Input tokens, output tokens, total tokens
- **Model Information** - Model name, provider (OpenAI/Azure), API version
- **Request Timing** - Request duration, response time
- **Cost Calculation** - Estimated costs based on current pricing

### **Business Context (Optional)**

- **User Tracking** - Subscriber ID, email, credentials
- **Organization Data** - Organization ID, subscription ID, product ID
- **Task Classification** - Task type, agent identifier, trace ID
- **Quality Metrics** - Response quality scores, custom metadata

### **Technical Details**

- **API Endpoints** - Chat completions, embeddings, responses API
- **Request Types** - Streaming vs non-streaming
- **Error Tracking** - Failed requests, error types, retry attempts
- **Environment Info** - Development vs production usage

## OpenAI Responses API Support

This middleware includes **full support** for OpenAI's new Responses API, which is designed to replace the traditional Chat Completions API with enhanced capabilities for agent-like applications.

### What is the Responses API?

The Responses API is OpenAI's new stateful API that:

- Uses `input` instead of the `messages` parameter for simplified interaction
- Provides a unified experience combining chat completions and assistants capabilities
- Supports advanced features like background tasks, function calling, and code interpreter
- Offers better streaming and real-time response generation
- Works with GPT-5 and other advanced models

### API Comparison

**Traditional Chat Completions:**

```javascript
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }],
});
```

**New Responses API:**

```javascript
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'Hello', // Simplified input parameter
});
```

### Key Differences

| Feature                | Chat Completions             | Responses API                       |
| ---------------------- | ---------------------------- | ----------------------------------- |
| **Input Format**       | `messages: [...]`            | `input: "string"` or `input: [...]` |
| **Models**             | GPT-4, GPT-4o, etc.          | GPT-5, GPT-4o, etc.                 |
| **Response Structure** | `choices[0].message.content` | `output_text`                       |
| **Stateful**           | No                           | Yes (with `store: true`)            |
| **Advanced Features**  | Limited                      | Built-in tools, reasoning, etc.     |
| **Temperature**        | Supported                    | Not supported with GPT-5            |

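The **Stateful** row refers to response chaining: with `store: true` a response is persisted and can be referenced by its ID in a later call. A minimal sketch of that flow, using the patched client from the earlier examples (`previous_response_id` is the standard Responses API parameter for chaining, not something added by this middleware; metadata values are placeholders):

```typescript
// Minimal sketch: stateful chaining with the Responses API.
const first = await openai.responses.create({
  model: 'gpt-5',
  input: 'Pick a random city and remember it.',
  store: true, // persist the response so it can be referenced later
  usageMetadata: { organizationId: 'org-456', taskType: 'stateful-demo' },
});

const followUp = await openai.responses.create({
  model: 'gpt-5',
  input: 'Which city did you pick?',
  previous_response_id: first.id, // chain onto the stored response
  usageMetadata: { organizationId: 'org-456', taskType: 'stateful-demo' },
});

console.log(followUp.output_text);
```
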
### Requirements & Installation
|
|
556
|
+
|
|
557
|
+
**OpenAI SDK Version:**
|
|
558
|
+
|
|
559
|
+
- **Minimum:** `5.8.0` (when Responses API was officially released)
|
|
560
|
+
- **Recommended:** `5.8.2` or later (tested and verified)
|
|
561
|
+
- **Current:** `6.2.0` (latest available)
|
|
562
|
+
|
|
563
|
+
**Installation:**
|
|
564
|
+
|
|
565
|
+
```bash
|
|
566
|
+
# Install latest version with Responses API support
|
|
567
|
+
npm install openai@^5.8.0
|
|
568
|
+
|
|
569
|
+
# Or install specific tested version
|
|
570
|
+
npm install openai@5.8.2
|
|
571
|
+
```
|
|
572
|
+
|
|
573
|
+
### Current Status
|
|
574
|
+
|
|
575
|
+
**The Responses API is officially available in OpenAI SDK 5.8+**
|
|
576
|
+
|
|
577
|
+
**Official Release:**
|
|
578
|
+
|
|
579
|
+
- Released by OpenAI in SDK version 5.8.0
|
|
580
|
+
- Fully documented in official OpenAI documentation
|
|
581
|
+
- Production-ready with GPT-5 and other supported models
|
|
582
|
+
- Complete middleware support with Revenium integration
|
|
583
|
+
|
|
584
|
+
**Middleware Features:**
|
|
585
|
+
|
|
586
|
+
- Full Responses API support (streaming & non-streaming)
|
|
587
|
+
- Seamless metadata tracking identical to Chat Completions
|
|
588
|
+
- Type-safe TypeScript integration
|
|
589
|
+
- Complete token tracking including reasoning tokens
|
|
590
|
+
- Azure OpenAI compatibility
|
|
591
|
+
|
|
592
|
+
**References:**
|
|
593
|
+
|
|
594
|
+
- [OpenAI Responses API Documentation](https://platform.openai.com/docs/guides/migrate-to-responses)
|
|
595
|
+
- [Azure OpenAI Responses API Documentation](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/responses)
|
|
596
|
+
|
|
597
|
+
### Responses API Examples
|
|
598
|
+
|
|
599
|
+
The middleware includes comprehensive examples for the new Responses API:
|
|
600
|
+
|
|
601
|
+
**Basic Usage:**
|
|
602
|
+
|
|
603
|
+
```typescript
|
|
604
|
+
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
|
|
605
|
+
import OpenAI from 'openai';
|
|
606
|
+
|
|
607
|
+
// Initialize and patch OpenAI instance
|
|
608
|
+
initializeReveniumFromEnv();
|
|
609
|
+
const openai = patchOpenAIInstance(new OpenAI());
|
|
610
|
+
|
|
611
|
+
// Simple string input
|
|
612
|
+
const response = await openai.responses.create({
|
|
613
|
+
model: 'gpt-5',
|
|
614
|
+
input: 'What is the capital of France?',
|
|
615
|
+
max_output_tokens: 150,
|
|
616
|
+
usageMetadata: {
|
|
617
|
+
subscriber: { id: 'user-123', email: 'user@example.com' },
|
|
618
|
+
organizationId: 'org-456',
|
|
619
|
+
productId: 'quantum-explainer',
|
|
620
|
+
taskType: 'educational-content',
|
|
621
|
+
},
|
|
622
|
+
});
|
|
623
|
+
|
|
624
|
+
console.log(response.output_text); // "Paris."
|
|
625
|
+
```
|
|
626
|
+
|
|
627
|
+
**Streaming Example:**
|
|
628
|
+
|
|
629
|
+
```typescript
|
|
630
|
+
const stream = await openai.responses.create({
|
|
631
|
+
model: 'gpt-5',
|
|
632
|
+
input: 'Write a short story about AI',
|
|
633
|
+
stream: true,
|
|
634
|
+
max_output_tokens: 500,
|
|
635
|
+
usageMetadata: {
|
|
636
|
+
subscriber: { id: 'user-123', email: 'user@example.com' },
|
|
637
|
+
organizationId: 'org-456',
|
|
638
|
+
},
|
|
639
|
+
});
|
|
640
|
+
|
|
641
|
+
for await (const chunk of stream) {
|
|
642
|
+
process.stdout.write(chunk.delta?.content || '');
|
|
643
|
+
}
|
|
644
|
+
```
|
|
645
|
+
|
|
646
|
+
### Adding Custom Metadata
|
|
647
|
+
|
|
648
|
+
Track users, organizations, and custom data with seamless TypeScript integration:
|
|
649
|
+
|
|
650
|
+
```typescript
|
|
651
|
+
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
|
|
652
|
+
import OpenAI from 'openai';
|
|
653
|
+
|
|
654
|
+
// Initialize and patch OpenAI instance
|
|
655
|
+
initializeReveniumFromEnv();
|
|
656
|
+
const openai = patchOpenAIInstance(new OpenAI());
|
|
657
|
+
|
|
658
|
+
const response = await openai.chat.completions.create({
|
|
659
|
+
model: 'gpt-4',
|
|
660
|
+
messages: [{ role: 'user', content: 'Summarize this document' }],
|
|
661
|
+
// Add custom tracking metadata - all fields optional, no type casting needed!
|
|
662
|
+
usageMetadata: {
|
|
663
|
+
subscriber: {
|
|
664
|
+
id: 'user-12345',
|
|
665
|
+
email: 'john@acme-corp.com',
|
|
666
|
+
},
|
|
667
|
+
organizationId: 'acme-corp',
|
|
668
|
+
productId: 'document-ai',
|
|
669
|
+
taskType: 'document-summary',
|
|
670
|
+
agent: 'doc-summarizer-v2',
|
|
671
|
+
traceId: 'session-abc123',
|
|
672
|
+
},
|
|
673
|
+
});
|
|
674
|
+
|
|
675
|
+
// Same metadata works with Responses API
|
|
676
|
+
const responsesResult = await openai.responses.create({
|
|
677
|
+
model: 'gpt-5',
|
|
678
|
+
input: 'Summarize this document',
|
|
679
|
+
// Same metadata structure - seamless compatibility!
|
|
680
|
+
usageMetadata: {
|
|
681
|
+
subscriber: {
|
|
682
|
+
id: 'user-12345',
|
|
683
|
+
email: 'john@acme-corp.com',
|
|
684
|
+
},
|
|
685
|
+
organizationId: 'acme-corp',
|
|
686
|
+
productId: 'document-ai',
|
|
687
|
+
taskType: 'document-summary',
|
|
688
|
+
agent: 'doc-summarizer-v2',
|
|
689
|
+
traceId: 'session-abc123',
|
|
690
|
+
},
|
|
691
|
+
});
|
|
692
|
+
```
|
|
693
|
+
|
|
694
|
+
### Streaming Support
|
|
695
|
+
|
|
696
|
+
The middleware automatically handles streaming requests with seamless metadata:
|
|
697
|
+
|
|
698
|
+
```typescript
|
|
699
|
+
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
|
|
700
|
+
import OpenAI from 'openai';
|
|
701
|
+
|
|
702
|
+
// Initialize and patch OpenAI instance
|
|
703
|
+
initializeReveniumFromEnv();
|
|
704
|
+
const openai = patchOpenAIInstance(new OpenAI());
|
|
705
|
+
|
|
706
|
+
const stream = await openai.chat.completions.create({
|
|
707
|
+
model: 'gpt-4',
|
|
708
|
+
messages: [{ role: 'user', content: 'Tell me a story' }],
|
|
709
|
+
stream: true,
|
|
710
|
+
// Metadata works seamlessly with streaming - all fields optional!
|
|
711
|
+
usageMetadata: {
|
|
712
|
+
organizationId: 'story-app',
|
|
713
|
+
taskType: 'creative-writing',
|
|
714
|
+
},
|
|
715
|
+
});
|
|
716
|
+
|
|
717
|
+
for await (const chunk of stream) {
|
|
718
|
+
process.stdout.write(chunk.choices[0]?.delta?.content || '');
|
|
719
|
+
}
|
|
720
|
+
// Usage tracking happens automatically when stream completes
|
|
721
|
+
```
|
|
722
|
+
|
|
723
|
+
### Temporarily Disabling Tracking
|
|
724
|
+
|
|
725
|
+
If you need to disable Revenium tracking temporarily, you can unpatch the OpenAI client:
|
|
726
|
+
|
|
727
|
+
```javascript
|
|
728
|
+
import { unpatchOpenAI, patchOpenAI } from '@revenium/openai-middleware';
|
|
729
|
+
|
|
730
|
+
// Disable tracking
|
|
731
|
+
unpatchOpenAI();
|
|
732
|
+
|
|
733
|
+
// Your OpenAI calls now bypass Revenium tracking
|
|
734
|
+
await openai.chat.completions.create({...});
|
|
735
|
+
|
|
736
|
+
// Re-enable tracking
|
|
737
|
+
patchOpenAI();
|
|
738
|
+
```
|
|
739
|
+
|
|
740
|
+
**Common use cases:**
|
|
741
|
+
|
|
742
|
+
- **Debugging**: Isolate whether issues are caused by the middleware
|
|
743
|
+
- **Testing**: Compare behavior with/without tracking
|
|
744
|
+
- **Conditional tracking**: Enable/disable based on environment
|
|
745
|
+
- **Troubleshooting**: Temporary bypass during incident response
|
|
746
|
+
|
|
747
|
+
**Note**: This affects all OpenAI instances globally since we patch the prototype methods.
|
|
748
|
+
|
|
749
|
+
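For the conditional-tracking case, one possible pattern is to toggle tracking from an environment flag at startup. This is only a sketch: it assumes `unpatchOpenAI` is exported from the same package as the other helpers, and `DISABLE_REVENIUM_TRACKING` is an application-level variable you define yourself, not one read by the middleware.

```typescript
// Sketch: opt out of tracking (e.g., in local development) via an app-level flag.
import { initializeReveniumFromEnv, patchOpenAIInstance, unpatchOpenAI } from '@revenium/openai';
import OpenAI from 'openai';

initializeReveniumFromEnv();
const openai = patchOpenAIInstance(new OpenAI());

if (process.env.DISABLE_REVENIUM_TRACKING === 'true') {
  // Globally bypasses tracking; see the note above about prototype patching.
  unpatchOpenAI();
}
```
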
## Azure OpenAI Integration

**Azure OpenAI support**: The middleware automatically detects Azure OpenAI clients and provides accurate usage tracking and cost calculation.

### Quick Start with Azure OpenAI

```bash
# Set your Azure OpenAI environment variables
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_API_KEY="your-azure-api-key"
export AZURE_OPENAI_DEPLOYMENT="gpt-4o"                # Your deployment name
export AZURE_OPENAI_API_VERSION="2024-12-01-preview"   # Optional, defaults to latest

# Set your Revenium credentials
export REVENIUM_METERING_API_KEY="hak_your_revenium_api_key"
# export REVENIUM_METERING_BASE_URL="https://api.revenium.io/meter"   # Optional: defaults to this URL
```

```typescript
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
import { AzureOpenAI } from 'openai';

// Initialize Revenium middleware
initializeReveniumFromEnv();

// Create and patch Azure OpenAI client
const azure = patchOpenAIInstance(
  new AzureOpenAI({
    endpoint: process.env.AZURE_OPENAI_ENDPOINT,
    apiKey: process.env.AZURE_OPENAI_API_KEY,
    apiVersion: process.env.AZURE_OPENAI_API_VERSION,
  })
);

// Your existing Azure OpenAI code works with seamless metadata
const response = await azure.chat.completions.create({
  model: 'gpt-4o', // Uses your deployment name
  messages: [{ role: 'user', content: 'Hello from Azure!' }],
  // Optional metadata with native TypeScript support
  usageMetadata: {
    organizationId: 'my-company',
    taskType: 'azure-chat',
  },
});

console.log(response.choices[0].message.content);
```

### Azure Features

- **Automatic Detection**: Detects Azure OpenAI clients automatically
- **Model Name Resolution**: Maps Azure deployment names to standard model names for accurate pricing
- **Provider Metadata**: Correctly tags requests with `provider: "Azure"` and `modelSource: "OPENAI"`
- **Deployment Support**: Works with any Azure deployment name (simple or complex)
- **Endpoint Flexibility**: Supports all Azure OpenAI endpoint formats
- **Zero Code Changes**: Existing Azure OpenAI code works without modification

### Azure Environment Variables

| Variable                   | Required | Description                                    | Example                              |
| -------------------------- | -------- | ---------------------------------------------- | ------------------------------------ |
| `AZURE_OPENAI_ENDPOINT`    | Yes      | Your Azure OpenAI endpoint URL                 | `https://acme.openai.azure.com/`     |
| `AZURE_OPENAI_API_KEY`     | Yes      | Your Azure OpenAI API key                      | `abc123...`                          |
| `AZURE_OPENAI_DEPLOYMENT`  | No       | Default deployment name                        | `gpt-4o` or `text-embedding-3-large` |
| `AZURE_OPENAI_API_VERSION` | No       | API version (defaults to `2024-12-01-preview`) | `2024-12-01-preview`                 |

### Azure Model Name Resolution

The middleware automatically maps Azure deployment names to standard model names for accurate pricing:

```typescript
// Azure deployment names → Standard model names for pricing
"gpt-4o-2024-11-20"       → "gpt-4o"
"gpt4o-prod"              → "gpt-4o"
"o4-mini"                 → "gpt-4o-mini"
"gpt-35-turbo-dev"        → "gpt-3.5-turbo"
"text-embedding-3-large"  → "text-embedding-3-large" // Direct match
"embedding-3-large"       → "text-embedding-3-large"
```

## Advanced Usage

### Streaming with Metadata

The middleware seamlessly handles streaming requests with full metadata support:

```typescript
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
import OpenAI from 'openai';

initializeReveniumFromEnv();
const openai = patchOpenAIInstance(new OpenAI());

// Chat Completions API streaming
const stream = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
  usageMetadata: {
    subscriber: { id: 'user-123', email: 'user@example.com' },
    organizationId: 'story-app',
    taskType: 'creative-writing',
    traceId: 'session-' + Date.now(),
  },
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
// Usage tracking happens automatically when stream completes
```

### Responses API with Metadata

Full support for OpenAI's new Responses API:

```typescript
// Simple string input with metadata
const response = await openai.responses.create({
  model: 'gpt-5',
  input: 'What is the capital of France?',
  max_output_tokens: 150,
  usageMetadata: {
    subscriber: { id: 'user-123', email: 'user@example.com' },
    organizationId: 'org-456',
    productId: 'geography-tutor',
    taskType: 'educational-query',
  },
});

console.log(response.output_text); // "Paris."
```

### Azure OpenAI Integration

Automatic Azure OpenAI detection with seamless metadata:

```typescript
import { AzureOpenAI } from 'openai';

// Create and patch Azure OpenAI client
const azure = patchOpenAIInstance(
  new AzureOpenAI({
    endpoint: process.env.AZURE_OPENAI_ENDPOINT,
    apiKey: process.env.AZURE_OPENAI_API_KEY,
    apiVersion: process.env.AZURE_OPENAI_API_VERSION,
  })
);

// Your existing Azure OpenAI code works with seamless metadata
const response = await azure.chat.completions.create({
  model: 'gpt-4o', // Uses your deployment name
  messages: [{ role: 'user', content: 'Hello from Azure!' }],
  usageMetadata: {
    organizationId: 'my-company',
    taskType: 'azure-chat',
    agent: 'azure-assistant',
  },
});
```

### Embeddings with Metadata

Track embeddings usage with optional metadata:

```typescript
const embedding = await openai.embeddings.create({
  model: 'text-embedding-3-small',
  input: 'Advanced text embedding with comprehensive tracking metadata',
  usageMetadata: {
    subscriber: { id: 'embedding-user-789', email: 'embeddings@company.com' },
    organizationId: 'my-company',
    taskType: 'document-embedding',
    productId: 'search-engine',
    traceId: `embed-${Date.now()}`,
    agent: 'openai-embeddings-node',
  },
});

console.log('Model:', embedding.model);
console.log('Usage:', embedding.usage);
console.log('Embedding dimensions:', embedding.data[0]?.embedding.length);
```

### Manual Configuration

For advanced use cases, configure the middleware manually:

```typescript
import { configure } from '@revenium/openai';

configure({
  reveniumApiKey: 'hak_your_api_key',
  reveniumBaseUrl: 'https://api.revenium.io/meter',
  apiTimeout: 5000,
  failSilent: true,
  maxRetries: 3,
});
```

## Configuration Options

### Environment Variables

| Variable                     | Required | Default                         | Description                                   |
| ---------------------------- | -------- | ------------------------------- | --------------------------------------------- |
| `REVENIUM_METERING_API_KEY`  | true     | -                               | Your Revenium API key (starts with `hak_`)     |
| `OPENAI_API_KEY`             | true     | -                               | Your OpenAI API key (starts with `sk-`)        |
| `REVENIUM_METERING_BASE_URL` | false    | `https://api.revenium.io/meter` | Revenium metering API base URL                 |
| `REVENIUM_DEBUG`             | false    | `false`                         | Enable debug logging (`true`/`false`)          |
| `AZURE_OPENAI_ENDPOINT`      | false    | -                               | Azure OpenAI endpoint URL (for Azure testing)  |
| `AZURE_OPENAI_API_KEY`       | false    | -                               | Azure OpenAI API key (for Azure testing)       |
| `AZURE_OPENAI_DEPLOYMENT`    | false    | -                               | Azure OpenAI deployment name (for Azure)       |
| `AZURE_OPENAI_API_VERSION`   | false    | `2024-12-01-preview`            | Azure OpenAI API version (for Azure)           |

**Important Note about `REVENIUM_METERING_BASE_URL`:**

- This variable is **optional** and defaults to the production URL (`https://api.revenium.io/meter`)
- If you don't set it explicitly, the middleware will use the default production endpoint
- However, you may see console warnings or errors if the middleware cannot determine the correct environment
- **Best practice:** Always set this variable explicitly to match your environment:

```bash
# Default production URL (recommended)
REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
```

- **Remember:** Your `REVENIUM_METERING_API_KEY` must match your base URL environment

### Usage Metadata Options

All metadata fields are optional and help provide better analytics:

```typescript
interface UsageMetadata {
  traceId?: string;               // Session or conversation ID
  taskType?: string;              // Type of AI task (e.g., "chat", "summary")
  subscriber?: {
    // User information (nested structure)
    id?: string;                  // User ID from your system
    email?: string;               // User's email address
    credential?: {
      // User credentials
      name?: string;              // Credential name
      value?: string;             // Credential value
    };
  };
  organizationId?: string;        // Organization/company ID
  subscriptionId?: string;        // Billing plan ID
  productId?: string;             // Your product/feature ID
  agent?: string;                 // AI agent identifier
  responseQualityScore?: number;  // Quality score (0-1)
}
```

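As an illustration, the billing-oriented fields that the earlier examples do not use (`subscriptionId`, `responseQualityScore`) are attached the same way; the values below are placeholders, not values the middleware requires.

```typescript
// Placeholder values; any subset of UsageMetadata fields may be supplied.
const response = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Draft a short release note.' }],
  usageMetadata: {
    subscriber: { id: 'user-789' },
    organizationId: 'acme-corp',
    subscriptionId: 'plan-pro-monthly', // billing plan ID
    productId: 'release-notes-bot',
    responseQualityScore: 0.9, // quality score (0-1)
  },
});
```
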
## Included Examples
|
|
1005
|
+
|
|
1006
|
+
The package includes 8 comprehensive example files in your installation:
|
|
1007
|
+
|
|
1008
|
+
**OpenAI Examples:**
|
|
1009
|
+
- **openai-basic.ts**: Basic chat completions with metadata tracking
|
|
1010
|
+
- **openai-streaming.ts**: Streaming responses with real-time output
|
|
1011
|
+
- **openai-responses-basic.ts**: New Responses API usage (OpenAI SDK 5.8+)
|
|
1012
|
+
- **openai-responses-streaming.ts**: Streaming with Responses API
|
|
1013
|
+
|
|
1014
|
+
**Azure OpenAI Examples:**
|
|
1015
|
+
- **azure-basic.ts**: Azure OpenAI chat completions
|
|
1016
|
+
- **azure-streaming.ts**: Azure streaming responses
|
|
1017
|
+
- **azure-responses-basic.ts**: Azure Responses API
|
|
1018
|
+
- **azure-responses-streaming.ts**: Azure streaming Responses API
|
|
1019
|
+
|
|
1020
|
+
**For npm users:** Examples are installed in `node_modules/@revenium/openai/examples/`
|
|
1021
|
+
|
|
1022
|
+
**For GitHub users:** Examples are in the repository's `examples/` directory
|
|
1023
|
+
|
|
1024
|
+
For detailed setup instructions and usage patterns, see [examples/README.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/examples/README.md).
|
|
1025
|

## How It Works

1. **Automatic Patching**: When imported, the middleware patches OpenAI's methods:
   - `chat.completions.create` (Chat Completions API)
   - `responses.create` (Responses API - when available)
   - `embeddings.create` (Embeddings API)
2. **Request Interception**: All OpenAI requests are intercepted to extract metadata
3. **Usage Extraction**: Token counts, model info, and timing data are captured
4. **Async Tracking**: Usage data is sent to Revenium in the background (fire-and-forget)
5. **Transparent Response**: Original OpenAI responses are returned unchanged

The middleware never blocks your application: if Revenium tracking fails, your OpenAI requests continue normally.
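
As a concrete sketch of that flow, the embeddings path looks like this. The model name is a placeholder, and the setup calls are assumed to behave as described above; this is not one of the bundled examples:

```typescript
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
import OpenAI from 'openai';

async function embedDemo() {
  initializeReveniumFromEnv();                      // assumed call: reads the REVENIUM_* env vars
  const openai = patchOpenAIInstance(new OpenAI()); // patches chat.completions, responses, and embeddings

  // Steps 2-4: the request is intercepted, token usage is extracted,
  // and metering data is sent to Revenium in the background.
  const result = await openai.embeddings.create({
    model: 'text-embedding-3-small', // placeholder model
    input: 'The quick brown fox',
  });

  // Step 5: the response is exactly what the OpenAI SDK returns.
  console.log(result.data[0].embedding.length, result.usage.total_tokens);
}

embedDemo().catch(console.error);
```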

## Troubleshooting

### Common Issues

#### 1. **No tracking data in dashboard**

**Symptoms**: OpenAI calls work but no data appears in the Revenium dashboard

**Solution**: Enable debug logging to check middleware status:

```bash
export REVENIUM_DEBUG=true
```

**Expected output for successful tracking**:

```bash
[Revenium Debug] OpenAI chat.completions.create intercepted
[Revenium Debug] Revenium tracking successful

# For Responses API:
[Revenium Debug] OpenAI responses.create intercepted
[Revenium Debug] Revenium tracking successful
```

#### 2. **Environment mismatch errors**

**Symptoms**: Authentication errors or 401/403 responses

**Solution**: Ensure your API key matches your base URL environment:

```bash
# Correct - Key and URL from same environment
REVENIUM_METERING_API_KEY=hak_your_api_key_here
REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter

# Wrong - Key and URL from different environments
REVENIUM_METERING_API_KEY=hak_wrong_environment_key
REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
```

#### 3. **TypeScript type errors**

**Symptoms**: TypeScript errors about `usageMetadata` property

**Solution**: Ensure you're importing the middleware before OpenAI:

```typescript
// Correct order
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
import OpenAI from 'openai';

// Wrong order
import OpenAI from 'openai';
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
```

#### 4. **Azure OpenAI not working**

**Symptoms**: Azure OpenAI calls not being tracked

**Solution**: Ensure you're using `patchOpenAIInstance()` with your Azure client:

```typescript
import { AzureOpenAI } from 'openai';
import { patchOpenAIInstance } from '@revenium/openai';

// Correct
const azure = patchOpenAIInstance(new AzureOpenAI({...}));

// Wrong - not patched
const azure = new AzureOpenAI({...});
```

#### 5. **Responses API not available**

**Symptoms**: `openai.responses.create` is undefined

**Solution**: Upgrade to OpenAI SDK 5.8+ for Responses API support:

```bash
npm install openai@^5.8.0
```
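
Once `responses.create` is available, calls through a patched client are metered like any other. A minimal sketch (model, prompt, and IDs are placeholders; the optional `usageMetadata` field is assumed to be accepted here just as on Chat Completions):

```typescript
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
import OpenAI from 'openai';

async function responsesDemo() {
  initializeReveniumFromEnv(); // assumed call: reads the REVENIUM_* env vars
  const openai = patchOpenAIInstance(new OpenAI());

  const response = await openai.responses.create({
    model: 'gpt-4o-mini', // placeholder model
    input: 'Write a haiku about metering.',
    usageMetadata: { traceId: 'session-456', taskType: 'chat' }, // assumed to be accepted here as well
  });

  console.log(response.output_text);
}

responsesDemo().catch(console.error);
```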

### Debug Mode

Enable comprehensive debug logging:

```bash
export REVENIUM_DEBUG=true
```

This will show:

- Middleware initialization status
- Request interception confirmations
- Metadata extraction details
- Tracking success/failure messages
- Error details and stack traces

### Getting Help

If you're still experiencing issues:

1. **Check the logs** with `REVENIUM_DEBUG=true`
2. **Verify environment variables** are set correctly
3. **Test with a minimal example** from our documentation
4. **Contact support** with debug logs and error details

For detailed troubleshooting guides, visit [docs.revenium.io](https://docs.revenium.io).

## Supported Models

### OpenAI Models

| Model Family      | Models                                                                        | APIs Supported               |
| ----------------- | ----------------------------------------------------------------------------- | ---------------------------- |
| **GPT-4o**        | `gpt-4o`, `gpt-4o-2024-11-20`, `gpt-4o-2024-08-06`, `gpt-4o-2024-05-13`       | Chat Completions, Responses  |
| **GPT-4o Mini**   | `gpt-4o-mini`, `gpt-4o-mini-2024-07-18`                                       | Chat Completions, Responses  |
| **GPT-4 Turbo**   | `gpt-4-turbo`, `gpt-4-turbo-2024-04-09`, `gpt-4-turbo-preview`                | Chat Completions             |
| **GPT-4**         | `gpt-4`, `gpt-4-0613`, `gpt-4-0314`                                           | Chat Completions             |
| **GPT-3.5 Turbo** | `gpt-3.5-turbo`, `gpt-3.5-turbo-0125`, `gpt-3.5-turbo-1106`                   | Chat Completions             |
| **GPT-5**         | `gpt-5` (when available)                                                      | Responses API                |
| **Embeddings**    | `text-embedding-3-large`, `text-embedding-3-small`, `text-embedding-ada-002` | Embeddings                   |

### Azure OpenAI Models

All OpenAI models are supported through Azure OpenAI with automatic deployment name resolution:

| Azure Deployment         | Resolved Model           | API Support                 |
| ------------------------ | ------------------------ | --------------------------- |
| `gpt-4o-2024-11-20`      | `gpt-4o`                 | Chat Completions, Responses |
| `gpt4o-prod`             | `gpt-4o`                 | Chat Completions, Responses |
| `o4-mini`                | `gpt-4o-mini`            | Chat Completions, Responses |
| `gpt-35-turbo-dev`       | `gpt-3.5-turbo`          | Chat Completions            |
| `text-embedding-3-large` | `text-embedding-3-large` | Embeddings                  |
| `embedding-3-large`      | `text-embedding-3-large` | Embeddings                  |

**Note**: The middleware automatically maps Azure deployment names to standard model names for accurate pricing and analytics.
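
As a sketch of how this fits together, an Azure client built from the `AZURE_OPENAI_*` variables listed earlier passes its deployment name as the `model` parameter. The endpoint, key, API version, and deployment values below are placeholders read from the environment, and the metering setup call is assumed to work as in the earlier sketches:

```typescript
import { AzureOpenAI } from 'openai';
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';

async function azureDemo() {
  initializeReveniumFromEnv(); // assumed call: reads the REVENIUM_* variables listed earlier

  const azure = patchOpenAIInstance(
    new AzureOpenAI({
      endpoint: process.env.AZURE_OPENAI_ENDPOINT,     // e.g. https://my-resource.openai.azure.com
      apiKey: process.env.AZURE_OPENAI_API_KEY,
      apiVersion: process.env.AZURE_OPENAI_API_VERSION ?? '2024-12-01-preview',
    })
  );

  // The deployment name goes in `model`; the middleware resolves it
  // (e.g. `gpt4o-prod` -> `gpt-4o`) for pricing and analytics.
  const completion = await azure.chat.completions.create({
    model: process.env.AZURE_OPENAI_DEPLOYMENT ?? 'gpt4o-prod', // placeholder deployment name
    messages: [{ role: 'user', content: 'Hello from Azure!' }],
  });

  console.log(completion.choices[0].message.content);
}

azureDemo().catch(console.error);
```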

### API Support Matrix

| Feature               | Chat Completions API | Responses API | Embeddings API |
| --------------------- | -------------------- | ------------- | -------------- |
| **Basic Requests**    | Yes                  | Yes           | Yes            |
| **Streaming**         | Yes                  | Yes           | No             |
| **Metadata Tracking** | Yes                  | Yes           | Yes            |
| **Azure OpenAI**      | Yes                  | Yes           | Yes            |
| **Cost Calculation**  | Yes                  | Yes           | Yes            |
| **Token Counting**    | Yes                  | Yes           | Yes            |
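
For reference, a minimal streaming sketch through a patched client (model and prompt are placeholders; setup is assumed as in the earlier sketches, and this is not one of the bundled examples):

```typescript
import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
import OpenAI from 'openai';

async function streamDemo() {
  initializeReveniumFromEnv(); // assumed call: reads the REVENIUM_* env vars
  const openai = patchOpenAIInstance(new OpenAI());

  const stream = await openai.chat.completions.create({
    model: 'gpt-4o-mini', // placeholder model
    messages: [{ role: 'user', content: 'Stream a short poem.' }],
    stream: true,
  });

  // Chunks arrive exactly as from the unpatched SDK; the middleware
  // still records usage for the streamed call.
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
  }
}

streamDemo().catch(console.error);
```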

## Requirements

- Node.js 16+
- OpenAI package v4.0+
- TypeScript 5.0+ (for TypeScript projects)

## Documentation

For detailed documentation, visit [docs.revenium.io](https://docs.revenium.io)

## Contributing

See [CONTRIBUTING.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/CONTRIBUTING.md)

## Code of Conduct

See [CODE_OF_CONDUCT.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/CODE_OF_CONDUCT.md)

## Security

See [SECURITY.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/SECURITY.md)

## License

This project is licensed under the MIT License - see the [LICENSE](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/LICENSE) file for details.

## Support

For issues, feature requests, or contributions:

- **GitHub Repository**: [revenium/revenium-middleware-openai-node](https://github.com/revenium/revenium-middleware-openai-node)
- **Issues**: [Report bugs or request features](https://github.com/revenium/revenium-middleware-openai-node/issues)
- **Documentation**: [docs.revenium.io](https://docs.revenium.io)
- **Contact**: Reach out to the Revenium team for additional support

## Development

For development and testing instructions, see [DEVELOPMENT.md](https://github.com/revenium/revenium-middleware-openai-node/blob/HEAD/DEVELOPMENT.md).

---

**Built by Revenium**