@revenium/openai 1.0.8

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (177)
  1. package/LICENSE +21 -0
  2. package/README.md +1095 -0
  3. package/dist/cjs/core/config/azure-config.js +64 -0
  4. package/dist/cjs/core/config/azure-config.js.map +1 -0
  5. package/dist/cjs/core/config/index.js +41 -0
  6. package/dist/cjs/core/config/index.js.map +1 -0
  7. package/dist/cjs/core/config/loader.js +63 -0
  8. package/dist/cjs/core/config/loader.js.map +1 -0
  9. package/dist/cjs/core/config/manager.js +93 -0
  10. package/dist/cjs/core/config/manager.js.map +1 -0
  11. package/dist/cjs/core/config/validator.js +73 -0
  12. package/dist/cjs/core/config/validator.js.map +1 -0
  13. package/dist/cjs/core/providers/detector.js +140 -0
  14. package/dist/cjs/core/providers/detector.js.map +1 -0
  15. package/dist/cjs/core/providers/index.js +18 -0
  16. package/dist/cjs/core/providers/index.js.map +1 -0
  17. package/dist/cjs/core/tracking/api-client.js +68 -0
  18. package/dist/cjs/core/tracking/api-client.js.map +1 -0
  19. package/dist/cjs/core/tracking/index.js +23 -0
  20. package/dist/cjs/core/tracking/index.js.map +1 -0
  21. package/dist/cjs/core/tracking/payload-builder.js +107 -0
  22. package/dist/cjs/core/tracking/payload-builder.js.map +1 -0
  23. package/dist/cjs/core/tracking/usage-tracker.js +120 -0
  24. package/dist/cjs/core/tracking/usage-tracker.js.map +1 -0
  25. package/dist/cjs/core/wrapper/index.js +15 -0
  26. package/dist/cjs/core/wrapper/index.js.map +1 -0
  27. package/dist/cjs/core/wrapper/instance-patcher.js +202 -0
  28. package/dist/cjs/core/wrapper/instance-patcher.js.map +1 -0
  29. package/dist/cjs/core/wrapper/request-handler.js +317 -0
  30. package/dist/cjs/core/wrapper/request-handler.js.map +1 -0
  31. package/dist/cjs/core/wrapper/stream-wrapper.js +82 -0
  32. package/dist/cjs/core/wrapper/stream-wrapper.js.map +1 -0
  33. package/dist/cjs/index.js +195 -0
  34. package/dist/cjs/index.js.map +1 -0
  35. package/dist/cjs/types/function-parameters.js +14 -0
  36. package/dist/cjs/types/function-parameters.js.map +1 -0
  37. package/dist/cjs/types/index.js +49 -0
  38. package/dist/cjs/types/index.js.map +1 -0
  39. package/dist/cjs/types/openai-augmentation.js +55 -0
  40. package/dist/cjs/types/openai-augmentation.js.map +1 -0
  41. package/dist/cjs/types/responses-api.js +30 -0
  42. package/dist/cjs/types/responses-api.js.map +1 -0
  43. package/dist/cjs/utils/azure-model-resolver.js +211 -0
  44. package/dist/cjs/utils/azure-model-resolver.js.map +1 -0
  45. package/dist/cjs/utils/constants.js +24 -0
  46. package/dist/cjs/utils/constants.js.map +1 -0
  47. package/dist/cjs/utils/error-handler.js +194 -0
  48. package/dist/cjs/utils/error-handler.js.map +1 -0
  49. package/dist/cjs/utils/metadata-builder.js +184 -0
  50. package/dist/cjs/utils/metadata-builder.js.map +1 -0
  51. package/dist/cjs/utils/provider-detection.js +212 -0
  52. package/dist/cjs/utils/provider-detection.js.map +1 -0
  53. package/dist/cjs/utils/request-handler-factory.js +185 -0
  54. package/dist/cjs/utils/request-handler-factory.js.map +1 -0
  55. package/dist/cjs/utils/stop-reason-mapper.js +70 -0
  56. package/dist/cjs/utils/stop-reason-mapper.js.map +1 -0
  57. package/dist/cjs/utils/type-guards.js +175 -0
  58. package/dist/cjs/utils/type-guards.js.map +1 -0
  59. package/dist/cjs/utils/url-builder.js +43 -0
  60. package/dist/cjs/utils/url-builder.js.map +1 -0
  61. package/dist/esm/core/config/azure-config.js +61 -0
  62. package/dist/esm/core/config/azure-config.js.map +1 -0
  63. package/dist/esm/core/config/index.js +13 -0
  64. package/dist/esm/core/config/index.js.map +1 -0
  65. package/dist/esm/core/config/loader.js +58 -0
  66. package/dist/esm/core/config/loader.js.map +1 -0
  67. package/dist/esm/core/config/manager.js +85 -0
  68. package/dist/esm/core/config/manager.js.map +1 -0
  69. package/dist/esm/core/config/validator.js +69 -0
  70. package/dist/esm/core/config/validator.js.map +1 -0
  71. package/dist/esm/core/providers/detector.js +134 -0
  72. package/dist/esm/core/providers/detector.js.map +1 -0
  73. package/dist/esm/core/providers/index.js +10 -0
  74. package/dist/esm/core/providers/index.js.map +1 -0
  75. package/dist/esm/core/tracking/api-client.js +65 -0
  76. package/dist/esm/core/tracking/api-client.js.map +1 -0
  77. package/dist/esm/core/tracking/index.js +13 -0
  78. package/dist/esm/core/tracking/index.js.map +1 -0
  79. package/dist/esm/core/tracking/payload-builder.js +104 -0
  80. package/dist/esm/core/tracking/payload-builder.js.map +1 -0
  81. package/dist/esm/core/tracking/usage-tracker.js +114 -0
  82. package/dist/esm/core/tracking/usage-tracker.js.map +1 -0
  83. package/dist/esm/core/wrapper/index.js +9 -0
  84. package/dist/esm/core/wrapper/index.js.map +1 -0
  85. package/dist/esm/core/wrapper/instance-patcher.js +199 -0
  86. package/dist/esm/core/wrapper/instance-patcher.js.map +1 -0
  87. package/dist/esm/core/wrapper/request-handler.js +310 -0
  88. package/dist/esm/core/wrapper/request-handler.js.map +1 -0
  89. package/dist/esm/core/wrapper/stream-wrapper.js +79 -0
  90. package/dist/esm/core/wrapper/stream-wrapper.js.map +1 -0
  91. package/dist/esm/index.js +175 -0
  92. package/dist/esm/index.js.map +1 -0
  93. package/dist/esm/types/function-parameters.js +13 -0
  94. package/dist/esm/types/function-parameters.js.map +1 -0
  95. package/dist/esm/types/index.js +32 -0
  96. package/dist/esm/types/index.js.map +1 -0
  97. package/dist/esm/types/openai-augmentation.js +54 -0
  98. package/dist/esm/types/openai-augmentation.js.map +1 -0
  99. package/dist/esm/types/responses-api.js +26 -0
  100. package/dist/esm/types/responses-api.js.map +1 -0
  101. package/dist/esm/utils/azure-model-resolver.js +204 -0
  102. package/dist/esm/utils/azure-model-resolver.js.map +1 -0
  103. package/dist/esm/utils/constants.js +21 -0
  104. package/dist/esm/utils/constants.js.map +1 -0
  105. package/dist/esm/utils/error-handler.js +182 -0
  106. package/dist/esm/utils/error-handler.js.map +1 -0
  107. package/dist/esm/utils/metadata-builder.js +176 -0
  108. package/dist/esm/utils/metadata-builder.js.map +1 -0
  109. package/dist/esm/utils/provider-detection.js +206 -0
  110. package/dist/esm/utils/provider-detection.js.map +1 -0
  111. package/dist/esm/utils/request-handler-factory.js +146 -0
  112. package/dist/esm/utils/request-handler-factory.js.map +1 -0
  113. package/dist/esm/utils/stop-reason-mapper.js +65 -0
  114. package/dist/esm/utils/stop-reason-mapper.js.map +1 -0
  115. package/dist/esm/utils/type-guards.js +158 -0
  116. package/dist/esm/utils/type-guards.js.map +1 -0
  117. package/dist/esm/utils/url-builder.js +39 -0
  118. package/dist/esm/utils/url-builder.js.map +1 -0
  119. package/dist/types/core/config/azure-config.d.ts +16 -0
  120. package/dist/types/core/config/azure-config.d.ts.map +1 -0
  121. package/dist/types/core/config/index.d.ts +11 -0
  122. package/dist/types/core/config/index.d.ts.map +1 -0
  123. package/dist/types/core/config/loader.d.ts +20 -0
  124. package/dist/types/core/config/loader.d.ts.map +1 -0
  125. package/dist/types/core/config/manager.d.ts +32 -0
  126. package/dist/types/core/config/manager.d.ts.map +1 -0
  127. package/dist/types/core/config/validator.d.ts +23 -0
  128. package/dist/types/core/config/validator.d.ts.map +1 -0
  129. package/dist/types/core/providers/detector.d.ts +44 -0
  130. package/dist/types/core/providers/detector.d.ts.map +1 -0
  131. package/dist/types/core/providers/index.d.ts +9 -0
  132. package/dist/types/core/providers/index.d.ts.map +1 -0
  133. package/dist/types/core/tracking/api-client.d.ts +17 -0
  134. package/dist/types/core/tracking/api-client.d.ts.map +1 -0
  135. package/dist/types/core/tracking/index.d.ts +11 -0
  136. package/dist/types/core/tracking/index.d.ts.map +1 -0
  137. package/dist/types/core/tracking/payload-builder.d.ts +24 -0
  138. package/dist/types/core/tracking/payload-builder.d.ts.map +1 -0
  139. package/dist/types/core/tracking/usage-tracker.d.ts +48 -0
  140. package/dist/types/core/tracking/usage-tracker.d.ts.map +1 -0
  141. package/dist/types/core/wrapper/index.d.ts +8 -0
  142. package/dist/types/core/wrapper/index.d.ts.map +1 -0
  143. package/dist/types/core/wrapper/instance-patcher.d.ts +33 -0
  144. package/dist/types/core/wrapper/instance-patcher.d.ts.map +1 -0
  145. package/dist/types/core/wrapper/request-handler.d.ts +29 -0
  146. package/dist/types/core/wrapper/request-handler.d.ts.map +1 -0
  147. package/dist/types/core/wrapper/stream-wrapper.d.ts +13 -0
  148. package/dist/types/core/wrapper/stream-wrapper.d.ts.map +1 -0
  149. package/dist/types/index.d.ts +179 -0
  150. package/dist/types/index.d.ts.map +1 -0
  151. package/dist/types/types/function-parameters.d.ts +229 -0
  152. package/dist/types/types/function-parameters.d.ts.map +1 -0
  153. package/dist/types/types/index.d.ts +283 -0
  154. package/dist/types/types/index.d.ts.map +1 -0
  155. package/dist/types/types/openai-augmentation.d.ts +226 -0
  156. package/dist/types/types/openai-augmentation.d.ts.map +1 -0
  157. package/dist/types/types/responses-api.d.ts +247 -0
  158. package/dist/types/types/responses-api.d.ts.map +1 -0
  159. package/dist/types/utils/azure-model-resolver.d.ts +41 -0
  160. package/dist/types/utils/azure-model-resolver.d.ts.map +1 -0
  161. package/dist/types/utils/constants.d.ts +4 -0
  162. package/dist/types/utils/constants.d.ts.map +1 -0
  163. package/dist/types/utils/error-handler.d.ts +95 -0
  164. package/dist/types/utils/error-handler.d.ts.map +1 -0
  165. package/dist/types/utils/metadata-builder.d.ts +64 -0
  166. package/dist/types/utils/metadata-builder.d.ts.map +1 -0
  167. package/dist/types/utils/provider-detection.d.ts +51 -0
  168. package/dist/types/utils/provider-detection.d.ts.map +1 -0
  169. package/dist/types/utils/request-handler-factory.d.ts +81 -0
  170. package/dist/types/utils/request-handler-factory.d.ts.map +1 -0
  171. package/dist/types/utils/stop-reason-mapper.d.ts +29 -0
  172. package/dist/types/utils/stop-reason-mapper.d.ts.map +1 -0
  173. package/dist/types/utils/type-guards.d.ts +73 -0
  174. package/dist/types/utils/type-guards.d.ts.map +1 -0
  175. package/dist/types/utils/url-builder.d.ts +25 -0
  176. package/dist/types/utils/url-builder.d.ts.map +1 -0
  177. package/package.json +84 -0
package/README.md ADDED
@@ -0,0 +1,1095 @@
1
+ # 🚀 Revenium OpenAI Middleware for Node.js
2
+
3
+ [![npm version](https://img.shields.io/npm/v/@revenium/openai.svg)](https://www.npmjs.com/package/@revenium/openai)
4
+ [![Node.js](https://img.shields.io/badge/Node.js-16%2B-green)](https://nodejs.org/)
5
+ [![Documentation](https://img.shields.io/badge/docs-revenium.io-blue)](https://docs.revenium.io)
6
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
7
+
8
+ > **📦 Package Renamed**: This package has been renamed from `revenium-middleware-openai-node` to `@revenium/openai` for better organization and simpler naming. Please update your dependencies accordingly.
9
+
10
+ **Transparent TypeScript middleware for automatic Revenium usage tracking with OpenAI**
11
+
12
+ A professional-grade Node.js middleware that integrates seamlessly with OpenAI and Azure OpenAI to provide automatic usage tracking, billing analytics, and comprehensive metadata collection. It features native TypeScript support with zero type casting required, and it supports both the traditional Chat Completions API and the new Responses API.
13
+
14
+ ## ✨ Features
15
+
16
+ - 🔄 **Seamless Integration** - Native TypeScript support, no type casting required
17
+ - 📊 **Optional Metadata** - Track users, organizations, and custom metadata (all fields optional)
18
+ - 🎯 **Dual API Support** - Chat Completions API + new Responses API (OpenAI SDK 5.8+)
19
+ - ☁️ **Azure OpenAI Support** - Full Azure OpenAI integration with automatic detection
20
+ - 🛡️ **Type Safety** - Complete TypeScript support with IntelliSense
21
+ - 🌊 **Streaming Support** - Handles regular and streaming requests seamlessly
22
+ - ⚡ **Fire-and-Forget** - Never blocks your application flow
23
+ - 🔧 **Zero Configuration** - Auto-initialization from environment variables
24
+
25
+ ## 🚀 Getting Started
26
+
27
+ Choose your preferred approach to get started quickly:
28
+
29
+ ### Option 1: Create Project from Scratch
30
+
31
+ Perfect for new projects. We'll guide you step-by-step from `mkdir` to running tests.
32
+ [👉 Go to Step-by-Step Guide](#option-1-create-project-from-scratch)
33
+
34
+ ### Option 2: Try the Examples
35
+
36
+ Quick testing with the included example files.
37
+ [👉 Go to Examples Guide](#option-2-try-the-examples)
38
+
39
+ ### Option 3: Add to Existing Project
40
+
41
+ Already have a project? Just install and replace imports.
42
+ [👉 Go to Integration Guide](#option-3-existing-project-integration)
43
+
44
+ ---
45
+
46
+ ## Option 1: Create Project from Scratch
47
+
48
+ ### Step 1: Create Project Directory
49
+
50
+ ```bash
51
+ # Create and navigate to your project
52
+ mkdir my-openai-project
53
+ cd my-openai-project
54
+
55
+ # Initialize npm project
56
+ npm init -y
57
+ ```
58
+
59
+ ### Step 2: Install Dependencies
60
+
61
+ ```bash
62
+ # Install the middleware and OpenAI SDK
63
+ npm install @revenium/openai openai@^5.8.0 dotenv
64
+
65
+ # For TypeScript projects (optional)
66
+ npm install -D typescript tsx @types/node
67
+ ```
68
+
69
+ ### Step 3: Setup Environment Variables
70
+
71
+ Create a `.env` file in your project root:
72
+
73
+ ```bash
74
+ # Create .env file
75
+ echo. > .env # On Windows (CMD)
76
+ touch .env # On Mac/Linux
77
+ # OR PowerShell
78
+ New-Item -Path .env -ItemType File
79
+ ```
80
+
81
+ Copy and paste the following into `.env`:
82
+
83
+ ```env
84
+ # OpenAI Configuration
85
+ OPENAI_API_KEY=sk-your_openai_api_key_here
86
+
87
+ # Revenium Configuration
88
+ REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
89
+ REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
90
+
91
+ # Optional: Enable debug logging
92
+ REVENIUM_DEBUG=true
93
+ ```
94
+
95
+ **💡 NOTE**: Replace each `your_..._here` with your actual values.
96
+
97
+ **⚠️ IMPORTANT - Environment Matching**:
98
+
99
+ - If using QA environment URL `"https://api.qa.hcapp.io/meter"`, ensure your `REVENIUM_METERING_API_KEY` is from the **QA environment**
100
+ - If using Production environment URL `"https://api.revenium.io/meter"`, ensure your `REVENIUM_METERING_API_KEY` is from the **Production environment**
101
+ - **Mismatched environments will cause authentication failures**
102
+
103
+ ### Step 4: Create Your First Test
104
+
105
+ #### TypeScript Test
106
+
107
+ Create `test-openai.ts`:
108
+
109
+ ```typescript
110
+ import 'dotenv/config';
111
+ import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
112
+ import OpenAI from 'openai';
113
+
114
+ async function testOpenAI() {
115
+ try {
116
+ // Initialize Revenium middleware
117
+ const initResult = initializeReveniumFromEnv();
118
+ if (!initResult.success) {
119
+ console.error('❌ Failed to initialize Revenium:', initResult.message);
120
+ process.exit(1);
121
+ }
122
+
123
+ // Create and patch OpenAI instance
124
+ const openai = patchOpenAIInstance(new OpenAI());
125
+
126
+ const response = await openai.chat.completions.create({
127
+ model: 'gpt-4o-mini',
128
+ max_tokens: 100,
129
+ messages: [{ role: 'user', content: 'What is artificial intelligence?' }],
130
+ usageMetadata: {
131
+ subscriber: {
132
+ id: 'user-456',
133
+ email: 'user@demo-org.com',
134
+ credential: {
135
+ name: 'demo-api-key',
136
+ value: 'demo-key-123',
137
+ },
138
+ },
139
+ organizationId: 'demo-org-123',
140
+ productId: 'ai-assistant-v2',
141
+ taskType: 'educational-query',
142
+ agent: 'openai-basic-demo',
143
+ traceId: 'session-' + Date.now(),
144
+ },
145
+ });
146
+
147
+ const text = response.choices[0]?.message?.content || 'No response';
148
+ console.log('Response:', text);
149
+ } catch (error) {
150
+ console.error('Error:', error);
151
+ }
152
+ }
153
+
154
+ testOpenAI();
155
+ ```
156
+
157
+ #### JavaScript Test
158
+
159
+ Create `test-openai.js`:
160
+
161
+ ```javascript
162
+ require('dotenv').config();
163
+ const {
164
+ initializeReveniumFromEnv,
165
+ patchOpenAIInstance,
166
+ } = require('@revenium/openai');
167
+ const OpenAI = require('openai');
168
+
169
+ async function testOpenAI() {
170
+ try {
171
+ // Initialize Revenium middleware
172
+ const initResult = initializeReveniumFromEnv();
173
+ if (!initResult.success) {
174
+ console.error('❌ Failed to initialize Revenium:', initResult.message);
175
+ process.exit(1);
176
+ }
177
+
178
+ // Create and patch OpenAI instance
179
+ const openai = patchOpenAIInstance(new OpenAI());
180
+
181
+ const response = await openai.chat.completions.create({
182
+ model: 'gpt-4o-mini',
183
+ max_tokens: 100,
184
+ messages: [{ role: 'user', content: 'What is artificial intelligence?' }],
185
+ usageMetadata: {
186
+ subscriber: {
187
+ id: 'user-456',
188
+ email: 'user@demo-org.com',
189
+ },
190
+ organizationId: 'demo-org-123',
191
+ taskType: 'educational-query',
192
+ },
193
+ });
194
+
195
+ const text = response.choices[0]?.message?.content || 'No response';
196
+ console.log('Response:', text);
197
+ } catch (error) {
198
+ console.error('Error:', error);
199
+ }
200
+ }
201
+
202
+ testOpenAI();
203
+ ```
204
+
205
+ ### Step 5: Add Package Scripts
206
+
207
+ Update your `package.json`:
208
+
209
+ ```json
210
+ {
211
+ "name": "my-openai-project",
212
+ "version": "1.0.0",
213
+ "type": "commonjs",
214
+ "scripts": {
215
+ "test-ts": "npx tsx test-openai.ts",
216
+ "test-js": "node test-openai.js"
217
+ },
218
+ "dependencies": {
219
+ "@revenium/openai": "^1.0.7",
220
+ "openai": "^5.8.0",
221
+ "dotenv": "^16.5.0"
222
+ }
223
+ }
224
+ ```
225
+
226
+ ### Step 6: Run Your Tests
227
+
228
+ ```bash
229
+ # Test TypeScript version
230
+ npm run test-ts
231
+
232
+ # Test JavaScript version
233
+ npm run test-js
234
+ ```
235
+
236
+ ### Step 7: Project Structure
237
+
238
+ Your project should now look like this:
239
+
240
+ ```
241
+ my-openai-project/
242
+ ├── .env # Environment variables
243
+ ├── .gitignore # Git ignore file
244
+ ├── package.json # Project configuration
245
+ ├── test-openai.ts # TypeScript test
246
+ └── test-openai.js # JavaScript test
247
+ ```
248
+
249
+ ## Option 2: Try the Examples
250
+
251
+ ### Step 1: Install the Package
252
+
253
+ ```bash
254
+ npm install @revenium/openai openai@^5.8.0
255
+ ```
256
+
257
+ ### Step 2: Set Environment Variables
258
+
259
+ ```bash
260
+ # OpenAI Configuration
261
+ export OPENAI_API_KEY="sk-your-openai-api-key"
262
+
263
+ # Revenium Configuration
264
+ export REVENIUM_METERING_API_KEY="hak-your-revenium-api-key"
265
+ export REVENIUM_METERING_BASE_URL="https://api.revenium.io/meter"
266
+
267
+ # Optional: Enable debug logging
268
+ export REVENIUM_DEBUG="true"
269
+ ```
270
+
271
+ **⚠️ IMPORTANT - Environment Matching**:
272
+
273
+ - If using QA environment URL `"https://api.qa.hcapp.io/meter"`, ensure your `REVENIUM_METERING_API_KEY` is from the **QA environment**
274
+ - If using Production environment URL `"https://api.revenium.io/meter"`, ensure your `REVENIUM_METERING_API_KEY` is from the **Production environment**
275
+ - **Mismatched environments will cause authentication failures**
276
+
277
+ ### Step 3: Run the Included Examples
278
+
279
+ The package includes working example files:
280
+
281
+ ### Chat Completions API (Current)
282
+
283
+ - **[OpenAI Basic](examples/openai-basic.ts)** - Chat completions and embeddings with optional metadata
284
+ - **[OpenAI Streaming](examples/openai-streaming.ts)** - Streaming responses and batch embeddings with optional metadata
285
+ - **[Azure Basic](examples/azure-basic.ts)** - Azure OpenAI chat completions and embeddings with optional metadata
286
+ - **[Azure Streaming](examples/azure-streaming.ts)** - Azure OpenAI streaming and batch embeddings with optional metadata
287
+
288
+ ### Responses API (New)
289
+
290
+ - **[OpenAI Responses Basic](examples/openai-responses-basic.ts)** - New Responses API with optional metadata
291
+ - **[OpenAI Responses Streaming](examples/openai-responses-streaming.ts)** - Streaming Responses API with optional metadata
292
+ - **[Azure Responses Basic](examples/azure-responses-basic.ts)** - Azure Responses API with optional metadata
293
+ - **[Azure Responses Streaming](examples/azure-responses-streaming.ts)** - Azure streaming Responses API with optional metadata
294
+
295
+ ```bash
296
+ # Chat Completions API examples
297
+ npx tsx node_modules/@revenium/openai/examples/openai-basic.ts
298
+ npx tsx node_modules/@revenium/openai/examples/openai-streaming.ts
299
+ npx tsx node_modules/@revenium/openai/examples/azure-basic.ts
300
+ npx tsx node_modules/@revenium/openai/examples/azure-streaming.ts
301
+
302
+ # Responses API examples (available with OpenAI SDK 5.8+)
303
+ npx tsx node_modules/@revenium/openai/examples/openai-responses-basic.ts
304
+ npx tsx node_modules/@revenium/openai/examples/openai-responses-streaming.ts
305
+ npx tsx node_modules/@revenium/openai/examples/azure-responses-basic.ts
306
+ npx tsx node_modules/@revenium/openai/examples/azure-responses-streaming.ts
307
+ ```
308
+
309
+ These examples demonstrate:
310
+
311
+ - **Chat Completions API** - Traditional OpenAI chat completions and embeddings
312
+ - **Responses API** - New OpenAI Responses API with enhanced capabilities
313
+ - **Azure OpenAI** - Full Azure OpenAI integration with automatic detection
314
+ - **Streaming Support** - Real-time response streaming with metadata tracking
315
+ - **Optional Metadata** - Rich business context and user tracking
316
+ - **Error Handling** - Robust error handling and debugging
317
+
318
+ ## Option 3: Existing Project Integration
319
+
320
+ Already have a project? Just install and replace imports:
321
+
322
+ ### Step 1: Install the Package
323
+
324
+ ```bash
325
+ npm install @revenium/openai
326
+ ```
327
+
328
+ ### Step 2: Update Your Imports
329
+
330
+ **Before:**
331
+
332
+ ```typescript
333
+ import OpenAI from 'openai';
334
+
335
+ const openai = new OpenAI();
336
+ ```
337
+
338
+ **After:**
339
+
340
+ ```typescript
341
+ import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
342
+ import OpenAI from 'openai';
343
+
344
+ // Initialize Revenium middleware
345
+ initializeReveniumFromEnv();
346
+
347
+ // Patch your OpenAI instance
348
+ const openai = patchOpenAIInstance(new OpenAI());
349
+ ```
350
+
351
+ ### Step 3: Add Environment Variables
352
+
353
+ Add to your `.env` file:
354
+
355
+ ```env
356
+ REVENIUM_METERING_API_KEY=hak_your_revenium_api_key_here
357
+ REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
358
+ REVENIUM_DEBUG=true
359
+ ```
360
+
361
+ ### Step 4: Optional - Add Metadata
362
+
363
+ Enhance your existing calls with optional metadata:
364
+
365
+ ```typescript
366
+ // Your existing code works unchanged
367
+ const response = await openai.chat.completions.create({
368
+ model: 'gpt-4o-mini',
369
+ messages: [{ role: 'user', content: 'Hello!' }],
370
+ // Add optional metadata for better analytics
371
+ usageMetadata: {
372
+ subscriber: { id: 'user-123' },
373
+ organizationId: 'my-company',
374
+ taskType: 'chat',
375
+ },
376
+ });
377
+ ```
378
+
379
+ **✅ That's it!** Your existing OpenAI code now automatically tracks usage to Revenium.
380
+
381
+ ## 📊 What Gets Tracked
382
+
383
+ The middleware automatically captures comprehensive usage data:
384
+
385
+ ### **🔢 Usage Metrics**
386
+
387
+ - **Token Counts** - Input tokens, output tokens, total tokens
388
+ - **Model Information** - Model name, provider (OpenAI/Azure), API version
389
+ - **Request Timing** - Request duration, response time
390
+ - **Cost Calculation** - Estimated costs based on current pricing
391
+
392
+ ### **🏷️ Business Context (Optional)**
393
+
394
+ - **User Tracking** - Subscriber ID, email, credentials
395
+ - **Organization Data** - Organization ID, subscription ID, product ID
396
+ - **Task Classification** - Task type, agent identifier, trace ID
397
+ - **Quality Metrics** - Response quality scores, custom metadata
398
+
399
+ ### **🔧 Technical Details**
400
+
401
+ - **API Endpoints** - Chat completions, embeddings, responses API
402
+ - **Request Types** - Streaming vs non-streaming
403
+ - **Error Tracking** - Failed requests, error types, retry attempts
404
+ - **Environment Info** - Development vs production usage
405
+
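+ For orientation, the token counts above correspond to the `usage` object the OpenAI SDK already returns on each response. A quick way to inspect the raw values on a patched client (same setup as the examples above):
+
+ ```typescript
+ const completion = await openai.chat.completions.create({
+   model: 'gpt-4o-mini',
+   messages: [{ role: 'user', content: 'Hello!' }],
+ });
+
+ // These are the token fields the middleware reports for metering.
+ console.log(completion.usage?.prompt_tokens); // input tokens
+ console.log(completion.usage?.completion_tokens); // output tokens
+ console.log(completion.usage?.total_tokens); // total tokens
+ console.log(completion.model); // model name as returned by the API
+ ```
+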
406
+ ## OpenAI Responses API Support
407
+
408
+ This middleware includes **full support** for OpenAI's new Responses API, which is designed to replace the traditional Chat Completions API with enhanced capabilities for agent-like applications.
409
+
410
+ ### What is the Responses API?
411
+
412
+ The Responses API is OpenAI's new stateful API that:
413
+
414
+ - Uses `input` instead of `messages` parameter for simplified interaction
415
+ - Provides unified experience combining chat completions and assistants capabilities
416
+ - Supports advanced features like background tasks, function calling, and code interpreter
417
+ - Offers better streaming and real-time response generation
418
+ - Works with GPT-5 and other advanced models
419
+
420
+ ### API Comparison
421
+
422
+ **Traditional Chat Completions:**
423
+
424
+ ```javascript
425
+ const response = await openai.chat.completions.create({
426
+ model: 'gpt-4o',
427
+ messages: [{ role: 'user', content: 'Hello' }],
428
+ });
429
+ ```
430
+
431
+ **New Responses API:**
432
+
433
+ ```javascript
434
+ const response = await openai.responses.create({
435
+ model: 'gpt-5',
436
+ input: 'Hello', // Simplified input parameter
437
+ });
438
+ ```
439
+
440
+ ### Key Differences
441
+
442
+ | Feature | Chat Completions | Responses API |
443
+ | ---------------------- | ---------------------------- | ----------------------------------- |
444
+ | **Input Format** | `messages: [...]` | `input: "string"` or `input: [...]` |
445
+ | **Models** | GPT-4, GPT-4o, etc. | GPT-5, GPT-4o, etc. |
446
+ | **Response Structure** | `choices[0].message.content` | `output_text` |
447
+ | **Stateful** | No | Yes (with `store: true`) |
448
+ | **Advanced Features** | Limited | Built-in tools, reasoning, etc. |
449
+ | **Temperature** | Supported | Not supported with GPT-5 |
450
+
451
+ ### Requirements & Installation
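+ In code, the response-structure difference looks like this (using the patched `openai` client from the earlier examples):
+
+ ```typescript
+ // Chat Completions: text lives under choices[n].message.content
+ const chat = await openai.chat.completions.create({
+   model: 'gpt-4o',
+   messages: [{ role: 'user', content: 'Hello' }],
+ });
+ console.log(chat.choices[0]?.message?.content);
+
+ // Responses API: the SDK aggregates the generated text into output_text
+ const result = await openai.responses.create({
+   model: 'gpt-5',
+   input: 'Hello',
+ });
+ console.log(result.output_text);
+ ```
+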
452
+
453
+ **OpenAI SDK Version:**
454
+
455
+ - **Minimum:** `5.8.0` (when Responses API was officially released)
456
+ - **Recommended:** `5.8.2` or later (tested and verified)
457
+ - **Current:** `6.2.0` (latest available)
458
+
459
+ **Installation:**
460
+
461
+ ```bash
462
+ # Install latest version with Responses API support
463
+ npm install openai@^5.8.0
464
+
465
+ # Or install specific tested version
466
+ npm install openai@5.8.2
467
+ ```
468
+
469
+ ### Current Status
470
+
471
+ ✅ **The Responses API is officially available in OpenAI SDK 5.8+**
472
+
473
+ **Official Release:**
474
+
475
+ - ✅ Released by OpenAI in SDK version 5.8.0
476
+ - ✅ Fully documented in official OpenAI documentation
477
+ - ✅ Production-ready with GPT-5 and other supported models
478
+ - ✅ Complete middleware support with Revenium integration
479
+
480
+ **Middleware Features:**
481
+
482
+ - ✅ Full Responses API support (streaming & non-streaming)
483
+ - ✅ Seamless metadata tracking identical to Chat Completions
484
+ - ✅ Type-safe TypeScript integration
485
+ - ✅ Complete token tracking including reasoning tokens
486
+ - ✅ Azure OpenAI compatibility
487
+
488
+ **References:**
489
+
490
+ - [OpenAI Responses API Documentation](https://platform.openai.com/docs/guides/migrate-to-responses)
491
+ - [Azure OpenAI Responses API Documentation](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/responses)
492
+
493
+ ### Responses API Examples
494
+
495
+ The middleware includes comprehensive examples for the new Responses API:
496
+
497
+ **Basic Usage:**
498
+
499
+ ```typescript
500
+ import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
501
+ import OpenAI from 'openai';
502
+
503
+ // Initialize and patch OpenAI instance
504
+ initializeReveniumFromEnv();
505
+ const openai = patchOpenAIInstance(new OpenAI());
506
+
507
+ // Simple string input
508
+ const response = await openai.responses.create({
509
+ model: 'gpt-5',
510
+ input: 'What is the capital of France?',
511
+ max_output_tokens: 150,
512
+ usageMetadata: {
513
+ subscriber: { id: 'user-123', email: 'user@example.com' },
514
+ organizationId: 'org-456',
515
+ productId: 'quantum-explainer',
516
+ taskType: 'educational-content',
517
+ },
518
+ });
519
+
520
+ console.log(response.output_text); // "Paris."
521
+ ```
522
+
523
+ **Streaming Example:**
524
+
525
+ ```typescript
526
+ const stream = await openai.responses.create({
527
+ model: 'gpt-5',
528
+ input: 'Write a short story about AI',
529
+ stream: true,
530
+ max_output_tokens: 500,
531
+ usageMetadata: {
532
+ subscriber: { id: 'user-123', email: 'user@example.com' },
533
+ organizationId: 'org-456',
534
+ },
535
+ });
536
+
537
+ for await (const chunk of stream) {
538
+ // Text deltas arrive as `response.output_text.delta` events carrying a string `delta`
+ if (chunk.type === 'response.output_text.delta') process.stdout.write(chunk.delta);
539
+ }
540
+ ```
541
+
542
+ ### Adding Custom Metadata
543
+
544
+ Track users, organizations, and custom data with seamless TypeScript integration:
545
+
546
+ ```typescript
547
+ import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
548
+ import OpenAI from 'openai';
549
+
550
+ // Initialize and patch OpenAI instance
551
+ initializeReveniumFromEnv();
552
+ const openai = patchOpenAIInstance(new OpenAI());
553
+
554
+ const response = await openai.chat.completions.create({
555
+ model: 'gpt-4',
556
+ messages: [{ role: 'user', content: 'Summarize this document' }],
557
+ // Add custom tracking metadata - all fields optional, no type casting needed!
558
+ usageMetadata: {
559
+ subscriber: {
560
+ id: 'user-12345',
561
+ email: 'john@acme-corp.com',
562
+ },
563
+ organizationId: 'acme-corp',
564
+ productId: 'document-ai',
565
+ taskType: 'document-summary',
566
+ agent: 'doc-summarizer-v2',
567
+ traceId: 'session-abc123',
568
+ },
569
+ });
570
+
571
+ // Same metadata works with Responses API
572
+ const responsesResult = await openai.responses.create({
573
+ model: 'gpt-5',
574
+ input: 'Summarize this document',
575
+ // Same metadata structure - seamless compatibility!
576
+ usageMetadata: {
577
+ subscriber: {
578
+ id: 'user-12345',
579
+ email: 'john@acme-corp.com',
580
+ },
581
+ organizationId: 'acme-corp',
582
+ productId: 'document-ai',
583
+ taskType: 'document-summary',
584
+ agent: 'doc-summarizer-v2',
585
+ traceId: 'session-abc123',
586
+ },
587
+ });
588
+ ```
589
+
590
+ ### Streaming Support
591
+
592
+ The middleware automatically handles streaming requests with seamless metadata:
593
+
594
+ ```typescript
595
+ import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
596
+ import OpenAI from 'openai';
597
+
598
+ // Initialize and patch OpenAI instance
599
+ initializeReveniumFromEnv();
600
+ const openai = patchOpenAIInstance(new OpenAI());
601
+
602
+ const stream = await openai.chat.completions.create({
603
+ model: 'gpt-4',
604
+ messages: [{ role: 'user', content: 'Tell me a story' }],
605
+ stream: true,
606
+ // Metadata works seamlessly with streaming - all fields optional!
607
+ usageMetadata: {
608
+ organizationId: 'story-app',
609
+ taskType: 'creative-writing',
610
+ },
611
+ });
612
+
613
+ for await (const chunk of stream) {
614
+ process.stdout.write(chunk.choices[0]?.delta?.content || '');
615
+ }
616
+ // Usage tracking happens automatically when stream completes
617
+ ```
618
+
619
+ ### Temporarily Disabling Tracking
620
+
621
+ If you need to disable Revenium tracking temporarily, you can unpatch the OpenAI client:
622
+
623
+ ```javascript
624
+ import { unpatchOpenAI, patchOpenAI } from '@revenium/openai';
625
+
626
+ // Disable tracking
627
+ unpatchOpenAI();
628
+
629
+ // Your OpenAI calls now bypass Revenium tracking
630
+ await openai.chat.completions.create({...});
631
+
632
+ // Re-enable tracking
633
+ patchOpenAI();
634
+ ```
635
+
636
+ **Common use cases:**
637
+
638
+ - **Debugging**: Isolate whether issues are caused by the middleware
639
+ - **Testing**: Compare behavior with/without tracking
640
+ - **Conditional tracking**: Enable/disable based on environment (see the sketch below)
641
+ - **Troubleshooting**: Temporary bypass during incident response
642
+
643
+ **Note**: This affects all OpenAI instances globally since we patch the prototype methods.
644
+
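+ For the conditional-tracking case above, one possible pattern is to decide at startup whether to patch at all. This is a sketch; the `NODE_ENV` check stands in for whatever condition you actually use:
+
+ ```typescript
+ import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
+ import OpenAI from 'openai';
+
+ let openai = new OpenAI();
+
+ // Only meter in production; elsewhere the client stays unpatched.
+ if (process.env.NODE_ENV === 'production') {
+   const init = initializeReveniumFromEnv();
+   if (init.success) {
+     openai = patchOpenAIInstance(openai);
+   }
+ }
+ ```
+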
645
+ ## Azure OpenAI Integration
646
+
647
+ **Azure OpenAI support**: The middleware automatically detects Azure OpenAI clients and provides accurate usage tracking and cost calculation.
648
+
649
+ ### Quick Start with Azure OpenAI
650
+
651
+ ```bash
652
+ # Set your Azure OpenAI environment variables
653
+ export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
654
+ export AZURE_OPENAI_API_KEY="your-azure-api-key"
655
+ export AZURE_OPENAI_DEPLOYMENT="gpt-4o" # Your deployment name
656
+ export AZURE_OPENAI_API_VERSION="2024-12-01-preview" # Optional, defaults to 2024-12-01-preview
657
+
658
+ # Set your Revenium credentials
659
+ export REVENIUM_METERING_API_KEY="hak_your_revenium_api_key"
660
+ # export REVENIUM_METERING_BASE_URL="https://api.revenium.io/meter" # Optional: defaults to this URL
661
+ ```
662
+
663
+ ```typescript
664
+ import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
665
+ import { AzureOpenAI } from 'openai';
666
+
667
+ // Initialize Revenium middleware
668
+ initializeReveniumFromEnv();
669
+
670
+ // Create and patch Azure OpenAI client
671
+ const azure = patchOpenAIInstance(
672
+ new AzureOpenAI({
673
+ endpoint: process.env.AZURE_OPENAI_ENDPOINT,
674
+ apiKey: process.env.AZURE_OPENAI_API_KEY,
675
+ apiVersion: process.env.AZURE_OPENAI_API_VERSION,
676
+ })
677
+ );
678
+
679
+ // Your existing Azure OpenAI code works with seamless metadata
680
+ const response = await azure.chat.completions.create({
681
+ model: 'gpt-4o', // Uses your deployment name
682
+ messages: [{ role: 'user', content: 'Hello from Azure!' }],
683
+ // Optional metadata with native TypeScript support
684
+ usageMetadata: {
685
+ organizationId: 'my-company',
686
+ taskType: 'azure-chat',
687
+ },
688
+ });
689
+
690
+ console.log(response.choices[0].message.content);
691
+ ```
692
+
693
+ ### Azure Features
694
+
695
+ - **Automatic Detection**: Detects Azure OpenAI clients automatically
696
+ - **Model Name Resolution**: Maps Azure deployment names to standard model names for accurate pricing
697
+ - **Provider Metadata**: Correctly tags requests with `provider: "Azure"` and `modelSource: "OPENAI"`
698
+ - **Deployment Support**: Works with any Azure deployment name (simple or complex)
699
+ - **Endpoint Flexibility**: Supports all Azure OpenAI endpoint formats
700
+ - **Zero Code Changes**: Existing Azure OpenAI code works without modification
701
+
702
+ ### Azure Environment Variables
703
+
704
+ | Variable | Required | Description | Example |
705
+ | -------------------------- | -------- | ---------------------------------------------- | ------------------------------------ |
706
+ | `AZURE_OPENAI_ENDPOINT` | Yes | Your Azure OpenAI endpoint URL | `https://acme.openai.azure.com/` |
707
+ | `AZURE_OPENAI_API_KEY` | Yes | Your Azure OpenAI API key | `abc123...` |
708
+ | `AZURE_OPENAI_DEPLOYMENT` | No | Default deployment name | `gpt-4o` or `text-embedding-3-large` |
709
+ | `AZURE_OPENAI_API_VERSION` | No | API version (defaults to `2024-12-01-preview`) | `2024-12-01-preview` |
710
+
711
+ ### Azure Model Name Resolution
712
+
713
+ The middleware automatically maps Azure deployment names to standard model names for accurate pricing:
714
+
715
+ ```text
716
+ // Azure deployment names → Standard model names for pricing
717
+ "gpt-4o-2024-11-20" → "gpt-4o"
718
+ "gpt4o-prod" → "gpt-4o"
719
+ "o4-mini" → "gpt-4o-mini"
720
+ "gpt-35-turbo-dev" → "gpt-3.5-turbo"
721
+ "text-embedding-3-large" → "text-embedding-3-large" // Direct match
722
+ "embedding-3-large" → "text-embedding-3-large"
723
+ ```
724
+
725
+ ## 🔧 Advanced Usage
726
+
727
+ ### Streaming with Metadata
728
+
729
+ The middleware seamlessly handles streaming requests with full metadata support:
730
+
731
+ ```typescript
732
+ import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
733
+ import OpenAI from 'openai';
734
+
735
+ initializeReveniumFromEnv();
736
+ const openai = patchOpenAIInstance(new OpenAI());
737
+
738
+ // Chat Completions API streaming
739
+ const stream = await openai.chat.completions.create({
740
+ model: 'gpt-4o-mini',
741
+ messages: [{ role: 'user', content: 'Tell me a story' }],
742
+ stream: true,
743
+ usageMetadata: {
744
+ subscriber: { id: 'user-123', email: 'user@example.com' },
745
+ organizationId: 'story-app',
746
+ taskType: 'creative-writing',
747
+ traceId: 'session-' + Date.now(),
748
+ },
749
+ });
750
+
751
+ for await (const chunk of stream) {
752
+ process.stdout.write(chunk.choices[0]?.delta?.content || '');
753
+ }
754
+ // Usage tracking happens automatically when stream completes
755
+ ```
756
+
757
+ ### Responses API with Metadata
758
+
759
+ Full support for OpenAI's new Responses API:
760
+
761
+ ```typescript
762
+ // Simple string input with metadata
763
+ const response = await openai.responses.create({
764
+ model: 'gpt-5',
765
+ input: 'What is the capital of France?',
766
+ max_output_tokens: 150,
767
+ usageMetadata: {
768
+ subscriber: { id: 'user-123', email: 'user@example.com' },
769
+ organizationId: 'org-456',
770
+ productId: 'geography-tutor',
771
+ taskType: 'educational-query',
772
+ },
773
+ });
774
+
775
+ console.log(response.output_text); // "Paris."
776
+ ```
777
+
778
+ ### Azure OpenAI Integration
779
+
780
+ Automatic Azure OpenAI detection with seamless metadata:
781
+
782
+ ```typescript
783
+ import { AzureOpenAI } from 'openai';
784
+
785
+ // Create and patch Azure OpenAI client
786
+ const azure = patchOpenAIInstance(
787
+ new AzureOpenAI({
788
+ endpoint: process.env.AZURE_OPENAI_ENDPOINT,
789
+ apiKey: process.env.AZURE_OPENAI_API_KEY,
790
+ apiVersion: process.env.AZURE_OPENAI_API_VERSION,
791
+ })
792
+ );
793
+
794
+ // Your existing Azure OpenAI code works with seamless metadata
795
+ const response = await azure.chat.completions.create({
796
+ model: 'gpt-4o', // Uses your deployment name
797
+ messages: [{ role: 'user', content: 'Hello from Azure!' }],
798
+ usageMetadata: {
799
+ organizationId: 'my-company',
800
+ taskType: 'azure-chat',
801
+ agent: 'azure-assistant',
802
+ },
803
+ });
804
+ ```
805
+
806
+ ### Embeddings with Metadata
807
+
808
+ Track embeddings usage with optional metadata:
809
+
810
+ ```typescript
811
+ const embedding = await openai.embeddings.create({
812
+ model: 'text-embedding-3-small',
813
+ input: 'Advanced text embedding with comprehensive tracking metadata',
814
+ usageMetadata: {
815
+ subscriber: { id: 'embedding-user-789', email: 'embeddings@company.com' },
816
+ organizationId: 'my-company',
817
+ taskType: 'document-embedding',
818
+ productId: 'search-engine',
819
+ traceId: `embed-${Date.now()}`,
820
+ agent: 'openai-embeddings-node',
821
+ },
822
+ });
823
+
824
+ console.log('Model:', embedding.model);
825
+ console.log('Usage:', embedding.usage);
826
+ console.log('Embedding dimensions:', embedding.data[0]?.embedding.length);
827
+ ```
828
+
829
+ ### Manual Configuration
830
+
831
+ For advanced use cases, configure the middleware manually:
832
+
833
+ ```typescript
834
+ import { configure } from '@revenium/openai';
835
+
836
+ configure({
837
+ reveniumApiKey: 'hak_your_api_key',
838
+ reveniumBaseUrl: 'https://api.revenium.io/meter',
839
+ apiTimeout: 5000,
840
+ failSilent: true,
841
+ maxRetries: 3,
842
+ });
843
+ ```
844
+
845
+ ## 🛠️ Configuration Options
846
+
847
+ ### Environment Variables
848
+
849
+ | Variable | Required | Default | Description |
850
+ | ---------------------------- | -------- | ------------------------------- | --------------------------------- |
851
+ | `REVENIUM_METERING_API_KEY` | ✅ | - | Your Revenium API key |
852
+ | `OPENAI_API_KEY` | ✅ | - | OpenAI API key |
853
+ | `REVENIUM_METERING_BASE_URL` | ❌ | `https://api.revenium.io/meter` | Revenium metering API base URL |
854
+ | `REVENIUM_DEBUG` | ❌ | `false` | Enable debug logging (true/false) |
855
+
856
+ **⚠️ Important Note about `REVENIUM_METERING_BASE_URL`:**
857
+
858
+ - This variable is **optional** and defaults to the production URL (`https://api.revenium.io/meter`)
859
+ - If you don't set it explicitly, the middleware will use the default production endpoint
860
+ - However, you may see console warnings or errors if the middleware cannot determine the correct environment
861
+ - **Best practice:** Always set this variable explicitly to match your environment:
862
+
863
+ ```bash
864
+ # For Production
865
+ REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
866
+
867
+ # For QA/Testing
868
+ REVENIUM_METERING_BASE_URL=https://api.qa.hcapp.io/meter
869
+ ```
870
+
871
+ - **Remember:** Your `REVENIUM_METERING_API_KEY` must match the environment of your base URL
872
+
873
+ ### Usage Metadata Options
874
+
875
+ All metadata fields are optional and help provide better analytics:
876
+
877
+ ```typescript
878
+ interface UsageMetadata {
879
+ traceId?: string; // Session or conversation ID
880
+ taskType?: string; // Type of AI task (e.g., "chat", "summary")
881
+ subscriber?: {
882
+ // User information (nested structure)
883
+ id?: string; // User ID from your system
884
+ email?: string; // User's email address
885
+ credential?: {
886
+ // User credentials
887
+ name?: string; // Credential name
888
+ value?: string; // Credential value
889
+ };
890
+ };
891
+ organizationId?: string; // Organization/company ID
892
+ subscriptionId?: string; // Billing plan ID
893
+ productId?: string; // Your product/feature ID
894
+ agent?: string; // AI agent identifier
895
+ responseQualityScore?: number; // Quality score (0-1)
896
+ }
897
+ ```
898
+
899
+ ## How It Works
900
+
901
+ 1. **Automatic Patching**: When imported, the middleware patches OpenAI's methods:
902
+ - `chat.completions.create` (Chat Completions API)
903
+ - `responses.create` (Responses API - when available)
904
+ - `embeddings.create` (Embeddings API)
905
+ 2. **Request Interception**: All OpenAI requests are intercepted to extract metadata
906
+ 3. **Usage Extraction**: Token counts, model info, and timing data are captured
907
+ 4. **Async Tracking**: Usage data is sent to Revenium in the background (fire-and-forget)
908
+ 5. **Transparent Response**: Original OpenAI responses are returned unchanged
909
+
910
+ The middleware never blocks your application - if Revenium tracking fails, your OpenAI requests continue normally.
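+
+ The fire-and-forget step can be pictured like this. The snippet is an illustrative sketch of the pattern, not the middleware's actual internals, and `sendToRevenium` is a hypothetical stand-in for the real metering client:
+
+ ```typescript
+ // Illustrative only: a non-blocking, fail-silent tracking call.
+ async function sendToRevenium(payload: Record<string, unknown>): Promise<void> {
+   // POST the usage payload to the Revenium metering endpoint (details omitted).
+ }
+
+ function trackUsage(payload: Record<string, unknown>): void {
+   // Start the request and swallow any failure so the caller is never blocked.
+   void sendToRevenium(payload).catch(() => {
+     // Fail silently: the OpenAI response has already been returned to the caller.
+   });
+ }
+ ```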
911
+
912
+ ## 🔍 Troubleshooting
913
+
914
+ ### Common Issues
915
+
916
+ #### 1. **No tracking data in dashboard**
917
+
918
+ **Symptoms**: OpenAI calls work but no data appears in Revenium dashboard
919
+
920
+ **Solution**: Enable debug logging to check middleware status:
921
+
922
+ ```bash
923
+ export REVENIUM_DEBUG=true
924
+ ```
925
+
926
+ **Expected output for successful tracking**:
927
+
928
+ ```bash
929
+ [Revenium Debug] OpenAI chat.completions.create intercepted
930
+ [Revenium Debug] Revenium tracking successful
931
+
932
+ # For Responses API:
933
+ [Revenium Debug] OpenAI responses.create intercepted
934
+ [Revenium Debug] Revenium tracking successful
935
+ ```
936
+
937
+ #### 2. **Environment mismatch errors**
938
+
939
+ **Symptoms**: Authentication errors or 401/403 responses
940
+
941
+ **Solution**: Ensure your API key matches your base URL environment:
942
+
943
+ ```bash
944
+ # ✅ Correct - Production key with production URL
945
+ REVENIUM_METERING_API_KEY=hak_prod_key_here
946
+ REVENIUM_METERING_BASE_URL=https://api.revenium.io/meter
947
+
948
+ # ✅ Correct - QA key with QA URL
949
+ REVENIUM_METERING_API_KEY=hak_qa_key_here
950
+ REVENIUM_METERING_BASE_URL=https://api.qa.hcapp.io/meter
951
+
952
+ # ❌ Wrong - Production key with QA URL
953
+ REVENIUM_METERING_API_KEY=hak_prod_key_here
954
+ REVENIUM_METERING_BASE_URL=https://api.qa.hcapp.io/meter
955
+ ```
956
+
957
+ #### 3. **TypeScript type errors**
958
+
959
+ **Symptoms**: TypeScript errors about `usageMetadata` property
960
+
961
+ **Solution**: Ensure you're importing the middleware before OpenAI:
962
+
963
+ ```typescript
964
+ // ✅ Correct order
965
+ import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
966
+ import OpenAI from 'openai';
967
+
968
+ // ❌ Wrong order
969
+ import OpenAI from 'openai';
970
+ import { initializeReveniumFromEnv, patchOpenAIInstance } from '@revenium/openai';
971
+ ```
972
+
973
+ #### 4. **Azure OpenAI not working**
974
+
975
+ **Symptoms**: Azure OpenAI calls not being tracked
976
+
977
+ **Solution**: Ensure you're using `patchOpenAIInstance()` with your Azure client:
978
+
979
+ ```typescript
980
+ import { AzureOpenAI } from 'openai';
981
+ import { patchOpenAIInstance } from '@revenium/openai';
982
+
983
+ // ✅ Correct
984
+ const azure = patchOpenAIInstance(new AzureOpenAI({...}));
985
+
986
+ // ❌ Wrong - not patched
987
+ const azure = new AzureOpenAI({...});
988
+ ```
989
+
990
+ #### 5. **Responses API not available**
991
+
992
+ **Symptoms**: `openai.responses.create` is undefined
993
+
994
+ **Solution**: Upgrade to OpenAI SDK 5.8+ for Responses API support:
995
+
996
+ ```bash
997
+ npm install openai@^5.8.0
998
+ ```
999
+
1000
+ ### Debug Mode
1001
+
1002
+ Enable comprehensive debug logging:
1003
+
1004
+ ```bash
1005
+ export REVENIUM_DEBUG=true
1006
+ ```
1007
+
1008
+ This will show:
1009
+
1010
+ - ✅ Middleware initialization status
1011
+ - ✅ Request interception confirmations
1012
+ - ✅ Metadata extraction details
1013
+ - ✅ Tracking success/failure messages
1014
+ - ✅ Error details and stack traces
1015
+
1016
+ ### Getting Help
1017
+
1018
+ If you're still experiencing issues:
1019
+
1020
+ 1. **Check the logs** with `REVENIUM_DEBUG=true`
1021
+ 2. **Verify environment variables** are set correctly
1022
+ 3. **Test with minimal example** from our documentation
1023
+ 4. **Contact support** with debug logs and error details
1024
+
1025
+ For detailed troubleshooting guides, visit [docs.revenium.io](https://docs.revenium.io)
1026
+
1027
+ ## 🤖 Supported Models
1028
+
1029
+ ### OpenAI Models
1030
+
1031
+ | Model Family | Models | APIs Supported |
1032
+ | ----------------- | ---------------------------------------------------------------------------- | --------------------------- |
1033
+ | **GPT-4o** | `gpt-4o`, `gpt-4o-2024-11-20`, `gpt-4o-2024-08-06`, `gpt-4o-2024-05-13` | Chat Completions, Responses |
1034
+ | **GPT-4o Mini** | `gpt-4o-mini`, `gpt-4o-mini-2024-07-18` | Chat Completions, Responses |
1035
+ | **GPT-4 Turbo** | `gpt-4-turbo`, `gpt-4-turbo-2024-04-09`, `gpt-4-turbo-preview` | Chat Completions |
1036
+ | **GPT-4** | `gpt-4`, `gpt-4-0613`, `gpt-4-0314` | Chat Completions |
1037
+ | **GPT-3.5 Turbo** | `gpt-3.5-turbo`, `gpt-3.5-turbo-0125`, `gpt-3.5-turbo-1106` | Chat Completions |
1038
+ | **GPT-5** | `gpt-5` (when available) | Responses API |
1039
+ | **Embeddings** | `text-embedding-3-large`, `text-embedding-3-small`, `text-embedding-ada-002` | Embeddings |
1040
+
1041
+ ### Azure OpenAI Models
1042
+
1043
+ All OpenAI models are supported through Azure OpenAI with automatic deployment name resolution:
1044
+
1045
+ | Azure Deployment | Resolved Model | API Support |
1046
+ | ------------------------ | ------------------------ | --------------------------- |
1047
+ | `gpt-4o-2024-11-20` | `gpt-4o` | Chat Completions, Responses |
1048
+ | `gpt4o-prod` | `gpt-4o` | Chat Completions, Responses |
1049
+ | `o4-mini` | `gpt-4o-mini` | Chat Completions, Responses |
1050
+ | `gpt-35-turbo-dev` | `gpt-3.5-turbo` | Chat Completions |
1051
+ | `text-embedding-3-large` | `text-embedding-3-large` | Embeddings |
1052
+ | `embedding-3-large` | `text-embedding-3-large` | Embeddings |
1053
+
1054
+ **Note**: The middleware automatically maps Azure deployment names to standard model names for accurate pricing and analytics.
1055
+
1056
+ ### API Support Matrix
1057
+
1058
+ | Feature | Chat Completions API | Responses API | Embeddings API |
1059
+ | --------------------- | -------------------- | ------------- | -------------- |
1060
+ | **Basic Requests** | ✅ | ✅ | ✅ |
1061
+ | **Streaming** | ✅ | ✅ | ❌ |
1062
+ | **Metadata Tracking** | ✅ | ✅ | ✅ |
1063
+ | **Azure OpenAI** | ✅ | ✅ | ✅ |
1064
+ | **Cost Calculation** | ✅ | ✅ | ✅ |
1065
+ | **Token Counting** | ✅ | ✅ | ✅ |
1066
+
1067
+ ## Requirements
1068
+
1069
+ - Node.js 16+
1070
+ - OpenAI package v4.0+
1071
+ - TypeScript 5.0+ (for TypeScript projects)
1072
+
1073
+ ## Documentation
1074
+
1075
+ For detailed documentation, visit [docs.revenium.io](https://docs.revenium.io)
1076
+
1077
+ ## Contributing
1078
+
1079
+ See [CONTRIBUTING.md](./CONTRIBUTING.md)
1080
+
1081
+ ## Code of Conduct
1082
+
1083
+ See [CODE_OF_CONDUCT.md](./CODE_OF_CONDUCT.md)
1084
+
1085
+ ## Security
1086
+
1087
+ See [SECURITY.md](./SECURITY.md)
1088
+
1089
+ ## License
1090
+
1091
+ This project is licensed under the MIT License - see the [LICENSE](./LICENSE) file for details.
1092
+
1093
+ ## Acknowledgments
1094
+
1095
+ - Built by the Revenium team