lupislabs 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,476 @@
+ # Lupis JavaScript SDK
+
+ [![npm version](https://badge.fury.io/js/%40lupislabs%2Fjs-sdk.svg)](https://badge.fury.io/js/%40lupislabs%2Fjs-sdk)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+
+ A pure OpenTelemetry-based SDK for tracing HTTP requests in JavaScript applications with automatic instrumentation.
+
+ ## ✨ Features
+
+ - 🔭 **OpenTelemetry Native** - Uses official OpenTelemetry auto-instrumentation
+ - 🚀 **Zero Manual Patching** - Automatic HTTP/fetch interception
+ - 📊 **OTLP Export** - Standard protocol for any observability backend
+ - 💬 **Conversation Grouping** - Group related requests with chatId
+ - 🎯 **Smart Provider Detection** - Automatically detects AI providers (OpenAI, Claude, etc.)
+ - 📝 **Full Request/Response Capture** - Automatically captures request & response bodies, headers, and status (extends OpenTelemetry's capabilities)
+ - 🔄 **No Conflicts** - Works alongside other OpenTelemetry instrumentation
+ - ⚡ **Lightweight** - Clean, minimal codebase
+
+ ## 📦 Installation
+
+ ```bash
+ npm install @lupislabs/js-sdk
+ ```
+
+ ## 🚀 Quick Start
+
+ ```javascript
+ import LupisSDK from '@lupislabs/js-sdk';
+
+ const sdk = new LupisSDK({
+   projectId: 'your-project-id',
+   otlpEndpoint: 'http://localhost:3010/v1/traces',
+ });
+
+ await sdk.run(async () => {
+   const response = await fetch('https://api.openai.com/v1/chat/completions', {
+     method: 'POST',
+     headers: {
+       'Authorization': `Bearer ${API_KEY}`,
+       'Content-Type': 'application/json',
+     },
+     body: JSON.stringify({
+       model: 'gpt-4',
+       messages: [{ role: 'user', content: 'Hello!' }],
+     }),
+   });
+
+   return await response.json();
+ }, { chatId: 'conversation-123' });
+ ```
51
+
52
+ ## 📝 Request & Response Capture
53
+
54
+ Unlike standard OpenTelemetry implementations that only capture metadata, Lupis SDK automatically captures full request and response bodies, headers, and status codes. This is achieved through a custom `ResponseCaptureProcessor` that extends OpenTelemetry's capabilities.
55
+
56
+ ### What Gets Captured
57
+
58
+ **Request Data:**
59
+
60
+ - Request body (JSON, text, etc.)
61
+ - Request headers
62
+ - HTTP method and URL
63
+
64
+ **Response Data:**
65
+
66
+ - Response body (full content)
67
+ - Response headers
68
+ - HTTP status code
69
+ - Response timing
70
+
71
+ ### How It Works
72
+
73
+ The SDK uses a custom span processor that:
74
+
75
+ 1. Intercepts the fetch API at the earliest point
76
+ 2. Captures request data before the request is sent
77
+ 3. Clones and reads the response body without affecting the original response
78
+ 4. Attaches all data to the OpenTelemetry span as attributes
79
+
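+ The sketch below shows this pattern in isolation. It is illustrative only (not the SDK's actual implementation) and assumes a span is already active in the current OpenTelemetry context when `fetch` is called:
+
+ ```javascript
+ // Illustrative sketch of the capture pattern, not the SDK's implementation.
+ import { trace } from '@opentelemetry/api';
+
+ const originalFetch = globalThis.fetch;
+
+ globalThis.fetch = async (input, init = {}) => {
+   // Whatever span is active in the current context (if any)
+   const span = trace.getActiveSpan();
+
+   // Steps 1-2: capture request data before the request is sent
+   if (span && init.body) {
+     span.setAttribute('http.request.body', String(init.body));
+     span.setAttribute('http.request.headers', JSON.stringify(init.headers ?? {}));
+   }
+
+   const response = await originalFetch(input, init);
+
+   // Steps 3-4: clone the response so reading the body does not consume the
+   // original stream, then attach everything as span attributes.
+   // (A real implementation would also avoid blocking streaming responses.)
+   if (span) {
+     span.setAttribute('http.response.body', await response.clone().text());
+     span.setAttribute('http.response.status', response.status);
+     span.setAttribute(
+       'http.response.headers',
+       JSON.stringify(Object.fromEntries(response.headers.entries()))
+     );
+   }
+
+   return response;
+ };
+ ```
+
+ The SDK's actual interceptor also patches Node's `http`/`https` modules and decompresses compressed response bodies (see the `HttpInterceptor` declaration later in this diff).
+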
+ ### Example
+
+ ```javascript
+ await sdk.run(async () => {
+   const response = await fetch('https://api.openai.com/v1/chat/completions', {
+     method: 'POST',
+     headers: {
+       'Content-Type': 'application/json',
+       'Authorization': `Bearer ${API_KEY}`,
+     },
+     body: JSON.stringify({
+       model: 'gpt-4',
+       messages: [{ role: 'user', content: 'Hello!' }],
+     }),
+   });
+
+   const data = await response.json();
+ }, { chatId: 'my-conversation' });
+ ```
+
+ The span will automatically include:
+
+ - `http.request.body` - The full request payload
+ - `http.request.headers` - All request headers (as JSON string)
+ - `http.response.body` - The complete response body
+ - `http.response.headers` - All response headers (as JSON string)
+ - `http.response.status` - HTTP status code (e.g., 200, 404, 500)
+
+ See `examples/response-capture-example.js` for more detailed examples.
+
+ ## 📖 Configuration
+
+ ```typescript
+ interface LupisConfig {
+   projectId: string; // Required: Your project identifier
+   enabled?: boolean; // Default: true
+   otlpEndpoint?: string; // Default: 'http://localhost:4318/v1/traces'
+   serviceName?: string; // Default: 'lupis-sdk'
+   serviceVersion?: string; // Default: '1.0.0'
+ }
+
+ const sdk = new LupisSDK(config);
+ ```
+
+ ## 🔭 OpenTelemetry Integration
+
+ This SDK uses **pure OpenTelemetry** with automatic instrumentation (illustrated below):
+
+ - **Browser**: `@opentelemetry/instrumentation-fetch` auto-instruments `fetch()`
+ - **Node.js**: `@opentelemetry/instrumentation-http` auto-instruments `http` and `https`
+
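+ Under the hood this is standard OpenTelemetry wiring. The following is a rough sketch of the browser-side setup using the stock OpenTelemetry packages (illustrative only, not the SDK's source; exact APIs vary slightly between OpenTelemetry SDK versions):
+
+ ```javascript
+ // Illustrative only: the SDK performs equivalent setup for you.
+ import { WebTracerProvider } from '@opentelemetry/sdk-trace-web';
+ import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
+ import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
+ import { registerInstrumentations } from '@opentelemetry/instrumentation';
+ import { FetchInstrumentation } from '@opentelemetry/instrumentation-fetch';
+
+ // Export spans over OTLP/HTTP to the configured endpoint
+ const provider = new WebTracerProvider();
+ provider.addSpanProcessor(
+   new BatchSpanProcessor(
+     new OTLPTraceExporter({ url: 'http://localhost:4318/v1/traces' })
+   )
+ );
+ provider.register();
+
+ // Auto-instrument fetch(); in Node.js, HttpInstrumentation from
+ // '@opentelemetry/instrumentation-http' plays the same role.
+ registerInstrumentations({
+   instrumentations: [new FetchInstrumentation()],
+ });
+ ```
+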
+ ### Features
+
+ - ✅ **Automatic span creation** for all HTTP requests
+ - ✅ **Semantic conventions** (http.method, http.url, http.status_code)
+ - ✅ **W3C Trace Context** propagation
+ - ✅ **OTLP export** to any backend (Jaeger, Tempo, Datadog, etc.)
+ - ✅ **Custom attributes** (projectId, chatId, provider)
+
+ ### Architecture
+
+ ```
+ HTTP Request (fetch/http)
+          ↓
+ Custom Fetch Patch (captures request/response data)
+          ↓
+ OpenTelemetry Auto-Instrumentation
+   ├─ FetchInstrumentation (browser)
+   └─ HttpInstrumentation (Node.js)
+          ↓
+ Span Created with Attributes
+          ↓
+ TracerProvider
+   ├─ ChatIdSpanProcessor (adds chatId)
+   ├─ ResponseCaptureProcessor (adds request/response bodies)
+   └─ BatchSpanProcessor + OTLPExporter
+          ↓
+ Observability Backend
+ ```
+
+ ## 📋 Usage Examples
+
+ ### Basic HTTP Tracing
+
+ ```javascript
+ await sdk.run(async () => {
+   const response = await fetch('https://api.example.com/data');
+   return await response.json();
+ }, { chatId: 'my-conversation' });
+ ```
+
+ ### Multiple Requests
+
+ ```javascript
+ await sdk.run(async () => {
+   const user = await fetch('/api/user').then(r => r.json());
+   const posts = await fetch('/api/posts').then(r => r.json());
+   return { user, posts };
+ }, { chatId: 'user-session-123' });
+ ```
+
+ ### AI Provider Requests
+
+ ```javascript
+ // OpenAI
+ await sdk.run(async () => {
+   const response = await fetch('https://api.openai.com/v1/chat/completions', {
+     method: 'POST',
+     headers: { 'Authorization': `Bearer ${API_KEY}` },
+     body: JSON.stringify({
+       model: 'gpt-4',
+       messages: [{ role: 'user', content: 'Hello!' }],
+     }),
+   });
+ }, { chatId: 'openai-conversation' });
+
+ // Claude
+ await sdk.run(async () => {
+   const response = await fetch('https://api.anthropic.com/v1/messages', {
+     method: 'POST',
+     headers: { 'x-api-key': API_KEY },
+     body: JSON.stringify({
+       model: 'claude-3-sonnet-20240229',
+       messages: [{ role: 'user', content: 'Hello!' }],
+     }),
+   });
+ }, { chatId: 'claude-conversation' });
+ ```
+
+ ### Custom Spans
+
+ ```javascript
+ import { otel } from '@lupislabs/js-sdk';
+
+ const span = sdk.createSpan('data-processing', {
+   'processing.type': 'batch',
+   'batch.size': 100,
+ }, otel.SpanKind.INTERNAL);
+
+ try {
+   // Your processing logic
+   span.end();
+ } catch (error) {
+   span.recordException(error);
+   span.setStatus({
+     code: otel.SpanStatusCode.ERROR,
+     message: error.message
+   });
+   span.end();
+ }
+ ```
+
+ ### Using OpenTelemetry API Directly
+
+ ```javascript
+ const tracer = sdk.getTracer();
+
+ const span = tracer.startSpan('custom-operation', {
+   attributes: {
+     'operation.type': 'ai-inference',
+   },
+ });
+
+ // Your code
+ span.end();
+ ```
+
+ ## 🔌 Export to Observability Backends
+
+ ### Jaeger
+
+ ```javascript
+ const sdk = new LupisSDK({
+   projectId: 'my-project',
+   otlpEndpoint: 'http://localhost:4318/v1/traces',
+ });
+ ```
+
+ ### Grafana Tempo
+
+ ```javascript
+ const sdk = new LupisSDK({
+   projectId: 'my-project',
+   otlpEndpoint: 'https://tempo.example.com/v1/traces',
+ });
+ ```
+
+ ### Datadog
+
+ Datadog ingests OTLP traces through the Datadog Agent, so point the SDK at the Agent's OTLP/HTTP receiver (enable OTLP ingestion in the Agent configuration):
+
+ ```javascript
+ const sdk = new LupisSDK({
+   projectId: 'my-project',
+   otlpEndpoint: 'http://localhost:4318/v1/traces', // Datadog Agent OTLP receiver
+ });
+ ```
+
+ ### New Relic
+
+ New Relic's OTLP endpoint also expects an `api-key` header carrying your license key:
+
+ ```javascript
+ const sdk = new LupisSDK({
+   projectId: 'my-project',
+   otlpEndpoint: 'https://otlp.nr-data.net/v1/traces',
+ });
+ ```
+
+ ## 💬 Conversation Grouping
+
+ Group related requests with `chatId`:
+
+ ```javascript
+ // All requests in this block will have the same chatId
+ await sdk.run(async () => {
+   await fetch('/api/chat', {
+     method: 'POST',
+     body: JSON.stringify({ message: 'Hello' })
+   });
+   await fetch('/api/chat', {
+     method: 'POST',
+     body: JSON.stringify({ message: 'How are you?' })
+   });
+ }, { chatId: 'conversation-123' });
+
+ // Or set/clear chatId manually
+ sdk.setChatId('conversation-123');
+ // Make requests...
+ sdk.clearChatId();
+ ```
+
+ ## 🏷️ Span Attributes
+
+ All HTTP spans automatically include:
+
+ **Standard OpenTelemetry:**
+
+ - `http.method` - HTTP method (GET, POST, etc.)
+ - `http.url` - Full URL
+ - `http.status_code` - Response status code
+
+ **Custom Lupis Attributes:**
+
+ - `lupis.project.id` - Your project ID
+ - `lupis.chat.id` - Conversation ID (when set)
+ - `http.provider` - Detected provider (openai, claude, cohere, huggingface, google)
+
+ **Request/Response Capture Attributes (Custom Extension):**
+
+ - `http.request.body` - Full request body content
+ - `http.request.headers` - Request headers as JSON string
+ - `http.response.body` - Full response body content
+ - `http.response.headers` - Response headers as JSON string
+ - `http.response.status` - HTTP response status code
+
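+ If you run your own OpenTelemetry pipeline alongside the SDK, these attribute keys are what you would query or filter on. As a purely illustrative example (not part of the SDK), a custom span processor registered on a TracerProvider you control could read them like this:
+
+ ```javascript
+ // Illustrative consumer of the attributes listed above, not part of the SDK.
+ class LogLupisAttributesProcessor {
+   onStart(_span, _parentContext) {}
+   onEnd(span) {
+     const attrs = span.attributes;
+     console.log(
+       `[${span.name}]`,
+       'project:', attrs['lupis.project.id'],
+       'chat:', attrs['lupis.chat.id'],
+       'provider:', attrs['http.provider'],
+       'status:', attrs['http.response.status']
+     );
+   }
+   forceFlush() { return Promise.resolve(); }
+   shutdown() { return Promise.resolve(); }
+ }
+ ```
+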
+ ## 🔧 API Reference
+
+ ### LupisSDK
+
+ ```typescript
+ class LupisSDK {
+   constructor(config: LupisConfig)
+
+   async run<T>(fn: () => Promise<T> | T, options?: { chatId?: string }): Promise<T>
+
+   setChatId(chatId: string): void
+   clearChatId(): void
+
+   getTracer(): Tracer
+   createSpan(name: string, attributes?: Attributes, spanKind?: SpanKind): Span
+
+   async shutdown(): Promise<void>
+ }
+ ```
+
+ ### Methods
+
+ #### `sdk.run(fn, options)`
+
+ Execute a function with automatic HTTP tracing:
+
+ ```typescript
+ await sdk.run(async () => {
+   // Your code
+ }, { chatId: 'optional-chat-id' });
+ ```
+
+ #### `sdk.setChatId(chatId)` / `sdk.clearChatId()`
+
+ Manually set or clear the chatId for subsequent requests:
+
+ ```typescript
+ sdk.setChatId('conversation-123');
+ // All requests will have this chatId
+ sdk.clearChatId();
+ ```
+
+ #### `sdk.getTracer()`
+
+ Get the OpenTelemetry tracer for advanced usage:
+
+ ```typescript
+ const tracer = sdk.getTracer();
+ const span = tracer.startSpan('my-operation');
+ ```
+
+ #### `sdk.createSpan(name, attributes, spanKind)`
+
+ Create a custom span:
+
+ ```typescript
+ const span = sdk.createSpan('operation-name', {
+   'custom.attribute': 'value',
+ }, SpanKind.INTERNAL);
+ ```
+
+ #### `sdk.shutdown()`
+
+ Gracefully shut down and flush all spans:
+
+ ```typescript
+ await sdk.shutdown();
+ ```
+
+ ## 🎯 Provider Detection
+
+ The SDK automatically detects AI providers based on URL patterns (a minimal sketch of the idea follows the table):
+
+ | Provider | URL Pattern | Attribute Value |
+ |----------|-------------|-----------------|
+ | OpenAI | `api.openai.com` | `openai` |
+ | Anthropic (Claude) | `api.anthropic.com` | `claude` |
+ | Cohere | `api.cohere.ai` | `cohere` |
+ | HuggingFace | `api.huggingface.co` | `huggingface` |
+ | Google AI | `generativelanguage.googleapis.com` | `google` |
+ | Others | - | `unknown` |
+
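+ Conceptually, the detection is a simple hostname match. The function below is a minimal sketch of that idea (illustrative, not the SDK's internal code); the detected value is recorded on the span as `http.provider`:
+
+ ```javascript
+ // Minimal sketch of hostname-based provider detection.
+ function detectProvider(url) {
+   const host = new URL(url).hostname;
+   if (host === 'api.openai.com') return 'openai';
+   if (host === 'api.anthropic.com') return 'claude';
+   if (host === 'api.cohere.ai') return 'cohere';
+   if (host === 'api.huggingface.co') return 'huggingface';
+   if (host === 'generativelanguage.googleapis.com') return 'google';
+   return 'unknown';
+ }
+
+ detectProvider('https://api.openai.com/v1/chat/completions'); // => 'openai'
+ ```
+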
+ ## ⚙️ Best Practices
+
+ ### 1. Meaningful ChatIds
+
+ ```javascript
+ // Good: Descriptive and unique
+ { chatId: 'user-login-flow-2024-03-15' }
+ { chatId: 'document-analysis-task-456' }
+
+ // Avoid: Generic or unclear
+ { chatId: 'test' }
+ { chatId: '1' }
+ ```
+
+ ### 2. Always Shutdown
+
+ ```javascript
+ process.on('SIGTERM', async () => {
+   await sdk.shutdown();
+   process.exit(0);
+ });
+ ```
+
+ ### 3. Error Handling
+
+ ```javascript
+ await sdk.run(async () => {
+   try {
+     const response = await fetch('/api/data');
+     if (!response.ok) {
+       throw new Error(`HTTP error! status: ${response.status}`);
+     }
+     return await response.json();
+   } catch (error) {
+     console.error('Request failed:', error);
+     throw error;
+   }
+ }, { chatId: 'error-handling-example' });
+ ```
+
+ ## 📚 Documentation
+
+ - [ARCHITECTURE.md](./ARCHITECTURE.md) - Detailed architecture
+ - [OPENTELEMETRY.md](./OPENTELEMETRY.md) - OpenTelemetry integration guide
+ - [examples/](./examples/) - Code examples
+
+ ## 🤝 Contributing
+
+ Contributions are welcome! Please feel free to submit a Pull Request.
+
+ ## 📄 License
+
+ This project is licensed under the MIT License.
+
+ ## 🆘 Support
+
+ If you encounter any issues:
+
+ 1. Check the [Issues](https://github.com/lupislabs/lupis/issues) page
+ 2. Create a new issue with detailed information
+
+ ---
+
+ **Made with ❤️ by the Lupis team**
@@ -0,0 +1,23 @@
+ import * as api from '@opentelemetry/api';
+ import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
+ import { WebTracerProvider } from '@opentelemetry/sdk-trace-web';
+ export declare class HttpInterceptor {
+     private originalFetch?;
+     private originalHttpRequest?;
+     private originalHttpsRequest?;
+     private tracer;
+     private provider;
+     private projectId;
+     private isIntercepting;
+     private zlib?;
+     constructor(tracer: api.Tracer, provider: NodeTracerProvider | WebTracerProvider, projectId: string);
+     startIntercepting(): void;
+     stopIntercepting(): void;
+     private patchFetch;
+     private patchNodeHttp;
+     private detectProvider;
+     private resolveHandler;
+     private getRawResponseText;
+     private decompressIfNeeded;
+ }
+ //# sourceMappingURL=http-interceptor.d.ts.map
@@ -0,0 +1 @@
+ {"version":3,"file":"http-interceptor.d.ts","sourceRoot":"","sources":["../src/http-interceptor.ts"],"names":[],"mappings":"AAAA,OAAO,KAAK,GAAG,MAAM,oBAAoB,CAAC;AAC1C,OAAO,EAAE,kBAAkB,EAAE,MAAM,+BAA+B,CAAC;AACnE,OAAO,EAAE,iBAAiB,EAAE,MAAM,8BAA8B,CAAC;AAMjE,qBAAa,eAAe;IAC1B,OAAO,CAAC,aAAa,CAAC,CAAe;IACrC,OAAO,CAAC,mBAAmB,CAAC,CAAM;IAClC,OAAO,CAAC,oBAAoB,CAAC,CAAM;IACnC,OAAO,CAAC,MAAM,CAAa;IAC3B,OAAO,CAAC,QAAQ,CAAyC;IACzD,OAAO,CAAC,SAAS,CAAS;IAC1B,OAAO,CAAC,cAAc,CAAkB;IACxC,OAAO,CAAC,IAAI,CAAC,CAAM;gBAEP,MAAM,EAAE,GAAG,CAAC,MAAM,EAAE,QAAQ,EAAE,kBAAkB,GAAG,iBAAiB,EAAE,SAAS,EAAE,MAAM;IAMnG,iBAAiB;IAUjB,gBAAgB;IAsBhB,OAAO,CAAC,UAAU;IA+LlB,OAAO,CAAC,aAAa;IAsKrB,OAAO,CAAC,cAAc;IAStB,OAAO,CAAC,cAAc;YAYR,kBAAkB;IAShC,OAAO,CAAC,kBAAkB;CA4B3B"}