@mtharrison/loupe 1.1.1 → 1.2.0

package/README.md CHANGED
@@ -20,6 +20,16 @@ Most tracing tools assume hosted infrastructure, persistent storage, or producti
  - cost rollups when token usage and pricing are available
  - zero external services
 
+ ## Screenshots
+
+ Conversation view with tool calls, staged traces, and session navigation:
+
+ ![Loupe conversation view](./assets/screenshot1.png)
+
+ Request view showing the captured OpenAI payload for a multi-turn tool call:
+
+ ![Loupe request view](./assets/screenshot2.png)
+
  ## Installation
 
  ```sh
@@ -38,6 +48,87 @@ Enable tracing:
  export LLM_TRACE_ENABLED=1
  ```
 
+ If your app already uses a higher-level model interface or the official OpenAI client, Loupe can wrap that directly instead of requiring manual `record*` calls.
+
+ ### `wrapOpenAIClient(client, getContext, config?)`
+
+ Wraps `client.chat.completions.create(...)` on an OpenAI-compatible client and records either an `invoke` trace or a `stream` trace based on `params.stream`.
+
+ ```ts
+ import { wrapOpenAIClient } from '@mtharrison/loupe';
+ import OpenAI from 'openai';
+
+ const client = wrapOpenAIClient(
+   new OpenAI(),
+   () => ({
+     sessionId: 'session-123',
+     rootActorId: 'support-assistant',
+     actorId: 'support-assistant',
+   }),
+ );
+
+ const completion = await client.chat.completions.create({
+   model: 'gpt-4.1',
+   messages: [{ role: 'user', content: 'Summarize the latest notes.' }],
+ });
+
+ const stream = await client.chat.completions.create({
+   model: 'gpt-4.1',
+   messages: [{ role: 'user', content: 'Stream the same summary.' }],
+   stream: true,
+ });
+
+ for await (const chunk of stream) {
+   process.stdout.write(chunk.choices?.[0]?.delta?.content || '');
+ }
+ ```
+
+ If you do not call `startServer()` yourself, the dashboard starts lazily on the first recorded trace.
+
+ When the server starts, Loupe prints the local URL:
+
+ ```text
+ [llm-trace] dashboard: http://127.0.0.1:4319
+ ```
+
+ If `4319` is already in use and you did not explicitly configure a port, Loupe falls back to another free local port and prints that URL instead.
+
+ `wrapOpenAIClient()` is structurally typed, so Loupe's runtime API does not require the OpenAI SDK for normal library usage. The repo includes `openai` as a dev dependency for the bundled demo; if your own app instantiates `new OpenAI()` or runs the published example from a consumer install, install `openai` there too.
+
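To illustrate what "structurally typed" means here, the sketch below defines a stub client that satisfies the only surface the wrapper touches, `chat.completions.create(params)`. The exact type names and response shape are assumptions for this sketch, not Loupe's published types, and no `openai` install is needed to run it:

```typescript
// Hypothetical minimal shape of an "OpenAI-compatible" client: only
// chat.completions.create(params) matters to the wrapper. This stub is
// for illustration, not Loupe's published type definitions.
type ChatParams = {
  model: string;
  messages: { role: string; content: string }[];
  stream?: boolean;
};

const stubClient = {
  chat: {
    completions: {
      async create(params: ChatParams) {
        // Echo a canned completion so the shape can be exercised offline.
        return {
          choices: [
            { message: { role: 'assistant', content: `echo: ${params.messages[0].content}` } },
          ],
        };
      },
    },
  },
};

// A structurally compatible client can be passed where the real OpenAI
// client would go, e.g. wrapOpenAIClient(stubClient, getContext).
const res = await stubClient.chat.completions.create({
  model: 'stub',
  messages: [{ role: 'user', content: 'hi' }],
});
console.log(res.choices[0].message.content); // logs "echo: hi"
```

A stub like this is also convenient in tests, where you want traced call paths exercised without real API traffic.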
+ ### `wrapChatModel(model, getContext, config?)`
+
+ Wraps any object that exposes `invoke()` and `stream()` methods and returns a traced wrapper for both.
+
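As a sketch of the shape `wrapChatModel` expects, the model below is hypothetical; only the `invoke()`/`stream()` surface matters, and the message and chunk shapes are assumptions for illustration:

```typescript
// Hypothetical model object exposing the invoke()/stream() surface that
// wrapChatModel wraps. Response and chunk shapes are illustrative only.
type Message = { role: string; content: string };

const model = {
  async invoke(messages: Message[]) {
    return { role: 'assistant', content: `replied to ${messages.length} message(s)` };
  },
  async *stream(_messages: Message[]) {
    // Yield the reply in small chunks, as a streaming model would.
    for (const chunk of ['replied ', 'in ', 'chunks']) yield { content: chunk };
  },
};

// With tracing enabled, an object of this shape could then be wrapped:
//   const traced = wrapChatModel(model, () => ({ sessionId: 's1', rootActorId: 'a', actorId: 'a' }));
const reply = await model.invoke([{ role: 'user', content: 'hi' }]);
let streamed = '';
for await (const part of model.stream([{ role: 'user', content: 'hi' }])) {
  streamed += part.content;
}
console.log(reply.content, '/', streamed);
```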
104
+ ### Runnable OpenAI Tools Demo
105
+
106
+ There is also a runnable example at `examples/openai-multiturn-tools.js` that:
107
+
108
+ - starts the Loupe dashboard eagerly
109
+ - wraps an OpenAI client with `wrapOpenAIClient()`
110
+ - runs a multi-turn conversation with tool calls
111
+ - keeps the process alive so the in-memory traces stay visible in the dashboard
112
+
113
+ From this repo, after installing this package's dev dependencies, run:
114
+
115
+ ```bash
116
+ npm install
117
+ export OPENAI_API_KEY=your-key
118
+ export LLM_TRACE_ENABLED=1
119
+ node examples/openai-multiturn-tools.js
120
+ ```
121
+
122
+ If you copy this example pattern into another app, install `openai` in that app before using `new OpenAI()`.
123
+
124
+ Supported demo environment variables: `OPENAI_MODEL`, `LLM_TRACE_PORT`, `LOUPE_OPEN_BROWSER`.
125
+
126
+ The script tries to open the dashboard automatically and prints the local URL either way. Set `LOUPE_OPEN_BROWSER=0` if you want to suppress the browser launch.
127
+
128
+ ## Low-Level Lifecycle API
129
+
130
+ If you need full control over trace boundaries, Loupe also exposes the lower-level `record*` lifecycle functions.
131
+
41
132
  Start the dashboard during app startup, then instrument a model call:
42
133
 
43
134
  ```ts
@@ -87,15 +178,7 @@ try {
  }
  ```
 
- If you do not call `startServer()` yourself, the dashboard starts lazily on the first recorded trace.
-
- When the server starts, Loupe prints the local URL:
-
- ```text
- [llm-trace] dashboard: http://127.0.0.1:4319
- ```
-
- ## Streaming
+ ### Streaming
 
  Streaming works the same way. Loupe records each chunk event, first-chunk latency, and the reconstructed final response.
 
@@ -213,7 +296,7 @@ If usage or pricing is missing, Loupe still records the trace, but cost will sho
  The local dashboard includes:
 
- - `Traces` and `Sessions` navigation
+ - session-first tree navigation
  - hierarchy-aware browsing
  - conversation, request, response, context, and stream views
  - formatted and raw JSON modes
@@ -231,7 +314,7 @@ Environment variables:
231
314
  | --- | --- | --- |
232
315
  | `LLM_TRACE_ENABLED` | `false` | Enables Loupe. |
233
316
  | `LLM_TRACE_HOST` | `127.0.0.1` | Host for the local dashboard server. |
234
- | `LLM_TRACE_PORT` | `4319` | Port for the local dashboard server. |
317
+ | `LLM_TRACE_PORT` | `4319` | Port for the local dashboard server. If unset, Loupe tries `4319` first and falls back to a free local port if it is already in use. |
235
318
  | `LLM_TRACE_MAX_TRACES` | `1000` | Maximum number of traces kept in memory. |
236
319
  | `LLM_TRACE_UI_HOT_RELOAD` | auto in local interactive dev | Enables UI rebuild + reload while developing the dashboard itself. |
237
320
 
@@ -239,7 +322,7 @@ Programmatic configuration is also available through `getLocalLLMTracer(config)`
 
  ## API
 
- The supported public API is the low-level tracer lifecycle API.
+ Loupe exposes both low-level lifecycle functions and lightweight wrappers.
 
  ### `isTraceEnabled()`
 
@@ -283,6 +366,14 @@ Marks a trace as failed and stores a serialized error payload.
 
  All of these functions forward to the singleton tracer returned by `getLocalLLMTracer()`.
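The forwarding pattern described here can be sketched as follows. The `Tracer` class and the `recordInvokeStart`/`recordInvokeEnd` names below are illustrative stand-ins, not Loupe's actual implementation; only `getLocalLLMTracer` comes from the text above:

```typescript
// Illustrative singleton-forwarding pattern: module-level functions
// delegate to one lazily created tracer instance.
class Tracer {
  events: string[] = [];
  recordInvokeStart(id: string) { this.events.push(`start:${id}`); }
  recordInvokeEnd(id: string) { this.events.push(`end:${id}`); }
}

let singleton: Tracer | undefined;
function getLocalLLMTracer(): Tracer {
  if (!singleton) singleton = new Tracer();
  return singleton;
}

// Each exported lifecycle function forwards to the shared instance,
// so callers never hold a tracer reference themselves.
const recordInvokeStart = (id: string) => getLocalLLMTracer().recordInvokeStart(id);
const recordInvokeEnd = (id: string) => getLocalLLMTracer().recordInvokeEnd(id);

recordInvokeStart('t1');
recordInvokeEnd('t1');
console.log(getLocalLLMTracer().events.join(',')); // logs "start:t1,end:t1"
```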
 
+ ### `wrapChatModel(model, getContext, config?)`
+
+ Returns a traced model wrapper for `invoke()` and `stream()`.
+
+ ### `wrapOpenAIClient(client, getContext, config?)`
+
+ Returns a traced OpenAI client wrapper for `chat.completions.create(...)`.
+
  ## HTTP Endpoints
 
  Loupe serves a small local API alongside the UI:
Two binary files changed (contents not shown).