agentracer 0.1.0

package/LICENSE ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2025 Agentracer

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,527 @@
# agentracer

Lightweight AI observability for Node.js and TypeScript. Track costs, latency, and token usage across OpenAI, Anthropic, and Gemini with a single-line change.

## Installation

```bash
npm install agentracer
```

## Quick Start

**1. Initialize once** (at app startup):

```typescript
import { init } from "agentracer";

init({
  trackerApiKey: process.env.AGENTRACER_API_KEY!,
  projectId: process.env.AGENTRACER_PROJECT_ID!,
});
```

**2. Replace your import:**

```typescript
// Before
import OpenAI from "openai";
const openai = new OpenAI();

// After
import { openai } from "agentracer/openai";
```

That's it. Every call is now tracked with cost, latency, and token usage.

## Usage

### OpenAI

```typescript
import { init } from "agentracer";
import { openai } from "agentracer/openai";

init({
  trackerApiKey: process.env.AGENTRACER_API_KEY!,
  projectId: process.env.AGENTRACER_PROJECT_ID!,
});

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
  feature_tag: "chatbot", // optional: tag this call
});

console.log(response.choices[0].message.content);
```

### Anthropic

```typescript
import { init } from "agentracer";
import { anthropic } from "agentracer/anthropic";

init({
  trackerApiKey: process.env.AGENTRACER_API_KEY!,
  projectId: process.env.AGENTRACER_PROJECT_ID!,
});

const response = await anthropic.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello!" }],
  feature_tag: "summarizer", // optional: tag this call
});

console.log(response.content[0].text);
```

### Google Gemini

```typescript
import { init } from "agentracer";
import { gemini } from "agentracer/gemini";

init({
  trackerApiKey: process.env.AGENTRACER_API_KEY!,
  projectId: process.env.AGENTRACER_PROJECT_ID!,
});

const model = gemini.getGenerativeModel({ model: "gemini-1.5-pro" });

const result = await model.generateContent({
  contents: [{ role: "user", parts: [{ text: "Hello!" }] }],
  feature_tag: "content-gen", // optional: tag this call
});

console.log(result.response.text());
```

## Custom Client Configuration

If you need to pass custom options to the underlying SDK (API key, base URL, organization, etc.), use the `Tracked*` classes instead of the default proxy exports:

### TrackedOpenAI

```typescript
import { init } from "agentracer";
import { TrackedOpenAI } from "agentracer/openai";

init({
  trackerApiKey: process.env.AGENTRACER_API_KEY!,
  projectId: process.env.AGENTRACER_PROJECT_ID!,
});

const openai = new TrackedOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  organization: "org-xxx",
  baseURL: "https://custom-endpoint.example.com/v1",
});

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
```

### TrackedAnthropic

```typescript
import { init } from "agentracer";
import { TrackedAnthropic } from "agentracer/anthropic";

init({
  trackerApiKey: process.env.AGENTRACER_API_KEY!,
  projectId: process.env.AGENTRACER_PROJECT_ID!,
});

const anthropic = new TrackedAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  baseURL: "https://custom-endpoint.example.com",
});

const response = await anthropic.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello!" }],
});
```

## Streaming

All providers support streaming. Token usage is automatically tracked after the stream completes.

### OpenAI Streaming

```typescript
import { openai } from "agentracer/openai";

const stream = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a poem" }],
  stream: true,
  feature_tag: "poet",
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
// Telemetry is sent automatically after the stream ends
```

### Anthropic Streaming

```typescript
import { anthropic } from "agentracer/anthropic";

const stream = await anthropic.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Write a poem" }],
  stream: true,
  feature_tag: "poet",
});

for await (const event of stream) {
  if (event.type === "content_block_delta") {
    process.stdout.write(event.delta.text ?? "");
  }
}
```

### Gemini Streaming

```typescript
import { gemini } from "agentracer/gemini";

const model = gemini.getGenerativeModel({ model: "gemini-1.5-pro" });

const { stream } = await model.generateContentStream("Write a poem");

for await (const chunk of stream) {
  process.stdout.write(chunk.text());
}
```

> Streaming works transparently -- usage is captured from the final chunk (OpenAI), SSE events (Anthropic), or chunk metadata (Gemini), then sent as a single telemetry event after the stream finishes.
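
The capture-after-completion idea can be pictured with a small wrapper. This is an illustrative sketch only, not agentracer's actual source: chunks are yielded through untouched, and a callback fires once iteration ends, which is where a tracker could read final usage data.

```typescript
// Illustrative only: pass every chunk through unchanged, then invoke a
// completion callback -- the point where usage could be read and reported.
async function* withCompletion<T>(
  stream: AsyncIterable<T>,
  onDone: (chunks: T[]) => void
): AsyncGenerator<T> {
  const seen: T[] = [];
  for await (const chunk of stream) {
    seen.push(chunk);
    yield chunk;
  }
  onDone(seen);
}

// Demo with a plain async generator standing in for an LLM stream:
async function* fakeStream() {
  yield "Hello, ";
  yield "world";
}

async function main() {
  let out = "";
  const tracked = withCompletion(fakeStream(), (chunks) => {
    console.log(`stream done after ${chunks.length} chunks`);
  });
  for await (const piece of tracked) out += piece;
  console.log(out);
}
main();
```

The consumer's `for await` loop is unaffected; the callback only runs after the last chunk has been delivered.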

## Feature Tags

Feature tags let you break down costs by feature (e.g., "chatbot", "summarizer", "code-review"). There are two ways to tag calls.

### Option 1: Pass directly

```typescript
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
  feature_tag: "chatbot",
});
```

### Option 2: Use `observe` for automatic tagging

Wrap a function with `observe` to automatically tag every LLM call inside it:

```typescript
import { init, observe } from "agentracer";
import { openai } from "agentracer/openai";

init({
  trackerApiKey: process.env.AGENTRACER_API_KEY!,
  projectId: process.env.AGENTRACER_PROJECT_ID!,
});

const handleChat = observe(
  async (userMessage: string) => {
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: userMessage }],
    });
    return response.choices[0].message.content;
  },
  { featureTag: "chatbot" }
);

// All LLM calls inside handleChat are tagged "chatbot"
const reply = await handleChat("What is TypeScript?");
```

`observe` uses Node.js `AsyncLocalStorage` under the hood, so it works correctly with concurrent requests -- each request gets its own tag, even in parallel.
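
The context-propagation mechanism can be sketched in a few lines with `AsyncLocalStorage`. This is a simplified stand-in, not agentracer's implementation: each wrapped function runs inside its own store, and the instrumented call reads the tag from whichever store is active.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Simplified stand-in for the mechanism behind observe():
const tagStore = new AsyncLocalStorage<string>();

const currentTag = (): string => tagStore.getStore() ?? "unknown";

function withTag<T>(tag: string, fn: () => Promise<T>): Promise<T> {
  return tagStore.run(tag, fn);
}

// A pretend instrumented call that just reports the tag it sees.
async function instrumentedCall(): Promise<string> {
  await new Promise((r) => setTimeout(r, 10)); // simulate I/O
  return currentTag();
}

// Two concurrent "requests" each keep their own tag, even across awaits.
async function main() {
  const [a, b] = await Promise.all([
    withTag("chatbot", instrumentedCall),
    withTag("summarizer", instrumentedCall),
  ]);
  console.log(a, b); // chatbot summarizer
}
main();
```

Because the store is bound to the async execution context rather than to module-level state, interleaved requests never see each other's tags.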

## Agent Runs

Track multi-step AI agent workflows as a single run, with individual step tracking:

```typescript
import { init, AgentRun } from "agentracer";
import { openai } from "agentracer/openai";

init({
  trackerApiKey: process.env.AGENTRACER_API_KEY!,
  projectId: process.env.AGENTRACER_PROJECT_ID!,
});

const run = new AgentRun({
  runName: "research-agent",
  featureTag: "research",
  endUserId: "user-123",
});

const result = await run.execute(async () => {
  // Step 1: Plan
  const plan = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Plan a research strategy for quantum computing" }],
  });

  // Step 2: Execute
  const research = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "user", content: "Research quantum computing" },
      { role: "assistant", content: plan.choices[0].message.content! },
      { role: "user", content: "Now execute the research plan" },
    ],
  });

  // Step 3: Summarize
  const summary = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "user", content: `Summarize: ${research.choices[0].message.content}` },
    ],
  });

  return summary.choices[0].message.content;
});
```

Each LLM call inside `run.execute()` is automatically:

- Tagged with the run's `featureTag`
- Linked to the run via `runId`
- Recorded as a numbered step with its own token/latency data

### AgentRun Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `runName` | `string` | - | Human-readable name for the run |
| `featureTag` | `string` | `"unknown"` | Feature tag applied to all calls |
| `endUserId` | `string` | - | User ID for per-user cost tracking |
| `runId` | `string` | auto-generated UUID | Custom run ID |

## Manual Tracking

For providers not directly supported, or for custom tracking scenarios, use `track`:

```typescript
import { init, track } from "agentracer";

init({
  trackerApiKey: process.env.AGENTRACER_API_KEY!,
  projectId: process.env.AGENTRACER_PROJECT_ID!,
});

const start = Date.now();

// ... your LLM call here ...

await track({
  model: "gpt-4o",
  inputTokens: 150,
  outputTokens: 50,
  latencyMs: Date.now() - start,
  featureTag: "search",
  provider: "openai",
});
```

### track() Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `model` | `string` | required | Model name |
| `inputTokens` | `number` | required | Tokens sent to the model |
| `outputTokens` | `number` | required | Tokens received from the model |
| `latencyMs` | `number` | required | Round-trip time in milliseconds |
| `featureTag` | `string` | from context or `"unknown"` | Which feature made the call |
| `provider` | `string` | `"custom"` | LLM provider name |
| `cachedTokens` | `number` | `0` | Cached input tokens |
| `success` | `boolean` | `true` | Whether the call succeeded |
| `errorType` | `string` | - | Error class name on failure |
| `endUserId` | `string` | - | User ID for per-user tracking |
| `runId` | `string` | auto from AgentRun | Agent run ID |
| `stepIndex` | `number` | auto from AgentRun | Step number within run |

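To show what a consumer of these numbers might do with them, here is a rough cost calculation from token counts. The per-million-token prices below are made-up placeholders, not Agentracer's pricing data:

```typescript
// Hypothetical per-million-token prices in USD -- placeholders only.
const PRICES: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5, output: 10 },
  "gpt-4o-mini": { input: 0.15, output: 0.6 },
};

function costUsd(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICES[model];
  if (!p) return 0; // unknown model: report zero rather than guess
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}

// 150 input + 50 output tokens on gpt-4o:
console.log(costUsd("gpt-4o", 150, 50)); // 0.000875
```

This is the kind of math a dashboard applies to the `inputTokens`/`outputTokens` values that `track` reports.
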
## Express Example

```typescript
import express from "express";
import { init, observe } from "agentracer";
import { openai } from "agentracer/openai";

init({
  trackerApiKey: process.env.AGENTRACER_API_KEY!,
  projectId: process.env.AGENTRACER_PROJECT_ID!,
  environment: process.env.NODE_ENV ?? "development",
});

const app = express();
app.use(express.json());

const handleChat = observe(
  async (message: string) => {
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: message }],
    });
    return response.choices[0].message.content;
  },
  { featureTag: "chatbot" }
);

const handleSummary = observe(
  async (text: string) => {
    const response = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: `Summarize: ${text}` }],
    });
    return response.choices[0].message.content;
  },
  { featureTag: "summarizer" }
);

app.post("/chat", async (req, res) => {
  const reply = await handleChat(req.body.message);
  res.json({ reply });
});

app.post("/summarize", async (req, res) => {
  const summary = await handleSummary(req.body.text);
  res.json({ summary });
});

app.listen(3000);
```

## Next.js Example

```typescript
// app/api/chat/route.ts
import { init, observe } from "agentracer";
import { openai } from "agentracer/openai";
import { NextResponse } from "next/server";

init({
  trackerApiKey: process.env.AGENTRACER_API_KEY!,
  projectId: process.env.AGENTRACER_PROJECT_ID!,
  environment: process.env.NODE_ENV,
});

const chat = observe(
  async (message: string) => {
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: message }],
    });
    return response.choices[0].message.content;
  },
  { featureTag: "chatbot" }
);

export async function POST(req: Request) {
  const { message } = await req.json();
  const reply = await chat(message);
  return NextResponse.json({ reply });
}
```

## Configuration

```typescript
init({
  // Required
  trackerApiKey: "your-api-key",
  projectId: "your-project-id",

  // Optional
  environment: "production", // default: "production"
  host: "https://api.agentracer.dev", // default: Agentracer cloud
  debug: false, // default: false -- logs payloads to console
  enabled: true, // default: true -- set false to disable tracking
});
```

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `trackerApiKey` | `string` | required | Your Agentracer API key |
| `projectId` | `string` | required | Your project ID |
| `environment` | `string` | `"production"` | Environment label (production, staging, development) |
| `host` | `string` | `"https://api.agentracer.dev"` | API endpoint |
| `debug` | `boolean` | `false` | Log telemetry payloads to console |
| `enabled` | `boolean` | `true` | Set to `false` to disable all tracking |

## What We Track

Every LLM call sends a single lightweight payload:

| Field | Description |
|-------|-------------|
| `project_id` | Your project identifier |
| `provider` | LLM provider (openai, anthropic, gemini, custom) |
| `model` | Model name (gpt-4o, claude-sonnet-4-20250514, etc.) |
| `feature_tag` | Which feature made the call |
| `input_tokens` | Tokens sent to the model |
| `output_tokens` | Tokens received from the model |
| `cached_tokens` | Cached input tokens (prompt cache hits) |
| `latency_ms` | Round-trip time in milliseconds |
| `success` | Whether the call succeeded |
| `error_type` | Error class name (on failure) |
| `environment` | Environment label |
| `run_id` | Agent run ID (when inside AgentRun.execute) |
| `step_index` | Step number within an agent run |
| `end_user_id` | End user identifier (for per-user cost tracking) |

We never log prompts, responses, or any user data. Just counts and timing.
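
For concreteness, one event might look like the following. The field names come from the table above; the exact wire format and values are assumptions for illustration:

```typescript
// Hypothetical event shaped after the fields above -- wire format assumed.
const event = {
  project_id: "proj_abc123",
  provider: "openai",
  model: "gpt-4o",
  feature_tag: "chatbot",
  input_tokens: 150,
  output_tokens: 50,
  cached_tokens: 0,
  latency_ms: 840,
  success: true,
  error_type: null,
  environment: "production",
  run_id: null,
  step_index: null,
  end_user_id: "user-123",
};

// A few hundred bytes per call, with no prompt or response text anywhere.
console.log(JSON.stringify(event).length);
```
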

## Troubleshooting

### Calls are not showing up in the dashboard

1. Verify your API key and project ID are correct.
2. Make sure `init()` is called before any LLM calls.
3. Enable debug mode to inspect payloads:

```typescript
init({
  trackerApiKey: "...",
  projectId: "...",
  debug: true,
});
```

4. Check that `enabled` is not set to `false`.

### TypeScript errors with `feature_tag`

The `feature_tag` parameter is an Agentracer extension, not part of the official OpenAI/Anthropic SDK types. It is stripped before the call is forwarded to the provider. If you get type errors, you can cast the params:

```typescript
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
  feature_tag: "chatbot",
} as any);
```

Or use `observe` for automatic tagging instead.

### Does telemetry block my LLM calls?

No -- telemetry is sent asynchronously with a 2-second timeout, and failures are silently ignored. Your application is never blocked.
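
That fire-and-forget behaviour can be sketched like this (illustrative only, assuming the 2-second cap described above):

```typescript
// Race the send against a 2 s timer and swallow any failure, so the
// caller never waits past the cap and never sees an error.
async function fireAndForget(send: () => Promise<void>, timeoutMs = 2000): Promise<void> {
  const timer = new Promise<never>((_, reject) => {
    const t = setTimeout(() => reject(new Error("telemetry timeout")), timeoutMs);
    // In Node, unref so a pending send never keeps the process alive.
    (t as any).unref?.();
  });
  try {
    await Promise.race([send(), timer]);
  } catch {
    // Intentionally ignored: telemetry must never affect the app.
  }
}

// A failing send resolves quietly instead of surfacing the error:
fireAndForget(() => Promise.reject(new Error("network down")));
```
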

## License

MIT
@@ -0,0 +1,10 @@
/** @internal Test-only: inject a mock client */
declare function _setClientForTesting(client: any): void;
declare class TrackedAnthropic {
  private _proxy;
  constructor(options?: any);
  get messages(): any;
}
declare const anthropic: any;

export { TrackedAnthropic, _setClientForTesting, anthropic };