aden-ts 0.1.1
- package/LICENSE +21 -0
- package/README.md +1241 -0
- package/dist/index.d.mts +2478 -0
- package/dist/index.d.ts +2478 -0
- package/dist/index.js +6744 -0
- package/dist/index.mjs +6626 -0
- package/package.json +99 -0
package/README.md
ADDED
# Aden

**LLM Observability & Cost Control SDK**

Aden automatically tracks every LLM API call in your application—usage, latency, costs—and gives you real-time controls to prevent budget overruns. Works with OpenAI, Anthropic, and Google Gemini.

```typescript
import { instrument } from "aden";
import OpenAI from "openai";

// One line to start tracking everything
await instrument({ sdks: { OpenAI } });

// Use your SDK normally - metrics collected automatically
const openai = new OpenAI();
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
```

---
## Table of Contents

- [Why Aden?](#why-aden)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Sending Metrics to Your Backend](#sending-metrics-to-your-backend)
- [Cost Control](#cost-control)
  - [Setting Up the Control Server](#setting-up-the-control-server)
  - [Control Actions](#control-actions)
  - [Hybrid Enforcement](#hybrid-enforcement)
- [Multi-Provider Support](#multi-provider-support)
- [What Metrics Are Collected?](#what-metrics-are-collected)
- [Metric Emitters](#metric-emitters)
  - [File-based Logging](#file-based-logging)
- [Usage Normalization](#usage-normalization)
- [Call Relationship Tracking](#call-relationship-tracking)
- [Framework Integrations](#framework-integrations)
- [Advanced Configuration](#advanced-configuration)
  - [Logging Configuration](#logging-configuration)
- [API Reference](#api-reference)
- [Examples](#examples)
- [Troubleshooting](#troubleshooting)

---
## Why Aden?

Building with LLMs is expensive and unpredictable:

- **No visibility**: You don't know which features or users consume the most tokens
- **Runaway costs**: One bug or bad prompt can blow through your budget in minutes
- **No control**: Once a request is sent, you can't stop it

Aden solves these problems:

| Problem                             | Aden Solution                                              |
| ----------------------------------- | ---------------------------------------------------------- |
| No visibility into LLM usage        | Automatic metric collection for every API call             |
| Unpredictable costs                 | Real-time budget tracking and enforcement                  |
| No per-user limits                  | Context-based controls (per user, per feature, per tenant) |
| Expensive models used unnecessarily | Automatic model degradation when approaching limits        |
| Alert fatigue                       | Smart alerts based on spend thresholds                     |

---
## Installation

```bash
npm install aden
```

You'll also need at least one LLM SDK:

```bash
# Install the SDKs you use
npm install openai                 # For OpenAI/GPT models
npm install @anthropic-ai/sdk      # For Anthropic/Claude models
npm install @google/generative-ai  # For Google Gemini models (classic SDK)
npm install @google/genai          # For Google GenAI (new SDK for Google ADK)
```

---
## Quick Start

### Step 1: Add Instrumentation

Add this **once** at your application startup (before creating any LLM clients):

```typescript
// app.ts or index.ts
import { instrument, createConsoleEmitter } from "aden";
import OpenAI from "openai";

await instrument({
  emitMetric: createConsoleEmitter({ pretty: true }),
  sdks: { OpenAI },
});
```

### Step 2: Use Your SDK Normally

That's it! Every API call is now tracked:

```typescript
const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Explain quantum computing" }],
});

// Console output:
// [aden] openai/gpt-4o | 247 tokens | 1,234ms | $0.00742
```

### Step 3: Clean Up on Shutdown

```typescript
import { uninstrument } from "aden";

// In your shutdown handler
await uninstrument();
```

---
## Sending Metrics to Your Backend

For production, send metrics to your backend instead of the console:

### Option A: HTTP Endpoint

```typescript
import { instrument, createHttpTransport } from "aden";
import OpenAI from "openai";

const transport = createHttpTransport({
  apiUrl: "https://api.yourcompany.com/v1/metrics",
  apiKey: process.env.METRICS_API_KEY,
});

await instrument({
  emitMetric: transport.emit,
  sdks: { OpenAI },
});

// On shutdown
await transport.stop();
```

### Option B: Aden Control Server

For real-time cost control (budgets, throttling, model degradation), connect to an Aden control server:

```typescript
import { instrument } from "aden";
import OpenAI from "openai";

await instrument({
  apiKey: process.env.ADEN_API_KEY, // Your Aden API key
  serverUrl: process.env.ADEN_API_URL, // Control server URL
  sdks: { OpenAI },
});
```

This enables all the [Cost Control](#cost-control) features described below.

### Option C: Custom Handler

```typescript
await instrument({
  emitMetric: async (event) => {
    // event contains: model, tokens, latency, cost, etc.
    await myDatabase.insert("llm_metrics", event);
  },
  sdks: { OpenAI },
});
```

---
## Cost Control

Aden's cost control system lets you set budgets, throttle requests, and automatically downgrade to cheaper models—all in real-time.

### Setting Up the Control Server

1. **Get an API key** from your Aden control server (or deploy your own)

2. **Set environment variables**:

   ```bash
   ADEN_API_KEY=your-api-key
   ADEN_API_URL=https://kube.acho.io  # Optional; has a default
   ```

3. **Instrument with cost control**:

   ```typescript
   import { instrument } from "aden";
   import OpenAI from "openai";

   await instrument({
     apiKey: process.env.ADEN_API_KEY,
     sdks: { OpenAI },

     // Track usage per user (required for per-user budgets)
     getContextId: () => getCurrentUserId(),

     // Get notified when alerts trigger
     onAlert: (alert) => {
       console.warn(`[${alert.level}] ${alert.message}`);
       // Send to Slack, PagerDuty, etc.
     },
   });
   ```
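
For example, `onAlert` can forward alerts to a Slack incoming webhook. This is a minimal sketch, not part of the Aden API: `SLACK_WEBHOOK_URL` is your own webhook and only the `level` and `message` fields shown above are relied on.

```typescript
import { instrument } from "aden";
import OpenAI from "openai";

// Hypothetical forwarder - SLACK_WEBHOOK_URL is an assumption, not an Aden setting
async function forwardAlertToSlack(alert: { level: string; message: string }) {
  await fetch(process.env.SLACK_WEBHOOK_URL!, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: `[${alert.level}] ${alert.message}` }),
  });
}

await instrument({
  apiKey: process.env.ADEN_API_KEY,
  sdks: { OpenAI },
  onAlert: (alert) => {
    // Fire-and-forget so alert delivery never delays the LLM call
    forwardAlertToSlack(alert).catch((err) =>
      console.error("Failed to forward alert:", err)
    );
  },
});
```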
### Control Actions

The control server can apply these actions to requests:

| Action       | What It Does                         | Use Case                   |
| ------------ | ------------------------------------ | -------------------------- |
| **allow**    | Request proceeds normally            | Default when within limits |
| **block**    | Request is rejected with an error    | Budget exhausted           |
| **throttle** | Request is delayed before proceeding | Rate limiting              |
| **degrade**  | Request uses a cheaper model         | Approaching budget limit   |
| **alert**    | Request proceeds, notification sent  | Warning threshold reached  |

### Example: Budget with Degradation

Configure on your control server:

```bash
# Set a $10 budget for user_123
curl -X POST https://control-server/v1/control/policy/budgets \
  -H "Authorization: Bearer $ADEN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "context_id": "user_123",
    "limit_usd": 10.00,
    "action_on_exceed": "block"
  }'

# Degrade gpt-4o to gpt-4o-mini when at 50% budget
curl -X POST https://control-server/v1/control/policy/degradations \
  -H "Authorization: Bearer $ADEN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "from_model": "gpt-4o",
    "to_model": "gpt-4o-mini",
    "trigger": "budget_threshold",
    "threshold_percent": 50,
    "context_id": "user_123"
  }'

# Alert when budget exceeds 80%
curl -X POST https://control-server/v1/control/policy/alerts \
  -H "Authorization: Bearer $ADEN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "context_id": "user_123",
    "trigger": "budget_threshold",
    "threshold_percent": 80,
    "level": "warning",
    "message": "User approaching budget limit"
  }'
```

**What happens in your app**:

```typescript
// User has spent $0 (0% of $10 budget)
await openai.chat.completions.create({ model: "gpt-4o", ... });
// → Uses gpt-4o ✓

// User has spent $5 (50% of budget)
await openai.chat.completions.create({ model: "gpt-4o", ... });
// → Automatically uses gpt-4o-mini instead (degraded)

// User has spent $8 (80% of budget)
await openai.chat.completions.create({ model: "gpt-4o", ... });
// → Uses gpt-4o-mini, triggers alert callback

// User has spent $10+ (100% of budget)
await openai.chat.completions.create({ model: "gpt-4o", ... });
// → Throws RequestCancelledError: "Budget exceeded"
```
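
When a request is blocked, catch the error so users get a graceful response instead of a crash. A minimal sketch, assuming `RequestCancelledError` is exported from `aden` (the error shown above) and `openai` is an instrumented client:

```typescript
import { RequestCancelledError } from "aden"; // assumed export

async function safeChat(prompt: string) {
  try {
    return await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: prompt }],
    });
  } catch (err) {
    if (err instanceof RequestCancelledError) {
      // Budget exhausted: degrade gracefully instead of crashing
      return null;
    }
    throw err; // unrelated failure - rethrow
  }
}
```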
### Hybrid Enforcement

For high-concurrency scenarios, enable hybrid enforcement to combine fast local checks with accurate server-side validation:

```typescript
const agent = createControlAgent({
  serverUrl: "https://kube.acho.io",
  apiKey: process.env.ADEN_API_KEY,

  // Enable hybrid enforcement
  enableHybridEnforcement: true,

  // Start server validation at 80% budget usage (default)
  serverValidationThreshold: 80,

  // Timeout for server validation (default: 2000ms)
  serverValidationTimeoutMs: 2000,

  // Force validation when remaining budget < $1 (default)
  adaptiveThresholdEnabled: true,
  adaptiveMinRemainingUsd: 1.0,

  // Probabilistic sampling to reduce latency
  samplingEnabled: true,
  samplingBaseRate: 0.1, // 10% at threshold
  samplingFullValidationPercent: 95, // 100% at 95% usage

  // Hard limit safety net (110% = soft limit + 10% buffer)
  maxExpectedOverspendPercent: 10,
});
```

**How it works:**

- **Below threshold (0-80%)**: Fast local-only enforcement
- **At threshold (80-95%)**: Probabilistic server validation (10% → 100%)
- **Above 95%**: All requests validated with server
- **Hard limit**: Always block if projected spend exceeds 110% of budget
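
In other words, the probability of a server round-trip ramps up as budget usage climbs. The sketch below is illustrative only (not Aden's internal code) and assumes a linear ramp between `serverValidationThreshold` and `samplingFullValidationPercent`:

```typescript
// Linear ramp from samplingBaseRate at the threshold to 1.0 at full validation
function serverValidationProbability(
  usagePercent: number,
  threshold = 80, // serverValidationThreshold
  fullAt = 95, // samplingFullValidationPercent
  baseRate = 0.1 // samplingBaseRate
): number {
  if (usagePercent < threshold) return 0; // local-only enforcement
  if (usagePercent >= fullAt) return 1; // every request hits the server
  const progress = (usagePercent - threshold) / (fullAt - threshold);
  return baseRate + (1 - baseRate) * progress;
}

serverValidationProbability(70); // 0    - fast local checks only
serverValidationProbability(80); // 0.1  - 10% of requests validated
serverValidationProbability(95); // 1    - 100% validated
```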

---

## Multi-Provider Support

Aden works with all major LLM providers:

```typescript
import { instrument, instrumentGenai } from "aden";
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";
import { GoogleGenerativeAI } from "@google/generative-ai";

// Instrument all providers at once
await instrument({
  apiKey: process.env.ADEN_API_KEY,
  sdks: {
    OpenAI,
    Anthropic,
    GoogleGenerativeAI,
  },
});

// Also instrument the new Google GenAI SDK (for Google ADK)
await instrumentGenai({ emitMetric: myEmitter });

// All SDKs are now tracked
const openai = new OpenAI();
const anthropic = new Anthropic();
const gemini = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
```

### OpenAI

```typescript
const openai = new OpenAI();

// Chat completions
await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});

// Streaming
const stream = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Tell me a story" }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
// Metrics emitted when stream completes
```

### Anthropic

```typescript
const anthropic = new Anthropic();

await anthropic.messages.create({
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello" }],
});
```

### Google Gemini (Classic SDK)

```typescript
const gemini = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = gemini.getGenerativeModel({ model: "gemini-2.0-flash" });

await model.generateContent("Explain quantum computing");
```

### Google GenAI (New SDK)

The new `@google/genai` SDK is used by Google ADK and other modern Google AI tools:

```typescript
import { instrumentGenai } from "aden";
import { GoogleGenAI } from "@google/genai";

// Instrument the new SDK
await instrumentGenai({ emitMetric: myEmitter });

// Use as normal
const client = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const response = await client.models.generateContent({
  model: "gemini-2.0-flash",
  contents: "Explain quantum computing",
});
```

---
## What Metrics Are Collected?

Every LLM API call generates a `MetricEvent`:

```typescript
interface MetricEvent {
  // Identity
  trace_id: string; // Unique ID for this request
  span_id: string; // Span ID (OTel compatible)
  request_id: string | null; // Provider's request ID

  // Request details
  provider: "openai" | "anthropic" | "gemini";
  model: string; // e.g., "gpt-4o", "claude-3-5-sonnet"
  stream: boolean;
  timestamp: string; // ISO timestamp

  // Performance
  latency_ms: number;
  status_code?: number;
  error?: string;

  // Token usage
  input_tokens: number;
  output_tokens: number;
  total_tokens: number;
  cached_tokens: number; // Prompt cache hits
  reasoning_tokens: number; // For o1/o3 models

  // Rate limits (when available)
  rate_limit_remaining_requests?: number;
  rate_limit_remaining_tokens?: number;

  // Tool usage
  tool_call_count?: number;
  tool_names?: string; // Comma-separated

  // Call relationship (when enabled)
  parent_span_id?: string;
  call_sequence?: number;
  agent_stack?: string[];
  call_site_file?: string;
  call_site_line?: number;

  // Custom metadata
  metadata?: Record<string, string>;
}
```
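
Any emitter receives these events, so simple in-process aggregation is straightforward. A minimal sketch that tallies calls and tokens per provider/model pair:

```typescript
import { instrument } from "aden";
import OpenAI from "openai";

const totals = new Map<string, { calls: number; tokens: number }>();

await instrument({
  emitMetric: (event) => {
    const key = `${event.provider}/${event.model}`;
    const t = totals.get(key) ?? { calls: 0, tokens: 0 };
    t.calls += 1;
    t.tokens += event.total_tokens;
    totals.set(key, t);
  },
  sdks: { OpenAI },
});

// Later, e.g. in a debug endpoint or on shutdown:
for (const [model, t] of totals) {
  console.log(`${model}: ${t.calls} calls, ${t.tokens} tokens`);
}
```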

---

## Metric Emitters

Emitters determine where metrics go. You can use built-in emitters or create custom ones.

### Built-in Emitters

```typescript
import {
  createConsoleEmitter, // Log to console (development)
  createHttpTransport, // Send to HTTP endpoint
  createBatchEmitter, // Batch before sending
  createMultiEmitter, // Send to multiple destinations
  createFilteredEmitter, // Filter events
  createTransformEmitter, // Transform events
  createJsonFileEmitter, // Write to JSON file
  createMemoryEmitter, // Store in memory (testing)
  createNoopEmitter, // Discard all events
} from "aden";
```

### Console Emitter (Development)

```typescript
await instrument({
  emitMetric: createConsoleEmitter({ pretty: true }),
  sdks: { OpenAI },
});

// Output:
// [aden] openai/gpt-4o | 247 tokens | 1,234ms
```

### HTTP Transport (Production)

```typescript
const transport = createHttpTransport({
  apiUrl: "https://api.yourcompany.com/v1/metrics",
  apiKey: process.env.METRICS_API_KEY,

  // Batching (optional)
  batchSize: 50, // Events per batch
  flushInterval: 5000, // ms between flushes

  // Reliability (optional)
  maxRetries: 3,
  timeout: 10000,
  maxQueueSize: 10000,

  // Error handling (optional)
  onSendError: (error, batch) => {
    console.error(`Failed to send ${batch.length} metrics:`, error);
  },
});

await instrument({
  emitMetric: transport.emit,
  sdks: { OpenAI },
});

// Graceful shutdown
process.on("SIGTERM", async () => {
  await transport.stop(); // Flushes remaining events
  process.exit(0);
});
```

### Multiple Destinations

```typescript
await instrument({
  emitMetric: createMultiEmitter([
    createConsoleEmitter({ pretty: true }), // Log locally
    transport.emit, // Send to backend
  ]),
  sdks: { OpenAI },
});
```

### Filtering Events

```typescript
await instrument({
  emitMetric: createFilteredEmitter(
    transport.emit,
    (event) => event.total_tokens > 100 // Only large requests
  ),
  sdks: { OpenAI },
});
```
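
The batching and transform factories compose the same way. A sketch, assuming the signatures from the [API Reference](#api-reference) (`createBatchEmitter(handler, options?)`, `createTransformEmitter(emitter, transform)`); the option names in the second argument are assumptions, so check the type definitions:

```typescript
import { instrument, createBatchEmitter, createTransformEmitter } from "aden";
import OpenAI from "openai";

// Receive events in batches instead of one at a time
const batched = createBatchEmitter(
  async (events) => {
    await fetch("https://api.yourcompany.com/v1/metrics/batch", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(events),
    });
  },
  { batchSize: 50, flushInterval: 5000 } // assumed option names
);

await instrument({
  // Strip per-request metadata before events leave the process
  emitMetric: createTransformEmitter(batched, (event) => ({
    ...event,
    metadata: undefined,
  })),
  sdks: { OpenAI },
});
```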
### Custom Emitter

```typescript
await instrument({
  emitMetric: async (event) => {
    // Calculate cost
    const cost = calculateCost(
      event.model,
      event.input_tokens,
      event.output_tokens
    );

    // Store in your database
    await db.llmMetrics.create({
      ...event,
      cost_usd: cost,
      user_id: getCurrentUserId(),
    });

    // Check for anomalies
    if (event.latency_ms > 30000) {
      alertOps(`Slow LLM call: ${event.latency_ms}ms`);
    }
  },
  sdks: { OpenAI },
});
```
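
`calculateCost` above is your own helper; Aden does not ship pricing tables. A minimal sketch with illustrative per-million-token prices (verify against your provider's current price list):

```typescript
// Hypothetical prices in USD per 1M tokens - keep these up to date yourself
const PRICING_USD_PER_1M: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5, output: 10 },
  "gpt-4o-mini": { input: 0.15, output: 0.6 },
};

function calculateCost(
  model: string,
  inputTokens: number,
  outputTokens: number
): number {
  const price = PRICING_USD_PER_1M[model];
  if (!price) return 0; // unknown model: report zero rather than guess
  return (inputTokens * price.input + outputTokens * price.output) / 1_000_000;
}
```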
### File-based Logging

Write metrics to local JSONL files for offline analysis, debugging, or compliance:

```typescript
import { instrument, createFileEmitter, MetricFileLogger } from "aden";
import OpenAI from "openai";

// Option 1: Use the emitter factory
await instrument({
  emitMetric: createFileEmitter({ logDir: "./meter_logs" }),
  sdks: { OpenAI },
});

// Option 2: Use the logger class directly
const logger = new MetricFileLogger({ logDir: "./meter_logs" });

// Write specific event types
await logger.writeLLMEvent({
  sessionId: "session_123",
  inputTokens: 100,
  outputTokens: 50,
  model: "gpt-4o",
  latencyMs: 1234,
});

await logger.writeTTSEvent({
  sessionId: "session_123",
  characters: 500,
  model: "tts-1",
});

await logger.writeSTTEvent({
  sessionId: "session_123",
  audioSeconds: 30,
  model: "whisper-1",
});
```

Files are organized by date and session:

```
meter_logs/
  2024-01-15/
    session_abc123.jsonl
    session_def456.jsonl
  2024-01-16/
    ...
```

You can configure the log directory via environment variable:

```bash
export METER_LOG_DIR=/var/log/aden
```
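
Because each line is a standalone JSON object, offline analysis needs no special tooling. A sketch that sums tokens for one day's sessions, assuming event payloads shaped like the `writeLLMEvent` calls above:

```typescript
import { readdir, readFile } from "node:fs/promises";
import { join } from "node:path";

async function totalTokensForDay(logDir: string, day: string): Promise<number> {
  let total = 0;
  for (const file of await readdir(join(logDir, day))) {
    const contents = await readFile(join(logDir, day, file), "utf8");
    for (const line of contents.split("\n")) {
      if (!line.trim()) continue;
      const event = JSON.parse(line);
      // Field names assumed to match the writeLLMEvent payloads above
      total += (event.inputTokens ?? 0) + (event.outputTokens ?? 0);
    }
  }
  return total;
}

console.log(await totalTokensForDay("./meter_logs", "2024-01-15"));
```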

---

## Usage Normalization

Aden normalizes token usage across providers into a consistent format:

```typescript
import {
  normalizeUsage,
  normalizeOpenAIUsage,
  normalizeAnthropicUsage,
  normalizeGeminiUsage,
} from "aden";

// Auto-detect provider format
const usage = normalizeUsage(response.usage, "openai");
// → { input_tokens, output_tokens, total_tokens, reasoning_tokens, cached_tokens }

// Provider-specific normalizers
const openaiUsage = normalizeOpenAIUsage(response.usage);
const anthropicUsage = normalizeAnthropicUsage(response.usage);
const geminiUsage = normalizeGeminiUsage(response.usageMetadata);
```

### Normalized Usage Format

```typescript
interface NormalizedUsage {
  input_tokens: number;
  output_tokens: number;
  total_tokens: number;
  reasoning_tokens: number; // For o1/o3 models
  cached_tokens: number; // Prompt cache hits
}
```

### Provider Field Mappings

| Provider  | Input Tokens       | Output Tokens          | Cached Tokens                         |
| --------- | ------------------ | ---------------------- | ------------------------------------- |
| OpenAI    | `prompt_tokens`    | `completion_tokens`    | `prompt_tokens_details.cached_tokens` |
| Anthropic | `input_tokens`     | `output_tokens`        | `cache_read_input_tokens`             |
| Gemini    | `promptTokenCount` | `candidatesTokenCount` | `cachedContentTokenCount`             |
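
For example, an OpenAI usage object maps through `normalizeOpenAIUsage` per the table above; the output shown is illustrative:

```typescript
const raw = {
  prompt_tokens: 120,
  completion_tokens: 80,
  total_tokens: 200,
  prompt_tokens_details: { cached_tokens: 64 },
};

const normalized = normalizeOpenAIUsage(raw);
// → roughly: { input_tokens: 120, output_tokens: 80, total_tokens: 200,
//              reasoning_tokens: 0, cached_tokens: 64 }
```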

---

## Call Relationship Tracking

Track relationships between LLM calls—useful for multi-agent systems, conversation threads, and debugging.

### Automatic Session Tracking

Related calls are automatically grouped:

```typescript
// These calls share a session automatically
await openai.chat.completions.create({ model: "gpt-4o", ... });
// → trace_id: "abc", call_sequence: 1

await openai.chat.completions.create({ model: "gpt-4o", ... });
// → trace_id: "abc", call_sequence: 2, parent_span_id: <first call>
```

### Named Agent Tracking

For multi-agent systems, track which agent made each call:

```typescript
import { withAgent } from "aden";

await withAgent("ResearchAgent", async () => {
  await openai.chat.completions.create({ ... });
  // → agent_stack: ["ResearchAgent"]

  await withAgent("WebSearchAgent", async () => {
    await openai.chat.completions.create({ ... });
    // → agent_stack: ["ResearchAgent", "WebSearchAgent"]
  });
});
```

### Request Context

Isolate sessions per HTTP request:

```typescript
import { enterMeterContext } from "aden";

app.post("/chat", async (req, res) => {
  // Create isolated session for this request
  enterMeterContext({
    sessionId: req.headers["x-request-id"],
    metadata: { userId: req.userId },
  });

  // All LLM calls here share this session
  const response = await openai.chat.completions.create({ ... });
  // → metadata includes userId
});
```
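
If you prefer an explicitly scoped context, `withMeterContextAsync` (see the [API Reference](#api-reference)) runs a callback in an isolated context. A sketch, assuming its options mirror `enterMeterContext` and reusing the `app`/`openai` setup above:

```typescript
import { withMeterContextAsync } from "aden";

app.post("/chat", async (req, res) => {
  const response = await withMeterContextAsync(
    () =>
      openai.chat.completions.create({
        model: "gpt-4o",
        messages: [{ role: "user", content: req.body.prompt }],
      }),
    {
      sessionId: req.headers["x-request-id"] as string,
      metadata: { userId: req.userId },
    }
  );
  res.json(response);
});
```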
### Disabling Relationship Tracking

For high-throughput scenarios:

```typescript
await instrument({
  emitMetric: myEmitter,
  sdks: { OpenAI },
  trackCallRelationships: false, // Slight performance boost
});
```

---
## Framework Integrations

### Vercel AI SDK

```typescript
import { instrument, instrumentFetch } from "aden";

// Instrument fetch for Vercel AI SDK
instrumentFetch({
  emitMetric: myEmitter,
  urlPatterns: [/api\.openai\.com/, /api\.anthropic\.com/],
});

// Now Vercel AI SDK calls are tracked
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

await generateText({
  model: openai("gpt-4o"),
  prompt: "Hello!",
});
```

### LangChain

```typescript
import { instrument } from "aden";
import OpenAI from "openai";

// Instrument the underlying SDK
await instrument({
  emitMetric: myEmitter,
  sdks: { OpenAI },
});

// LangChain uses OpenAI under the hood
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o" });
await model.invoke("Hello!");
// → Metrics captured automatically
```

### Express.js Middleware

```typescript
import express from "express";
import { enterMeterContext } from "aden";

const app = express();

// Add context to each request
app.use((req, res, next) => {
  enterMeterContext({
    sessionId: req.headers["x-request-id"] as string,
    metadata: {
      userId: req.userId,
      endpoint: req.path,
    },
  });
  next();
});

app.post("/chat", async (req, res) => {
  // LLM calls here include request metadata
  const response = await openai.chat.completions.create({ ... });
  res.json(response);
});
```

---
## Advanced Configuration

### Full Options Reference

```typescript
await instrument({
  // === Metrics Destination ===
  emitMetric: myEmitter, // Required unless apiKey is set

  // === Control Server (enables cost control) ===
  apiKey: "aden_xxx", // Your Aden API key
  serverUrl: "https://...", // Control server URL (optional)
  failOpen: true, // Allow requests if server is down (default: true)

  // === Context Tracking ===
  getContextId: () => getUserId(), // For per-user budgets
  trackCallRelationships: true, // Track call hierarchies (default: true)

  // === Alerts ===
  onAlert: (alert) => {
    // Callback when alert triggers
    console.warn(`[${alert.level}] ${alert.message}`);
  },

  // === SDK Classes ===
  sdks: {
    // SDK classes to instrument
    OpenAI,
    Anthropic,
    GoogleGenerativeAI,
  },

  // === Advanced ===
  generateSpanId: () => uuid(), // Custom span ID generator
  beforeRequest: async (params, context) => {
    // Custom pre-request logic
    return { action: "proceed" };
  },
  requestMetadata: {
    // Passed to beforeRequest hook
    environment: "production",
  },
});
```

### beforeRequest Hook

Implement custom rate limiting or request modification:

```typescript
await instrument({
  emitMetric: myEmitter,
  sdks: { OpenAI },

  beforeRequest: async (params, context) => {
    // Check your own rate limits
    const allowed = await checkRateLimit(context.metadata?.userId);

    if (!allowed) {
      return { action: "cancel", reason: "Rate limit exceeded" };
    }

    // Optionally delay the request
    if (shouldThrottle()) {
      return { action: "throttle", delayMs: 1000 };
    }

    // Optionally switch to a cheaper model
    if (shouldDegrade()) {
      return {
        action: "degrade",
        toModel: "gpt-4o-mini",
        reason: "High load",
      };
    }

    return { action: "proceed" };
  },

  requestMetadata: {
    userId: getCurrentUserId(),
  },
});
```
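
`checkRateLimit`, `shouldThrottle`, and `shouldDegrade` above are your own functions. A minimal in-memory token-bucket sketch for `checkRateLimit` (per-user, 10 requests per minute; swap in Redis or similar for multi-process deployments):

```typescript
const buckets = new Map<string, { tokens: number; last: number }>();

async function checkRateLimit(
  userId: string | undefined,
  limitPerMinute = 10
): Promise<boolean> {
  if (!userId) return true; // no user context: let it through
  const now = Date.now();
  const bucket = buckets.get(userId) ?? { tokens: limitPerMinute, last: now };
  // Refill proportionally to elapsed time, capped at the limit
  const refill = ((now - bucket.last) / 60_000) * limitPerMinute;
  bucket.tokens = Math.min(limitPerMinute, bucket.tokens + refill);
  bucket.last = now;
  const allowed = bucket.tokens >= 1;
  if (allowed) bucket.tokens -= 1;
  buckets.set(userId, bucket);
  return allowed;
}
```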
### Manual Control Agent

For advanced scenarios, create the control agent manually:

```typescript
import { createControlAgent, instrument } from "aden";

const agent = createControlAgent({
  serverUrl: "https://kube.acho.io",
  apiKey: process.env.ADEN_API_KEY,

  // Polling options (for HTTP fallback)
  pollingIntervalMs: 30000,
  heartbeatIntervalMs: 10000,
  timeoutMs: 5000,

  // Behavior
  failOpen: true, // Allow if server unreachable
  getContextId: () => getUserId(),

  // Alerts
  onAlert: (alert) => {
    sendToSlack(alert);
  },
});

await agent.connect();

await instrument({
  controlAgent: agent,
  sdks: { OpenAI },
});
```

### Per-Instance Wrapping

If you need different options for different clients:

```typescript
import { makeMeteredOpenAI } from "aden";

const internalClient = makeMeteredOpenAI(new OpenAI(), {
  emitMetric: internalMetricsEmitter,
});

const customerClient = makeMeteredOpenAI(new OpenAI(), {
  emitMetric: customerMetricsEmitter,
  beforeRequest: enforceCustomerLimits,
});
```

### Logging Configuration

Control Aden's internal logging via environment variables or programmatically:

```typescript
import { configureLogging, logger, metricsLogger } from "aden";

// Set log levels programmatically
configureLogging({
  level: "debug", // Global log level
  metricsLevel: "warn", // Metrics-specific logs (optional)
});

// Or via environment variables
// ADEN_LOG_LEVEL=debug
// ADEN_METRICS_LOG_LEVEL=warn

// Use the loggers directly
logger.debug("This is a debug message");
logger.info("This is an info message");
logger.warn("This is a warning");
logger.error("This is an error");

// Metrics-specific logger
metricsLogger.debug("Metric event details");
```

**Log Levels** (in order of severity):

| Level    | Description                    |
| -------- | ------------------------------ |
| `debug`  | Detailed debugging information |
| `info`   | General operational messages   |
| `warn`   | Warning conditions             |
| `error`  | Error conditions               |
| `silent` | Suppress all logging           |

**Custom Log Handler**:

```typescript
import { configureLogging } from "aden";

configureLogging({
  level: "info",
  handler: {
    debug: (msg, ...args) => myLogger.debug(msg, ...args),
    info: (msg, ...args) => myLogger.info(msg, ...args),
    warn: (msg, ...args) => myLogger.warn(msg, ...args),
    error: (msg, ...args) => myLogger.error(msg, ...args),
  },
});
```

---
## API Reference

### Core Functions

| Function                   | Description                                 |
| -------------------------- | ------------------------------------------- |
| `instrument(options)`      | Instrument all LLM SDKs globally            |
| `uninstrument()`           | Remove instrumentation                      |
| `isInstrumented()`         | Check if instrumented                       |
| `getInstrumentedSDKs()`    | Get which SDKs are instrumented             |
| `instrumentGenai(options)` | Instrument Google GenAI SDK (@google/genai) |
| `uninstrumentGenai()`      | Remove GenAI instrumentation                |

### Emitter Factories

| Function                                     | Description                  |
| -------------------------------------------- | ---------------------------- |
| `createConsoleEmitter(options?)`             | Log to console               |
| `createHttpTransport(options)`               | Send to HTTP endpoint        |
| `createBatchEmitter(handler, options?)`      | Batch events                 |
| `createMultiEmitter(emitters)`               | Multiple destinations        |
| `createFilteredEmitter(emitter, filter)`     | Filter events                |
| `createTransformEmitter(emitter, transform)` | Transform events             |
| `createJsonFileEmitter(options)`             | Write to JSON file           |
| `createFileEmitter(options?)`                | Write to session JSONL files |
| `createMemoryEmitter()`                      | Store in memory              |
| `createNoopEmitter()`                        | Discard events               |

### Usage Normalization

| Function                              | Description                      |
| ------------------------------------- | -------------------------------- |
| `normalizeUsage(usage, provider?)`    | Normalize usage with auto-detect |
| `normalizeOpenAIUsage(usage)`         | Normalize OpenAI usage format    |
| `normalizeAnthropicUsage(usage)`      | Normalize Anthropic usage format |
| `normalizeGeminiUsage(usageMetadata)` | Normalize Gemini usage format    |

### Context Functions

| Function                              | Description              |
| ------------------------------------- | ------------------------ |
| `enterMeterContext(options?)`         | Enter a tracking context |
| `withMeterContextAsync(fn, options?)` | Run in isolated context  |
| `withAgent(name, fn)`                 | Run with named agent     |
| `pushAgent(name)` / `popAgent()`      | Manual agent stack       |
| `getCurrentContext()`                 | Get current context      |
| `setContextMetadata(key, value)`      | Set context metadata     |

### Control Agent

| Function                           | Description                 |
| ---------------------------------- | --------------------------- |
| `createControlAgent(options)`      | Create manual control agent |
| `createControlAgentEmitter(agent)` | Create emitter from agent   |

### Logging

| Function                   | Description                        |
| -------------------------- | ---------------------------------- |
| `configureLogging(config)` | Configure SDK log levels           |
| `getLoggingConfig()`       | Get current logging configuration  |
| `resetLoggingConfig()`     | Reset to default configuration     |
| `logger`                   | General SDK logger                 |
| `metricsLogger`            | Metrics-specific logger            |

### File Logger

| Class / Function             | Description                          |
| ---------------------------- | ------------------------------------ |
| `MetricFileLogger`           | Class for session JSONL file logging |
| `logger.writeLLMEvent()`     | Write LLM metric event               |
| `logger.writeTTSEvent()`     | Write TTS metric event               |
| `logger.writeSTTEvent()`     | Write STT metric event               |
| `logger.writeSessionStart()` | Write session start event            |
| `logger.writeSessionEnd()`   | Write session end event              |

### Types

```typescript
// Main types
import type {
  MetricEvent,
  MeterOptions,
  ControlPolicy,
  ControlDecision,
  AlertEvent,
  BeforeRequestResult,
  NormalizedUsage,
  LogLevel,
  LoggingConfig,
} from "aden";
```

---

## Examples

Run examples with `npx tsx examples/<name>.ts`:

| Example                  | Description                                          |
| ------------------------ | ---------------------------------------------------- |
| `openai-basic.ts`        | Basic OpenAI instrumentation                         |
| `anthropic-basic.ts`     | Basic Anthropic instrumentation                      |
| `gemini-basic.ts`        | Basic Gemini instrumentation                         |
| `control-actions.ts`     | All control actions: block, throttle, degrade, alert |
| `cost-control-local.ts`  | Cost control without a server (offline mode)         |
| `vercel-ai-sdk.ts`       | Vercel AI SDK integration                            |
| `langchain-example.ts`   | LangChain integration                                |
| `llamaindex-example.ts`  | LlamaIndex integration                               |
| `mastra-example.ts`      | Mastra framework integration                         |
| `multi-agent-example.ts` | Multi-agent tracking                                 |

---
## Troubleshooting

### Metrics not appearing

1. **Check instrumentation order**: Call `instrument()` before creating SDK clients

   ```typescript
   // Correct
   await instrument({ ... });
   const openai = new OpenAI();

   // Wrong - client created before instrumentation
   const openai = new OpenAI();
   await instrument({ ... });
   ```

2. **Verify SDK is passed**: Make sure you're passing the SDK class

   ```typescript
   import OpenAI from "openai";

   await instrument({
     sdks: { OpenAI }, // Pass the class, not an instance
   });
   ```

3. **Check emitter is async-safe**: If using a custom emitter, ensure it handles promises correctly, as in the sketch below
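
A defensive pattern that keeps emitter failures out of the request path (`sendToBackend` is a placeholder for your own delivery function):

```typescript
await instrument({
  emitMetric: async (event) => {
    try {
      await sendToBackend(event); // your own delivery function
    } catch (err) {
      // Swallow emitter errors - metrics should never break LLM calls
      console.error("[metrics] emit failed:", err);
    }
  },
  sdks: { OpenAI },
});
```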
### Control server not connecting

1. **Check environment variables**:

   ```bash
   echo $ADEN_API_KEY
   echo $ADEN_API_URL
   ```

2. **Verify server is reachable**:

   ```bash
   curl $ADEN_API_URL/v1/control/health
   ```

3. **Enable debug logging**:

   ```typescript
   configureLogging({ level: "debug" });
   // Aden logs to console with an [aden] prefix - check for connection errors
   ```

### Budget not enforcing

1. **Ensure getContextId is set**: Budgets are per-context

   ```typescript
   await instrument({
     apiKey: process.env.ADEN_API_KEY,
     getContextId: () => getCurrentUserId(), // Required!
   });
   ```

2. **Check policy on server**:

   ```bash
   curl -H "Authorization: Bearer $ADEN_API_KEY" \
     $ADEN_API_URL/v1/control/policy
   ```

### High memory usage

1. **Enable batching**: Don't send events one-by-one

   ```typescript
   const transport = createHttpTransport({
     batchSize: 100,
     flushInterval: 10000,
   });
   ```

2. **Disable relationship tracking** if not needed:

   ```typescript
   await instrument({
     trackCallRelationships: false,
   });
   ```

---

## License

MIT

---

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup and guidelines.