@ubercode/chronicler 0.1.0 → 1.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +282 -149
- package/dist/cli.js +0 -0
- package/dist/index.cjs.map +1 -1
- package/dist/index.d.cts +1 -1
- package/dist/index.d.ts +1 -1
- package/dist/index.js.map +1 -1
- package/package.json +112 -112
package/README.md
CHANGED
# @ubercode/chronicler

Type-safe structured logging for Node.js. Define your events once — with keys, levels, fields, and docs — then get compile-time safety, runtime validation, and auto-generated documentation everywhere you log.

```
npm install @ubercode/chronicler
```

Node 20+ required. ESM + CJS with full TypeScript declarations.
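
Since both module formats ship from the same package, either import style below should work; a quick sketch, with the CommonJS line assumed equivalent to the ESM one:

```ts
// ESM / TypeScript
import { createChronicle, defineEvent, field } from '@ubercode/chronicler';

// CommonJS (assumed equivalent entry point)
// const { createChronicle, defineEvent, field } = require('@ubercode/chronicler');
```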

## The Problem

Most logging looks like this:

```ts
logger.info('user created', { userId: id, email });
logger.info('User Created', { user_id: id }); // different dev, different shape
logger.info('user created', { userId: id, emailAddress: email }); // another variation
```

Three devs, three formats, zero consistency. When you search your logs for user creation events, you find three different field names, two different message formats, and no way to know which fields are required. Your dashboards break, your alerts miss events, and nobody trusts the logs.

## The Solution

Define events once, log them everywhere with the same shape:

```ts
import { createChronicle, defineEvent, field } from '@ubercode/chronicler';

const userCreated = defineEvent({
  key: 'user.created',
  level: 'info',
  message: 'User created',
  doc: 'Emitted when a new user account is created',
  fields: {
    userId: field.string().doc('Unique user identifier'),
    email: field.string().optional().doc('User email address'),
  },
});

const chronicle = createChronicle({ metadata: { service: 'api' } });

// TypeScript enforces the field contract
chronicle.event(userCreated, { userId: 'u-123', email: 'a@b.com' }); // OK
chronicle.event(userCreated, { user_id: 'u-123' }); // compile error: wrong field name
chronicle.event(userCreated, {}); // compile error: missing required 'userId'
```

Every log entry has the same structure. Dashboards work. Alerts fire. New devs can read the event definitions to understand what's logged.
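
For illustration, one of those entries might reach a backend with a merged shape roughly like the sketch below; the exact keys and layout are decided by the library, so treat this as an assumption rather than the real payload:

```ts
// Hypothetical payload shape for the userCreated call above (not the library's exact output)
const examplePayload = {
  eventKey: 'user.created',
  level: 'info',
  message: 'User created',
  service: 'api', // from chronicle metadata
  fields: { userId: 'u-123', email: 'a@b.com' },
  timestamp: '2025-01-01T00:00:00.000Z',
};
```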

## Core Concepts

### Events

An **event** is a single, well-defined thing that happens in your system. Instead of ad-hoc `logger.info()` calls with arbitrary strings and objects, you declare what each event looks like up front:

```ts
const orderPlaced = defineEvent({
  key: 'order.placed',
  level: 'info',
  message: 'Order placed',
  doc: 'Emitted when a customer successfully places an order',
  fields: {
    orderId: field.string().doc('Order identifier'),
    total: field.number().doc('Order total in cents'),
    itemCount: field.number().doc('Number of items'),
  },
});
```

This gives you:

- **Compile-time safety** — TypeScript catches missing or mistyped fields before your code runs
- **Runtime validation** — missing required fields are flagged in `_validation` metadata (or thrown in strict mode)
- **Self-documenting logs** — the `doc` strings generate documentation via the CLI
- **Consistent payloads** — every instance of this event has the same shape, making log aggregation reliable

### Event Groups

**Event groups** organize related events under a namespace. Without them, you end up with hundreds of flat event keys and no way to understand the structure:

```ts
const admin = defineEventGroup({
  key: 'admin',
  type: 'system',
  doc: 'Administrative and compliance events',
  events: {
    login: defineEvent({
      key: 'admin.login',
      level: 'audit',
      message: 'Login attempt',
      doc: 'Emitted on every authentication attempt',
      fields: {
        userId: field.string().doc('User ID'),
        success: field.boolean().doc('Whether login succeeded'),
        ip: field.string().optional().doc('Client IP'),
      },
    }),
    action: defineEvent({
      key: 'admin.action',
      level: 'audit',
      message: 'Admin action performed',
      doc: 'Emitted for auditable administrative actions',
      fields: {
        action: field.string().doc('Action performed'),
        userId: field.string().doc('User who performed the action'),
        success: field.boolean().doc('Whether the action succeeded'),
      },
    }),
  },
});

// Usage
chronicle.event(admin.events.login, { userId: 'u-1', success: true, ip: '10.0.0.1' });
```

Groups also enable **router backends** — you can route all `admin.*` events to a compliance log stream and all `http.*` events to a monitoring stream, from a single chronicle instance.

### Correlations

A **correlation** tracks a unit of work from start to finish. This is the feature you wish you had every time you're debugging a production issue and trying to piece together what happened during a single HTTP request across 20 log lines.

Without correlations, you get this in your logs:

```
INFO  Request validated { path: '/api/users' }
INFO  Database query complete { table: 'users', rows: 42 }
INFO  Request validated { path: '/api/orders' }  ← different request!
ERROR Database query failed { table: 'orders' }  ← which request?
INFO  Response sent { status: 200 }              ← which request??
```

With correlations, every log entry for a single request shares a correlation ID, and you get automatic lifecycle events:

```ts
const httpRequest = defineCorrelationGroup({
  key: 'http.request',
  type: 'correlation',
  doc: 'HTTP request lifecycle',
  timeout: 30_000,
  events: {
    validated: defineEvent({
      key: 'http.request.validated',
      level: 'info',
      message: 'Request validated',
      doc: 'Request passed validation',
      fields: {
        method: field.string(),
        path: field.string(),
      },
    }),
  },
});

// In your middleware
const corr = chronicle.startCorrelation(httpRequest, { requestId: 'req-abc' });
// Auto-emits: http.request.start

corr.event(httpRequest.events.validated, { method: 'GET', path: '/api/users' });

// When done:
corr.complete();
// Auto-emits: http.request.complete { duration: 142 }
```

Now filter by `correlationId: "corr-xyz"` in your log aggregator and see the entire request lifecycle in order. Auto-generated events give you:

| Auto-event       | When                        | Includes            |
| ---------------- | --------------------------- | ------------------- |
| `{key}.start`    | `startCorrelation()` called | —                   |
| `{key}.complete` | `complete()` called         | `duration` (ms)     |
| `{key}.fail`     | `fail(error)` called        | `duration`, `error` |
| `{key}.timeout`  | No activity within timeout  | —                   |
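
The `fail` path isn't shown above; a minimal sketch of how it might look wrapped around the same middleware example (the error handling here is illustrative):

```ts
const corr2 = chronicle.startCorrelation(httpRequest, { requestId: 'req-def' });

try {
  // ... handle the request ...
  corr2.complete(); // emits http.request.complete with duration
} catch (err) {
  // emits http.request.fail with duration and error
  corr2.fail(err instanceof Error ? err : new Error(String(err)));
  throw err;
}
```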

### Forks

**Forks** handle parallel work within a correlation. When a single request fans out to multiple services, database queries, or processing steps, forks give each branch its own identity while maintaining the parent relationship:

```ts
const corr = chronicle.startCorrelation(httpRequest, { requestId: 'req-abc' });

// Fan out to parallel work
const authFork = corr.fork({ step: 'auth' });
authFork.event(someEvent, { ... }); // forkId: "1"

const dataFork = corr.fork({ step: 'data' });
dataFork.event(someEvent, { ... }); // forkId: "2"

// Forks can nest
const cacheFork = dataFork.fork({ step: 'cache-lookup' });
cacheFork.event(someEvent, { ... }); // forkId: "2.1"

corr.complete();
```

Every log entry carries its `forkId` (`0` for root, `1`, `2`, `2.1`, etc.), so you can reconstruct the execution tree when debugging. This is invaluable for understanding concurrency issues and performance bottlenecks.
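
Purely as an illustration (the rendered output depends entirely on your backend), entries for one request might then group like this in a log aggregator:

```
INFO Request validated   { correlationId: 'corr-xyz', forkId: '0', ... }
INFO Auth check passed   { correlationId: 'corr-xyz', forkId: '1', ... }
INFO Cache lookup missed { correlationId: 'corr-xyz', forkId: '2.1', ... }
INFO Query complete      { correlationId: 'corr-xyz', forkId: '2', ... }
```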

### Context

**Context** is metadata attached to every subsequent event. Set it once, and it flows through all logs automatically:

```ts
const chronicle = createChronicle({
  metadata: { service: 'api', env: 'production', version: '1.2.0' },
});

// Every event now includes service, env, and version in its payload.

// Add more context later (e.g., after auth middleware resolves the user):
chronicle.addContext({ userId: 'u-123', tenantId: 't-456' });
```

Context is immutable — collisions preserve the original value, so downstream code can't accidentally overwrite upstream context.
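
A small sketch of that collision rule, assuming a second `addContext` call with the same key is ignored rather than throwing:

```ts
chronicle.addContext({ tenantId: 't-456' });
chronicle.addContext({ tenantId: 't-999' }); // collision: the original 't-456' wins

// Subsequent events still carry tenantId: 't-456'
chronicle.event(userCreated, { userId: 'u-123' });
```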

## Backends

Chronicler doesn't care where your logs go. You provide the transport.

### Console (default)

```ts
import { createConsoleBackend } from '@ubercode/chronicler';

const backend = createConsoleBackend();
// fatal/critical/alert/error → console.error
// warn → console.warn
// audit/info → console.info
// debug/trace → console.debug
```

### Custom backend with fallbacks

```ts
import { createBackend } from '@ubercode/chronicler';

const backend = createBackend({
  error: (msg, payload) => errorTracker.capture(msg, payload),
  info: (msg, payload) => logger.info(msg, payload),
});
// Missing levels fall back: fatal → critical → error → warn → info → console
```

### Router backend (multiple streams)

Split events into separate streams from a single chronicle:

```ts
import { createRouterBackend } from '@ubercode/chronicler';

const backend = createRouterBackend([
  { backend: auditBackend, filter: (_lvl, p) => p.eventKey.startsWith('admin.') },
  { backend: httpBackend, filter: (_lvl, p) => p.eventKey.startsWith('http.') },
  { backend: mainBackend }, // no filter = receives everything else
]);

const chronicle = createChronicle({ backend, metadata: { app: 'my-app' } });
```

Events fan out to **all** matching routes, not first-match-wins.

### Using with Winston

```ts
import winston from 'winston';
import { createBackend, createChronicle } from '@ubercode/chronicler';

const logger = winston.createLogger({
  level: 'debug',
  format: winston.format.combine(winston.format.timestamp(), winston.format.json()),
  transports: [new winston.transports.Console()],
});

const backend = createBackend({
  error: (msg, payload) => logger.error(msg, payload),
  warn: (msg, payload) => logger.warn(msg, payload),
  info: (msg, payload) => logger.info(msg, payload),
  debug: (msg, payload) => logger.debug(msg, payload),
});

const chronicle = createChronicle({
  backend,
  metadata: { service: 'my-app', env: 'production' },
});
```

See [`examples/winston-app`](examples/winston-app) for a full multi-stream setup with router backend.

## Field Builders

```ts
field.string(); // required string
field.boolean().doc('...'); // required boolean with documentation
field.error(); // Error | string, serialized to stack trace
```

Error fields accept `Error` objects or strings and serialize to the stack trace (or message if no stack). Safe to ship to any log sink.

All string values are automatically sanitized — ANSI escape sequences are stripped and newlines are replaced with `\n` to prevent log injection.
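
As a quick sketch of `field.error()` in practice (the event name and fields below are illustrative, not part of the library; `defineEvent`, `field`, and `chronicle` are reused from the examples above):

```ts
const paymentFailed = defineEvent({
  key: 'payment.failed',
  level: 'error',
  message: 'Payment failed',
  doc: 'Emitted when a payment attempt throws',
  fields: {
    orderId: field.string().doc('Order identifier'),
    error: field.error().doc('Underlying error'),
  },
});

try {
  throw new Error('card declined'); // stand-in for a real payment call
} catch (err) {
  // Error objects serialize to their stack trace; strings pass through sanitized
  chronicle.event(paymentFailed, {
    orderId: 'o-42',
    error: err instanceof Error ? err : String(err),
  });
}
```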

## Log Levels

```ts
fatal: 0; // System is unusable
critical: 1; // Critical conditions requiring immediate attention
alert: 2; // Action must be taken immediately
error: 3; // Error conditions
warn: 4; // Warning conditions
audit: 5; // Audit trail events (compliance, security)
info: 6; // Informational messages
debug: 7; // Debug-level messages
trace: 8; // Trace-level messages (very verbose)
```

Filter with `minLevel`:

```ts
const chronicle = createChronicle({
  metadata: {},
  minLevel: 'warn', // only fatal through warn are emitted
});
```

## Strict Mode

In development or CI, enable strict mode to throw on field validation errors instead of silently capturing them:

```ts
const chronicle = createChronicle({
  metadata: {},
  strict: true, // throws ChroniclerError with code FIELD_VALIDATION
});
```

## CLI

After installing, use the CLI to validate event definitions and generate documentation:

```bash
# Validate all event definitions
chronicler validate

# Generate Markdown docs
chronicler docs --format markdown --output docs/events.md

# Generate JSON docs
chronicler docs --format json --output docs/events.json
```

Requires a `chronicler.config.ts` in your project root:

```ts
export default {
  eventsFile: './src/events.ts',
  docs: {
    format: 'markdown',
    outputPath: './docs/events.md',
  },
};
```

## API Reference

### `createChronicle(config)`

| Option                         | Type                                                   | Default         | Description                      |
| ------------------------------ | ------------------------------------------------------ | --------------- | -------------------------------- |
| `backend`                      | `LogBackend`                                           | Console backend | Where log events are sent        |
| `metadata`                     | `Record<string, string \| number \| boolean \| null>`  | _required_      | Context attached to every event  |
| `strict`                       | `boolean`                                              | `false`         | Throw on field validation errors |
| `minLevel`                     | `LogLevel`                                             | `'trace'`       | Minimum level to emit            |
| `limits.maxContextKeys`        | `number`                                               | `100`           | Max context entries              |
| `limits.maxForkDepth`          | `number`                                               | `10`            | Max fork nesting depth           |
| `limits.maxActiveCorrelations` | `number`                                               | `1000`          | Max concurrent correlations      |
| `correlationIdGenerator`       | `() => string`                                         | UUID-based      | Custom correlation ID generator  |
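
For example, a configuration that exercises the less common options from the table (the generator shown is only a sketch, not the library default):

```ts
import { randomUUID } from 'node:crypto';
import { createChronicle } from '@ubercode/chronicler';

const chronicle = createChronicle({
  metadata: { service: 'api', env: 'staging' },
  strict: true, // throw on field validation errors
  minLevel: 'debug', // drop trace-level events
  limits: {
    maxContextKeys: 50,
    maxForkDepth: 5,
    maxActiveCorrelations: 500,
  },
  correlationIdGenerator: () => `corr-${randomUUID()}`, // custom IDs instead of the UUID default
});
```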

### `Chronicler` (returned by `createChronicle`)

- `event(eventDef, fields)` — emit a typed event
- `log(level, message, fields?)` — untyped escape hatch
- `addContext(context)` — add metadata to all subsequent events
- `startCorrelation(corrGroup, context?)` — start a correlation
- `fork(context?)` — create an isolated child chronicle
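
A brief sketch tying these together (`userCreated` is the event defined earlier; the extra context values are illustrative):

```ts
const jobChronicle = chronicle.fork({ jobId: 'job-7' }); // isolated child with its own context

jobChronicle.event(userCreated, { userId: 'u-123' }); // typed event
jobChronicle.log('debug', 'raw payload received', { bytes: 512 }); // untyped escape hatch
```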

### `CorrelationChronicle` (returned by `startCorrelation`)

- `event(eventDef, fields)` — emit a typed event within this correlation
- `log(level, message, fields?)` — untyped escape hatch
- `addContext(context)` — add metadata to this correlation's events
- `fork(context?)` — create a parallel branch within this correlation
- `complete()` — end the correlation successfully (emits `{key}.complete` with duration)
- `fail(error?)` — end the correlation with failure (emits `{key}.fail` with duration and error)

## License

MIT
package/dist/cli.js
CHANGED
File without changes