logixia 1.1.2 → 1.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,7 +2,7 @@
 
 <p align="center">
   <strong>The async-first logging library that ships complete.</strong><br/>
-  TypeScript-first · Non-blocking by design · NestJS · Database · Tracing · OTel
+  TypeScript-first &middot; Non-blocking by design &middot; NestJS &middot; Database &middot; Tracing &middot; OTel
 </p>
 
 <p align="center">
@@ -41,8 +41,11 @@ import { createLogger } from 'logixia';
 const logger = createLogger({
   appName: 'api',
   environment: 'production',
-  file: { filename: 'app.log', dirname: './logs', maxSize: '50MB' },
-  database: { type: 'postgresql', host: 'localhost', database: 'appdb', table: 'logs' },
+  transports: {
+    console: { format: 'json' },
+    file: { filename: 'app.log', dirname: './logs', maxSize: '50MB' },
+    database: { type: 'postgresql', host: 'localhost', database: 'appdb', table: 'logs' },
+  },
 });
 
 await logger.info('Server started', { port: 3000 });
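Editor's note: the hunk above is the shape of the 1.1.2 → 1.1.4 breaking change — top-level `console` / `file` / `database` keys move under a single `transports` key. A minimal sketch of that move as a migration helper (this helper is illustrative only; it is not part of logixia's API):

```typescript
// Hypothetical migration helper (not part of logixia): nest the 1.1.2-era
// top-level `console`, `file`, and `database` keys under `transports`,
// leaving every other config key untouched.
type FlatConfig = Record<string, unknown>;

function migrateToTransports(config: FlatConfig): FlatConfig {
  const { console: consoleCfg, file, database, ...rest } = config as {
    console?: unknown;
    file?: unknown;
    database?: unknown;
  } & FlatConfig;

  const transports: Record<string, unknown> = {};
  if (consoleCfg !== undefined) transports.console = consoleCfg;
  if (file !== undefined) transports.file = file;
  if (database !== undefined) transports.database = database;

  return { ...rest, transports };
}

const migrated = migrateToTransports({
  appName: 'api',
  environment: 'production',
  file: { filename: 'app.log', dirname: './logs', maxSize: '50MB' },
});
// `migrated.transports.file` now holds the former top-level file settings.
```

Running existing configs through a helper like this is one way to upgrade without hand-editing every call site.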
@@ -62,6 +65,8 @@ await logger.info('Server started', { port: 3000 });
 - [Log levels](#log-levels)
 - [Structured logging](#structured-logging)
 - [Child loggers](#child-loggers)
+ - [Adaptive log level](#adaptive-log-level)
+ - [Per-namespace log levels](#per-namespace-log-levels)
 - [Transports](#transports)
 - [Console](#console)
 - [File with rotation](#file-with-rotation)
@@ -70,11 +75,21 @@ await logger.info('Server started', { port: 3000 });
 - [Multiple transports simultaneously](#multiple-transports-simultaneously)
 - [Custom transport](#custom-transport)
 - [Request tracing](#request-tracing)
+ - [Core trace utilities](#core-trace-utilities)
+ - [Express / Fastify middleware](#express--fastify-middleware)
+ - [NestJS trace middleware](#nestjs-trace-middleware)
+ - [Kafka trace interceptor](#kafka-trace-interceptor)
+ - [WebSocket trace interceptor](#websocket-trace-interceptor)
 - [NestJS integration](#nestjs-integration)
 - [Log redaction](#log-redaction)
+ - [Timer API](#timer-api)
+ - [Field management](#field-management)
+ - [Transport level control](#transport-level-control)
 - [Log search](#log-search)
 - [OpenTelemetry](#opentelemetry)
 - [Graceful shutdown](#graceful-shutdown)
+ - [Logger instance API](#logger-instance-api)
+ - [CLI tool](#cli-tool)
 - [Configuration reference](#configuration-reference)
 - [Contributing](#contributing)
 - [License](#license)
@@ -87,47 +102,46 @@ await logger.info('Server started', { port: 3000 });
 
 logixia takes a different approach: **everything ships built-in, and nothing blocks your event loop.**
 
- - **Async by design** — every log call is non-blocking, even to file and database transports
- - 🗄️ **Built-in database transports** — PostgreSQL, MySQL, MongoDB, SQLite with zero extra drivers
- - 🏗️ **NestJS module** — plug in with `LogixiaLoggerModule.forRoot()`, inject with `@InjectLogger()`
- - 📁 **File rotation** — `maxSize`, `maxFiles`, gzip archive — no `winston-daily-rotate-file` needed
- - 🔍 **Log search** — query your in-memory log store without shipping to an external service
- - 🔒 **Field redaction** — mask passwords, tokens, and PII before they touch any transport
- - 🕸️ **Request tracing** — `AsyncLocalStorage`-based trace propagation, no manual thread-locals
- - 📡 **OpenTelemetry** — W3C `traceparent` and `tracestate` support, zero extra dependencies
- - 🧩 **Multi-transport** — write to console, file, and database concurrently with one log call
- - 🛡️ **TypeScript-first** — typed log entries, typed metadata, full IntelliSense throughout
- - 🌱 **Adaptive log level** — auto-configures based on `NODE_ENV` and CI environment
- - 🔌 **Custom transports** — ship to Slack, Datadog, S3, or anywhere else with a simple interface
+ - **Async by design** — every log call is non-blocking, even to file and database transports
+ - **Built-in database transports** — PostgreSQL, MySQL, MongoDB, SQLite with zero extra drivers
+ - **NestJS module** — plug in with `LogixiaLoggerModule.forRoot()`, inject anywhere in the DI tree
+ - **File rotation** — `maxSize`, `maxFiles`, gzip archive, time-based rotation — no extra packages needed
+ - **Log search** — query your in-memory log store without shipping to an external service
+ - **Field redaction** — mask passwords, tokens, and PII before they touch any transport; supports dot-notation paths and regex patterns
+ - **Request tracing** — `AsyncLocalStorage`-based trace propagation with no manual thread-locals; includes Kafka and WebSocket interceptors
+ - **OpenTelemetry** — W3C `traceparent` and `tracestate` support, zero extra dependencies
+ - **Multi-transport** — write to console, file, and database concurrently with one log call
+ - **TypeScript-first** — typed log entries, typed metadata, custom-level IntelliSense throughout
+ - **Adaptive log level** — auto-configures based on `NODE_ENV` and CI environment
+ - **Custom transports** — ship to Slack, Datadog, S3, or anywhere else via a simple interface
 
 ---
 
 ## Feature comparison
 
- | Feature | **logixia** | pino | winston | bunyan |
- | ----------------------------------- | :---------: | :------------: | :--------------------------: | :----: |
- | TypeScript-first | ✅ | ⚠️ | ⚠️ | ⚠️ |
- | Async / non-blocking writes | ✅ | ❌ | ❌ | ❌ |
- | NestJS module (built-in) | ✅ | ❌ | ❌ | ❌ |
- | Database transports (built-in) | ✅ | ❌ | ❌ | ❌ |
- | File rotation (built-in) | ✅ | ⚠️ pino-roll | ⚠️ winston-daily-rotate-file | ❌ |
- | Multi-transport concurrent | ✅ | ❌ | ✅ | ❌ |
- | Log search | ✅ | ❌ | ❌ | ❌ |
- | Field redaction (built-in) | ✅ | ⚠️ pino-redact | ❌ | ❌ |
- | Request tracing (AsyncLocalStorage) | ✅ | ❌ | ❌ | ❌ |
- | OpenTelemetry / W3C headers | ✅ | ❌ | ❌ | ❌ |
- | Graceful shutdown / flush | ✅ | ❌ | ❌ | ❌ |
- | Custom log levels | ✅ | ✅ | ✅ | ✅ |
- | Adaptive log level (NODE_ENV) | ✅ | ❌ | ❌ | ❌ |
- | Actively maintained | ✅ | ✅ | ✅ | ❌ |
-
- > ⚠️ = requires a separate package or manual implementation
+ | Feature | **logixia** | pino | winston | bunyan |
+ | ------------------------------------ | :---------: | :---------: | :-----------------------: | :-----: |
+ | TypeScript-first | yes | partial | partial | partial |
+ | Async / non-blocking writes | yes | no | no | no |
+ | NestJS module (built-in) | yes | no | no | no |
+ | Database transports (built-in) | yes | no | no | no |
+ | File rotation (built-in) | yes | pino-roll | winston-daily-rotate-file | no |
+ | Multi-transport concurrent | yes | no | yes | no |
+ | Log search | yes | no | no | no |
+ | Field redaction (built-in) | yes | pino-redact | no | no |
+ | Request tracing (AsyncLocalStorage) | yes | no | no | no |
+ | Kafka + WebSocket trace interceptors | yes | no | no | no |
+ | OpenTelemetry / W3C headers | yes | no | no | no |
+ | Graceful shutdown / flush | yes | no | no | no |
+ | Custom log levels | yes | yes | yes | yes |
+ | Adaptive log level (NODE_ENV) | yes | no | no | no |
+ | Actively maintained | yes | yes | yes | no |
 
 ---
 
 ## Performance
 
- logixia is **faster than winston in every benchmark** and outperforms pino on the workloads that matter most in production (structured metadata and error serialization):
+ logixia uses `fast-json-stringify` (a pre-compiled serializer) for JSON output, which is ~59% faster than `JSON.stringify`. The hot path (level check, redaction decision, and formatting) is optimised with caches that are pre-built once at construction, not on every log call.
 
 | Library | Simple log (ops/sec) | Structured log (ops/sec) | Error log (ops/sec) | p99 latency |
 | ----------- | -------------------: | -----------------------: | ------------------: | -----------: |
@@ -135,9 +149,7 @@ logixia is **faster than winston in every benchmark** and outperforms pino on th
 | **logixia** | **840,000** | **696,000** | **654,000** | **4.8–10µs** |
 | winston | 738,000 | 371,000 | 433,000 | 9–16µs |
 
- logixia is **10% faster than pino on structured logging** and **68% faster on error serialization**. It beats winston across the board.
-
- **Why pino leads on simple strings:** pino uses synchronous direct writes to `process.stdout` — a trade-off that blocks the event loop under heavy I/O and that disappears as soon as you add real metadata.
+ logixia is **10% faster than pino on structured logging** and **68% faster on error serialization**. It beats winston across the board. Pino leads on simple string logs because it uses synchronous direct writes to `process.stdout` — a trade-off that blocks the event loop under heavy I/O and disappears as soon as you add real metadata.
 
 To reproduce: `node benchmarks/run.mjs`
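Editor's note: the new README text above attributes the speedup to a pre-compiled serializer. A self-contained sketch of the general idea (this is not logixia's code, and real `fast-json-stringify` compiles from a JSON Schema rather than a key list):

```typescript
// Sketch of a "compiled" serializer: when the key set is known up front,
// the per-key prefixes can be built once, and each call only stringifies
// the values — skipping JSON.stringify's generic object traversal.
function compileSerializer(keys: string[]): (obj: Record<string, unknown>) => string {
  if (keys.length === 0) return () => '{}';
  // Build the quoted-key prefix strings once, at "compile" time.
  const prefixes = keys.map((k, i) => `${i === 0 ? '{' : ','}${JSON.stringify(k)}:`);
  return (obj) => {
    let out = '';
    for (let i = 0; i < keys.length; i++) {
      out += prefixes[i] + JSON.stringify(obj[keys[i]]);
    }
    return out + '}';
  };
}

const serializeEntry = compileSerializer(['level', 'message', 'traceId']);
serializeEntry({ level: 'info', message: 'Server started', traceId: 'abc123' });
// → '{"level":"info","message":"Server started","traceId":"abc123"}'
```

The exact percentage gain depends on object shape and runtime; the benchmark numbers in the table above are the package's own claims.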
 
@@ -146,20 +158,13 @@
 ## Installation
 
 ```bash
- # npm
 npm install logixia
-
- # pnpm
 pnpm add logixia
-
- # yarn
 yarn add logixia
-
- # bun
 bun add logixia
 ```
 
- **For database transports**, install the relevant driver alongside logixia:
+ For database transports, install the relevant driver alongside logixia:
 
 ```bash
 npm install pg # PostgreSQL
@@ -187,7 +192,15 @@ await logger.warn('High memory usage', { used: '87%' });
 await logger.error('Request failed', new Error('Connection timeout'));
 ```
 
- That's it. Logs go to the console by default, structured JSON in production, colorized text in development. Add a `file` or `database` key to write there too — all transports run concurrently.
+ Without a `transports` key, logs go to stdout/stderr. Add a `transports` key to write to file, database, or anywhere else — all transports run concurrently.
+
+ There is also a pre-configured default instance you can import directly:
+
+ ```typescript
+ import { logger } from 'logixia';
+
+ await logger.info('Ready');
+ ```
 
 ---
 
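Editor's note: the log-levels hunk that follows introduces priority-ordered levels plus custom priorities (`audit: 35`, `security: 45`). A minimal sketch of how priority-based filtering of this kind typically works — the built-in numeric priorities below are assumptions for illustration; the README only specifies the ordering:

```typescript
// Assumed priority numbers (illustrative — only the relative order
// error > warn > info > debug > trace > verbose comes from the README;
// audit: 35 and security: 45 are the documented custom priorities).
const priorities: Record<string, number> = {
  error: 50, warn: 40, info: 30, debug: 20, trace: 10, verbose: 5,
  audit: 35, security: 45,
};

// Emit an entry when its priority is at least the configured minimum.
function shouldLog(minLevel: string, level: string): boolean {
  return priorities[level] >= priorities[minLevel];
}

shouldLog('info', 'audit'); // true  — 35 >= 30
shouldLog('info', 'debug'); // false — 20 <  30
```

With this scheme a custom level slots anywhere into the severity ordering just by choosing its number, which is why `audit` (35) passes an `info` (30) minimum while `debug` (20) does not.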
@@ -195,9 +208,31 @@ That's it. Logs go to the console by default, structured JSON in production, col
 
 ### Log levels
 
- logixia ships with six built-in levels: `trace`, `debug`, `info`, `warn`, `error`, and `fatal`. The minimum level is automatically inferred from `NODE_ENV` and CI environment — no manual setup in most projects.
+ logixia ships with six built-in levels in priority order: `error`, `warn`, `info`, `debug`, `trace`, `verbose`. Logs at or above the configured minimum level are emitted; the rest are dropped.
 
- You can also define custom levels for your domain:
+ ```typescript
+ await logger.error('Something went wrong');
+ await logger.warn('Approaching rate limit', { remaining: 5 });
+ await logger.info('Order created', { orderId: 'ord_123' });
+ await logger.debug('Cache miss', { key: 'user:456' });
+ await logger.trace('Entering function', { fn: 'processPayment' });
+ await logger.verbose('Full request payload', { body: req.body });
+ ```
+
+ The `error` method also accepts an `Error` object directly — the full cause chain and standard Node.js fields (`code`, `statusCode`, `errno`, `syscall`) are serialized automatically:
+
+ ```typescript
+ await logger.error(new Error('Connection refused'));
+
+ // With extra metadata alongside:
+ await logger.error(new Error('Payment declined'), { orderId: 'ord_123', retryable: true });
+
+ // AggregateError is handled too:
+ const err = new AggregateError([new Error('A'), new Error('B')], 'Multiple failures');
+ await logger.error(err);
+ ```
+
+ You can also define **custom levels** for your domain:
 
 ```typescript
 const logger = createLogger({
@@ -205,20 +240,25 @@ const logger = createLogger({
   environment: 'production',
   levelOptions: {
     level: 'info',
-    customLevels: {
+    levels: {
+      // extend the built-in set with your own
       audit: { priority: 35, color: 'blue' },
       security: { priority: 45, color: 'red' },
     },
   },
 });
 
- await logger.log('audit', 'Payment processed', { orderId: 'ord_123', amount: 99.99 });
- await logger.log('security', 'Suspicious login attempt', { ip: '1.2.3.4', userId: 'usr_456' });
+ // Custom level methods are available immediately, fully typed
+ await logger.audit('Payment processed', { orderId: 'ord_123', amount: 99.99 });
+ await logger.security('Suspicious login attempt', { ip: '1.2.3.4', userId: 'usr_456' });
+
+ // Or use logLevel() for dynamic dispatch
+ await logger.logLevel('audit', 'Refund issued', { orderId: 'ord_123' });
 ```
 
 ### Structured logging
 
- Every log call accepts metadata as its second argument — serialized as structured fields alongside the message, never concatenated into a string:
+ Every log call accepts a metadata object as its second argument — serialized as structured fields alongside the message, never concatenated into a string:
 
 ```typescript
 await logger.info('User authenticated', {
@@ -230,77 +270,168 @@ await logger.info('User authenticated', {
 });
 ```
 
- Output in development (colorized text):
+ Development output (colorized text):
 
 ```
- [INFO] User authenticated userId=usr_123 method=oauth provider=google durationMs=42
+ [2025-03-14T10:22:01.412Z] [INFO] [api] [abc123def456] User authenticated {"userId":"usr_123","method":"oauth",...}
 ```
 
- Output in production (JSON):
+ Production output (JSON, via `format: { json: true }`):
 
 ```json
 {
+   "timestamp": "2025-03-14T10:22:01.412Z",
   "level": "info",
+   "appName": "api",
+   "environment": "production",
   "message": "User authenticated",
-   "userId": "usr_123",
-   "method": "oauth",
-   "provider": "google",
-   "durationMs": 42,
-   "timestamp": "2025-03-14T10:22:01.412Z",
-   "traceId": "abc123def456"
+   "traceId": "abc123def456",
+   "payload": { "userId": "usr_123", "method": "oauth", "provider": "google", "durationMs": 42 }
 }
 ```
 
 ### Child loggers
 
- Create child loggers that inherit parent context and add their own. Every log from the child carries both sets of fields automatically:
+ Create child loggers that inherit their parent's configuration and transport setup, but carry their own context string and optional extra fields:
 
 ```typescript
- const reqLogger = logger.child({
+ const reqLogger = logger.child('OrderService', {
   requestId: req.id,
   userId: req.user.id,
-   route: req.path,
 });
 
- await reqLogger.info('Processing order'); // carries requestId + userId + route
+ await reqLogger.info('Processing order'); // includes requestId + userId in every entry
 await reqLogger.info('Payment confirmed'); // same context, no repetition
 ```
 
+ ### Adaptive log level
+
+ logixia automatically selects a sensible default level when no explicit level is configured:
+
+ | Condition | Default level |
+ | ---------------------- | :-----------: |
+ | `NODE_ENV=development` | `debug` |
+ | `NODE_ENV=test` | `warn` |
+ | `NODE_ENV=production` | `info` |
+ | `CI=true` | `info` |
+ | None of the above | `info` |
+
+ You can override this at any time via the `LOGIXIA_LEVEL` environment variable:
+
+ ```bash
+ LOGIXIA_LEVEL=debug node server.js
+ ```
+
+ Or change it at runtime:
+
+ ```typescript
+ logger.setLevel('debug');
+ console.log(logger.getLevel()); // 'debug'
+ ```
+
+ ### Per-namespace log levels
+
+ Child loggers use their context string as a **namespace**. You can pin different log levels to different namespaces in config, or override them with environment variables at runtime — without redeploying:
+
+ ```typescript
+ const logger = createLogger({
+   appName: 'api',
+   environment: 'production',
+   namespaceLevels: {
+     db: 'debug',      // child('db') and child('db.queries') → DEBUG
+     'db.*': 'debug',  // wildcard: all db.* children
+     'http.*': 'warn', // only warn+ from HTTP layer
+     payment: 'trace', // full trace for payment namespace
+   },
+ });
+
+ const dbLogger = logger.child('db');         // resolves to DEBUG
+ const httpLogger = logger.child('http.req'); // resolves to WARN
+ ```
+
+ Environment variable overrides use the pattern `LOGIXIA_LEVEL_<NS>` where `<NS>` is the first segment of the namespace, uppercased:
+
+ ```bash
+ # Override just the db namespace to trace, without changing anything else:
+ LOGIXIA_LEVEL_DB=trace node server.js
+
+ # Override the payment namespace:
+ LOGIXIA_LEVEL_PAYMENT=info node server.js
+ ```
+
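Editor's note: a self-contained sketch of the wildcard matching the examples above describe (logixia's actual matching rules may differ — this follows the documented behaviour, where an exact key covers the namespace and its dotted children, and `'db.*'` covers anything under `db`):

```typescript
// Sketch of namespace-level resolution: exact match first, then
// prefix/wildcard rules, then the logger's fallback level.
function resolveLevel(
  namespaceLevels: Record<string, string>,
  namespace: string,
  fallback: string,
): string {
  // Exact match wins outright.
  if (namespaceLevels[namespace]) return namespaceLevels[namespace];
  // Then wildcard and prefix rules, e.g. 'db' or 'db.*' covering 'db.queries'.
  for (const [pattern, level] of Object.entries(namespaceLevels)) {
    const prefix = pattern.endsWith('.*') ? pattern.slice(0, -2) : pattern;
    if (namespace === prefix || namespace.startsWith(prefix + '.')) return level;
  }
  return fallback;
}

const levels = { db: 'debug', 'http.*': 'warn', payment: 'trace' };
resolveLevel(levels, 'db.queries', 'info'); // 'debug'
resolveLevel(levels, 'http.req', 'info');   // 'warn'
resolveLevel(levels, 'auth', 'info');       // 'info'
```

Resolving once at `child()` time (rather than on every log call) keeps this lookup off the hot path.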
 ---
 
 ## Transports
 
+ All transports are configured under the `transports` key and run concurrently on every log call.
+
 ### Console
 
 ```typescript
 const logger = createLogger({
   appName: 'api',
   environment: 'development',
-   console: {
-     colorize: true,
-     timestamp: true,
-     format: 'text', // 'text' (human-readable) or 'json' (structured)
+   format: {
+     colorize: true,  // ANSI colour output
+     timestamp: true, // include ISO timestamp
+     json: false,     // text format; set to true for JSON
+   },
+   transports: {
+     console: {
+       level: 'debug', // minimum level for this transport only
+     },
   },
 });
 ```
 
 ### File with rotation
 
- No extra packages. Rotation by size, automatic compression, and configurable retention — all built-in:
+ No extra packages. Rotation by size or time interval, automatic gzip compression, configurable retention — all built-in:
 
 ```typescript
 const logger = createLogger({
   appName: 'api',
   environment: 'production',
+   transports: {
+     file: {
+       filename: 'app.log',
+       dirname: './logs',
+       maxSize: '50MB',     // rotate when file reaches this size
+       maxFiles: 14,        // keep 14 rotated files
+       zippedArchive: true, // compress old files with gzip
+       format: 'json',      // 'json' | 'text' | 'csv'
+       batchSize: 100,      // buffer up to 100 entries before writing
+       flushInterval: 2000, // flush buffer every 2 seconds
+     },
+   },
+ });
+ ```
+
+ You can also use **time-based rotation** via the `rotation` sub-key:
+
+ ```typescript
+ transports: {
   file: {
     filename: 'app.log',
     dirname: './logs',
-     maxSize: '50MB', // Rotate when file hits 50 MB
-     maxFiles: 14, // Keep 14 rotated files (~ 2 weeks)
-     zippedArchive: true, // Compress old logs with gzip
-     format: 'json',
+     rotation: {
+       interval: '1d', // rotate daily: '1h' | '6h' | '12h' | '1d' | '1w'
+       maxFiles: 30,
+       compress: true,
+     },
   },
- });
+ },
+ ```
+
+ Multiple file transports are supported — pass an array:
+
+ ```typescript
+ transports: {
+   file: [
+     { filename: 'app.log', dirname: './logs', format: 'json' },
+     { filename: 'error.log', dirname: './logs', format: 'json', level: 'error' },
+   ],
+ },
 ```
 
 ### Database
@@ -312,16 +443,18 @@ Write structured logs directly to your database — batched, non-blocking, with
 
 const logger = createLogger({
   appName: 'api',
   environment: 'production',
-   database: {
-     type: 'postgresql',
-     host: 'localhost',
-     port: 5432,
-     database: 'appdb',
-     table: 'logs',
-     username: 'dbuser',
-     password: process.env.DB_PASSWORD,
-     batchSize: 100, // Write in batches of 100
-     flushInterval: 5000, // Flush every 5 seconds
+   transports: {
+     database: {
+       type: 'postgresql',
+       host: 'localhost',
+       port: 5432,
+       database: 'appdb',
+       table: 'logs',
+       username: 'dbuser',
+       password: process.env.DB_PASSWORD,
+       batchSize: 100,      // write in batches of 100
+       flushInterval: 5000, // flush every 5 seconds
+     },
   },
 });
 
@@ -329,11 +462,13 @@ const logger = createLogger({
 const logger = createLogger({
   appName: 'api',
   environment: 'production',
-   database: {
-     type: 'mongodb',
-     connectionString: process.env.MONGO_URI,
-     database: 'appdb',
-     collection: 'logs',
+   transports: {
+     database: {
+       type: 'mongodb',
+       connectionString: process.env.MONGO_URI,
+       database: 'appdb',
+       collection: 'logs',
+     },
   },
 });
 
@@ -341,70 +476,147 @@ const logger = createLogger({
 const logger = createLogger({
   appName: 'api',
   environment: 'production',
-   database: {
-     type: 'mysql',
-     host: 'localhost',
-     database: 'appdb',
-     table: 'logs',
-     username: 'root',
-     password: process.env.MYSQL_PASSWORD,
+   transports: {
+     database: {
+       type: 'mysql',
+       host: 'localhost',
+       database: 'appdb',
+       table: 'logs',
+       username: 'root',
+       password: process.env.MYSQL_PASSWORD,
+     },
   },
 });
 
- // SQLite (great for local development and small apps)
+ // SQLite — great for local development and small apps
 const logger = createLogger({
   appName: 'api',
   environment: 'development',
-   database: {
-     type: 'sqlite',
-     filename: './logs/app.sqlite',
-     table: 'logs',
+   transports: {
+     database: {
+       type: 'sqlite',
+       database: './logs/app.sqlite',
+       table: 'logs',
+     },
   },
 });
 ```
 
+ Multiple database targets are supported — pass an array:
+
+ ```typescript
+ transports: {
+   database: [
+     { type: 'postgresql', host: 'primary-db', database: 'appdb', table: 'logs' },
+     { type: 'mongodb', connectionString: process.env.MONGO_URI, database: 'appdb', collection: 'logs' },
+   ],
+ },
+ ```
+
 ### Analytics
 
- Send log events to your analytics platform:
+ logixia includes built-in support for Datadog, Mixpanel, Segment, and Google Analytics. All analytics transports are batched and non-blocking.
+
+ **Datadog** — sends logs, metrics, and traces to your Datadog account:
 
 ```typescript
+ import { DataDogTransport } from 'logixia';
+
 const logger = createLogger({
   appName: 'api',
   environment: 'production',
-   analytics: {
-     endpoint: 'https://analytics.example.com/events',
-     apiKey: process.env.ANALYTICS_KEY,
-     batchSize: 50,
-     flushInterval: 10_000,
+   transports: {
+     analytics: {
+       datadog: {
+         apiKey: process.env.DD_API_KEY!,
+         site: 'datadoghq.com', // or 'datadoghq.eu', 'us3.datadoghq.com'
+         service: 'api',
+         env: 'production',
+         enableLogs: true,
+         enableMetrics: true,
+         enableTraces: true,
+       },
+     },
   },
 });
 ```
 
+ **Mixpanel:**
+
+ ```typescript
+ transports: {
+   analytics: {
+     mixpanel: {
+       token: process.env.MIXPANEL_TOKEN!,
+       enableSuperProperties: true,
+       superProperties: { platform: 'web', version: '2.0' },
+     },
+   },
+ },
+ ```
+
+ **Segment:**
+
+ ```typescript
+ transports: {
+   analytics: {
+     segment: {
+       writeKey: process.env.SEGMENT_WRITE_KEY!,
+       enableBatching: true,
+       flushAt: 20,
+       flushInterval: 10_000,
+     },
+   },
+ },
+ ```
+
+ **Google Analytics:**
+
+ ```typescript
+ transports: {
+   analytics: {
+     googleAnalytics: {
+       measurementId: process.env.GA_MEASUREMENT_ID!,
+       apiSecret: process.env.GA_API_SECRET!,
+       enableEcommerce: false,
+     },
+   },
+ },
+ ```
+
 ### Multiple transports simultaneously
 
- All configured transports receive every log call concurrently — no sequential bottleneck:
+ All configured transports receive every log entry concurrently — no sequential bottleneck:
 
 ```typescript
 const logger = createLogger({
   appName: 'api',
   environment: 'production',
-   console: { colorize: false, format: 'json' },
-   file: { filename: 'app.log', dirname: './logs', maxSize: '100MB' },
-   database: {
-     type: 'postgresql',
-     host: 'localhost',
-     database: 'appdb',
-     table: 'logs',
+   transports: {
+     console: { format: 'json' },
+     file: { filename: 'app.log', dirname: './logs', maxSize: '100MB' },
+     database: {
+       type: 'postgresql',
+       host: 'localhost',
+       database: 'appdb',
+       table: 'logs',
+     },
+     analytics: {
+       datadog: {
+         apiKey: process.env.DD_API_KEY!,
+         service: 'api',
+       },
+     },
   },
 });
 
- // One call → console + file + postgres. All concurrent. All non-blocking.
+ // One call → console + file + postgres + datadog. All concurrent. All non-blocking.
 await logger.info('Order placed', { orderId: 'ord_789' });
 ```
 
 ### Custom transport
 
- Implement `ITransport` to send logs anywhere — Slack, Datadog, S3, an internal queue:
+ Implement `ITransport` to send logs anywhere — Slack, PagerDuty, S3, an internal queue:
 
 ```typescript
 import type { ITransport, TransportLogEntry } from 'logixia';
@@ -418,57 +630,182 @@ class SlackTransport implements ITransport {
       method: 'POST',
       headers: { 'Content-Type': 'application/json' },
       body: JSON.stringify({
-         text: `🚨 *[${entry.level.toUpperCase()}]* ${entry.message}`,
-         attachments: [{ text: JSON.stringify(entry.metadata, null, 2) }],
+         text: `*[${entry.level.toUpperCase()}]* ${entry.message}`,
+         attachments: [{ text: JSON.stringify(entry.data, null, 2) }],
       }),
     });
   }
+
+   async close(): Promise<void> {
+     // optional cleanup
+   }
 }
 
 const logger = createLogger({
   appName: 'api',
   environment: 'production',
-   transports: [new SlackTransport()],
+   transports: {
+     custom: [new SlackTransport()],
+   },
 });
 ```
 
+ The `TransportLogEntry` shape is:
+
+ ```typescript
+ interface TransportLogEntry {
+   timestamp: Date;
+   level: string;
+   message: string;
+   data?: Record<string, unknown>;
+   context?: string;
+   traceId?: string;
+   appName?: string;
+   environment?: string;
+ }
+ ```
+
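Editor's note: the entry shape above is enough to build a minimal in-memory transport, which is also handy in tests. The sketch below restates the interface so it is self-contained; the full `ITransport` contract may include more than the `log` / optional `close` methods the Slack example exercises:

```typescript
// Local restatement of the entry shape shown in the diff above.
interface TransportLogEntry {
  timestamp: Date;
  level: string;
  message: string;
  data?: Record<string, unknown>;
  context?: string;
  traceId?: string;
  appName?: string;
  environment?: string;
}

// Minimal in-memory transport: stores entries instead of shipping them.
class MemoryTransport {
  readonly entries: TransportLogEntry[] = [];

  async log(entry: TransportLogEntry): Promise<void> {
    this.entries.push(entry);
  }

  async close(): Promise<void> {
    this.entries.length = 0; // drop buffered entries on shutdown
  }
}

const mem = new MemoryTransport();
void mem.log({ timestamp: new Date(), level: 'info', message: 'hello' });
mem.entries[0].message; // 'hello'
```

In a test suite, asserting against `mem.entries` verifies what a logger emitted without touching the console, a file, or a network.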
435
668
  ---
436
669
 
437
670
  ## Request tracing
438
671
 
439
672
  logixia uses `AsyncLocalStorage` to propagate trace IDs through your entire async call graph automatically — no passing of context objects, no manual threading.
440
673
 
674
+ ### Core trace utilities
675
+
441
676
  ```typescript
442
- import { runWithTraceId, getCurrentTraceId } from 'logixia';
677
+ import {
678
+ generateTraceId, // create a UUID v4 trace ID
679
+ getCurrentTraceId, // read trace ID from current async context
680
+ runWithTraceId, // run a callback inside a new trace context
681
+ setTraceId, // set trace ID in the CURRENT context (use sparingly)
682
+ extractTraceId, // extract a trace ID from a request-like object
683
+ } from 'logixia';
684
+
685
+ // Generate a new trace ID
686
+ const traceId = generateTraceId();
687
+ // → 'a3f1c2b4-...'
688
+
689
+ // Run code inside a trace context — every logger.* call within the callback
690
+ // (including across await boundaries and Promise.all) will carry this trace ID
691
+ runWithTraceId(traceId, async () => {
692
+ await logger.info('Processing job'); // traceId attached automatically
693
+ await processItems(); // all nested async calls carry it too
694
+ });
695
+
696
+ // Read the trace ID currently in context (returns undefined if none is set)
697
+ const current = getCurrentTraceId();
443
698
 
444
- // Express / Fastify middleware
445
- app.use((req, res, next) => {
446
- const traceId = (req.headers['x-trace-id'] as string) ?? crypto.randomUUID();
447
- runWithTraceId(traceId, next);
699
+ // Extract a trace ID from an incoming request object
700
+ const incomingTraceId = extractTraceId(req, {
701
+ header: ['traceparent', 'x-trace-id', 'x-request-id'],
702
+ query: ['traceId'],
448
703
  });
704
+ ```
705
+
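Editor's note: the mechanism behind `runWithTraceId` / `getCurrentTraceId` is Node's built-in `AsyncLocalStorage`. A self-contained sketch of that general mechanism (not logixia's internals):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';
import { randomUUID } from 'node:crypto';

// One storage instance shared by the whole process.
const traceStorage = new AsyncLocalStorage<string>();

const getTraceId = (): string | undefined => traceStorage.getStore();

function withTraceId<T>(traceId: string, fn: () => T): T {
  // Everything called from fn — including code on the other side of an
  // await — sees this traceId via getTraceId(), with no argument passing.
  return traceStorage.run(traceId, fn);
}

withTraceId(randomUUID(), () => {
  getTraceId(); // the UUID generated above
});
getTraceId(); // undefined outside the context
```

This is why no `req` object has to be threaded through service layers: the async context itself carries the ID.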
706
+ ### Express / Fastify middleware
707
+
708
+ ```typescript
709
+ import { traceMiddleware } from 'logixia';
710
+
711
+ // Zero-config — reads from traceparent / x-trace-id / x-request-id / x-correlation-id
712
+ // and generates a UUID v4 if none is present. Sets X-Trace-Id on the response.
713
+ app.use(traceMiddleware());
714
+
715
+ // With custom config:
716
+ app.use(
717
+ traceMiddleware({
718
+ enabled: true,
719
+ generator: () => `req_${crypto.randomUUID()}`,
720
+ extractor: {
721
+ header: ['x-trace-id', 'traceparent'],
722
+ query: ['traceId'],
723
+ },
724
+ })
725
+ );
449
726
 
450
- // Service layer — no parameters, no context objects
727
+ // Service layer — no parameters needed, trace ID propagates automatically
451
728
  class OrderService {
452
729
  async createOrder(data: OrderData) {
453
730
  await logger.info('Creating order', { items: data.items.length });
454
- // trace ID is automatically included in this log entry
731
+ // ^ trace ID is automatically included
455
732
  await this.processPayment(data);
456
733
  }
457
734
 
458
735
  async processPayment(data: OrderData) {
459
736
  await logger.info('Processing payment', { amount: data.total });
460
- // same trace ID, propagated automatically
737
+ // ^ same trace ID, propagated automatically through await
461
738
  }
462
739
  }
463
740
  ```
464
741
 
465
- Every log entry automatically includes the current trace ID even across `await` boundaries, `Promise.all`, and background jobs that were started in the request context.
742
+ The default headers checked for an incoming trace ID (in priority order) are: `traceparent`, `x-trace-id`, `x-request-id`, `x-correlation-id`, `trace-id`.
+
+ ### NestJS trace middleware
+
+ The `TraceMiddleware` class integrates directly with NestJS's middleware system. `LogixiaLoggerModule.forRoot()` applies it automatically across all routes — no manual wiring needed:
+
+ ```typescript
+ // Applied automatically by LogixiaLoggerModule.forRoot().
+ // For manual use in a custom module:
+
+ import { MiddlewareConsumer, Module, NestModule } from '@nestjs/common';
+ import { TraceMiddleware } from 'logixia';
+
+ @Module({})
+ export class AppModule implements NestModule {
+   configure(consumer: MiddlewareConsumer) {
+     consumer.apply(TraceMiddleware).forRoutes('*');
+   }
+ }
+ ```
+
+ ### Kafka trace interceptor
+
+ Propagates trace IDs through Kafka message handlers. Reads `traceId` / `trace_id` / `x-trace-id` from the message body or headers and runs the handler inside that trace context:
+
+ ```typescript
+ import { KafkaTraceInterceptor } from 'logixia';
+ import { UseInterceptors, Controller } from '@nestjs/common';
+ import { MessagePattern } from '@nestjs/microservices';
+
+ @Controller()
+ @UseInterceptors(KafkaTraceInterceptor)
+ export class OrdersConsumer {
+   @MessagePattern('order.created')
+   async handle(data: OrderCreatedEvent) {
+     // getCurrentTraceId() works here — extracted from the Kafka message
+     await logger.info('Processing order event', { orderId: data.orderId });
+   }
+ }
+ ```
+
+ `KafkaTraceInterceptor` and `WebSocketTraceInterceptor` are automatically provided when you use `LogixiaLoggerModule.forRoot()`. You can also inject them directly.
+
+ ### WebSocket trace interceptor
+
+ Propagates trace IDs through WebSocket event handlers. Reads `traceId` from the message body, event payload, or handshake query:
+
+ ```typescript
+ import { WebSocketTraceInterceptor } from 'logixia';
+ import { UseInterceptors } from '@nestjs/common';
+ import { WebSocketGateway, SubscribeMessage } from '@nestjs/websockets';
+ import { Socket } from 'socket.io';
+
+ @WebSocketGateway()
+ @UseInterceptors(WebSocketTraceInterceptor)
+ export class EventsGateway {
+   @SubscribeMessage('message')
+   async handleMessage(client: Socket, data: MessagePayload) {
+     // trace ID propagated from the WebSocket event context
+     await logger.info('WS message received', { event: 'message' });
+   }
+ }
+ ```
 
 ---
 
 ## NestJS integration
 
- Drop-in module with zero boilerplate. Supports both synchronous and async configuration:
+ Drop-in module with zero boilerplate. Registers `TraceMiddleware` for all routes, provides `LogixiaLoggerService`, `KafkaTraceInterceptor`, and `WebSocketTraceInterceptor` via the global DI container.
 
 ```typescript
 // app.module.ts
@@ -480,69 +817,113 @@ import { LogixiaLoggerModule } from 'logixia';
     LogixiaLoggerModule.forRoot({
       appName: 'nestjs-api',
       environment: process.env.NODE_ENV ?? 'development',
-       console: { colorize: true },
-       file: { filename: 'app.log', dirname: './logs', maxSize: '50MB' },
+       traceId: true,
+       transports: {
+         console: {},
+         file: { filename: 'app.log', dirname: './logs', maxSize: '50MB' },
+       },
     }),
   ],
 })
 export class AppModule {}
 ```
 
+ **Async configuration** (for credentials from a config service):
+
+ ```typescript
+ LogixiaLoggerModule.forRootAsync({
+   imports: [ConfigModule],
+   useFactory: async (config: ConfigService) => ({
+     appName: 'nestjs-api',
+     environment: config.get('NODE_ENV'),
+     traceId: true,
+     transports: {
+       database: {
+         type: 'postgresql',
+         host: config.get('DB_HOST'),
+         database: config.get('DB_NAME'),
+         password: config.get('DB_PASSWORD'),
+         table: 'logs',
+       },
+     },
+   }),
+   inject: [ConfigService],
+ });
+ ```
+
+ **Inject the logger** in any service or controller. Since `LogixiaLoggerModule` is globally scoped, no per-module import is needed:
+
 ```typescript
- // my.service.ts
+ // orders.service.ts
 import { Injectable } from '@nestjs/common';
- import { InjectLogger, LogixiaLoggerService } from 'logixia';
+ import { LogixiaLoggerService } from 'logixia';
 
 @Injectable()
- export class OrderService {
-   constructor(@InjectLogger() private readonly logger: LogixiaLoggerService) {}
+ export class OrdersService {
+   constructor(private readonly logger: LogixiaLoggerService) {}
 
-   async createOrder(data: CreateOrderDto) {
-     await this.logger.info('Creating order', { userId: data.userId });
+   async createOrder(dto: CreateOrderDto) {
+     await this.logger.info('Creating order', { userId: dto.userId });
     // ...
   }
 }
 ```
 
- **Async configuration** (for database credentials from a config service):
+ **Feature-scoped child loggers** — create a logger pre-scoped to a specific context string:
 
 ```typescript
- LogixiaLoggerModule.forRootAsync({
-   useFactory: async (configService: ConfigService) => ({
-     appName: 'nestjs-api',
-     environment: configService.get('NODE_ENV'),
-     database: {
-       type: 'postgresql',
-       host: configService.get('DB_HOST'),
-       password: configService.get('DB_PASSWORD'),
-     },
-   }),
-   inject: [ConfigService],
- });
+ // orders.module.ts
+ import { Module } from '@nestjs/common';
+ import { LogixiaLoggerModule } from 'logixia';
+ import { OrdersService } from './orders.service';
+
+ @Module({
+   imports: [LogixiaLoggerModule.forFeature('OrdersModule')],
+   providers: [OrdersService],
+ })
+ export class OrdersModule {}
 ```
 
+ ```typescript
+ // orders.service.ts — inject the feature-scoped token
+ import { Inject, Injectable } from '@nestjs/common';
+ import { LOGIXIA_LOGGER_PREFIX, LogixiaLoggerService } from 'logixia';
+
+ @Injectable()
+ export class OrdersService {
+   constructor(
+     @Inject(`${LOGIXIA_LOGGER_PREFIX}ORDERSMODULE`)
+     private readonly logger: LogixiaLoggerService
+   ) {}
+ }
+ ```
+
+ `LogixiaLoggerService` exposes the full `LogixiaLogger` API: `info`, `warn`, `error`, `debug`, `trace`, `verbose`, `logLevel`, `time`, `timeEnd`, `timeAsync`, `setLevel`, `getLevel`, `setContext`, `child`, `close`, `getCurrentTraceId`, and more.
+
 
 ---
 
 ## Log redaction
 
- Redact sensitive fields before they reach any transport — passwords, tokens, PII, credit card numbers. Fields are masked in-place before serialization:
+ Redact sensitive fields before they reach **any** transport — passwords, tokens, PII, credit card numbers. Redaction is applied once before dispatch; no transport can accidentally log sensitive data. The original object is never mutated.
+
+ **Path-based redaction** supports dot-notation, `*` (single segment wildcard), and `**` (any-depth wildcard):
 
 ```typescript
 const logger = createLogger({
   appName: 'api',
   environment: 'production',
-   redaction: {
+   redact: {
     paths: [
       'password',
       'token',
       'accessToken',
       'refreshToken',
-       'creditCard',
-       'ssn',
-       '*.secret', // Wildcard: any field named 'secret' at any depth
-       'user.email', // Nested path
+       '*.secret', // any field named 'secret' at one level deep
+       'req.headers.*', // all headers
+       'user.creditCard', // nested path
+       '**.password', // 'password' at any depth
     ],
-     censor: '[REDACTED]', // Default: '[REDACTED]'
+     censor: '[REDACTED]', // default if omitted
   },
 });
 
@@ -550,28 +931,121 @@ await logger.info('User login', {
   username: 'alice',
   password: 'hunter2', // → '[REDACTED]'
   token: 'eyJhbGc...', // → '[REDACTED]'
-   creditCard: '4111...', // → '[REDACTED]'
-   ip: '203.0.113.4', // untouched
+   user: {
+     creditCard: '4111...', // '[REDACTED]'
+     email: 'alice@example.com', // untouched
+   },
+ });
+ ```
+
+ **Regex-based redaction** — mask patterns in string values across all fields:
+
+ ```typescript
+ const logger = createLogger({
+   appName: 'api',
+   environment: 'production',
+   redact: {
+     patterns: [
+       /Bearer\s+\S+/gi, // Authorization header values
+       /sk-[a-z0-9]{32,}/gi, // OpenAI / Stripe secret keys
+       /\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b/g, // credit card numbers
+     ],
+   },
 });
 ```
 
- Redaction is applied once, before the entry is dispatched to any transport — no risk of a transport accidentally logging sensitive data.
+ Both `paths` and `patterns` can be combined in the same config.
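
The wildcard rules can be illustrated with a small matcher. `matchesPath` is a hypothetical sketch of the semantics described above, written for this README; it is not the library's implementation:

```typescript
// Illustration only — '*' matches exactly one path segment, '**' matches any depth.
// Hypothetical helper, not part of the logixia API.
function matchesPath(pattern: string, path: string): boolean {
  const ps = pattern.split('.');
  const ts = path.split('.');
  const match = (i: number, j: number): boolean => {
    if (i === ps.length) return j === ts.length;
    if (ps[i] === '**') {
      // '**' consumes zero or more segments
      for (let k = j; k <= ts.length; k++) if (match(i + 1, k)) return true;
      return false;
    }
    if (j === ts.length) return false;
    // '*' matches exactly one segment; otherwise segments must be equal
    return (ps[i] === '*' || ps[i] === ts[j]) && match(i + 1, j + 1);
  };
  return match(0, 0);
}

console.log(matchesPath('*.secret', 'auth.secret'));       // → true  (one level deep)
console.log(matchesPath('*.secret', 'a.b.secret'));        // → false ('*' is a single segment)
console.log(matchesPath('**.password', 'a.b.c.password')); // → true  (any depth)
```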
+
+ ---
+
+ ## Timer API
+
+ Measure the duration of any operation — synchronous or async. The result is logged automatically when the timer ends:
+
+ ```typescript
+ // Manual start/stop
+ logger.time('db-query');
+ const rows = await db.query('SELECT * FROM orders');
+ await logger.timeEnd('db-query');
+ // → logs: Timer 'db-query' finished { duration: '42ms', startTime: '...', endTime: '...' }
+
+ // Wrap an async function — timer starts before and stops after, even if the function throws
+ const result = await logger.timeAsync('process-batch', async () => {
+   return await processBatch(items);
+ });
+ ```
+
+ `timeEnd` returns the duration in milliseconds so you can use it in your own logic:
+
+ ```typescript
+ const ms = await logger.timeEnd('db-query');
+ if (ms && ms > 500) {
+   await logger.warn('Slow query detected', { durationMs: ms });
+ }
+ ```
+
+ ---
+
+ ## Field management
+
+ Control which fields appear in log output at runtime, without changing config:
+
+ ```typescript
+ // Disable fields you don't need in a specific context
+ logger.disableField('traceId');
+ logger.disableField('appName');
+
+ // Re-enable them
+ logger.enableField('traceId');
+
+ // Check whether a field is currently active
+ const isOn = logger.isFieldEnabled('timestamp'); // true
+
+ // Inspect the current state of all fields
+ const state = logger.getFieldState();
+ // → { timestamp: true, level: true, appName: false, traceId: false, ... }
+
+ // Reset all fields back to the config defaults
+ logger.resetFieldState();
+ ```
+
+ Available field names: `timestamp`, `level`, `appName`, `service`, `traceId`, `message`, `payload`, `timeTaken`, `context`, `requestId`, `userId`, `sessionId`, `environment`.
+
+ ---
+
+ ## Transport level control
+
+ By default, every transport receives every log entry that passes the global level filter. You can narrow a specific transport to only receive a subset of levels:
+
+ ```typescript
+ // Only send errors to the database transport — no noise from info/debug
+ logger.setTransportLevels('database-0', ['error', 'warn']);
+
+ // Check what levels a transport is currently configured for
+ const levels = logger.getTransportLevels('database-0'); // ['error', 'warn']
+
+ // List all registered transport IDs
+ const ids = logger.getAvailableTransports(); // ['console', 'file-0', 'database-0']
+
+ // Remove all level overrides — all transports receive everything again
+ logger.clearTransportLevelPreferences();
+ ```
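
The override semantics amount to a per-transport allow-list check. `shouldDeliver` below is a hypothetical sketch of that rule, not logixia's internal code:

```typescript
// Illustration only — a transport with no override receives everything that
// passed the global filter; with an override, only the listed levels.
type LevelOverrides = Record<string, string[] | undefined>;

function shouldDeliver(overrides: LevelOverrides, transportId: string, level: string): boolean {
  const allowed = overrides[transportId];
  return allowed === undefined || allowed.includes(level);
}

const overrides: LevelOverrides = { 'database-0': ['error', 'warn'] };
console.log(shouldDeliver(overrides, 'database-0', 'info'));  // → false (filtered out)
console.log(shouldDeliver(overrides, 'database-0', 'error')); // → true
console.log(shouldDeliver(overrides, 'console', 'info'));     // → true  (no override set)
```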
 
 ---
 
 ## Log search
 
- Query your in-memory log history without shipping to Elasticsearch, Datadog, or any external service. Great for development environments and lightweight production setups:
+ Query your in-memory log history without shipping to Elasticsearch, Datadog, or any external service. Useful in development and lightweight production setups:
 
 ```typescript
 import { SearchManager } from 'logixia';
 
 const search = new SearchManager({ maxEntries: 10_000 });
 
- // Index a batch of entries (e.g. from a file or database)
+ // Index a batch of entries (from a file, database query, or any source)
 await search.index(logEntries);
 
- // Search by text, level, and time range
+ // Search by text query, level, and time range
 const results = await search.search({
   query: 'payment failed',
   level: 'error',
@@ -579,32 +1053,32 @@ const results = await search.search({
   to: new Date(),
   limit: 50,
 });
-
- // results → sorted by relevance, includes matched entries with full metadata
+ // → sorted by relevance, full metadata included
 ```
 
 ---
 
 ## OpenTelemetry
 
- W3C `traceparent` and `tracestate` headers are extracted from incoming requests and attached to every log entry automatically — enabling correlation between distributed traces and log events in tools like Jaeger, Zipkin, Honeycomb, and Datadog:
+ W3C `traceparent` and `tracestate` headers are extracted from incoming requests and attached to every log entry automatically — enabling correlation between distributed traces and log events in Jaeger, Zipkin, Honeycomb, Datadog, and similar tools:
 
 ```typescript
- // With tracing enabled (zero extra packages required)
 const logger = createLogger({
   appName: 'checkout-service',
   environment: 'production',
-   otel: {
+   traceId: {
     enabled: true,
-     serviceName: 'checkout-service',
-     propagate: ['traceparent', 'tracestate', 'baggage'],
+     extractor: {
+       header: ['traceparent', 'tracestate', 'x-trace-id'],
+     },
   },
 });
 
- // In an Express handler receiving a traced request:
+ // The traceparent header from the incoming request is stored as the trace ID
+ // and included in every log entry automatically.
 app.post('/checkout', async (req, res) => {
   await logger.info('Checkout initiated', { cartId: req.body.cartId });
-   // ^ log entry carries the W3C traceparent from the incoming request
+   // log carries the W3C traceparent from the request
 });
 ```
 
@@ -612,103 +1086,360 @@ app.post('/checkout', async (req, res) => {
 
 ## Graceful shutdown
 
- Ensures all buffered log entries are flushed to every transport before the process exits. Critical for database and analytics transports that batch writes:
+ Ensures all buffered log entries are flushed to every transport before the process exits. Critical for database and analytics transports that batch writes.
+
+ The simplest approach is to set `gracefulShutdown: true` in config — logixia registers SIGTERM and SIGINT handlers automatically:
+
+ ```typescript
+ const logger = createLogger({
+   appName: 'api',
+   environment: 'production',
+   gracefulShutdown: true,
+   transports: { database: { type: 'postgresql' /* ... */ } },
+ });
+ // SIGTERM / SIGINT will flush all transports before exit. No extra code needed.
+ ```
+
+ For more control, pass a config object:
+
+ ```typescript
+ const logger = createLogger({
+   appName: 'api',
+   environment: 'production',
+   gracefulShutdown: {
+     enabled: true,
+     timeout: 10_000, // wait up to 10 s; force-exits after
+     signals: ['SIGTERM', 'SIGINT', 'SIGHUP'],
+   },
+   transports: {
+     /* ... */
+   },
+ });
+ ```
+
+ You can also call `flushOnExit` directly with lifecycle hooks:
 
 ```typescript
 import { flushOnExit } from 'logixia';
 
- // Register once at startup — handles SIGTERM, SIGINT, and uncaught exceptions
- flushOnExit(logger);
+ flushOnExit({
+   timeout: 5000,
+   beforeFlush: async () => {
+     // stop accepting new requests
+   },
+   afterFlush: async () => {
+     // any cleanup after all logs are written
+   },
+ });
 ```
 
- Alternatively, flush manually:
+ Or flush and close manually — useful in Kubernetes SIGTERM handlers:
 
 ```typescript
- // In a Kubernetes SIGTERM handler:
 process.on('SIGTERM', async () => {
-   await logger.flush(); // Wait for all in-flight writes to complete
+   await logger.flush(); // wait for all in-flight writes
+   await logger.close(); // close connections, deregister shutdown handlers
   process.exit(0);
 });
 ```
 
+ For health monitoring:
+
+ ```typescript
+ const { healthy, details } = await logger.healthCheck();
+ // → { healthy: true, details: { 'database-0': { ready: true, metrics: { logsWritten: 1042, ... } } } }
+ ```
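
The shutdown `timeout` semantics can be sketched with `Promise.race`: the flush either completes within the deadline or is abandoned. `flushWithTimeout` is a hypothetical illustration written for this README, not the library's shutdown code:

```typescript
// Illustration only — race a flush against a deadline, as a graceful-shutdown
// handler with a timeout would. Hypothetical helper, not a logixia export.
async function flushWithTimeout(
  flush: () => Promise<void>,
  timeoutMs: number
): Promise<'flushed' | 'timed-out'> {
  const timer = new Promise<'timed-out'>((resolve) =>
    setTimeout(() => resolve('timed-out'), timeoutMs)
  );
  const done = flush().then(() => 'flushed' as const);
  return Promise.race([done, timer]);
}

// A flush that finishes within the deadline wins the race
const fast = () => new Promise<void>((r) => setTimeout(r, 10));
flushWithTimeout(fast, 1000).then((outcome) => console.log(outcome)); // → 'flushed'
```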
+
 
 ---
 
- ## Configuration reference
+ ## Logger instance API
+
+ Complete reference for every method available on a logger instance returned by `createLogger` or `LogixiaLoggerService`:
 
 ```typescript
- interface LoggerConfig {
-   // Required
-   appName: string;
-   environment: string;
+ // Log methods
+ await logger.error(message: string | Error, data?: Record<string, unknown>): Promise<void>
+ await logger.warn(message: string, data?: Record<string, unknown>): Promise<void>
+ await logger.info(message: string, data?: Record<string, unknown>): Promise<void>
+ await logger.debug(message: string, data?: Record<string, unknown>): Promise<void>
+ await logger.trace(message: string, data?: Record<string, unknown>): Promise<void>
+ await logger.verbose(message: string, data?: Record<string, unknown>): Promise<void>
+ await logger.logLevel(level: string, message: string, data?): Promise<void> // dynamic dispatch
+
+ // Timer API
+ logger.time(label: string): void
+ await logger.timeEnd(label: string): Promise<number | undefined> // returns ms
+ await logger.timeAsync<T>(label: string, fn: () => Promise<T>): Promise<T>
+
+ // Level management
+ logger.setLevel(level: string): void
+ logger.getLevel(): string
+
+ // Context management
+ logger.setContext(context: string): void
+ logger.getContext(): string | undefined
+ logger.child(context: string, data?: Record<string, unknown>): ILogger
+
+ // Field management
+ logger.enableField(fieldName: string): void
+ logger.disableField(fieldName: string): void
+ logger.isFieldEnabled(fieldName: string): boolean
+ logger.getFieldState(): Record<string, boolean>
+ logger.resetFieldState(): void
+
+ // Transport management
+ logger.getAvailableTransports(): string[]
+ logger.setTransportLevels(transportId: string, levels: string[]): void
+ logger.getTransportLevels(transportId: string): string[] | undefined
+ logger.clearTransportLevelPreferences(): void
+
+ // Lifecycle
+ await logger.flush(): Promise<void>
+ await logger.close(): Promise<void>
+ await logger.healthCheck(): Promise<{ healthy: boolean; details: Record<string, unknown> }>
+ ```
 
- // Optional general
- silent?: boolean; // Suppress all output (useful in tests)
+ **Utility exports** available at the top level:
 
- levelOptions?: {
-   level?: 'trace' | 'debug' | 'info' | 'warn' | 'error' | 'fatal';
-   customLevels?: Record<string, { priority: number; color: string }>;
-   namespaces?: Record<string, string>; // Per-namespace level overrides
- };
+ ```typescript
+ import {
+   generateTraceId, // () => string — UUID v4
+   getCurrentTraceId, // () => string | undefined
+   runWithTraceId, // (id, fn, data?) => T
+   setTraceId, // (id, data?) => void
+   extractTraceId, // (req, config) => string | undefined
+   isError, // (value) => value is Error
+   normalizeError, // (value) => Error
+   serializeError, // (error, options?) => Record<string, unknown>
+   applyRedaction, // (payload, config) => payload
+   flushOnExit, // (options?) => void
+   registerForShutdown, // (logger) => void
+   deregisterFromShutdown, // (logger) => void
+   resetShutdownHandlers, // () => void — useful in tests
+ } from 'logixia';
+ ```
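
What error normalization means here can be illustrated with standalone equivalents — `isErrorLike` and `toError` below are sketches written for this README and may differ from what logixia's `isError` / `normalizeError` actually do:

```typescript
// Illustration only — turning arbitrary thrown values into real Error objects,
// the kind of normalization an async logger needs before serializing.
// Hypothetical helpers, not logixia exports.
function isErrorLike(value: unknown): value is Error {
  return value instanceof Error;
}

function toError(value: unknown): Error {
  if (isErrorLike(value)) return value;
  // Strings, numbers, rejected-promise payloads, etc. become real Error objects
  return new Error(typeof value === 'string' ? value : JSON.stringify(value));
}

console.log(toError('boom').message);       // → 'boom'
console.log(toError({ code: 42 }).message); // → '{"code":42}'
```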
 
- redaction?: {
-   paths: string[]; // Field paths or wildcards to redact
-   censor?: string; // Replacement string (default: '[REDACTED]')
- };
+ ---
 
- gracefulShutdown?: {
-   enabled?: boolean;
-   timeout?: number; // Max ms to wait for transports to flush
- };
+ ## CLI tool
 
- otel?: {
-   enabled?: boolean;
-   serviceName?: string;
-   propagate?: ('traceparent' | 'tracestate' | 'baggage')[];
- };
+ logixia ships a CLI for working with log files directly. After installing, the `logixia` command is available via `npx` or globally:
 
- // Transports (all optional, can be combined freely)
- console?: {
-   colorize?: boolean;
-   timestamp?: boolean;
-   format?: 'text' | 'json';
- };
+ ```bash
+ npx logixia --help
+ ```
+
+ **`tail`** — stream a log file in real time, with optional filtering and level highlighting:
+
+ ```bash
+ # Show last 10 lines
+ npx logixia tail ./logs/app.log
+
+ # Follow and filter by level
+ npx logixia tail ./logs/app.log --follow --filter level:error
+
+ # Filter by a specific field value
+ npx logixia tail ./logs/app.log --follow --filter user_id:usr_123
+
+ # Color output by log level
+ npx logixia tail ./logs/app.log --highlight level
+ ```
+
+ **`search`** — query a log file with field-specific or full-text search:
+
+ ```bash
+ # Full-text search
+ npx logixia search ./logs/app.log --query "payment failed"
+
+ # Field-specific search
+ npx logixia search ./logs/app.log --query "level:error"
+ npx logixia search ./logs/app.log --query "user_id:usr_123"
+
+ # Output as JSON or table
+ npx logixia search ./logs/app.log --query "timeout" --format json
+ npx logixia search ./logs/app.log --query "timeout" --format table
+ ```
+
+ **`stats`** — summarize a log file with counts by level and time distribution:
+
+ ```bash
+ npx logixia stats ./logs/app.log
+ ```
 
- file?: {
-   filename: string;
-   dirname: string;
-   maxSize?: string; // e.g. '50MB', '1GB'
-   maxFiles?: number;
-   zippedArchive?: boolean;
-   format?: 'text' | 'json';
+ **`analyze`** — run pattern recognition and anomaly detection across a log file:
+
+ ```bash
+ npx logixia analyze ./logs/app.log
+ ```
+
+ **`export`** — convert a log file between formats (JSON, CSV, text):
+
+ ```bash
+ npx logixia export ./logs/app.log --format csv --output ./logs/app.csv
+ ```
+
+ ---
+
+ ## Configuration reference
+
+ ```typescript
+ interface LoggerConfig {
+   // Required
+   appName: string;
+   environment: 'development' | 'production';
+
+   // Output format (applies to console and file text output)
+   format?: {
+     timestamp?: boolean; // include ISO timestamp. Default: true
+     colorize?: boolean; // ANSI color output. Default: true
+     json?: boolean; // JSON lines output. Default: false
   };
 
- database?: {
-   type: 'postgresql' | 'mysql' | 'mongodb' | 'sqlite';
-   // PostgreSQL / MySQL
-   host?: string;
-   port?: number;
-   database?: string;
-   table?: string;
-   username?: string;
-   password?: string;
-   // MongoDB
-   connectionString?: string;
-   collection?: string;
-   // SQLite
-   filename?: string;
-   // Batching
-   batchSize?: number;
-   flushInterval?: number; // ms
+   // Trace ID — true enables UUID v4 auto-generation; pass an object for custom config
+   traceId?:
+     | boolean
+     | {
+         enabled: boolean;
+         generator?: () => string; // custom ID generator
+         contextKey?: string;
+         extractor?: {
+           header?: string | string[]; // headers to check
+           query?: string | string[]; // query params to check
+           body?: string | string[]; // body fields to check
+           params?: string | string[]; // route params to check
+         };
+       };
+
+   // Suppress all output. Useful in test environments
+   silent?: boolean;
+
+   // Level configuration
+   levelOptions?: {
+     level?: 'error' | 'warn' | 'info' | 'debug' | 'trace' | 'verbose' | string;
+     levels?: Record<string, number>; // custom level priority map
+     colors?: Record<string, LogColor>; // color per level
   };
 
- analytics?: {
-   endpoint: string;
-   apiKey?: string;
-   batchSize?: number;
-   flushInterval?: number; // ms
+   // Visible fields in text output
+   fields?: Partial<
+     Record<
+       | 'timestamp'
+       | 'level'
+       | 'appName'
+       | 'service'
+       | 'traceId'
+       | 'message'
+       | 'payload'
+       | 'timeTaken'
+       | 'context'
+       | 'requestId'
+       | 'userId'
+       | 'sessionId'
+       | 'environment',
+       string | boolean
+     >
+   >;
+
+   // Field redaction — applied before any transport receives the entry
+   redact?: {
+     paths?: string[]; // dot-notation paths; supports * and ** wildcards
+     patterns?: RegExp[]; // regex patterns applied to string values
+     censor?: string; // replacement string. Default: '[REDACTED]'
   };
 
- transports?: ITransport[]; // Additional custom transports
+   // Per-namespace log level overrides; keys are patterns, values are levels
+   namespaceLevels?: Record<string, string>;
+
+   // Graceful shutdown — true for defaults, or a config object for full control
+   gracefulShutdown?:
+     | boolean
+     | {
+         enabled: boolean;
+         timeout?: number; // ms to wait before force-exit. Default: 5000
+         signals?: NodeJS.Signals[]; // Default: ['SIGTERM', 'SIGINT']
+       };
+
+   // Transports — all optional, all concurrent
+   transports?: {
+     console?: {
+       level?: string;
+       colorize?: boolean;
+       timestamp?: boolean;
+       format?: 'json' | 'text';
+     };
+
+     file?:
+       | {
+           filename: string;
+           dirname?: string;
+           maxSize?: string | number; // e.g. '50MB', '1GB', or bytes
+           maxFiles?: number;
+           datePattern?: string; // e.g. 'YYYY-MM-DD'
+           zippedArchive?: boolean;
+           format?: 'json' | 'text' | 'csv';
+           level?: string;
+           batchSize?: number;
+           flushInterval?: number; // ms
+           rotation?: {
+             interval?: '1h' | '6h' | '12h' | '1d' | '1w' | '1m' | '1y';
+             maxSize?: string | number;
+             maxFiles?: number;
+             compress?: boolean;
+           };
+         }
+       | Array<FileTransportConfig>; // array for multiple file targets
+
+     database?:
+       | {
+           type: 'postgresql' | 'mysql' | 'mongodb' | 'sqlite';
+           host?: string;
+           port?: number;
+           database: string;
+           table?: string; // SQL databases
+           collection?: string; // MongoDB
+           connectionString?: string; // MongoDB connection string
+           username?: string;
+           password?: string;
+           ssl?: boolean;
+           batchSize?: number;
+           flushInterval?: number; // ms
+         }
+       | Array<DatabaseTransportConfig>;
+
+     analytics?: {
+       datadog?: {
+         apiKey: string;
+         site?: 'datadoghq.com' | 'datadoghq.eu' | 'us3.datadoghq.com' | 'us5.datadoghq.com';
+         service?: string;
+         version?: string;
+         env?: string;
+         enableMetrics?: boolean;
+         enableLogs?: boolean;
+         enableTraces?: boolean;
+       };
+       mixpanel?: {
+         token: string;
+         distinct_id?: string;
+         enableSuperProperties?: boolean;
+         superProperties?: Record<string, unknown>;
+       };
+       segment?: {
+         writeKey: string;
+         dataPlaneUrl?: string;
+         enableBatching?: boolean;
+         flushAt?: number;
+         flushInterval?: number;
+       };
+       googleAnalytics?: {
+         measurementId: string;
+         apiSecret: string;
+         clientId?: string;
+         enableEcommerce?: boolean;
+       };
+     };
+
+     custom?: ITransport[]; // any object implementing { name, write, close? }
+   };
 }
 ```