@ljoukov/llm 4.1.1 → 5.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -25,6 +25,8 @@ Designed around a single streaming API that yields:
 npm i @ljoukov/llm
 ```
 
+Requires Node.js 22 or newer.
+
 ## Environment variables
 
 This package optionally loads a `.env.local` file from `process.cwd()` (Node.js) on first use (dotenv-style `KEY=value`
@@ -523,6 +525,57 @@ const { value } = await generateJson({
 });
 ```
 
+## Telemetry
+
+Telemetry is one shared API across the library:
+
+- direct calls: `generateText()`, `streamText()`, `generateJson()`, `streamJson()`, `generateImages()`
+- agent loops: `runAgentLoop()`, `streamAgentLoop()`
+
+Configure it once for the process with `configureTelemetry()` and override or disable it per call with `telemetry`.
+
+```ts
+import { configureTelemetry, generateJson, runAgentLoop } from "@ljoukov/llm";
+import { z } from "zod";
+
+configureTelemetry({
+  includeStreamEvents: false,
+  sink: {
+    emit: (event) => {
+      // event.type:
+      // "llm.call.started" | "llm.call.stream" | "llm.call.completed" |
+      // "agent.run.started" | "agent.run.stream" | "agent.run.completed"
+    },
+    flush: async () => {},
+  },
+});
+
+const { value } = await generateJson({
+  model: "gpt-5.2",
+  input: "Return { ok: true }.",
+  schema: z.object({ ok: z.boolean() }),
+});
+
+await runAgentLoop({
+  model: "gpt-5.2",
+  input: "Inspect the repo and update the file.",
+  filesystemTool: true,
+});
+```
+
+Per-call opt-out:
+
+```ts
+await generateJson({
+  model: "gpt-5.2",
+  input: "Return { ok: true }.",
+  schema: z.object({ ok: z.boolean() }),
+  telemetry: false,
+});
+```
+
+See `docs/telemetry.md` for the event schema and adapter guidance.
+
 ## Tools
 
 There are three tool-enabled call patterns:
@@ -741,38 +794,6 @@ const result = await runAgentLoop({
 console.log(result.text);
 ```
 
-### Agent Telemetry (Pluggable Backends)
-
-`runAgentLoop()` supports optional telemetry hooks that keep default behavior unchanged.
-You can attach any backend by implementing a sink with `emit(event)` and optional `flush()`.
-
-```ts
-import { runAgentLoop } from "@ljoukov/llm";
-
-const result = await runAgentLoop({
-  model: "chatgpt-gpt-5.3-codex",
-  input: "Summarize the report and update output JSON files.",
-  filesystemTool: true,
-  telemetry: {
-    includeLlmStreamEvents: false, // enable only if you need token/delta event fan-out
-    sink: {
-      emit: (event) => {
-        // Forward to your backend (Cloud Logging, OpenTelemetry, Datadog, etc.)
-        // event.type: "agent.run.started" | "agent.run.stream" | "agent.run.completed"
-        // agent.run.completed also includes uploadCount, uploadBytes, and uploadLatencyMs
-        // event carries runId, parentRunId, depth, model, timestamp + payload
-      },
-      flush: async () => {
-        // Optional: flush buffered telemetry on run completion.
-      },
-    },
-  },
-});
-```
-
-Telemetry emits parent/child run correlation (`runId` + `parentRunId`) for subagents.
-See `docs/agent-telemetry.md` for event schema, design rationale, and backend adapter guidance.
-
 ### Agent Logging (Console + Files + Redirects)
 
 `runAgentLoop()` enables logging by default. It writes:
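The net effect of this diff is that telemetry moves from a per-call option on `runAgentLoop()` to a process-wide `configureTelemetry()` default that each call can override or disable with `telemetry`. A minimal sketch of that resolution pattern, with all names local to the example (the library's internals may differ):

```typescript
// Sketch of the config-resolution pattern implied by the diff:
// a process-wide default set once, then per-call override/disable.
// Telemetry, configureTelemetryLocal, and resolveTelemetry are
// illustrative names, not the library's actual internals.
type Telemetry = { sink: { emit(e: { type: string }): void } };

let globalTelemetry: Telemetry | undefined;

// Stand-in for configureTelemetry(): record the process-wide default.
function configureTelemetryLocal(t: Telemetry): void {
  globalTelemetry = t;
}

// Per-call option: undefined -> use the global default,
// false -> telemetry disabled, object -> per-call override.
function resolveTelemetry(perCall?: Telemetry | false): Telemetry | undefined {
  if (perCall === false) return undefined;
  return perCall ?? globalTelemetry;
}

configureTelemetryLocal({ sink: { emit: (e) => console.log(e.type) } });
resolveTelemetry()?.sink.emit({ type: "llm.call.started" }); // prints "llm.call.started"
```

Distinguishing `false` (explicitly disabled) from `undefined` (fall back to the global default) is what makes the `telemetry: false` opt-out in the new README section possible without touching the process-wide configuration.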