vectorjson 0.1.0 → 0.2.1

package/README.md CHANGED
@@ -1,6 +1,9 @@
  # VectorJSON

  [![CI](https://github.com/teamchong/vectorjson/actions/workflows/ci.yml/badge.svg)](https://github.com/teamchong/vectorjson/actions/workflows/ci.yml)
+ [![npm](https://img.shields.io/npm/v/vectorjson)](https://www.npmjs.com/package/vectorjson)
+ [![gzip size](https://img.shields.io/badge/gzip-~47kB-blue)](https://www.npmjs.com/package/vectorjson)
+ [![license](https://img.shields.io/npm/l/vectorjson)](https://github.com/teamchong/vectorjson/blob/main/LICENSE)

  O(n) streaming JSON parser for LLM tool calls, built on WASM SIMD. Agents act faster with field-level streaming, detect wrong outputs early to abort and save tokens, and offload parsing to Workers with transferable ArrayBuffers.

@@ -27,24 +30,26 @@ for await (const chunk of stream) {
  }
  ```

- A 50KB tool call streamed in ~12-char chunks means ~4,000 full re-parses — O(n²). At 100KB, Vercel AI SDK spends 4.1 seconds just parsing. Anthropic SDK spends 9.3 seconds.
+ A 50KB tool call streamed in ~12-char chunks means ~4,000 full re-parses — O(n²). At 100KB, Vercel AI SDK spends 6.1 seconds just parsing. Anthropic SDK spends 13.4 seconds.

  ## Quick Start

- Drop-in replacement for your SDK's partial JSON parser:
+ Zero-config: just import and use. No `init()`, no WASM setup:

  ```js
- import { init } from "vectorjson";
- const vj = await init();
+ import { parse, createParser, createEventParser } from "vectorjson";

- // Before (JS parser — what your SDK does today):
- for await (const chunk of stream) {
-   buffer += chunk;
-   result = parsePartialJson(buffer); // re-parses entire buffer every time
- }
+ // One-shot parse
+ const result = parse('{"tool":"file_edit","path":"app.ts"}');
+ result.value.tool; // "file_edit" — lazy Proxy over WASM tape
+ ```
+
+ **Streaming** — O(n) incremental parsing: feed chunks, get a live object:
+
+ ```js
+ import { createParser } from "vectorjson";

- // After (VectorJSON — O(n) live document builder):
- const parser = vj.createParser();
+ const parser = createParser();
  for await (const chunk of stream) {
    parser.feed(chunk);
    result = parser.getValue(); // O(1) — returns live object
@@ -57,7 +62,7 @@ parser.destroy();
  **Or skip intermediate access entirely** — if you only need the final value:

  ```js
- const parser = vj.createParser();
+ const parser = createParser();
  for await (const chunk of stream) {
    const s = parser.feed(chunk); // O(1) — appends bytes to WASM buffer
    if (s === "complete") break;
@@ -69,7 +74,9 @@ parser.destroy();
  **Event-driven** — react to fields as they arrive, O(n) total, no re-parsing:

  ```js
- const parser = vj.createEventParser();
+ import { createEventParser } from "vectorjson";
+
+ const parser = createEventParser();

  parser.on('tool', (e) => showToolUI(e.value)); // fires immediately
  parser.onDelta('code', (e) => editor.append(e.value)); // streams char-by-char
@@ -85,7 +92,7 @@ parser.destroy();

  ```js
  const abort = new AbortController();
- const parser = vj.createEventParser();
+ const parser = createEventParser();

  parser.on('name', (e) => {
    if (e.value !== 'str_replace_editor') {
@@ -111,7 +118,7 @@ const buf = parser.getRawBuffer();
  postMessage(buf, [buf]); // O(1) transfer — moves pointer, no copy

  // On Main thread:
- const result = vj.parse(new Uint8Array(buf)); // lazy Proxy
+ const result = parse(new Uint8Array(buf)); // lazy Proxy
  result.value.name; // only materializes what you touch
  ```

@@ -125,18 +132,22 @@ Apple-to-apple: both sides produce a materialized partial object on every chunk.

  | Payload | Product | Original | + VectorJSON | Speedup |
  |---------|---------|----------|-------------|---------|
- | 1 KB | Vercel AI SDK | 4.2 ms | 162 µs | **26×** |
- | | Anthropic SDK | 1.6 ms | 162 µs | **10×** |
- | | TanStack AI | 1.8 ms | 162 µs | **11×** |
- | | OpenClaw | 2.0 ms | 162 µs | **12×** |
- | 10 KB | Vercel AI SDK | 49 ms | 470 µs | **104×** |
- | | Anthropic SDK | 93 ms | 470 µs | **198×** |
- | | TanStack AI | 96 ms | 470 µs | **204×** |
- | | OpenClaw | 113 ms | 470 µs | **240×** |
- | 100 KB | Vercel AI SDK | 4.1 s | 4.6 ms | **892×** |
- | | Anthropic SDK | 9.3 s | 4.6 ms | **2016×** |
- | | TanStack AI | 7.5 s | 4.6 ms | **1644×** |
- | | OpenClaw | 8.1 s | 4.6 ms | **1757×** |
+ | 1 KB | Vercel AI SDK | 3.9 ms | 283 µs | **14×** |
+ | | Anthropic SDK | 3.3 ms | 283 µs | **12×** |
+ | | TanStack AI | 3.2 ms | 283 µs | **11×** |
+ | | OpenClaw | 3.8 ms | 283 µs | **14×** |
+ | 5 KB | Vercel AI SDK | 23.1 ms | 739 µs | **31×** |
+ | | Anthropic SDK | 34.7 ms | 739 µs | **47×** |
+ | | TanStack AI | | 739 µs | |
+ | | OpenClaw | | 739 µs | |
+ | 50 KB | Vercel AI SDK | 1.80 s | 2.7 ms | **664×** |
+ | | Anthropic SDK | 3.39 s | 2.7 ms | **1255×** |
+ | | TanStack AI | 2.34 s | 2.7 ms | **864×** |
+ | | OpenClaw | 2.73 s | 2.7 ms | **1011×** |
+ | 100 KB | Vercel AI SDK | 6.1 s | 6.6 ms | **920×** |
+ | | Anthropic SDK | 13.4 s | 6.6 ms | **2028×** |
+ | | TanStack AI | 7.0 s | 6.6 ms | **1065×** |
+ | | OpenClaw | 8.0 s | 6.6 ms | **1222×** |

  Stock parsers re-parse the full buffer on every chunk — O(n²). VectorJSON maintains a **live JS object** that grows incrementally on each `feed()`, so `getValue()` is O(1). Total work: O(n).

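The quadratic claim above can be sanity-checked with back-of-the-envelope arithmetic. The sketch below is illustrative only (it is not VectorJSON code, and the exact chunk size is an assumption from the "~12-char chunks" figure in the README):

```javascript
// Back-of-the-envelope check of the O(n^2) claim: a parser that
// re-parses the whole buffer on every chunk touches far more bytes
// than one that processes each byte exactly once.
const payloadBytes = 50 * 1024; // 50 KB tool call
const chunkBytes = 12;          // ~12-char streaming chunks (assumed)

const chunks = Math.ceil(payloadBytes / chunkBytes);

// Re-parsing: after chunk k, the buffer holds ~k * chunkBytes bytes.
let reparsedBytes = 0;
for (let k = 1; k <= chunks; k++) reparsedBytes += k * chunkBytes;

const incrementalBytes = payloadBytes; // each byte parsed exactly once

console.log(chunks);                                   // ~4,267 full re-parses
console.log((reparsedBytes / 1e6).toFixed(0) + " MB"); // total bytes parsed on the O(n^2) path
console.log(Math.round(reparsedBytes / incrementalBytes) + "x more work than O(n)");
```

Roughly 109 MB of parsing work for a 50 KB payload — about three orders of magnitude more bytes touched than the single pass an incremental parser needs.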
@@ -148,10 +159,10 @@ The real cost isn't just CPU time — it's blocking the agent's main thread. Sim

  | Payload | Stock total | VectorJSON total | Main thread freed |
  |---------|-----------|-----------------|-------------------|
- | 10 KB | 24 ms | 1 ms | 23 ms sooner |
- | 100 KB | 1.5 s | 3 ms | **1.5 seconds sooner** |
- | 500 KB | 39 s | 29 ms | **39 seconds sooner** |
- | 1 MB | 2 min 41 s | 44 ms | **161 seconds sooner** |
+ | 1 KB | 4.0 ms | 1.7 ms | 2.3 ms sooner |
+ | 10 KB | 36.7 ms | 1.9 ms | 35 ms sooner |
+ | 50 KB | 665 ms | 3.8 ms | **661 ms sooner** |
+ | 100 KB | 2.42 s | 10.2 ms | **2.4 seconds sooner** |

  Both approaches detect the tool name (`.name`) at the same chunk — the LLM hasn't streamed more yet. But while VectorJSON finishes processing all chunks in milliseconds, the stock parser blocks the main thread for the entire duration. The agent can't render UI, stream code to the editor, or start running tools until parsing is done.

@@ -188,19 +199,24 @@ The event parser (`createEventParser`) adds path-matching on top: it diffs the t

  ```bash
  npm install vectorjson
+ # or
+ pnpm add vectorjson
+ # or
+ bun add vectorjson
+ # or
+ yarn add vectorjson
  ```

  ## Usage

- ### Drop-in: Replace your SDK's partial JSON parser
+ ### Streaming parse

- Every AI SDK has a `parsePartialJson` function that re-parses the full buffer on every chunk. Replace it with VectorJSON's streaming parser:
+ Feed chunks as they arrive from any source: raw fetch, WebSocket, SSE, or your own transport:

  ```js
- import { init } from "vectorjson";
- const vj = await init();
+ import { createParser } from "vectorjson";

- const parser = vj.createParser();
+ const parser = createParser();
  for await (const chunk of stream) {
    const s = parser.feed(chunk);
    if (s === "complete" || s === "end_early") break;
@@ -209,7 +225,9 @@ const result = parser.getValue(); // lazy Proxy — materializes on access
  parser.destroy();
  ```

- Or use the Vercel AI SDK-compatible signature as a 1-line swap:
+ ### Vercel AI SDK-compatible signature
+
+ If you have code that calls `parsePartialJson`, VectorJSON provides a compatible function:

  ```js
  // Before
@@ -217,17 +235,20 @@ import { parsePartialJson } from "ai";
  const { value, state } = parsePartialJson(buffer);

  // After
- import { init } from "vectorjson";
- const vj = await init();
- const { value, state } = vj.parsePartialJson(buffer);
+ import { parsePartialJson } from "vectorjson";
+ const { value, state } = parsePartialJson(buffer);
  ```

+ > **Note:** AI SDKs (Vercel, Anthropic, TanStack) parse JSON internally inside `streamObject()`, `MessageStream`, etc. — you don't get access to the raw chunks. To use VectorJSON today, work with the raw LLM stream directly (raw fetch, WebSocket, SSE).
+
  ### Event-driven: React to fields as they stream in

  When an LLM streams a tool call, you usually care about specific fields at specific times. `createEventParser` lets you subscribe to paths and get notified the moment a value completes or a string grows:

  ```js
- const parser = vj.createEventParser();
+ import { createEventParser } from "vectorjson";
+
+ const parser = createEventParser();

  // Get the tool name the moment it's complete
  parser.on('tool_calls[*].name', (e) => {
@@ -254,7 +275,9 @@ parser.destroy();

  Some LLM APIs stream multiple JSON values separated by newlines. VectorJSON auto-resets between values:

  ```js
- const parser = vj.createEventParser({
+ import { createEventParser } from "vectorjson";
+
+ const parser = createEventParser({
    multiRoot: true,
    onRoot(event) {
      console.log(`Root #${event.index}:`, event.value);
@@ -272,7 +295,9 @@ parser.destroy();

  Some models emit thinking text before JSON, or wrap JSON in code fences. VectorJSON finds the JSON automatically:

  ```js
- const parser = vj.createEventParser();
+ import { createEventParser } from "vectorjson";
+
+ const parser = createEventParser();
  parser.on('answer', (e) => console.log(e.value));
  parser.onText((text) => thinkingPanel.append(text)); // opt-in

@@ -291,9 +316,11 @@ Validate and auto-infer types with Zod, Valibot, ArkType, or any lib with `.safe

  ```ts
  import { z } from 'zod';
+ import { createParser } from "vectorjson";
+
  const User = z.object({ name: z.string(), age: z.number() });

- const parser = vj.createParser(User); // T inferred from schema
+ const parser = createParser(User); // T inferred from schema
  for await (const chunk of stream) {
    parser.feed(chunk);
    const partial = parser.getValue(); // { name: "Ali" } mid-stream — always available
@@ -307,7 +334,9 @@ parser.destroy();

  **Partial JSON** — returns `DeepPartial<T>` because incomplete JSON has missing fields:

  ```ts
- const { value, state } = vj.parsePartialJson('{"name":"Al', User);
+ import { parsePartialJson } from "vectorjson";
+
+ const { value, state } = parsePartialJson('{"name":"Al', User);
  // value: { name: "Al" } — partial object, typed as DeepPartial<{ name: string; age: number }>
  // state: "repaired-parse"
  // TypeScript type: { name?: string; age?: number } | undefined
@@ -326,12 +355,41 @@ parser.on('tool_calls[*]', ToolCall, (event) => {

  Schema-agnostic: any object with `{ safeParse(v) → { success: boolean; data?: T } }` works.

+ ### Deep compare — compare JSON without materializing
+
+ Compare two parsed values directly in WASM memory. Returns a boolean — no JS objects allocated, no Proxy traps fired. Useful for diffing LLM outputs, caching, or deduplication:
+
+ ```js
+ import { parse, deepCompare } from "vectorjson";
+
+ const a = parse('{"name":"Alice","age":30}').value;
+ const b = parse('{"age":30,"name":"Alice"}').value;
+
+ deepCompare(a, b); // true — key order ignored by default
+ deepCompare(a, b, { ignoreKeyOrder: false }); // false — keys must be in same order
+ ```
+
+ By default, `deepCompare` ignores key order — `{"a":1,"b":2}` equals `{"b":2,"a":1}`, just like `fast-deep-equal`. Set `{ ignoreKeyOrder: false }` for strict key order comparison, which is ~2× faster when you know both values come from the same source.
+
+ ```
+ bun --expose-gc bench/deep-compare.mjs
+
+ Equal objects (560 KB):
+   JS deepEqual (recursive)       848 ops/s    heap Δ 2.4 MB
+   VJ ignore key order (default)  1.63K ops/s  heap Δ 0.1 MB  2× faster
+   VJ strict key order            3.41K ops/s  heap Δ 0.1 MB  4× faster
+ ```
+
+ Works with any combination: two VJ proxies (fast WASM path), plain JS objects, or mixed (falls back to `JSON.stringify` comparison).
+
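For reference, the "JS deepEqual (recursive)" baseline in the benchmark above is the classic order-insensitive recursive comparison. Here is a minimal sketch of that pattern — for intuition only, not VectorJSON's or `fast-deep-equal`'s actual code:

```javascript
// Plain-JS recursive deep equality, ignoring object key order —
// the kind of baseline the WASM-side comparison is measured against.
// Sketch for illustration; not the library's implementation.
function deepEqual(a, b) {
  if (a === b) return true;
  if (typeof a !== "object" || typeof b !== "object" || a === null || b === null) return false;
  if (Array.isArray(a) !== Array.isArray(b)) return false;
  if (Array.isArray(a)) {
    if (a.length !== b.length) return false;
    return a.every((v, i) => deepEqual(v, b[i])); // arrays: order matters
  }
  const ka = Object.keys(a), kb = Object.keys(b);
  if (ka.length !== kb.length) return false;
  return ka.every((k) => k in b && deepEqual(a[k], b[k])); // objects: order ignored
}

console.log(deepEqual({ a: 1, b: [2, 3] }, { b: [2, 3], a: 1 })); // true
console.log(deepEqual({ a: 1 }, { a: 1, c: 0 }));                 // false
```

Every recursive call here allocates stack frames and walks JS objects — which is exactly the heap and CPU cost the in-WASM comparison avoids.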
  ### Lazy access — only materialize what you touch

- `vj.parse()` returns a lazy Proxy backed by the WASM tape. Fields are only materialized into JS objects when you access them. On a 2 MB payload, reading one field is 2× faster than `JSON.parse` because the other 99% is never allocated:
+ `parse()` returns a lazy Proxy backed by the WASM tape. Fields are only materialized into JS objects when you access them. On a 2 MB payload, reading one field is 2× faster than `JSON.parse` because the other 99% is never allocated:

  ```js
- const result = vj.parse(huge2MBToolCall);
+ import { parse } from "vectorjson";
+
+ const result = parse(huge2MBToolCall);
  result.value.tool; // "file_edit" — reads from WASM tape, 2.3ms
  result.value.path; // "app.ts"
  // result.value.code (the 50KB field) is never materialized in JS memory
@@ -351,18 +409,28 @@ bun --expose-gc bench/partial-access.mjs

  For non-streaming use cases:

  ```js
- const result = vj.parse('{"users": [{"name": "Alice"}]}');
+ import { parse } from "vectorjson";
+
+ const result = parse('{"users": [{"name": "Alice"}]}');
  result.status; // "complete" | "complete_early" | "incomplete" | "invalid"
  result.value.users; // lazy Proxy — materializes on access
  ```

  ## API Reference

+ ### Direct exports (recommended)
+
+ All functions are available as direct imports — no `init()` needed:
+
+ ```js
+ import { parse, parsePartialJson, deepCompare, createParser, createEventParser, materialize } from "vectorjson";
+ ```
+
  ### `init(options?): Promise<VectorJSON>`

- Loads WASM once, returns cached singleton. `{ engineWasm?: string | URL | BufferSource }` for custom WASM location.
+ Returns the cached singleton. Useful for passing custom WASM via `{ engineWasm?: string | URL | BufferSource }`. Called automatically on import.

- ### `vj.parse(input: string | Uint8Array): ParseResult`
+ ### `parse(input: string | Uint8Array): ParseResult`

  ```ts
  interface ParseResult {
@@ -380,7 +448,7 @@ interface ParseResult {
  - **`incomplete`** — truncated JSON; value is autocompleted, `isComplete()` tells you what's real
  - **`invalid`** — broken JSON

- ### `vj.createParser(schema?): StreamingParser<T>`
+ ### `createParser(schema?): StreamingParser<T>`

  Each `feed()` processes only new bytes — O(n) total. Pass an optional schema to auto-validate and infer the return type.

@@ -400,16 +468,18 @@ While incomplete, `getValue()` returns the **live document** — a mutable JS ob

  ```ts
  import { z } from 'zod';
+ import { createParser } from "vectorjson";
+
  const User = z.object({ name: z.string(), age: z.number() });

- const parser = vj.createParser(User);
+ const parser = createParser(User);
  parser.feed('{"name":"Alice","age":30}');
  const val = parser.getValue(); // { name: string; age: number } | undefined ✅
  ```

  Works with Zod, Valibot, ArkType — any library with `{ safeParse(v) → { success, data? } }`.

- ### `vj.parsePartialJson(input, schema?): PartialJsonResult<DeepPartial<T>>`
+ ### `parsePartialJson(input, schema?): PartialJsonResult<DeepPartial<T>>`

  Compatible with Vercel AI SDK's `parsePartialJson` signature. Returns a plain JS object (not a Proxy). Pass an optional schema for type-safe validation.

@@ -427,7 +497,7 @@ type DeepPartial<T> = T extends object
    : T;
  ```

- ### `vj.createEventParser(options?): EventParser`
+ ### `createEventParser(options?): EventParser`

  Event-driven streaming parser. Events fire synchronously during `feed()`.

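The path patterns shown throughout (e.g. `tool_calls[*].name`) use `[*]` as an any-index wildcard. The semantics can be pictured with a tiny matcher — a hypothetical helper for intuition only, not the library's internal algorithm:

```javascript
// Toy path matcher illustrating '[*]' wildcard semantics in patterns
// like 'tool_calls[*].name'. Hypothetical sketch, not VectorJSON code.
function pathMatches(pattern, path) {
  // Normalize bracket indices into dot segments:
  // 'tool_calls[0].name' -> ['tool_calls', '0', 'name']
  const seg = (s) => s.replace(/\[([^\]]*)\]/g, ".$1").split(".");
  const p = seg(pattern), q = seg(path);
  if (p.length !== q.length) return false;
  return p.every((s, i) => s === "*" || s === q[i]); // '*' matches any index
}

console.log(pathMatches("tool_calls[*].name", "tool_calls[0].name")); // true
console.log(pathMatches("tool_calls[*].name", "tool_calls[2].name")); // true
console.log(pathMatches("tool_calls[*].name", "tool_calls[0].args")); // false
```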
@@ -485,7 +555,23 @@ interface RootEvent {
  }
  ```

- ### `vj.materialize(value): unknown`
+ ### `deepCompare(a, b, options?): boolean`
+
+ Compare two values for deep equality without materializing JS objects. When both values are VJ proxies, comparison runs entirely in WASM memory — zero allocations, zero Proxy traps.
+
+ ```ts
+ deepCompare(
+   a: unknown,
+   b: unknown,
+   options?: { ignoreKeyOrder?: boolean } // default: true
+ ): boolean
+ ```
+
+ - **`ignoreKeyOrder: true`** (default) — `{"a":1,"b":2}` equals `{"b":2,"a":1}`. Same semantics as `fast-deep-equal`.
+ - **`ignoreKeyOrder: false`** — keys must appear in the same order. ~2× faster for same-source comparisons.
+ - Falls back to `JSON.stringify` comparison when either value is a plain JS object.
+
+ ### `materialize(value): unknown`

  Convert a lazy Proxy into a plain JS object tree. No-op on plain values.

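Conceptually, materializing means walking the lazy tree once and copying every reachable field into plain JS data. A generic sketch of that idea — not VectorJSON's implementation, which reads from the WASM tape:

```javascript
// Generic sketch of what "materialize" means: recursively copy every
// reachable field of a (possibly lazy/Proxy-backed) value into plain
// JS objects and arrays. Not the library's actual code.
function materializeSketch(v) {
  if (v === null || typeof v !== "object") return v; // primitives pass through
  if (Array.isArray(v)) return v.map(materializeSketch);
  const out = {};
  for (const k of Object.keys(v)) out[k] = materializeSketch(v[k]);
  return out;
}

const plain = materializeSketch({ tool: "file_edit", lines: [1, 2, 3] });
console.log(JSON.stringify(plain)); // {"tool":"file_edit","lines":[1,2,3]}
```

The result is safe to `JSON.stringify`, `structuredClone`, or post to a Worker, since no Proxy traps remain.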
@@ -493,26 +579,41 @@ Convert a lazy Proxy into a plain JS object tree. No-op on plain values.

  | Runtime | Status | Notes |
  |---------|--------|-------|
- | Node.js 20+ | ✅ | WASM loaded from disk automatically |
- | Bun | ✅ | WASM loaded from disk automatically |
- | Browsers | ✅ | Pass `engineWasm` as `ArrayBuffer` or `URL` to `init()` |
- | Deno | ✅ | Pass `engineWasm` as `URL` to `init()` |
- | Cloudflare Workers | ✅ | Import WASM as module, pass as `ArrayBuffer` to `init()` |
+ | Node.js 20+ | ✅ | WASM embedded in bundle — zero config |
+ | Bun | ✅ | WASM embedded in bundle — zero config |
+ | Browsers | ✅ | WASM embedded in bundle — zero config |
+ | Deno | ✅ | WASM embedded in bundle — zero config |
+ | Cloudflare Workers | ✅ | WASM embedded in bundle — zero config |
+
+ WASM is embedded as base64 in the JS bundle and auto-initialized via top-level `await`. No setup required — just `import { parse } from "vectorjson"`.

- For environments without filesystem access, provide the WASM binary explicitly:
+ For advanced use cases, you can still provide a custom WASM binary via `init()`:

  ```js
  import { init } from "vectorjson";
+ const vj = await init({ engineWasm: customWasmBytes });
+ ```
+
+ Bundle size: ~148 KB JS with embedded WASM (~47 KB gzipped). No runtime dependencies.
+
+ ## Runnable Examples
+
+ The `examples/` directory has working demos you can run immediately:
+
+ ```bash
+ # Anthropic tool call — streams fields as they arrive, early abort demo
+ bun examples/anthropic-tool-call.ts --mock
+ bun examples/anthropic-tool-call.ts --mock --wrong-tool # early abort

- // Option 1: URL (browsers, Deno)
- const vj = await init({ engineWasm: new URL('./engine.wasm', import.meta.url) });
+ # OpenAI function call — streams function arguments via EventParser
+ bun examples/openai-function-call.ts --mock

- // Option 2: ArrayBuffer (Workers, custom loaders)
- const wasmBytes = await fetch('/engine.wasm').then(r => r.arrayBuffer());
- const vj = await init({ engineWasm: wasmBytes });
+ # With a real API key:
+ ANTHROPIC_API_KEY=sk-ant-... bun examples/anthropic-tool-call.ts
+ OPENAI_API_KEY=sk-... bun examples/openai-function-call.ts
  ```

- Bundle size: ~92 KB WASM + ~20 KB JS (~37 KB gzipped total). No runtime dependencies.
+ See also `examples/ai-usage.ts` for additional patterns (MCP stdio, Vercel AI SDK `streamObject`, NDJSON embeddings).

  ## Building from Source

@@ -528,7 +629,7 @@ sudo apt-get install -y binaryen

  ```bash
  bun run build # Zig → WASM → wasm-opt → TypeScript
- bun run test # 557 tests including 100MB stress payloads
+ bun run test # 724+ tests including 100MB stress payloads
  bun run test:worker # Worker transferable tests (Playwright + Chromium)
  ```

@@ -538,9 +639,10 @@ To reproduce benchmarks:

  ```bash
  bun --expose-gc bench/parse-stream.mjs # one-shot + streaming parse
  cd bench/ai-parsers && bun install && bun --expose-gc bench.mjs # AI SDK comparison
  bun run bench:worker # Worker transfer vs structured clone benchmark
+ node --expose-gc bench/deep-compare.mjs # deep compare: VJ vs JS deepEqual
  ```

- Benchmark numbers in this README were measured on an Apple M-series Mac. Results vary by machine but relative speedups are consistent.
+ Benchmark numbers in this README were measured on GitHub Actions (Ubuntu, x86_64). Results vary by machine but relative speedups are consistent.

  ## License