vectorjson 0.1.0
- package/LICENSE +190 -0
- package/README.md +547 -0
- package/dist/engine.wasm +0 -0
- package/dist/index.d.ts +227 -0
- package/dist/index.d.ts.map +1 -0
- package/dist/index.js +6 -0
- package/dist/index.js.map +10 -0
- package/package.json +63 -0
package/README.md
ADDED
@@ -0,0 +1,547 @@
# VectorJSON

[![CI](https://github.com/teamchong/vectorjson/actions/workflows/ci.yml/badge.svg)](https://github.com/teamchong/vectorjson/actions/workflows/ci.yml)

O(n) streaming JSON parser for LLM tool calls, built on WASM SIMD. Agents act faster with field-level streaming, detect wrong outputs early to abort and save tokens, and offload parsing to Workers with transferable ArrayBuffers.

## The Problem

When an LLM writes code via a tool call, it streams JSON like this:

```json
{"tool":"file_edit","path":"app.ts","code":"function hello() {\n ...5KB of code...\n}","explanation":"I refactored the..."}
```

Your agent UI needs to:

1. **Show the tool name immediately** — so the user sees "Editing app.ts" before the code arrives
2. **Stream code to the editor character-by-character** — not wait for the full response
3. **Skip the explanation** — the user doesn't need it rendered in real time

Current AI SDKs — Vercel, Anthropic, TanStack, OpenClaw — re-parse the *entire accumulated buffer* on every token:

```js
// What every AI SDK actually does internally
for await (const chunk of stream) {
  buffer += chunk;
  result = parsePartialJson(buffer); // re-parses ENTIRE buffer every chunk
}
```

A 50 KB tool call streamed in ~12-char chunks means ~4,000 full re-parses — O(n²). At 100 KB, the Vercel AI SDK spends 4.1 seconds just parsing; the Anthropic SDK spends 9.3 seconds.
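The arithmetic behind that claim is easy to reproduce with a dependency-free sketch (chunk size and payload size are the ones quoted above; `reparseCost` and `singlePassCost` are our illustrative names):

```javascript
// Compare total characters scanned: full-buffer re-parse per chunk
// (what the stock SDKs do) vs a single pass over each byte.
function reparseCost(payloadLength, chunkSize = 12) {
  let buffered = 0;
  let scanned = 0;
  while (buffered < payloadLength) {
    buffered = Math.min(buffered + chunkSize, payloadLength);
    scanned += buffered; // each chunk triggers a scan of the WHOLE buffer
  }
  return scanned;
}

function singlePassCost(payloadLength) {
  return payloadLength; // each byte scanned exactly once
}

const kb50 = 50 * 1024;
console.log(reparseCost(kb50));    // ~109 million characters scanned
console.log(singlePassCost(kb50)); // 51200
```

For a 50 KB payload in 12-char chunks that is over a 2,000× difference in bytes touched, which is where the O(n²) wall comes from.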

## Quick Start

Drop-in replacement for your SDK's partial JSON parser:

```js
import { init } from "vectorjson";
const vj = await init();

// Before (JS parser — what your SDK does today):
for await (const chunk of stream) {
  buffer += chunk;
  result = parsePartialJson(buffer); // re-parses entire buffer every time
}

// After (VectorJSON — O(n) live document builder):
const parser = vj.createParser();
for await (const chunk of stream) {
  parser.feed(chunk);
  result = parser.getValue(); // O(1) — returns live object
}
parser.destroy();
```

`getValue()` returns a **live JS object** that grows incrementally on each `feed()`. No re-parsing — each byte is scanned exactly once.

**Or skip intermediate access entirely** — if you only need the final value:

```js
const parser = vj.createParser();
for await (const chunk of stream) {
  const s = parser.feed(chunk); // O(1) — appends bytes to WASM buffer
  if (s === "complete") break;
}
const result = parser.getValue(); // one SIMD parse at the end
parser.destroy();
```

**Event-driven** — react to fields as they arrive, O(n) total, no re-parsing:

```js
const parser = vj.createEventParser();

parser.on('tool', (e) => showToolUI(e.value));         // fires immediately
parser.onDelta('code', (e) => editor.append(e.value)); // streams char-by-char
parser.skip('explanation');                            // never materialized

for await (const chunk of llmStream) {
  parser.feed(chunk); // O(n) — only new bytes scanned
}
parser.destroy();
```

**Early abort** — detect wrong output at chunk 7, cancel the remaining 8,000+ chunks:

```js
const abort = new AbortController();
const parser = vj.createEventParser();

parser.on('name', (e) => {
  if (e.value !== 'str_replace_editor') {
    parser.destroy();
    abort.abort(); // stop the LLM stream, stop paying for tokens
  }
});

for await (const chunk of llmStream({ signal: abort.signal })) {
  const status = parser.feed(chunk);
  if (status === 'error') break; // malformed JSON — bail out
}
```

**Worker offload** — parse 2-3× faster in a Worker, transfer results in O(1):

VectorJSON's `getRawBuffer()` returns flat bytes — `postMessage(buf, [buf])` transfers the backing store pointer in O(1) instead of structured-cloning a full object graph. The main thread lazily accesses only the fields it needs:

```js
// In Worker:
parser.feed(chunk);
const buf = parser.getRawBuffer();
postMessage(buf, [buf]); // O(1) transfer — moves pointer, no copy

// On Main thread:
const result = vj.parse(new Uint8Array(buf)); // lazy Proxy
result.value.name; // only materializes what you touch
```

Worker-side parsing is 2-3× faster than `JSON.parse` at 50 KB+. The transferable ArrayBuffer avoids structured clone overhead, and the lazy Proxy on the main thread means you only pay for the fields you access.
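The O(1) transfer is standard structured-clone semantics rather than anything library-specific. A small sketch (Node 18+ or any browser, with `structuredClone` standing in for `postMessage`) shows that a transferred buffer is moved, not copied:

```javascript
// Transferring an ArrayBuffer moves ownership: the receiver gets the
// bytes and the sender's buffer is detached (byteLength becomes 0).
const buf = new ArrayBuffer(8 * 1024 * 1024); // 8 MB

const moved = structuredClone(buf, { transfer: [buf] });

console.log(moved.byteLength); // 8388608
console.log(buf.byteLength);   // 0 (detached; no copy was made)
```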

## Benchmarks

Apples-to-apples: both sides produce a materialized partial object on every chunk. Same payload, same chunks (~12 chars, a typical LLM token).

`bun --expose-gc bench/ai-parsers/bench.mjs`

| Payload | Product | Original | + VectorJSON | Speedup |
|---------|---------------|--------|--------|----------|
| 1 KB    | Vercel AI SDK | 4.2 ms | 162 µs | **26×**  |
|         | Anthropic SDK | 1.6 ms | 162 µs | **10×**  |
|         | TanStack AI   | 1.8 ms | 162 µs | **11×**  |
|         | OpenClaw      | 2.0 ms | 162 µs | **12×**  |
| 10 KB   | Vercel AI SDK | 49 ms  | 470 µs | **104×** |
|         | Anthropic SDK | 93 ms  | 470 µs | **198×** |
|         | TanStack AI   | 96 ms  | 470 µs | **204×** |
|         | OpenClaw      | 113 ms | 470 µs | **240×** |
| 100 KB  | Vercel AI SDK | 4.1 s  | 4.6 ms | **892×** |
|         | Anthropic SDK | 9.3 s  | 4.6 ms | **2016×** |
|         | TanStack AI   | 7.5 s  | 4.6 ms | **1644×** |
|         | OpenClaw      | 8.1 s  | 4.6 ms | **1757×** |

Stock parsers re-parse the full buffer on every chunk — O(n²). VectorJSON maintains a **live JS object** that grows incrementally on each `feed()`, so `getValue()` is O(1). Total work: O(n).

### Why this matters: main thread availability

The real cost isn't just CPU time — it's blocking the agent's main thread. Simulating an Anthropic `tool_use` content block (`str_replace_editor`) streamed in ~12-char chunks:

`bun --expose-gc bench/time-to-first-action.mjs`

| Payload | Stock total | VectorJSON total | Main thread freed |
|---------|-------------|-------|------------------------|
| 10 KB   | 24 ms       | 1 ms  | 23 ms sooner           |
| 100 KB  | 1.5 s       | 3 ms  | **1.5 seconds sooner** |
| 500 KB  | 39 s        | 29 ms | **39 seconds sooner**  |
| 1 MB    | 2 min 41 s  | 44 ms | **161 seconds sooner** |

Both approaches detect the tool name (`.name`) at the same chunk — the LLM hasn't streamed more yet. But while VectorJSON finishes processing all chunks in milliseconds, the stock parser blocks the main thread for the entire duration. The agent can't render UI, stream code to the editor, or start running tools until parsing is done.

For even more control, use `createEventParser()` for field-level subscriptions, or call `getValue()` only once `feed()` returns `"complete"`.

### Worker Transfer: parse faster, transfer in O(1)

`bun run bench:worker` (requires Playwright + Chromium)

Measures the full Worker→Main thread pipeline in a real browser. VectorJSON parses 2-3× faster in the Worker at 50 KB+, and `getRawBuffer()` produces a transferable ArrayBuffer — `postMessage(buf, [buf])` moves the backing store pointer in O(1) instead of structured-cloning the parsed object.

<details>
<summary>Which products use which parser</summary>

| Product | Stock Parser | With VectorJSON |
|---------|--------------|-----------------|
| Vercel AI SDK | `fixJson` + `JSON.parse` — O(n²) | `createParser().feed()` + `getValue()` |
| OpenCode | Vercel AI SDK (`streamText()`) — O(n²) | `createParser().feed()` + `getValue()` |
| TanStack AI | `partial-json` npm — O(n²) | `createParser().feed()` + `getValue()` |
| OpenClaw | `partial-json` npm — O(n²) | `createParser().feed()` + `getValue()` |
| Anthropic SDK | vendored `partial-json-parser` — O(n²) | `createParser().feed()` + `getValue()` |

</details>

## How It Works

VectorJSON compiles [simdjson](https://simdjson.org/) (a SIMD-accelerated JSON parser written in C++) to WebAssembly via Zig. The WASM module does the byte-level parsing — finding structural characters, validating UTF-8, building a tape of tokens — while a thin JS layer provides the streaming API, lazy Proxy materialization, and event dispatch.

The streaming parser (`createParser`) accumulates chunks in a WASM-side buffer and re-runs the SIMD parse on the full buffer each `feed()`. The difference from JS-based re-parsing is speed: the parse runs at SIMD speed inside WASM (~1 GB/s), while JS-based parsers run at ~50 MB/s. The JS object returned by `getValue()` is a live Proxy that reads directly from the WASM tape — no intermediate object allocation.

The event parser (`createEventParser`) adds path matching on top: it diffs the tape between feeds to detect new or changed values and fires callbacks only for subscribed paths.
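The lazy-materialization idea can be sketched with a plain `Proxy` (illustrative only: here a pre-parsed object stands in for the WASM tape, and `lazyParse` is our name, not a library export):

```javascript
// Fields are copied into the result object only on first access, so
// untouched fields cost nothing beyond their bytes in the source.
function lazyParse(json) {
  const tape = JSON.parse(json); // stand-in for a WASM tape lookup
  let materialized = 0;
  const proxy = new Proxy({}, {
    get(cache, key) {
      if (!(key in cache)) {
        materialized += 1;
        cache[key] = tape[key];
      }
      return cache[key];
    },
  });
  return { proxy, materializedCount: () => materialized };
}

const { proxy, materializedCount } = lazyParse(
  '{"tool":"file_edit","path":"app.ts","code":"...50KB of code..."}'
);
console.log(proxy.tool);          // "file_edit"
console.log(materializedCount()); // 1 (the "code" field was never touched)
```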

## Install

```bash
npm install vectorjson
```

## Usage

### Drop-in: Replace your SDK's partial JSON parser

Every AI SDK has a `parsePartialJson` function that re-parses the full buffer on every chunk. Replace it with VectorJSON's streaming parser:

```js
import { init } from "vectorjson";
const vj = await init();

const parser = vj.createParser();
for await (const chunk of stream) {
  const s = parser.feed(chunk);
  if (s === "complete" || s === "end_early") break;
}
const result = parser.getValue(); // lazy Proxy — materializes on access
parser.destroy();
```

Or use the Vercel AI SDK-compatible signature as a 1-line swap:

```js
// Before
import { parsePartialJson } from "ai";
const { value, state } = parsePartialJson(buffer);

// After
import { init } from "vectorjson";
const vj = await init();
const { value, state } = vj.parsePartialJson(buffer);
```

### Event-driven: React to fields as they stream in

When an LLM streams a tool call, you usually care about specific fields at specific times. `createEventParser` lets you subscribe to paths and get notified the moment a value completes or a string grows:

```js
const parser = vj.createEventParser();

// Get the tool name the moment it's complete
parser.on('tool_calls[*].name', (e) => {
  console.log(e.value); // "search"
  console.log(e.index); // 0 (which tool call)
});

// Stream code to the editor as it arrives
parser.onDelta('tool_calls[0].args.code', (e) => {
  editor.append(e.value); // just the new characters, decoded
});

// Don't waste CPU on fields you don't need
parser.skip('tool_calls[*].args.explanation');

for await (const chunk of llmStream) {
  parser.feed(chunk);
}
parser.destroy();
```

### Multi-root / NDJSON

Some LLM APIs stream multiple JSON values separated by newlines. VectorJSON auto-resets between values:

```js
const parser = vj.createEventParser({
  multiRoot: true,
  onRoot(event) {
    console.log(`Root #${event.index}:`, event.value);
  }
});

for await (const chunk of ndjsonStream) {
  parser.feed(chunk);
}
parser.destroy();
```

### Mixed LLM output (chain-of-thought, code fences)

Some models emit thinking text before JSON, or wrap JSON in code fences. VectorJSON finds the JSON automatically:

```js
const parser = vj.createEventParser();
parser.on('answer', (e) => console.log(e.value));
parser.onText((text) => thinkingPanel.append(text)); // opt-in

// All of these work:
// <think>reasoning</think>{"answer": 42}
// ```json\n{"answer": 42}\n```
// Here's the result:\n{"answer": 42}
parser.feed(llmOutput);
```

### Schema validation

Validate and auto-infer types with Zod, Valibot, ArkType, or any lib with `.safeParse()`. Works on all three APIs:

**Streaming parser with typed partial objects** — like Vercel AI SDK's `output`, but O(n) instead of O(n²):

```ts
import { z } from 'zod';
const User = z.object({ name: z.string(), age: z.number() });

const parser = vj.createParser(User); // T inferred from schema
for await (const chunk of stream) {
  parser.feed(chunk);
  const partial = parser.getValue(); // { name: "Ali" } mid-stream — always available
  const done = parser.getStatus() === "complete";
  updateUI(partial, done); // render as fields arrive
}
// On complete: getValue() runs safeParse → returns validated data or undefined
parser.destroy();
```

**Partial JSON** — returns `DeepPartial<T>` because incomplete JSON has missing fields:

```ts
const { value, state } = vj.parsePartialJson('{"name":"Al', User);
// value: { name: "Al" } — partial object, typed as DeepPartial<{ name: string; age: number }>
// state: "repaired-parse"
// TypeScript type: { name?: string; age?: number } | undefined
```

**Event parser** — filter events by schema:

```js
const ToolCall = z.object({ name: z.string(), args: z.record(z.unknown()) });

parser.on('tool_calls[*]', ToolCall, (event) => {
  event.value.name; // typed as string
  // Only fires when value passes schema validation
});
```

Schema-agnostic: any object with `{ safeParse(v) → { success: boolean; data?: T } }` works.
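Because the contract is just `safeParse`, a hand-rolled validator works as well. A sketch (the schema object below is ours, not a library export):

```javascript
// Any object with safeParse(v) returning { success, data? } satisfies
// the schema contract; no validation library is required.
const ToolCallSchema = {
  safeParse(v) {
    const ok =
      v !== null &&
      typeof v === "object" &&
      typeof v.name === "string" &&
      typeof v.args === "object" &&
      v.args !== null;
    return ok ? { success: true, data: v } : { success: false };
  },
};

console.log(ToolCallSchema.safeParse({ name: "search", args: {} }).success); // true
console.log(ToolCallSchema.safeParse({ name: 42 }).success);                 // false
```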

### Lazy access — only materialize what you touch

`vj.parse()` returns a lazy Proxy backed by the WASM tape. Fields are only materialized into JS objects when you access them. On a 2 MB payload, reading one field is 2× faster than `JSON.parse` because the other 99% is never allocated:

```js
const result = vj.parse(huge2MBToolCall);
result.value.tool; // "file_edit" — reads from WASM tape, 2.3ms
result.value.path; // "app.ts"
// result.value.code (the 50KB field) is never materialized in JS memory
```

```
bun --expose-gc bench/partial-access.mjs

2.2 MB payload, 10K items:
Access 1 field    JSON.parse 4.6ms    VectorJSON 2.3ms    2× faster
Access 10 items   JSON.parse 4.5ms    VectorJSON 2.6ms    1.7× faster
Full access       JSON.parse 4.8ms    VectorJSON 4.6ms    ~equal
```

### One-shot parse

For non-streaming use cases:

```js
const result = vj.parse('{"users": [{"name": "Alice"}]}');
result.status;      // "complete" | "complete_early" | "incomplete" | "invalid"
result.value.users; // lazy Proxy — materializes on access
```

## API Reference

### `init(options?): Promise<VectorJSON>`

Loads WASM once, returns cached singleton. `{ engineWasm?: string | URL | BufferSource }` for custom WASM location.

### `vj.parse(input: string | Uint8Array): ParseResult`

```ts
interface ParseResult {
  status: "complete" | "complete_early" | "incomplete" | "invalid";
  value?: unknown;        // lazy Proxy for objects/arrays, plain value for primitives
  remaining?: Uint8Array; // unparsed bytes after complete_early (for NDJSON)
  error?: string;
  isComplete(val: unknown): boolean; // was this value in the original input or autocompleted?
  toJSON(): unknown;      // full materialization via JSON.parse (cached)
}
```

- **`complete`** — valid JSON
- **`complete_early`** — valid JSON with trailing data (NDJSON); use `remaining` for the rest
- **`incomplete`** — truncated JSON; value is autocompleted, `isComplete()` tells you what's real
- **`invalid`** — broken JSON

### `vj.createParser(schema?): StreamingParser<T>`

Each `feed()` processes only new bytes — O(n) total. Pass an optional schema to auto-validate and infer the return type.

```ts
interface StreamingParser<T = unknown> {
  feed(chunk: Uint8Array | string): FeedStatus;
  getValue(): T | undefined;          // autocompleted partial while incomplete, final when complete
  getRemaining(): Uint8Array | null;
  getRawBuffer(): ArrayBuffer | null; // transferable buffer for Worker postMessage
  getStatus(): FeedStatus;
  destroy(): void;
}
type FeedStatus = "incomplete" | "complete" | "error" | "end_early";
```

While incomplete, `getValue()` returns the **live document** — a mutable JS object that grows incrementally on each `feed()`. This is O(1) per call (it just returns the reference). With a schema, it returns `undefined` when validation fails:

```ts
import { z } from 'zod';
const User = z.object({ name: z.string(), age: z.number() });

const parser = vj.createParser(User);
parser.feed('{"name":"Alice","age":30}');
const val = parser.getValue(); // { name: string; age: number } | undefined ✅
```

Works with Zod, Valibot, ArkType — any library with `{ safeParse(v) → { success, data? } }`.

### `vj.parsePartialJson(input, schema?): PartialJsonResult<DeepPartial<T>>`

Compatible with the Vercel AI SDK's `parsePartialJson` signature. Returns a plain JS object (not a Proxy). Pass an optional schema for type-safe validation.

With a schema, returns `DeepPartial<T>` — all properties are optional because incomplete JSON will have missing fields. When `safeParse` succeeds, returns the validated `data`. When `safeParse` fails on a repaired parse (partial JSON), the raw parsed value is kept — the object is partial, and that's expected.
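That fallback amounts to a small decision function. A sketch of the documented behavior (our naming; returning `undefined` when validation fails on a successful parse is our assumption, not stated by the library):

```javascript
// Sketch of the documented fallback: return validated data when
// safeParse succeeds; on a repaired (partial) parse, keep the raw value
// even if validation fails, since missing fields are expected mid-stream.
function applySchema(rawValue, state, schema) {
  if (rawValue === undefined || state === "failed-parse") return undefined;
  const checked = schema.safeParse(rawValue);
  if (checked.success) return checked.data;
  return state === "repaired-parse" ? rawValue : undefined; // last case is our assumption
}

// Minimal safeParse-compatible schema for the demo (ours, not Zod's)
const User = {
  safeParse: (v) =>
    v && typeof v.name === "string" && typeof v.age === "number"
      ? { success: true, data: v }
      : { success: false },
};

console.log(applySchema({ name: "Al" }, "repaired-parse", User));               // raw partial kept
console.log(applySchema({ name: "Al", age: 30 }, "successful-parse", User).age); // 30
```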

```ts
interface PartialJsonResult<T = unknown> {
  value: T | undefined;
  state: "successful-parse" | "repaired-parse" | "failed-parse";
}

type DeepPartial<T> = T extends object
  ? T extends Array<infer U> ? Array<DeepPartial<U>>
  : { [K in keyof T]?: DeepPartial<T[K]> }
  : T;
```

### `vj.createEventParser(options?): EventParser`

Event-driven streaming parser. Events fire synchronously during `feed()`.

```ts
interface EventParser {
  on(path: string, callback: (event: PathEvent) => void): EventParser;
  on<T>(path: string, schema: { safeParse: Function }, callback: (event: PathEvent & { value: T }) => void): EventParser;
  onDelta(path: string, callback: (event: DeltaEvent) => void): EventParser;
  onText(callback: (text: string) => void): EventParser;
  skip(...paths: string[]): EventParser;
  off(path: string, callback?: Function): EventParser;
  feed(chunk: string | Uint8Array): FeedStatus;
  getValue(): unknown | undefined;    // undefined while incomplete, throws on parse errors
  getRemaining(): Uint8Array | null;
  getRawBuffer(): ArrayBuffer | null; // transferable buffer for Worker postMessage
  getStatus(): FeedStatus;
  destroy(): void;
}
```

All methods return `self` for chaining: `parser.on(...).onDelta(...).skip(...)`.

**Path syntax:**
- `foo.bar` — exact key
- `foo[0]` — array index
- `foo[*]` — any array index (wildcard)
- `foo.*.bar` — wildcard single segment (any key or index)
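The grammar is small enough to sketch a toy matcher for (illustrative, not the library's implementation; this sketch treats `[*]` and `*` identically instead of restricting `[*]` to numeric indices):

```javascript
// Compile a subscription pattern into a predicate over resolved paths
// such as "tool_calls.2.name" (concrete indices, dot-separated).
function compilePath(pattern) {
  const segs = pattern
    .replace(/\[(\w+|\*)\]/g, ".$1") // "foo[0]" -> "foo.0", "foo[*]" -> "foo.*"
    .split(".");
  return (resolved) => {
    const parts = resolved.split(".");
    return parts.length === segs.length &&
      segs.every((seg, i) => seg === "*" || seg === parts[i]);
  };
}

const match = compilePath("tool_calls[*].name");
console.log(match("tool_calls.2.name"));            // true
console.log(match("tool_calls.2.args"));            // false
console.log(compilePath("foo.*.bar")("foo.x.bar")); // true
```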

**Event types:**

```ts
interface PathEvent {
  type: 'value';
  path: string;    // resolved path: "items.2.name" (concrete indices)
  value: unknown;  // parsed JS value
  offset: number;  // byte offset in accumulated buffer
  length: number;  // byte length of raw value
  index?: number;  // last wildcard-matched array index
  key?: string;    // last wildcard-matched object key
  matches: (string | number)[]; // all wildcard-matched segments
}

interface DeltaEvent {
  type: 'delta';
  path: string;   // resolved path
  value: string;  // decoded characters (escapes like \n are resolved)
  offset: number; // byte offset of delta in buffer (raw bytes)
  length: number; // byte length of delta (raw bytes, not char count)
}

interface RootEvent {
  type: 'root';
  index: number;  // which root value (0, 1, 2...)
  value: unknown; // parsed via doc_parse
}
```

### `vj.materialize(value): unknown`

Convert a lazy Proxy into a plain JS object tree. No-op on plain values.
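Conceptually this is a deep copy that forces every lazy field. A rough sketch of the idea (not the library's implementation):

```javascript
// Walk a (possibly Proxy-backed) tree and force every field into plain
// objects and arrays; primitives pass through unchanged.
function deepMaterialize(value) {
  if (Array.isArray(value)) return value.map(deepMaterialize);
  if (value !== null && typeof value === "object") {
    const out = {};
    for (const key of Object.keys(value)) {
      out[key] = deepMaterialize(value[key]);
    }
    return out;
  }
  return value;
}

const lazy = new Proxy({ tool: "file_edit", args: { path: "app.ts" } }, {});
const plain = deepMaterialize(lazy);
console.log(plain.args.path); // "app.ts"
```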

## Runtime Support

| Runtime | Status | Notes |
|---------|--------|-------|
| Node.js 20+ | ✅ | WASM loaded from disk automatically |
| Bun | ✅ | WASM loaded from disk automatically |
| Browsers | ✅ | Pass `engineWasm` as `ArrayBuffer` or `URL` to `init()` |
| Deno | ✅ | Pass `engineWasm` as `URL` to `init()` |
| Cloudflare Workers | ✅ | Import WASM as module, pass as `ArrayBuffer` to `init()` |

For environments without filesystem access, provide the WASM binary explicitly:

```js
import { init } from "vectorjson";

// Option 1: URL (browsers, Deno)
const vj = await init({ engineWasm: new URL('./engine.wasm', import.meta.url) });

// Option 2: ArrayBuffer (Workers, custom loaders)
const wasmBytes = await fetch('/engine.wasm').then(r => r.arrayBuffer());
const vj = await init({ engineWasm: wasmBytes });
```

Bundle size: ~92 KB WASM + ~20 KB JS (~37 KB gzipped total). No runtime dependencies.

## Building from Source

Requires: [Zig](https://ziglang.org/) 0.15+, [Bun](https://bun.sh/) or Node.js 20+, [Binaryen](https://github.com/WebAssembly/binaryen) (`wasm-opt`).

```bash
# macOS
brew install binaryen

# Ubuntu / Debian
sudo apt-get install -y binaryen
```

```bash
bun run build       # Zig → WASM → wasm-opt → TypeScript
bun run test        # 557 tests including 100MB stress payloads
bun run test:worker # Worker transferable tests (Playwright + Chromium)
```

To reproduce benchmarks:

```bash
bun --expose-gc bench/parse-stream.mjs                           # one-shot + streaming parse
cd bench/ai-parsers && bun install && bun --expose-gc bench.mjs  # AI SDK comparison
bun run bench:worker                                             # Worker transfer vs structured clone
```

Benchmark numbers in this README were measured on an Apple M-series Mac. Results vary by machine, but relative speedups are consistent.

## License

Apache-2.0

package/dist/engine.wasm
ADDED
Binary file