experimental-fast-webstreams 0.0.1

package/README.md ADDED
@@ -0,0 +1,484 @@
1
+ # experimental-fast-webstreams
2
+
3
+ WHATWG WebStreams API (`ReadableStream`, `WritableStream`, `TransformStream`) backed by Node.js native streams for dramatically better performance.
4
+
5
+ Node.js ships a pure-JavaScript implementation of the WHATWG Streams spec. Every `reader.read()` allocates promises, every `pipeTo()` builds a chain of microtasks, and every chunk traverses a full JavaScript-level queue. `fast-webstreams` replaces this machinery with Node.js native streams (`Readable`, `Writable`, `Transform`) under the hood, while exposing the same WHATWG API surface.
6
+
7
+ ## Benchmarks
8
+
9
+ Throughput at 1KB chunks, 100MB total (Node.js v22, Apple Silicon). This measures pure streaming infrastructure cost -- no transformation, no I/O, no CPU work -- so the differences are entirely due to how each implementation moves chunks through its internal machinery.
10
+
11
+ | | Node.js streams | fast-webstreams | Native WebStreams |
12
+ |---|---|---|---|
13
+ | **read loop** | 26,391 MB/s | **14,577 MB/s** | 3,289 MB/s |
14
+ | **write loop** | 26,713 MB/s | **5,608 MB/s** | 2,391 MB/s |
15
+ | **transform (pipeThrough)** | 8,387 MB/s | **7,093 MB/s** | 618 MB/s |
16
+ | **pipeTo** | 15,175 MB/s | **2,664 MB/s** | 1,398 MB/s |
17
+
18
+ - **read loop is 4.4x faster than native WebStreams** -- synchronous reads from the Node.js buffer return `Promise.resolve()` with no event loop round-trip
19
+ - **write loop is 2.3x faster than native WebStreams** -- direct sink calls bypass Node.js Writable, replacing the `process.nextTick()` deferral with a single microtask
20
+ - **transform is 11x faster than native WebStreams** -- the Tier 0 pipeline path uses Node.js `pipeline()` internally with zero Promise allocations, reaching 85% of raw Node.js transform pipeline throughput
21
+ - **pipeTo is 1.9x faster than native WebStreams** -- benefits from both the fast read path and the direct sink write path
22
+
23
+ When your transform does real work (CPU, I/O), the streaming overhead becomes negligible and all implementations converge. These benchmarks intentionally measure the worst case: tiny chunks, no-op transforms, pure overhead.
24
+
25
+ ## Installation
26
+
27
+ ```bash
28
+ npm install experimental-fast-webstreams
29
+ ```
30
+
31
+ ```js
32
+ import {
33
+ FastReadableStream,
34
+ FastWritableStream,
35
+ FastTransformStream,
36
+ } from 'experimental-fast-webstreams';
37
+ ```
38
+
39
+ These are drop-in replacements for the global `ReadableStream`, `WritableStream`, and `TransformStream`.
40
+
41
+ ### Global Patch
42
+
43
+ To replace the built-in stream constructors globally:
44
+
45
+ ```js
46
+ import { patchGlobalWebStreams, unpatchGlobalWebStreams } from 'experimental-fast-webstreams/patch';
47
+
48
+ patchGlobalWebStreams();
49
+ // globalThis.ReadableStream is now FastReadableStream
50
+ // globalThis.WritableStream is now FastWritableStream
51
+ // globalThis.TransformStream is now FastTransformStream
52
+
53
+ unpatchGlobalWebStreams();
54
+ // restores the original native constructors
55
+ ```
56
+
57
+ Native constructor references are captured at import time, so internal code and `unpatch()` always work correctly.
58
+
59
+ ### TypeScript
60
+
61
+ Type declarations are included. The exported classes are typed as the standard WHATWG stream interfaces, so existing TypeScript code works without changes.
62
+
63
+ ## Quick Example
64
+
65
+ ```js
66
+ import {
67
+ FastReadableStream,
68
+ FastWritableStream,
69
+ FastTransformStream,
70
+ } from 'experimental-fast-webstreams';
71
+
72
+ const readable = new FastReadableStream({
73
+ start(controller) {
74
+ controller.enqueue('hello');
75
+ controller.enqueue('world');
76
+ controller.close();
77
+ },
78
+ });
79
+
80
+ const transform = new FastTransformStream({
81
+ transform(chunk, controller) {
82
+ controller.enqueue(chunk.toUpperCase());
83
+ },
84
+ });
85
+
86
+ const writable = new FastWritableStream({
87
+ write(chunk) {
88
+ console.log(chunk); // "HELLO", "WORLD"
89
+ },
90
+ });
91
+
92
+ await readable.pipeThrough(transform).pipeTo(writable);
93
+ ```
94
+
95
+ ## Why fast-webstreams Exists
96
+
97
+ Node.js WebStreams are slow. The built-in implementation is written in pure JavaScript with heavy Promise machinery: every chunk that flows through a `ReadableStream` allocates promises, traverses microtask queues, and bounces through multiple layers of JavaScript-level buffering. For high-throughput scenarios -- HTTP response bodies, file I/O, data pipelines -- this overhead dominates.
98
+
99
+ `fast-webstreams` solves this by using Node.js native streams (`Readable`, `Writable`, `Transform`) as the actual data transport. These are implemented in C++ within Node.js and have been optimized over a decade. The WHATWG API is a thin adapter layer on top.
100
+
101
+ The result: `reader.read()` loops run roughly 4x faster than native WebStreams, and `pipeThrough` chains reach about 85% of raw Node.js transform throughput at 1KB chunk sizes.
102
+
103
+ ## Architecture: Three Tiers
104
+
105
+ `fast-webstreams` uses a tiered architecture that selects the fastest path for each operation:
106
+
107
+ ### Tier 0: Pipeline (zero promises)
108
+
109
+ When you `pipeThrough` and `pipeTo` exclusively between Fast streams with default options, the library builds a single `pipeline()` call across the entire chain. Data flows through Node.js C++ internals with zero Promise allocations.
110
+
111
+ ```
112
+ FastReadableStream -> FastTransformStream -> FastWritableStream
113
+ | | |
114
+ Node Readable -----> Node Transform ------> Node Writable
115
+ (single pipeline() call)
116
+ ```
117
+
118
+ The `pipeThrough` call links streams via an internal `kUpstream` reference. When `pipeTo` is finally called, `collectPipelineChain()` walks the upstream links and passes all Node.js streams to a single `pipeline()` invocation.
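That walk can be sketched as follows (`kUpstream` and `collectPipelineChain` come from the description above; everything else here is illustrative, not the library's code):

```javascript
// Sketch of the Tier 0 chain walk. Each Fast stream remembers the stream
// it was piped from; pipeTo walks those links back to the source and
// hands every backing Node.js stream to one pipeline() call.
const kUpstream = Symbol('kUpstream');
const kNodeStream = Symbol('kNodeStream');

function collectPipelineChain(dest) {
  const nodeStreams = [];
  // Walk upstream links, collecting backing streams in source-to-dest order.
  for (let s = dest; s !== undefined; s = s[kUpstream]) {
    nodeStreams.unshift(s[kNodeStream]);
  }
  return nodeStreams;
}

// A stand-in chain: source -> transform -> dest
const source = { [kNodeStream]: 'Readable' };
const transform = { [kNodeStream]: 'Transform', [kUpstream]: source };
const dest = { [kNodeStream]: 'Writable', [kUpstream]: transform };

collectPipelineChain(dest); // ['Readable', 'Transform', 'Writable']
```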
119
+
120
+ ### Tier 1: Sync Fast Path (reader/writer)
121
+
122
+ When you call `reader.read()`, the reader does a synchronous `nodeReadable.read()` from the Node.js buffer. If data is already buffered, it returns `Promise.resolve({ value, done: false })` -- no event loop round-trip, no microtask queue.
123
+
124
+ ```js
125
+ const reader = stream.getReader();
126
+ const { value, done } = await reader.read(); // sync read from Node buffer
127
+ ```
128
+
129
+ Similarly, `writer.write()` dispatches directly to `nodeWritable.write()` with a fast path that skips the internal queue when the stream is started and idle.
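Both fast paths follow the same shape; here is an illustrative sketch of the read side (`sketchRead` is not the library's code), built on a plain Node.js `Readable`:

```javascript
import { Readable } from 'node:stream';

// Sketch of the Tier 1 read fast path: when the Node.js buffer already
// holds a chunk, read() resolves synchronously -- a pre-resolved promise,
// no event loop hop.
function sketchRead(nodeReadable) {
  const chunk = nodeReadable.read(); // synchronous pull from the buffer
  if (chunk !== null) {
    return Promise.resolve({ value: chunk, done: false });
  }
  if (nodeReadable.readableEnded) {
    return Promise.resolve({ value: undefined, done: true });
  }
  // Slow path: wait for more data or end-of-stream with one-shot listeners.
  return new Promise((resolve) => {
    const onReadable = () => { cleanup(); resolve(sketchRead(nodeReadable)); };
    const onEnd = () => { cleanup(); resolve({ value: undefined, done: true }); };
    const cleanup = () => {
      nodeReadable.off('readable', onReadable);
      nodeReadable.off('end', onEnd);
    };
    nodeReadable.once('readable', onReadable);
    nodeReadable.once('end', onEnd);
  });
}

const r = Readable.from(['a', 'b']);
```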
130
+
131
+ ### Tier 2: Native Interop (full compatibility)
132
+
133
+ Operations that need full spec compliance or interact with native WebStreams fall back to `Readable.toWeb()` / `Writable.toWeb()` delegation. This tier handles:
134
+
135
+ - **Byte streams** (`type: 'bytes'`) -- delegated to native `ReadableStream`
136
+ - **Custom queuing strategies** (any strategy with a `size()` function, e.g. `ByteLengthQueuingStrategy`) -- delegated to native
137
+ - **`tee()`** -- implemented in pure JS using readers and controllers for correct cancel semantics
138
+ - **Mixed piping** (Fast stream to native WebStream or vice versa) -- uses `specPipeTo` for full WHATWG compliance
139
+
140
+ ```js
141
+ // These automatically use Tier 2:
142
+ new FastReadableStream({ type: 'bytes', ... }); // byte stream -> native
143
+ new FastReadableStream(source, { size: (chunk) => ... }); // custom size -> native
144
+ stream.tee(); // pure JS tee implementation
145
+ ```
146
+
147
+ ## Fast Path vs Compat Mode
148
+
149
+ Not every usage of `fast-webstreams` takes the fast path. Certain API patterns trigger **compat mode**, which delegates to Node.js native WebStreams internally. Compat mode still provides full WHATWG spec compliance, but without the performance benefits of the Node.js stream backing.
150
+
151
+ The rule of thumb: if you stick to `FastReadableStream`, `FastWritableStream`, and `FastTransformStream` with default queuing strategies, you get the fast path. Custom `size()` functions, byte streams, and `tee()` trigger compat mode.
152
+
153
+ ### Fast Path Examples
154
+
155
+ These patterns use the fast internal implementation (Node.js `Readable`, `Writable`, `Transform` under the hood):
156
+
157
+ **1. Pull-based ReadableStream with reader.read() loop (Tier 1 -- sync fast path)**
158
+
159
+ ```js
160
+ import { FastReadableStream } from 'experimental-fast-webstreams';
161
+
162
+ const stream = new FastReadableStream({
163
+ start(controller) {
164
+ controller.enqueue('a');
165
+ controller.enqueue('b');
166
+ controller.close();
167
+ },
168
+ });
169
+
170
+ const reader = stream.getReader();
171
+ while (true) {
172
+ const { value, done } = await reader.read(); // sync read from Node buffer
173
+ if (done) break;
174
+ process.stdout.write(value);
175
+ }
176
+ ```
177
+
178
+ `reader.read()` performs a synchronous `nodeReadable.read()` from the Node.js internal buffer. When data is already buffered, it returns `Promise.resolve({ value, done })` with no event loop round-trip. In the read-loop benchmark above, this path is about 4.4x faster than native `ReadableStream`.
179
+
180
+ **2. pipeThrough with FastTransformStream (Tier 0 -- Node.js pipeline)**
181
+
182
+ ```js
183
+ import {
184
+ FastReadableStream,
185
+ FastWritableStream,
186
+ FastTransformStream,
187
+ } from 'experimental-fast-webstreams';
188
+
189
+ const source = new FastReadableStream({
190
+ pull(controller) {
191
+ controller.enqueue(generateChunk());
192
+ },
193
+ });
194
+
195
+ const transform = new FastTransformStream({
196
+ transform(chunk, controller) {
197
+ controller.enqueue(processChunk(chunk));
198
+ },
199
+ });
200
+
201
+ const sink = new FastWritableStream({
202
+ write(chunk) {
203
+ consume(chunk);
204
+ },
205
+ });
206
+
207
+ await source.pipeThrough(transform).pipeTo(sink);
208
+ ```
209
+
210
+ When all streams in a `pipeThrough` / `pipeTo` chain are Fast streams with default options, `fast-webstreams` builds a single `pipeline()` call across the entire chain. Data flows through Node.js C++ internals with zero Promise allocations. This is approximately 11x faster than native `pipeThrough` at 1KB chunk sizes.
211
+
212
+ **3. WritableStream with simple write sink (Tier 1 -- direct dispatch)**
213
+
214
+ ```js
215
+ import { FastWritableStream } from 'experimental-fast-webstreams';
216
+
217
+ const writable = new FastWritableStream({
218
+ write(chunk) {
219
+ console.log('received:', chunk);
220
+ },
221
+ });
222
+
223
+ const writer = writable.getWriter();
224
+ await writer.write('hello');
225
+ await writer.write('world');
226
+ await writer.close();
227
+ ```
228
+
229
+ `writer.write()` dispatches directly to the underlying `nodeWritable.write()` with a fast path that skips the internal queue when the stream is started and idle.
230
+
231
+ ### Compat Mode Examples
232
+
233
+ These patterns fall back to native WebStreams. They are fully WHATWG-compliant but do not benefit from the Node.js stream fast path.
234
+
235
+ **1. ReadableStream with custom size() in QueuingStrategy**
236
+
237
+ ```js
238
+ import { FastReadableStream } from 'experimental-fast-webstreams';
239
+
240
+ // Custom size() function triggers delegation to native ReadableStream
241
+ const stream = new FastReadableStream(
242
+ {
243
+ pull(controller) {
244
+ controller.enqueue(new Uint8Array(1024));
245
+ },
246
+ },
247
+ {
248
+ highWaterMark: 65536,
249
+ size(chunk) {
250
+ return chunk.byteLength; // <-- custom size triggers compat mode
251
+ },
252
+ },
253
+ );
254
+ ```
255
+
256
+ Any strategy with a `size()` function causes the constructor to create a native `ReadableStream` internally and wrap it in a Fast shell. This is because Node.js streams use a count-based or byte-based HWM, not an arbitrary sizing function.
257
+
258
+ **2. TransformStream with custom readableStrategy.size**
259
+
260
+ ```js
261
+ import { FastTransformStream } from 'experimental-fast-webstreams';
262
+
263
+ // Custom size on either strategy triggers delegation to native TransformStream
264
+ const transform = new FastTransformStream(
265
+ {
266
+ transform(chunk, controller) {
267
+ controller.enqueue(chunk);
268
+ },
269
+ },
270
+ undefined, // writableStrategy (default)
271
+ {
272
+ highWaterMark: 65536,
273
+ size(chunk) {
274
+ return chunk.byteLength; // <-- compat mode
275
+ },
276
+ },
277
+ );
278
+ ```
279
+
280
+ If either `writableStrategy` or `readableStrategy` has a `size()` function, the entire `TransformStream` delegates to the native implementation. Both the `.readable` and `.writable` sides become native-backed shells.
281
+
282
+ **3. tee() on any stream**
283
+
284
+ ```js
285
+ import { FastReadableStream } from 'experimental-fast-webstreams';
286
+
287
+ const stream = new FastReadableStream({
288
+ start(controller) {
289
+ controller.enqueue('data');
290
+ controller.close();
291
+ },
292
+ });
293
+
294
+ const [branch1, branch2] = stream.tee(); // <-- compat mode (pure JS tee)
295
+ ```
296
+
297
+ `tee()` is implemented in pure JavaScript using readers and controllers to maintain correct cancel semantics. It acquires a reader lock on the source and creates two new `FastReadableStream` instances that replay chunks to both branches. This is not backed by Node.js `pipeline()` and does not benefit from the Tier 0 or Tier 1 fast paths.
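The shape of a pure-JS tee can be sketched with the standard `ReadableStream` API (`sketchTee` is illustrative, not the library's implementation; real `tee()` also coordinates cancellation across branches):

```javascript
// Sketch of a pure-JS tee: a single reader on the source and two
// downstream controllers that replay every chunk. The library's real
// tee() adds cancel bookkeeping (the source is cancelled only after
// both branches cancel).
function sketchTee(source) {
  const reader = source.getReader();
  const controllers = [];
  let pulling = null;
  let finished = false;

  const pull = () => {
    if (pulling) return pulling; // dedupe concurrent pulls from both branches
    pulling = reader.read().then(({ value, done }) => {
      pulling = null;
      if (done) {
        if (!finished) {
          finished = true;
          for (const c of controllers) c.close();
        }
        return;
      }
      for (const c of controllers) c.enqueue(value);
    });
    return pulling;
  };

  const branch = () =>
    new ReadableStream({
      start(c) { controllers.push(c); },
      pull, // shared: one source read feeds both branches
    });

  return [branch(), branch()];
}
```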
298
+
299
+ **4. Byte streams (type: 'bytes')**
300
+
301
+ ```js
302
+ import { FastReadableStream } from 'experimental-fast-webstreams';
303
+
304
+ // Byte stream type delegates entirely to native ReadableStream
305
+ const stream = new FastReadableStream({
306
+ type: 'bytes', // <-- compat mode
307
+ pull(controller) {
308
+ controller.enqueue(new Uint8Array([1, 2, 3]));
309
+ },
310
+ });
311
+ ```
312
+
313
+ Byte streams (`type: 'bytes'`) require BYOB reader support and typed array view management that maps directly to the native implementation. The stream is fully delegated to the built-in `ReadableStream`.
314
+
315
+ ### How to Tell Which Mode You Are In
316
+
317
+ If you want to check programmatically whether a stream took the fast path or was delegated to native:
318
+
319
+ ```js
320
+ import { FastReadableStream } from 'experimental-fast-webstreams';
321
+
322
+ const stream = new FastReadableStream(source, strategy);
323
+
324
+ // Internal check (not part of the public API):
325
+ // stream[Symbol.for('kNativeOnly')] is not exposed, but the behavior
326
+ // is deterministic based on constructor arguments:
327
+ //
328
+ // Fast path: no size(), no type: 'bytes'
329
+ // Compat mode: size() present, or type: 'bytes'
330
+ ```
331
+
332
+ In practice, you do not need to check at runtime. The routing is fully deterministic based on the arguments passed to the constructor. If you avoid custom `size()` functions and byte stream types, you are on the fast path.
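The routing rule can be written down directly (`willUseFastPath` is a hypothetical helper for illustration, not part of the package's API):

```javascript
// Hypothetical helper mirroring the deterministic routing rule:
// fast path unless the source is a byte stream or the strategy
// supplies a custom size() function.
function willUseFastPath(underlyingSource = {}, strategy = {}) {
  if (underlyingSource.type === 'bytes') return false;
  if (typeof strategy.size === 'function') return false;
  return true;
}

willUseFastPath({ pull() {} });                     // true  -- fast path
willUseFastPath({ type: 'bytes' });                 // false -- compat mode
willUseFastPath({}, { size: (c) => c.byteLength }); // false -- compat mode
```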
333
+
334
+ ## Key Design Decisions
335
+
336
+ ### objectMode: true
337
+
338
+ All internal Node.js streams use `objectMode: true`. WHATWG streams accept any JavaScript value (not just Buffers), so object mode is required for spec compliance.
339
+
340
+ ### Default HWM of 1
341
+
342
+ The WHATWG spec defaults to `CountQueuingStrategy` with `highWaterMark: 1` (one item), not Node.js's default of `16384` bytes. `fast-webstreams` respects this, configuring Node.js streams with `highWaterMark: 1` unless the user provides a different strategy.
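Combined with the `objectMode` decision above, the backing stream is configured roughly like this (an illustrative sketch, not the library's construction code):

```javascript
import { Readable } from 'node:stream';

// Illustrative: with default strategies, the backing Node.js stream
// counts chunks (one at a time), matching the WHATWG CountQueuingStrategy
// default rather than Node.js's byte-based default of 16384.
const backing = new Readable({
  objectMode: true,  // WHATWG streams carry arbitrary JS values
  highWaterMark: 1,  // one chunk, per the CountQueuingStrategy default
  read() {},         // pull-driven; no-op for this sketch
});

backing.push({ any: 'value' });
// In object mode the high-water mark counts chunks, so a single
// buffered chunk already signals backpressure to the producer.
```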
343
+
344
+ ### Shell Objects for Transform
345
+
346
+ `FastTransformStream.readable` and `FastTransformStream.writable` return lightweight shell objects created via `Object.create(FastReadableStream.prototype)` rather than full constructor calls. Both shells share the same underlying Node.js `Transform` (which extends `Duplex`). This avoids double-buffering and constructor overhead while maintaining proper prototype chains for `instanceof` checks.
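The pattern can be sketched in isolation (`SketchReadable`, `makeReadableShell`, and the `nodeStream` slot are illustrative names, not the library's internals):

```javascript
// Sketch of the shell-object pattern: give an object the right prototype
// without running the constructor, then attach the shared backing stream.
class SketchReadable {
  constructor() {
    // A full constructor would build its own backing stream --
    // exactly the work the shell pattern avoids.
    throw new Error('not used for shells');
  }
  get locked() { return false; } // stand-in for real prototype accessors
}

function makeReadableShell(sharedNodeTransform) {
  const shell = Object.create(SketchReadable.prototype); // constructor never runs
  shell.nodeStream = sharedNodeTransform; // shared Node.js Transform (a Duplex)
  return shell;
}

const sharedTransform = { kind: 'node-transform' }; // stand-in for a real Transform
const readableSide = makeReadableShell(sharedTransform);
const writableSide = { nodeStream: sharedTransform }; // writable shell, same backing
```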
347
+
348
+ ### Reader Event Listeners
349
+
350
+ The reader registers `end`, `error`, and `close` lifecycle listeners at construction time for `closedPromise` settlement (self-cleaning on first fire). Per-read dispatch uses `once()` listeners for `readable`, `end`, `error`, and `close` events, cleaned up when the read resolves or via `_errorReadRequests` on stream error.
351
+
352
+ ### Reflect.apply for User Callbacks
353
+
354
+ All user-provided callbacks (`pull`, `write`, `transform`, `flush`, `cancel`, `abort`) are invoked via `Reflect.apply(fn, thisArg, args)` rather than `fn.call(thisArg, ...args)`. This is required because WPT tests monkey-patch `Function.prototype.call` to verify that implementations do not use `.call()`.
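The difference is observable once `Function.prototype.call` is monkey-patched, which is exactly what those WPT tests do; a sketch:

```javascript
// Why Reflect.apply: a monkey-patched Function.prototype.call must not
// affect how user callbacks are invoked.
const sink = {
  write(chunk) { return `wrote:${chunk}`; },
};

const originalCall = Function.prototype.call;
Function.prototype.call = () => { throw new Error('call() was used'); };

let result;
try {
  // sink.write.call(sink, 'x') would now throw; Reflect.apply is unaffected.
  result = Reflect.apply(sink.write, sink, ['x']);
} finally {
  Function.prototype.call = originalCall; // always restore the patched method
}

console.log(result); // 'wrote:x'
```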
355
+
356
+ ### Spec-Compliant pipeTo
357
+
358
+ The `specPipeTo` implementation follows the WHATWG `ReadableStreamPipeTo` algorithm directly: it acquires a reader and writer, pumps chunks in a loop, and handles error propagation, cancellation, and abort signal semantics. The Tier 0 pipeline fast path is only used for `pipeThrough` chains (where upstream links are set), never for standalone `pipeTo`, because Node.js `pipeline()` does not match WHATWG error propagation semantics.
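Stripped of error propagation, cancellation, and signal handling, the core of that algorithm is a reader/writer pump; a sketch (`sketchPipeTo` is illustrative, not the library's `specPipeTo`):

```javascript
// Minimal sketch of the pump at the heart of ReadableStreamPipeTo:
// acquire both locks, respect writer backpressure via writer.ready, and
// loop until the source is exhausted. The real specPipeTo additionally
// handles errors, cancellation, and AbortSignal semantics.
async function sketchPipeTo(readable, writable) {
  const reader = readable.getReader();
  const writer = writable.getWriter();
  try {
    while (true) {
      await writer.ready;                       // wait out backpressure
      const { value, done } = await reader.read();
      if (done) break;
      await writer.write(value);
    }
    await writer.close();
  } finally {
    reader.releaseLock();
    writer.releaseLock();
  }
}
```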
359
+
360
+ ## WPT Compliance
361
+
362
+ `fast-webstreams` is tested against the Web Platform Tests (WPT) streams test suite:
363
+
364
+ | Implementation | Pass Rate | Tests |
365
+ |---|---|---|
366
+ | Native (Node.js) | 98.2% | 1096/1116 |
367
+ | fast-webstreams | 97.7% | 1090/1116 |
368
+
369
+ Of the 26 remaining fast-webstreams failures, 18 are shared with the native implementation; the 8 fast-only failures are edge cases:
370
+
371
+ - **2 tests**: `then`-interception -- `Promise.resolve(obj)` always triggers a thenable check on `obj`, which is unfixable in pure JavaScript.
372
+ - **3 tests**: Cancel timing -- microtask ordering differences between the spec's pure-Promise model and Node.js event-driven completion.
373
+ - **2 tests**: Piping timing -- subtle microtask ordering in error propagation paths.
374
+ - **1 test**: `tee()` error identity -- non-Error objects lose identity through Node.js `destroy()`.
375
+
376
+ ### Running WPT Tests
377
+
378
+ ```bash
379
+ # Native WebStreams (baseline)
380
+ node test/run-wpt.js --mode=native
381
+
382
+ # fast-webstreams
383
+ node test/run-wpt.js --mode=fast
384
+ ```
385
+
386
+ ## API Reference
387
+
388
+ ### FastReadableStream
389
+
390
+ ```js
391
+ new FastReadableStream(underlyingSource?, strategy?)
392
+ ```
393
+
394
+ Drop-in replacement for `ReadableStream`. Supports:
395
+
396
+ - `underlyingSource.start(controller)` -- called on construction
397
+ - `underlyingSource.pull(controller)` -- called when internal buffer needs data
398
+ - `underlyingSource.cancel(reason)` -- called on cancellation
399
+ - `type: 'bytes'` -- delegates to native ReadableStream (Tier 2)
400
+
401
+ Methods: `getReader()`, `pipeThrough()`, `pipeTo()`, `tee()`, `cancel()`, `values()`, `[Symbol.asyncIterator]()`
402
+
403
+ Static: `FastReadableStream.from(asyncIterable)`
404
+
405
+ ### FastWritableStream
406
+
407
+ ```js
408
+ new FastWritableStream(underlyingSink?, strategy?)
409
+ ```
410
+
411
+ Drop-in replacement for `WritableStream`. Supports:
412
+
413
+ - `underlyingSink.start(controller)` -- called on construction
414
+ - `underlyingSink.write(chunk, controller)` -- called for each chunk
415
+ - `underlyingSink.close()` -- called when stream closes
416
+ - `underlyingSink.abort(reason)` -- called on abort
417
+
418
+ Methods: `getWriter()`, `abort()`, `close()`
419
+
420
+ Full `writable -> erroring -> errored` state machine per spec.
421
+
422
+ ### FastTransformStream
423
+
424
+ ```js
425
+ new FastTransformStream(transformer?, writableStrategy?, readableStrategy?)
426
+ ```
427
+
428
+ Drop-in replacement for `TransformStream`. Supports:
429
+
430
+ - `transformer.start(controller)` -- called on construction
431
+ - `transformer.transform(chunk, controller)` -- called for each chunk
432
+ - `transformer.flush(controller)` -- called when writable side closes
433
+ - `transformer.cancel(reason)` -- called when readable side is cancelled
434
+
435
+ Properties: `.readable` (FastReadableStream), `.writable` (FastWritableStream)
436
+
437
+ The readable and writable sides are lightweight shell objects that share a single underlying Node.js `Transform` stream.
438
+
439
+ ### Readers and Writers
440
+
441
+ - `FastReadableStreamDefaultReader` -- returned by `getReader()`
442
+ - `FastReadableStreamBYOBReader` -- returned by `getReader({ mode: 'byob' })`
443
+ - `FastWritableStreamDefaultWriter` -- returned by `getWriter()`
444
+
445
+ These follow the standard WHATWG reader/writer APIs (`read()`, `write()`, `close()`, `cancel()`, `abort()`, `releaseLock()`, `closed`, `ready`, `desiredSize`).
446
+
447
+ ## Project Structure
448
+
449
+ ```
450
+ src/
451
+ index.js Public exports
452
+ readable.js FastReadableStream (3-tier routing, pipeline chain)
453
+ writable.js FastWritableStream (full state machine)
454
+ transform.js FastTransformStream (shell objects, backpressure)
455
+ reader.js FastReadableStreamDefaultReader (sync fast path)
456
+ byob-reader.js FastReadableStreamBYOBReader (native delegation)
457
+ writer.js FastWritableStreamDefaultWriter
458
+ controller.js WHATWG controller adapters (Readable, Writable, Transform)
459
+ pipe-to.js Spec-compliant pipeTo implementation
460
+ materialize.js Tier 2: Readable.toWeb() / Writable.toWeb() delegation
461
+ natives.js Captured native constructors (pre-polyfill)
462
+ patch.js Global patch/unpatch
463
+ utils.js Symbols, type checks, shared constants
464
+
465
+ types/
466
+ index.d.ts TypeScript declarations
467
+
468
+ test/
469
+ run-wpt.js WPT test runner (subprocess-based, concurrency=4)
470
+ run-wpt-file.js Single-file WPT runner
471
+ wpt-harness.js testharness.js polyfill for VM context
472
+ patch.test.js Patch/unpatch tests
473
+
474
+ bench/
475
+ run.js Benchmark entry point
476
+ scenarios/ passthrough, transform-cpu, compression, backpressure, chunk-accumulation
477
+ results/ Timestamped JSON + Markdown reports
478
+
479
+ vendor/wpt/streams/ Web Platform Test files
480
+ ```
481
+
482
+ ## License
483
+
484
+ ISC
package/package.json ADDED
@@ -0,0 +1,45 @@
1
+ {
2
+ "name": "experimental-fast-webstreams",
3
+ "version": "0.0.1",
4
+ "type": "module",
5
+ "description": "Exploring faster WebStreams for Node.js",
6
+ "types": "./types/index.d.ts",
7
+ "exports": {
8
+ ".": {
9
+ "types": "./types/index.d.ts",
10
+ "default": "./src/index.js"
11
+ },
12
+ "./patch": "./src/patch.js"
13
+ },
14
+ "engines": {
15
+ "node": ">=20"
16
+ },
17
+ "keywords": [
18
+ "webstreams",
19
+ "streams",
20
+ "readable",
21
+ "writable",
22
+ "transform",
23
+ "performance",
24
+ "whatwg"
25
+ ],
26
+ "author": "",
27
+ "license": "ISC",
28
+ "devDependencies": {
29
+ "biome": "^0.3.3",
30
+ "knip": "^5.83.1"
31
+ },
32
+ "scripts": {
33
+ "bench": "node --expose-gc bench/run.js",
34
+ "bench:quick": "node --expose-gc bench/run.js --iterations=3 --warmup=1",
35
+ "bench:passthrough": "node --expose-gc bench/run.js --scenario=passthrough",
36
+ "test": "node --test test/*.test.js && node test/run-wpt.js --mode=fast",
37
+ "test:unit": "node --test test/*.test.js",
38
+ "test:wpt": "node test/run-wpt.js --mode=fast",
39
+ "test:wpt:native": "node test/run-wpt.js --mode=native",
40
+ "knip": "knip",
41
+ "lint": "biome check src/",
42
+ "lint:fix": "biome check --fix src/",
43
+ "format": "biome format --write src/"
44
+ }
45
+ }
package/src/byob-reader.js ADDED
@@ -0,0 +1,25 @@
1
+ /**
2
+ * FastReadableStreamBYOBReader — wraps native ReadableStreamBYOBReader
3
+ * to accept FastReadableStream instances (materializes to native first).
4
+ */
5
+
6
+ import { materializeReadable } from './materialize.js';
7
+ import { NativeReadableStream } from './natives.js';
8
+ import { isFastReadable } from './utils.js';
9
+
10
+ // Get the native BYOB reader constructor
11
+ const NativeBYOBReader = new NativeReadableStream({ type: 'bytes' }).getReader({ mode: 'byob' }).constructor;
12
+
13
+ /**
14
+ * BYOB reader that accepts both native ReadableStream and FastReadableStream.
15
+ * For FastReadableStream, materializes to native before constructing.
16
+ */
17
+ export class FastReadableStreamBYOBReader extends NativeBYOBReader {
18
+ constructor(stream) {
19
+ if (isFastReadable(stream)) {
20
+ super(materializeReadable(stream));
21
+ } else {
22
+ super(stream);
23
+ }
24
+ }
25
+ }