@dmop/puru 0.1.5 → 0.1.11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,517 +1,197 @@
1
1
  # puru (プール)
2
2
 
3
- A thread pool with Go-style concurrency primitives for JavaScript — spawn tasks off the main thread with channels, WaitGroup, select, and more. No worker files, no boilerplate.
3
+ [![npm version](https://img.shields.io/npm/v/@dmop/puru)](https://www.npmjs.com/package/@dmop/puru)
4
+ [![npm downloads](https://img.shields.io/npm/dm/@dmop/puru)](https://www.npmjs.com/package/@dmop/puru)
5
+ [![bundle size](https://img.shields.io/bundlephobia/minzip/@dmop/puru)](https://bundlephobia.com/package/@dmop/puru)
6
+ [![license](https://img.shields.io/npm/l/@dmop/puru)](LICENSE)
4
7
 
5
- Works on **Node.js** and **Bun**. Deno support coming soon.
8
+ **Go-style concurrency for JavaScript.** Worker threads with channels, WaitGroup, select, and context. No worker files, no boilerplate.
6
9
 
7
- *puru (プール) means "pool" in Japanese.*
10
+ ```ts
11
+ import { spawn } from '@dmop/puru'
8
12
 
9
- ## Install
10
-
11
- ```bash
12
- npm install @dmop/puru
13
- # or
14
- bun add @dmop/puru
15
- ```
16
-
17
- ## Quick Start
18
-
19
- ```typescript
20
- import { spawn, chan, WaitGroup, select, after } from '@dmop/puru'
21
-
22
- // CPU work — runs in a dedicated worker thread
23
- const { result } = spawn(() => fibonacci(40))
24
- console.log(await result)
25
-
26
- // I/O work — many tasks share worker threads
27
- const wg = new WaitGroup()
28
- for (const url of urls) {
29
- wg.spawn(() => fetch(url).then(r => r.json()), { concurrent: true })
30
- }
31
- const results = await wg.wait()
32
- ```
13
+ const { result } = spawn(() => {
14
+ let sum = 0
15
+ for (let i = 0; i < 100_000_000; i++) sum += i
16
+ return sum
17
+ })
33
18
 
34
- ## How It Works
35
-
36
- puru manages a **thread pool** — tasks are dispatched onto a fixed set of worker threads:
37
-
38
- ```text
39
- puru thread pool
40
- ┌──────────────────────────────┐
41
- │ │
42
- │ Task 1 ─┐ │
43
- │ Task 2 ─┤──► Thread 1 │
44
- │ Task 3 ─┘ (shared) │
45
- │ │
46
- │ Task 4 ────► Thread 2 │ N threads
47
- │ (exclusive) │ (os.availableParallelism)
48
- │ │
49
- │ Task 5 ─┐ │
50
- │ Task 6 ─┤──► Thread 3 │
51
- │ Task 7 ─┘ (shared) │
52
- │ │
53
- └──────────────────────────────┘
19
+ console.log(await result) // runs off the main thread
54
20
  ```
55
21
 
56
- **Two modes:**
57
-
58
- | Mode | Flag | Best for | How it works |
59
- | --- | --- | --- | --- |
60
- | **Exclusive** (default) | `spawn(fn)` | CPU-bound work | 1 task per thread, full core usage |
61
- | **Concurrent** | `spawn(fn, { concurrent: true })` | I/O-bound / async work | Many tasks share a thread's event loop |
22
+ ## Before / After
62
23
 
63
- CPU-bound work gets a dedicated thread. I/O-bound work shares threads efficiently. The API is inspired by Go's concurrency primitives (channels, WaitGroup, select), but the underlying mechanism is a thread pool — not a green thread scheduler.
24
+ <table>
25
+ <tr><th>Raw worker_threads</th><th>puru</th></tr>
26
+ <tr>
27
+ <td>
64
28
 
65
- ## Why puru
66
-
67
- Same task, four ways — process 4 items in parallel:
68
-
69
- **worker_threads** — 2 files, 15 lines, manual everything:
29
+ ```ts
30
+ const { Worker } = require('worker_threads')
31
+ const worker = new Worker('./worker.js')
32
+ worker.postMessage({ n: 40 })
33
+ worker.on('message', (result) => {
34
+ console.log(result)
35
+ worker.terminate()
36
+ })
37
+ worker.on('error', (err) => console.error(err))
70
38
 
71
- ```typescript
72
- // worker.js (separate file required)
39
+ // worker.js (separate file)
73
40
  const { parentPort } = require('worker_threads')
74
- parentPort.on('message', (data) => {
75
- parentPort.postMessage(heavyWork(data))
41
+ parentPort.on('message', ({ n }) => {
42
+ parentPort.postMessage(fibonacci(n))
76
43
  })
77
-
78
- // main.js
79
- import { Worker } from 'worker_threads'
80
- const results = await Promise.all(items.map(item =>
81
- new Promise((resolve, reject) => {
82
- const w = new Worker('./worker.js')
83
- w.postMessage(item)
84
- w.on('message', resolve)
85
- w.on('error', reject)
86
- })
87
- ))
88
- ```
89
-
90
- **Tinypool** — still needs a separate file:
91
-
92
- ```typescript
93
- // worker.js (separate file required)
94
- export default function(data) { return heavyWork(data) }
95
-
96
- // main.js
97
- import Tinypool from 'tinypool'
98
- const pool = new Tinypool({ filename: './worker.js' })
99
- const results = await Promise.all(items.map(item => pool.run(item)))
100
44
  ```
101
45
 
102
- **Piscina** — same pattern, separate file:
46
+ </td>
47
+ <td>
103
48
 
104
- ```typescript
105
- // worker.js (separate file required)
106
- module.exports = function(data) { return heavyWork(data) }
49
+ ```ts
50
+ import { spawn } from '@dmop/puru'
107
51
 
108
- // main.js
109
- import Piscina from 'piscina'
110
- const pool = new Piscina({ filename: './worker.js' })
111
- const results = await Promise.all(items.map(item => pool.run(item)))
112
- ```
113
-
114
- **puru** — one file, 4 lines:
115
-
116
- ```typescript
117
- import { WaitGroup } from '@dmop/puru'
118
- const wg = new WaitGroup()
119
- for (const item of items) wg.spawn(() => heavyWork(item))
120
- const results = await wg.wait()
121
- ```
52
+ const { result } = spawn(() => {
53
+ function fibonacci(n: number): number {
54
+ if (n <= 1) return n
55
+ return fibonacci(n - 1) + fibonacci(n - 2)
56
+ }
57
+ return fibonacci(40)
58
+ })
122
59
 
123
- | Feature | worker_threads | Tinypool | Piscina | **puru** |
124
- | --- | --- | --- | --- | --- |
125
- | Separate worker file | Required | Required | Required | **Not needed** |
126
- | Inline functions | No | No | No | **Yes** |
127
- | Managed thread pool | No | No | No | **Yes** |
128
- | Concurrent mode (I/O) | No | No | No | **Yes** |
129
- | Channels (cross-thread) | No | No | No | **Yes** |
130
- | Cancellation | No | No | No | **Yes** |
131
- | WaitGroup / ErrGroup | No | No | No | **Yes** |
132
- | select (with default) | No | No | No | **Yes** |
133
- | Mutex / Once | No | No | No | **Yes** |
134
- | Ticker | No | No | No | **Yes** |
135
- | Backpressure | No | No | No | **Yes** |
136
- | Priority scheduling | No | No | Yes | **Yes** |
137
- | Pool management | Manual | Automatic | Automatic | **Automatic** |
138
- | Bun support | No | No | No | **Yes** |
139
-
140
- ### puru vs Node.js Cluster
141
-
142
- These solve different problems and are meant to be used together in production.
143
-
144
- **Node Cluster** copies your entire app into N processes. The OS load-balances incoming connections across them. The goal is request throughput — use all cores to handle more concurrent HTTP requests.
145
-
146
- **puru** manages a thread pool inside a single process. Heavy tasks are offloaded off the main event loop to worker threads. The goal is CPU task isolation — use all cores without blocking the event loop.
147
-
148
- ```text
149
- Node Cluster (4 processes):
150
-
151
- OS / Load Balancer
152
- ┌─────────┬─────────┬─────────┐
153
- ▼ ▼ ▼ ▼
154
- ┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
155
- │Process │ │Process │ │Process │ │Process │
156
- │full app│ │full app│ │full app│ │full app│
157
- │own DB │ │own DB │ │own DB │ │own DB │
158
- │~100MB │ │~100MB │ │~100MB │ │~100MB │
159
- └────────┘ └────────┘ └────────┘ └────────┘
160
-
161
- puru (1 process, thread pool):
162
-
163
- ┌──────────────────────────────────────┐
164
- │ Your App (1 process) │
165
- │ │
166
- │ Main thread — handles HTTP, DB, I/O │
167
- │ │
168
- │ ┌──────────┐ ┌──────────┐ │
169
- │ │ Thread 1 │ │ Thread 2 │ ... │
170
- │ │ CPU task │ │ CPU task │ │
171
- │ └──────────┘ └──────────┘ │
172
- │ shared memory, one DB pool │
173
- └──────────────────────────────────────┘
60
+ console.log(await result)
174
61
  ```
175
62
 
176
- What happens without puru, even with Cluster:
63
+ </td>
64
+ </tr>
65
+ </table>
177
66
 
178
- ```text
179
- Request 1 → Process 1 → resize image (2s) → Process 1 event loop FROZEN
180
- Request 2 → Process 2 → handles fine ✓
181
- Request 3 → Process 3 → handles fine ✓
182
- (other processes still work, but each process still blocks on heavy tasks)
67
+ One file. No message plumbing. Automatic pooling.
183
68
 
184
- With puru inside each process:
185
- Request 1 → spawn(resizeImage) → worker thread, main thread free ✓
186
- Request 2 → main thread handles instantly ✓
187
- Request 3 → main thread handles instantly ✓
188
- ```
189
-
190
- In production, use both:
69
+ ## Install
191
70
 
192
- ```text
193
- PM2 / Cluster (4 processes) ← maximise request throughput
194
- └── each process runs puru ← keep each event loop unblocked
71
+ ```bash
72
+ npm install @dmop/puru
195
73
  ```
196
74
 
197
- | | Cluster | puru |
198
- | --- | --- | --- |
199
- | Unit | Process | Thread |
200
- | Memory | ~100MB per copy | Shared, much lower |
201
- | Shared state | Needs Redis/IPC | Same process |
202
- | Solves | Request throughput | CPU task offloading |
203
- | Event loop | Still blocks per process | Never blocks |
204
- | DB connections | One pool per process | One pool total |
205
- | Bun support | No cluster module | Yes |
206
-
207
- ## API
208
-
209
- ### `spawn(fn, opts?)`
75
+ ## Quick Start
210
76
 
211
- Run a function in a worker thread. Returns `{ result: Promise<T>, cancel: () => void }`.
77
+ ```ts
78
+ import { spawn, WaitGroup, chan } from '@dmop/puru'
212
79
 
213
- ```typescript
214
- // CPU-bound — exclusive mode (default)
80
+ // CPU work on a dedicated worker
215
81
  const { result } = spawn(() => fibonacci(40))
216
82
 
217
- // I/O-bound — concurrent mode (many tasks per thread)
218
- const { result } = spawn(() => fetch(url), { concurrent: true })
219
-
220
- // With priority
221
- const { result } = spawn(() => criticalWork(), { priority: 'high' })
222
-
223
- // Cancel
224
- const { result, cancel } = spawn(() => longTask())
225
- setTimeout(cancel, 5000)
226
- ```
227
-
228
- **Exclusive mode** (default): the function gets a dedicated thread. Use for CPU-heavy work.
229
-
230
- **Concurrent mode** (`{ concurrent: true }`): multiple tasks share a thread's event loop. Use for async/I/O work where you want to run thousands of tasks without thousands of threads.
231
-
232
- Functions must be self-contained — they cannot capture variables from the enclosing scope:
233
-
234
- ```typescript
235
- const x = 42
236
- spawn(() => x + 1) // ReferenceError: x is not defined
237
- spawn(() => 42 + 1) // works
238
- ```
239
-
240
- ### `chan(capacity?)`
241
-
242
- Create a channel for communicating between async tasks — including across worker threads.
243
-
244
- ```typescript
245
- const ch = chan<number>(10) // buffered, capacity 10
246
- const ch = chan<string>() // unbuffered, capacity 0
247
-
248
- await ch.send(42)
249
- const value = await ch.recv() // 42
250
-
251
- ch.close()
252
- await ch.recv() // null (closed)
253
-
254
- // Async iteration
255
- for await (const value of ch) {
256
- process(value)
257
- }
258
- ```
259
-
260
- **Channels in workers** — pass channels to `spawn()` and use them across worker threads:
83
+ // Parallel batch — wait for all
84
+ const wg = new WaitGroup()
85
+ wg.spawn(() => crunchData())
86
+ wg.spawn(() => crunchMoreData())
87
+ const [a, b] = await wg.wait()
261
88
 
262
- ```typescript
89
+ // Cross-thread channels
263
90
  const ch = chan<number>(10)
264
-
265
- // Producer worker
266
91
  spawn(async ({ ch }) => {
267
- for (let i = 0; i < 100; i++) await ch.send(i)
92
+ for (let i = 0; i < 10; i++) await ch.send(i)
268
93
  ch.close()
269
94
  }, { channels: { ch } })
270
95
 
271
- // Consumer worker
272
- spawn(async ({ ch }) => {
273
- for await (const item of ch) process(item)
274
- }, { channels: { ch } })
275
-
276
- // Fan-out: multiple workers pulling from the same channel
277
- const input = chan<Job>(50)
278
- const output = chan<Result>(50)
279
-
280
- for (let i = 0; i < 4; i++) {
281
- spawn(async ({ input, output }) => {
282
- for await (const job of input) {
283
- await output.send(processJob(job))
284
- }
285
- }, { channels: { input, output } })
286
- }
287
- ```
288
-
289
- ### `WaitGroup`
290
-
291
- Structured concurrency. Spawn multiple tasks, wait for all.
292
-
293
- ```typescript
294
- const wg = new WaitGroup()
295
- wg.spawn(() => cpuWork()) // exclusive
296
- wg.spawn(() => fetchData(), { concurrent: true }) // concurrent
297
-
298
- const results = await wg.wait() // resolves when all tasks succeed
299
- const settled = await wg.waitSettled() // resolves when all tasks settle
300
-
301
- wg.cancel() // cancel all tasks
302
- ```
303
-
304
- ### `ErrGroup`
305
-
306
- Like `WaitGroup`, but cancels all remaining tasks on first error. The Go standard for production code (`golang.org/x/sync/errgroup`).
307
-
308
- ```typescript
309
- const eg = new ErrGroup()
310
- eg.spawn(() => fetchUser(id))
311
- eg.spawn(() => fetchOrders(id))
312
- eg.spawn(() => fetchAnalytics(id))
313
-
314
- try {
315
- const [user, orders, analytics] = await eg.wait()
316
- } catch (err) {
317
- // First error — all other tasks were cancelled
318
- console.error('Failed:', err)
319
- }
320
- ```
321
-
322
- ### `Mutex`
323
-
324
- Async mutual exclusion. Serialize access to shared resources under concurrency.
325
-
326
- ```typescript
327
- const mu = new Mutex()
328
-
329
- // withLock — recommended (auto-unlocks on error)
330
- const result = await mu.withLock(async () => {
331
- return await db.query('UPDATE ...')
332
- })
333
-
334
- // Manual lock/unlock
335
- await mu.lock()
336
- try { /* critical section */ }
337
- finally { mu.unlock() }
338
- ```
339
-
340
- ### `Once<T>`
341
-
342
- Run a function exactly once, even if called concurrently. All callers get the same result.
343
-
344
- ```typescript
345
- const once = new Once<DBConnection>()
346
- const conn = await once.do(() => createExpensiveConnection())
347
- // Subsequent calls return the cached result
348
- ```
349
-
350
- ### `select(cases, opts?)`
351
-
352
- Wait for the first of multiple promises to resolve, like Go's `select`.
353
-
354
- ```typescript
355
- // Blocking — waits for first ready
356
- await select([
357
- [ch.recv(), (value) => console.log('received', value)],
358
- [after(5000), () => console.log('timeout')],
359
- ])
360
-
361
- // Non-blocking — returns immediately if nothing is ready (Go's select with default)
362
- await select(
363
- [[ch.recv(), (value) => process(value)]],
364
- { default: () => console.log('channel not ready') },
365
- )
366
- ```
367
-
368
- ### `after(ms)` / `ticker(ms)`
369
-
370
- Timers for use with `select` and async iteration.
371
-
372
- ```typescript
373
- await after(1000) // one-shot: resolves after 1 second
374
-
375
- // Repeating: tick every 500ms
376
- const t = ticker(500)
377
- for await (const _ of t) {
378
- console.log('tick')
379
- if (shouldStop) t.stop()
380
- }
96
+ for await (const item of ch) console.log(item)
381
97
  ```
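Conceptually, a channel is an async queue with a closed state. A minimal in-process sketch of that idea (illustration only — `MiniChan` is not puru's implementation, which also supports unbuffered capacity and cross-thread transport):

```ts
// Minimal channel sketch: unbounded async queue, null as the closed signal.
class MiniChan<T> {
  private buf: T[] = []
  private takers: ((v: T | null) => void)[] = []
  private closed = false

  async send(v: T): Promise<void> {
    const taker = this.takers.shift()
    if (taker) taker(v) // hand off to a waiting receiver
    else this.buf.push(v) // otherwise buffer it
  }

  async recv(): Promise<T | null> {
    if (this.buf.length > 0) return this.buf.shift() as T
    if (this.closed) return null // closed and drained
    return new Promise((resolve) => this.takers.push(resolve))
  }

  close(): void {
    this.closed = true
    for (const t of this.takers.splice(0)) t(null) // wake pending receivers
  }

  async *[Symbol.asyncIterator]() {
    for (;;) {
      const v = await this.recv()
      if (v === null) return
      yield v
    }
  }
}

const ch = new MiniChan<number>()
await ch.send(1)
await ch.send(2)
ch.close()

const seen: number[] = []
for await (const v of ch) seen.push(v)
console.log(seen) // [1, 2]
```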
382
98
 
383
- ### `register(name, fn)` / `run(name, ...args)`
99
+ ## Performance
384
100
 
385
- Named task registry. Register functions by name, call them by name.
101
+ Measured on Apple M1 Pro (8 cores). Full results in [BENCHMARKS.md](docs/BENCHMARKS.md).
386
102
 
387
- ```typescript
388
- register('resize', (buffer, w, h) => sharp(buffer).resize(w, h).toBuffer())
389
- const resized = await run('resize', imageBuffer, 800, 600)
390
- ```
391
-
392
- ### `configure(opts?)`
103
+ | Benchmark | Single-threaded | puru | Speedup |
104
+ | --- | --: | --: | --: |
105
+ | Fibonacci (fib(38) x8) | 4,345 ms | 2,131 ms | **2.0x** |
106
+ | Prime counting (2M range) | 335 ms | 77 ms | **4.4x** |
107
+ | 100 concurrent async tasks | 1,140 ms | 16 ms | **73x** |
108
+ | Fan-out pipeline (4 workers) | 176 ms | 51 ms | **3.4x** |
393
109
 
394
- Optional global configuration. Must be called before the first `spawn()`.
110
+ Spawn overhead: ~0.1-0.5ms. Use for tasks above ~5ms.
395
111
 
396
- ```typescript
397
- configure({
398
- maxThreads: 4, // default: os.availableParallelism()
399
- concurrency: 64, // max concurrent tasks per shared worker (default: 64)
400
- idleTimeout: 30_000, // kill idle workers after 30s (default)
401
- adapter: 'auto', // 'auto' | 'node' | 'bun' | 'inline'
402
- })
403
- ```
112
+ ## Two Modes
404
113
 
405
- ### `stats()` / `resize(n)`
114
+ | Mode | Use it for | What happens |
115
+ | --- | --- | --- |
116
+ | `spawn(fn)` | CPU-bound work | Dedicated worker thread |
117
+ | `spawn(fn, { concurrent: true })` | Async / I/O work | Shares a worker's event loop |
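The scheduling idea behind concurrent mode — many async tasks multiplexed over a bounded number of runners — can be sketched with plain promises. This is a conceptual analogy only, not puru's scheduler; `withLimit` is a hypothetical helper:

```ts
// Run jobs with at most `limit` in flight at once, preserving result order.
async function withLimit<T>(limit: number, jobs: (() => Promise<T>)[]): Promise<T[]> {
  const results: T[] = new Array(jobs.length)
  let next = 0
  async function runner(): Promise<void> {
    while (next < jobs.length) {
      const i = next++ // claim the next job index
      results[i] = await jobs[i]()
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, jobs.length) }, runner))
  return results
}

const jobs = [1, 2, 3, 4, 5].map((n) => async () => n * 2)
const out = await withLimit(2, jobs)
console.log(out) // [2, 4, 6, 8, 10]
```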
406
118
 
407
- ```typescript
408
- const s = stats() // { totalWorkers, idleWorkers, busyWorkers, queuedTasks, ... }
409
- resize(8) // scale pool up/down at runtime
410
- ```
119
+ ## When To Use What
411
120
 
412
- ### `detectRuntime()` / `detectCapability()`
121
+ | Situation | Tool |
122
+ | --- | --- |
123
+ | One heavy CPU task | `spawn(fn)` |
124
+ | Same logic, many inputs | `task(fn)` |
125
+ | Wait for all tasks | `WaitGroup` |
126
+ | Fail-fast, cancel the rest | `ErrGroup` (with `setLimit()` for throttling) |
127
+ | Timeouts and cancellation | `context` + `spawn(fn, { ctx })` |
128
+ | Producer/consumer pipelines | `chan()` + `select()` |
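`select()`'s behavior is close to racing promises and dispatching on the winner. A simplified sketch of that semantics (`selectFirst` is a hypothetical name; puru's `select` also supports a non-blocking `default` case):

```ts
type SelectCase<T> = [Promise<T>, (v: T) => void]

// Wait for the first case to settle, then run that case's handler only.
async function selectFirst<T>(cases: SelectCase<T>[]): Promise<void> {
  const winner = await Promise.race(
    cases.map(([p], i) => p.then((v) => ({ i, v }))),
  )
  cases[winner.i][1](winner.v)
}

const log: string[] = []
await selectFirst<string>([
  [new Promise((r) => setTimeout(() => r('value'), 5)), (v) => log.push(`received ${v}`)],
  [new Promise((r) => setTimeout(() => r('timeout'), 50)), () => log.push('timed out')],
])
console.log(log) // ['received value']
```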
413
129
 
414
- ```typescript
415
- detectRuntime() // 'node' | 'bun' | 'deno' | 'browser'
416
- detectCapability() // 'full-threads' | 'single-thread'
417
- ```
130
+ ## The Big Rule
418
131
 
419
- ## Benchmarks
132
+ > **Functions passed to `spawn()` cannot capture outer variables.** They are serialized as text and sent to a worker — closures don't survive.
420
133
 
421
- Apple M1 Pro (8 cores), 16 GB RAM. Median of 5 runs after warmup.
134
+ ```ts
135
+ const x = 42
136
+ spawn(() => x + 1) // ReferenceError at runtime
422
137
 
423
- ```bash
424
- npm run bench # all benchmarks (Node.js)
425
- npm run bench:bun # all benchmarks (Bun)
138
+ spawn(() => {
139
+ const x = 42 // define inside
140
+ return x + 1
141
+ }) // works
426
142
  ```
427
143
 
428
- ### CPU-Bound Parallelism
429
-
430
- | Benchmark | Without puru | With puru | Speedup |
431
- | --- | --: | --: | --: |
432
- | Fibonacci (fib(38) x8) | 4,345 ms | 2,131 ms | **2.0x** |
433
- | Prime counting (2M range) | 335 ms | 77 ms | **4.4x** |
434
- | Matrix multiply (200x200 x8) | 140 ms | 39 ms | **3.6x** |
435
- | Data processing (100K items x8) | 221 ms | 67 ms | **3.3x** |
436
-
437
- <details>
438
- <summary>Bun results</summary>
439
-
440
- | Benchmark | Without puru | With puru | Speedup |
441
- | --- | --: | --: | --: |
442
- | Fibonacci (fib(38) x8) | 2,208 ms | 380 ms | **5.8x** |
443
- | Prime counting (2M range) | 201 ms | 50 ms | **4.0x** |
444
- | Matrix multiply (200x200 x8) | 197 ms | 57 ms | **3.5x** |
445
- | Data processing (100K items x8) | 214 ms | 109 ms | **2.0x** |
144
+ Use `task(fn)` to pass arguments to reusable worker functions.
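The rule follows from how the function travels. A self-contained sketch of the mechanism — serializing a function to source text drops the closure bindings it referred to:

```ts
const captured = 42
const fn = () => captured + 1

// Serialization keeps only the source text — the binding for `captured` is gone.
const source = fn.toString() // "() => captured + 1"
const revived = new Function(`return (${source})`)() as () => number

let error: unknown
try {
  revived()
} catch (e) {
  error = e // ReferenceError: captured is not defined
}
console.log(error instanceof ReferenceError) // true
```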
446
145
 
447
- </details>
146
+ ## What's Included
448
147
 
449
- ### Channels Fan-Out Pipeline
148
+ **Coordination:** `chan()` &middot; `WaitGroup` &middot; `ErrGroup` &middot; `select()` &middot; `context`
450
149
 
451
- 200 items with CPU-heavy transform, 4 parallel transform workers:
150
+ **Synchronization:** `Mutex` &middot; `RWMutex` &middot; `Once` &middot; `Cond`
452
151
 
453
- | Approach | Time | vs Sequential |
454
- | --- | --: | --: |
455
- | Sequential (no channels) | 176 ms | baseline |
456
- | Main-thread channels only | 174 ms | 1.0x |
457
- | **puru fan-out (4 workers)** | **51 ms** | **3.4x faster** |
152
+ **Timing:** `after()` &middot; `ticker()` &middot; `Timer`
458
153
 
459
- <details>
460
- <summary>Bun results</summary>
154
+ **Ergonomics:** `task()` &middot; `configure()` &middot; `stats()` &middot; directional channels &middot; channel `len`/`cap`
461
155
 
462
- | Approach | Time | vs Sequential |
463
- | --- | --: | --: |
464
- | Sequential (no channels) | 59 ms | baseline |
465
- | Main-thread channels only | 60 ms | 1.0x |
466
- | **puru fan-out (4 workers)** | **22 ms** | **2.7x faster** |
156
+ All modeled after Go's concurrency primitives. Full API in [docs/API.md](docs/API.md).
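As one example of how these primitives map onto promises, an async mutex can be sketched by chaining critical sections onto a tail promise. This is a conceptual sketch (`MiniMutex`), not puru's `Mutex`:

```ts
class MiniMutex {
  private tail: Promise<unknown> = Promise.resolve()

  // Queue fn behind every earlier critical section; auto-unlocks on error.
  withLock<T>(fn: () => T | Promise<T>): Promise<T> {
    const run = this.tail.then(fn)
    this.tail = run.catch(() => undefined) // keep the chain alive after rejection
    return run
  }
}

const mu = new MiniMutex()
const order: number[] = []
await Promise.all([
  mu.withLock(async () => {
    order.push(1)
    await new Promise((r) => setTimeout(r, 10))
    order.push(2) // still holds the lock across the await
  }),
  mu.withLock(() => { order.push(3) }),
])
console.log(order) // [1, 2, 3]
```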
467
157
 
468
- </details>
158
+ ## Why Not Just Use...
469
159
 
470
- ### Concurrent Async
160
+ **`Promise.all()`** — great for cheap async work. Use puru when work is CPU-heavy or you need the main thread to stay responsive.
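The difference is easy to see on the main thread: synchronous CPU work starves anything queued behind it, promises included. A self-contained demonstration (no puru involved):

```ts
const order: string[] = []
setTimeout(() => order.push('timer fired'), 0)

let sum = 0
for (let i = 0; i < 10_000_000; i++) sum += i // blocks the event loop
order.push('cpu finished')

// Give the starved timer a chance to run now that the loop is free.
await new Promise((r) => setTimeout(r, 10))
console.log(order) // ['cpu finished', 'timer fired']
```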
471
161
 
472
- 100 async tasks with simulated I/O + CPU, running off the main thread:
162
+ **`worker_threads`** — powerful but low-level: separate files, manual messaging, manual pooling, no channels/WaitGroup/select. puru keeps the power, removes the ceremony.
473
163
 
474
- | Approach | Time | vs Sequential |
475
- | --- | --: | --: |
476
- | Sequential | 1,140 ms | baseline |
477
- | **puru concurrent** | **16 ms** | **73x faster** |
478
-
479
- <details>
480
- <summary>Bun results</summary>
481
-
482
- | Approach | Time | vs Sequential |
483
- | --- | --: | --: |
484
- | Sequential | 1,110 ms | baseline |
485
- | **puru concurrent** | **13 ms** | **87x faster** |
486
-
487
- </details>
488
-
489
- > Spawn overhead is ~0.1-0.5 ms. Use `spawn` for tasks > 5ms. For trivial operations, call directly.
164
+ **Cluster** — adds processes for request throughput. puru offloads heavy work inside each process. They compose well together.
490
165
 
491
166
  ## Runtimes
492
167
 
493
- | Runtime | Support | How |
494
- | --- | --- | --- |
495
- | Node.js >= 18 | Full | `worker_threads` |
496
- | Bun | Full | Web Workers (file-based) |
497
- | Deno | Planned | — |
498
- | Cloudflare Workers | Error | No thread support |
499
- | Vercel Edge | Error | No thread support |
168
+ | Runtime | Status |
169
+ | --- | --- |
170
+ | Node.js >= 20 | Full support |
171
+ | Bun | Full support |
172
+ | Deno | Planned |
500
173
 
501
174
  ## Testing
502
175
 
503
- ```typescript
176
+ ```ts
504
177
  import { configure } from '@dmop/puru'
505
- configure({ adapter: 'inline' }) // runs tasks in main thread, no real workers
178
+ configure({ adapter: 'inline' }) // runs on main thread, no real workers
506
179
  ```
507
180
 
181
+ ## Docs
182
+
183
+ - [API reference](docs/API.md)
184
+ - [Benchmarks](docs/BENCHMARKS.md)
185
+ - [Production use cases](USE-CASES.md)
186
+ - [Examples](examples)
187
+ - [AI assistant guide](AGENTS.md)
188
+
508
189
  ## Limitations
509
190
 
510
- - Functions passed to `spawn()` cannot capture variables from the enclosing scope
511
- - Channel values must be structured-cloneable (no functions, symbols, or WeakRefs)
512
- - `null` cannot be sent through a channel (it's the "closed" sentinel)
513
- - `register()`/`run()` args must be JSON-serializable
514
- - Channel operations from workers have ~0.1-0.5ms RPC overhead per send/recv (fine for coarse-grained coordination, not for per-item micro-operations)
191
+ - `spawn()` functions cannot capture outer variables (see [The Big Rule](#the-big-rule))
192
+ - Channel values must be structured-cloneable (no functions, symbols, WeakRefs)
193
+ - `null` is reserved as the channel-closed sentinel
194
+ - `task()` arguments must be JSON-serializable
515
195
 
516
196
  ## License
517
197