@dmop/puru 0.1.5 → 0.1.10

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,517 +1,242 @@
  # puru (プール)

- A thread pool with Go-style concurrency primitives for JavaScript — spawn tasks off the main thread with channels, WaitGroup, select, and more. No worker files, no boilerplate.
+ > A thread pool for JavaScript with Go-style concurrency primitives.
+ >
+ > Run work off the main thread with inline functions, channels, `WaitGroup`, `ErrGroup`, `select`, `Mutex`, `Once`, and more. No worker files. No boilerplate.

- Works on **Node.js** and **Bun**. Deno support coming soon.
+ `puru` is for the moment when `Promise.all()` is no longer enough, but raw `worker_threads` feels too low-level.

- *puru (プール) means "pool" in Japanese.*
+ - CPU-heavy work: use dedicated worker threads
+ - Async / I/O-heavy work: share worker threads efficiently with `concurrent: true`
+ - Coordination: use channels, `WaitGroup`, `ErrGroup`, `select`, `Mutex`, `Once`, and `ticker`
+ - Ergonomics: write worker logic inline or define reusable typed tasks

- ## Install
+ Works on **Node.js >= 20** and **Bun**.

- ```bash
- npm install @dmop/puru
- # or
- bun add @dmop/puru
- ```
+ ## Why This Exists

- ## Quick Start
+ JavaScript apps usually hit one of these walls:

- ```typescript
- import { spawn, chan, WaitGroup, select, after } from '@dmop/puru'
+ - A request handler does 200ms of CPU work and stalls the event loop
+ - You want worker threads, but you do not want separate worker files and message plumbing
+ - You need more than raw parallelism: cancellation, fan-out, backpressure, coordination
+ - You like Go's concurrency model and want something similar in JavaScript

- // CPU work runs in a dedicated worker thread
- const { result } = spawn(() => fibonacci(40))
- console.log(await result)
+ `puru` gives you a managed worker pool with a much nicer programming model.

- // I/O work — many tasks share worker threads
- const wg = new WaitGroup()
- for (const url of urls) {
- wg.spawn(() => fetch(url).then(r => r.json()), { concurrent: true })
- }
- const results = await wg.wait()
- ```
+ ## Install

- ## How It Works
-
- puru manages a **thread pool** — tasks are dispatched onto a fixed set of worker threads:
-
- ```text
- puru thread pool
- ┌──────────────────────────────┐
- │ │
- │ Task 1 ─┐ │
- │ Task 2 ─┤──► Thread 1 │
- │ Task 3 ─┘ (shared) │
- │ │
- │ Task 4 ────► Thread 2 │ N threads
- │ (exclusive) │ (os.availableParallelism)
- │ │
- │ Task 5 ─┐ │
- │ Task 6 ─┤──► Thread 3 │
- │ Task 7 ─┘ (shared) │
- │ │
- └──────────────────────────────┘
+ ```bash
+ npm install @dmop/puru
+ # or
+ bun add @dmop/puru
  ```

- **Two modes:**
-
- | Mode | Flag | Best for | How it works |
- | --- | --- | --- | --- |
- | **Exclusive** (default) | `spawn(fn)` | CPU-bound work | 1 task per thread, full core usage |
- | **Concurrent** | `spawn(fn, { concurrent: true })` | I/O-bound / async work | Many tasks share a thread's event loop |
-
- CPU-bound work gets a dedicated thread. I/O-bound work shares threads efficiently. The API is inspired by Go's concurrency primitives (channels, WaitGroup, select), but the underlying mechanism is a thread pool — not a green thread scheduler.
+ ## 30-Second Tour

- ## Why puru
+ ```ts
+ import { spawn, task, WaitGroup, chan } from '@dmop/puru'

- Same task, four ways process 4 items in parallel:
-
- **worker_threads** 2 files, 15 lines, manual everything:
-
- ```typescript
- // worker.js (separate file required)
- const { parentPort } = require('worker_threads')
- parentPort.on('message', (data) => {
- parentPort.postMessage(heavyWork(data))
+ // 1. One CPU-heavy task on a dedicated worker
+ const { result: fib } = spawn(() => {
+ function fibonacci(n: number): number {
+ if (n <= 1) return n
+ return fibonacci(n - 1) + fibonacci(n - 2)
+ }
+ return fibonacci(40)
  })

- // main.js
- import { Worker } from 'worker_threads'
- const results = await Promise.all(items.map(item =>
- new Promise((resolve, reject) => {
- const w = new Worker('./worker.js')
- w.postMessage(item)
- w.on('message', resolve)
- w.on('error', reject)
- })
- ))
- ```
-
- **Tinypool** — still needs a separate file:
-
- ```typescript
- // worker.js (separate file required)
- export default function(data) { return heavyWork(data) }
-
- // main.js
- import Tinypool from 'tinypool'
- const pool = new Tinypool({ filename: './worker.js' })
- const results = await Promise.all(items.map(item => pool.run(item)))
- ```
-
- **Piscina** — same pattern, separate file:
-
- ```typescript
- // worker.js (separate file required)
- module.exports = function(data) { return heavyWork(data) }
-
- // main.js
- import Piscina from 'piscina'
- const pool = new Piscina({ filename: './worker.js' })
- const results = await Promise.all(items.map(item => pool.run(item)))
- ```
-
- **puru** — one file, 4 lines:
+ // 2. Reusable typed worker function
+ const resize = task((width: number, height: number) => {
+ return { width, height, pixels: width * height }
+ })

- ```typescript
- import { WaitGroup } from '@dmop/puru'
+ // 3. Structured concurrency
  const wg = new WaitGroup()
- for (const item of items) wg.spawn(() => heavyWork(item))
- const results = await wg.wait()
- ```
-
- | Feature | worker_threads | Tinypool | Piscina | **puru** |
- | --- | --- | --- | --- | --- |
- | Separate worker file | Required | Required | Required | **Not needed** |
- | Inline functions | No | No | No | **Yes** |
- | Managed thread pool | No | No | No | **Yes** |
- | Concurrent mode (I/O) | No | No | No | **Yes** |
- | Channels (cross-thread) | No | No | No | **Yes** |
- | Cancellation | No | No | No | **Yes** |
- | WaitGroup / ErrGroup | No | No | No | **Yes** |
- | select (with default) | No | No | No | **Yes** |
- | Mutex / Once | No | No | No | **Yes** |
- | Ticker | No | No | No | **Yes** |
- | Backpressure | No | No | No | **Yes** |
- | Priority scheduling | No | No | Yes | **Yes** |
- | Pool management | Manual | Automatic | Automatic | **Automatic** |
- | Bun support | No | No | No | **Yes** |
-
- ### puru vs Node.js Cluster
-
- These solve different problems and are meant to be used together in production.
-
- **Node Cluster** copies your entire app into N processes. The OS load-balances incoming connections across them. The goal is request throughput — use all cores to handle more concurrent HTTP requests.
-
- **puru** manages a thread pool inside a single process. Heavy tasks are offloaded off the main event loop to worker threads. The goal is CPU task isolation — use all cores without blocking the event loop.
-
- ```text
- Node Cluster (4 processes):
-
- OS / Load Balancer
- ┌─────────┬─────────┬─────────┐
- ▼ ▼ ▼ ▼
- ┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
- │Process │ │Process │ │Process │ │Process │
- │full app│ │full app│ │full app│ │full app│
- │own DB │ │own DB │ │own DB │ │own DB │
- │~100MB │ │~100MB │ │~100MB │ │~100MB │
- └────────┘ └────────┘ └────────┘ └────────┘
-
- puru (1 process, thread pool):
-
- ┌──────────────────────────────────────┐
- │ Your App (1 process) │
- │ │
- │ Main thread — handles HTTP, DB, I/O │
- │ │
- │ ┌──────────┐ ┌──────────┐ │
- │ │ Thread 1 │ │ Thread 2 │ ... │
- │ │ CPU task │ │ CPU task │ │
- │ └──────────┘ └──────────┘ │
- │ shared memory, one DB pool │
- └──────────────────────────────────────┘
- ```
-
- What happens without puru, even with Cluster:
-
- ```text
- Request 1 → Process 1 → resize image (2s) → Process 1 event loop FROZEN
- Request 2 → Process 2 → handles fine ✓
- Request 3 → Process 3 → handles fine ✓
- (other processes still work, but each process still blocks on heavy tasks)
-
- With puru inside each process:
- Request 1 → spawn(resizeImage) → worker thread, main thread free ✓
- Request 2 → main thread handles instantly ✓
- Request 3 → main thread handles instantly ✓
- ```
-
- In production, use both:
-
- ```text
- PM2 / Cluster (4 processes) ← maximise request throughput
- └── each process runs puru ← keep each event loop unblocked
- ```
-
- | | Cluster | puru |
- | --- | --- | --- |
- | Unit | Process | Thread |
- | Memory | ~100MB per copy | Shared, much lower |
- | Shared state | Needs Redis/IPC | Same process |
- | Solves | Request throughput | CPU task offloading |
- | Event loop | Still blocks per process | Never blocks |
- | DB connections | One pool per process | One pool total |
- | Bun support | No cluster module | Yes |
-
- ## API
-
- ### `spawn(fn, opts?)`
-
- Run a function in a worker thread. Returns `{ result: Promise<T>, cancel: () => void }`.
-
- ```typescript
- // CPU-bound — exclusive mode (default)
- const { result } = spawn(() => fibonacci(40))
-
- // I/O-bound — concurrent mode (many tasks per thread)
- const { result } = spawn(() => fetch(url), { concurrent: true })
+ wg.spawn(() => {
+ let sum = 0
+ for (let i = 0; i < 1_000_000; i++) sum += i
+ return sum
+ })
+ wg.spawn(
+ () => fetch('https://api.example.com/users/1').then((r) => r.json()),
+ { concurrent: true },
+ )

- // With priority
- const { result } = spawn(() => criticalWork(), { priority: 'high' })
+ // 4. Channels for coordination
+ const jobs = chan<number>(10)
+ spawn(async ({ jobs }) => {
+ for (let i = 0; i < 10; i++) await jobs.send(i)
+ jobs.close()
+ }, { channels: { jobs }, concurrent: true })

- // Cancel
- const { result, cancel } = spawn(() => longTask())
- setTimeout(cancel, 5000)
+ console.log(await fib)
+ console.log(await resize(800, 600))
+ console.log(await wg.wait())
  ```

- **Exclusive mode** (default): the function gets a dedicated thread. Use for CPU-heavy work.
+ ## The Big Rule

- **Concurrent mode** (`{ concurrent: true }`): multiple tasks share a thread's event loop. Use for async/I/O work where you want to run thousands of tasks without thousands of threads.
+ Functions passed to `spawn()` are serialized with `.toString()` and executed in a worker.

- Functions must be self-contained — they cannot capture variables from the enclosing scope:
+ That means they **cannot capture variables from the enclosing scope**.

- ```typescript
+ ```ts
  const x = 42
- spawn(() => x + 1) // ReferenceError: x is not defined
- spawn(() => 42 + 1) // works
- ```
-
- ### `chan(capacity?)`
-
- Create a channel for communicating between async tasks — including across worker threads.
-
- ```typescript
- const ch = chan<number>(10) // buffered, capacity 10
- const ch = chan<string>() // unbuffered, capacity 0
-
- await ch.send(42)
- const value = await ch.recv() // 42

- ch.close()
- await ch.recv() // null (closed)
+ spawn(() => x + 1) // ReferenceError at runtime

- // Async iteration
- for await (const value of ch) {
- process(value)
- }
+ spawn(() => {
+ const x = 42
+ return x + 1
+ }) // works
  ```

- **Channels in workers** pass channels to `spawn()` and use them across worker threads:
-
- ```typescript
- const ch = chan<number>(10)
-
- // Producer worker
- spawn(async ({ ch }) => {
- for (let i = 0; i < 100; i++) await ch.send(i)
- ch.close()
- }, { channels: { ch } })
-
- // Consumer worker
- spawn(async ({ ch }) => {
- for await (const item of ch) process(item)
- }, { channels: { ch } })
-
- // Fan-out: multiple workers pulling from the same channel
- const input = chan<Job>(50)
- const output = chan<Result>(50)
-
- for (let i = 0; i < 4; i++) {
- spawn(async ({ input, output }) => {
- for await (const job of input) {
- await output.send(processJob(job))
- }
- }, { channels: { input, output } })
- }
- ```
-
- ### `WaitGroup`
-
- Structured concurrency. Spawn multiple tasks, wait for all.
+ If you need to pass arguments repeatedly, prefer `task(fn)`.

- ```typescript
- const wg = new WaitGroup()
- wg.spawn(() => cpuWork()) // exclusive
- wg.spawn(() => fetchData(), { concurrent: true }) // concurrent
-
- const results = await wg.wait() // resolves when all tasks succeed
- const settled = await wg.waitSettled() // resolves when all tasks settle
+ ## Why People Reach for puru

- wg.cancel() // cancel all tasks
- ```
+ ### Inline worker code

- ### `ErrGroup`
+ No separate worker file in the normal case.

- Like `WaitGroup`, but cancels all remaining tasks on first error. The Go standard for production code (`golang.org/x/sync/errgroup`).
+ ```ts
+ import { spawn } from '@dmop/puru'

- ```typescript
- const eg = new ErrGroup()
- eg.spawn(() => fetchUser(id))
- eg.spawn(() => fetchOrders(id))
- eg.spawn(() => fetchAnalytics(id))
-
- try {
- const [user, orders, analytics] = await eg.wait()
- } catch (err) {
- // First error — all other tasks were cancelled
- console.error('Failed:', err)
- }
- ```
-
- ### `Mutex`
-
- Async mutual exclusion. Serialize access to shared resources under concurrency.
-
- ```typescript
- const mu = new Mutex()
-
- // withLock — recommended (auto-unlocks on error)
- const result = await mu.withLock(async () => {
- return await db.query('UPDATE ...')
+ const { result } = spawn(() => {
+ let sum = 0
+ for (let i = 0; i < 10_000_000; i++) sum += i
+ return sum
  })
-
- // Manual lock/unlock
- await mu.lock()
- try { /* critical section */ }
- finally { mu.unlock() }
- ```
-
- ### `Once<T>`
-
- Run a function exactly once, even if called concurrently. All callers get the same result.
-
- ```typescript
- const once = new Once<DBConnection>()
- const conn = await once.do(() => createExpensiveConnection())
- // Subsequent calls return the cached result
  ```

- ### `select(cases, opts?)`
+ ### Two execution modes

- Wait for the first of multiple promises to resolve, like Go's `select`.
-
- ```typescript
- // Blocking waits for first ready
- await select([
- [ch.recv(), (value) => console.log('received', value)],
- [after(5000), () => console.log('timeout')],
- ])
-
- // Non-blocking — returns immediately if nothing is ready (Go's select with default)
- await select(
- [[ch.recv(), (value) => process(value)]],
- { default: () => console.log('channel not ready') },
- )
- ```
-
- ### `after(ms)` / `ticker(ms)`
-
- Timers for use with `select` and async iteration.
-
- ```typescript
- await after(1000) // one-shot: resolves after 1 second
-
- // Repeating: tick every 500ms
- const t = ticker(500)
- for await (const _ of t) {
- console.log('tick')
- if (shouldStop) t.stop()
- }
- ```
-
- ### `register(name, fn)` / `run(name, ...args)`
-
- Named task registry. Register functions by name, call them by name.
+ | Mode | Use it for | What happens |
+ | --- | --- | --- |
+ | `spawn(fn)` | CPU-bound work | The task gets a dedicated worker |
+ | `spawn(fn, { concurrent: true })` | Async / I/O-heavy work | Multiple tasks share a worker's event loop |

- ```typescript
- register('resize', (buffer, w, h) => sharp(buffer).resize(w, h).toBuffer())
- const resized = await run('resize', imageBuffer, 800, 600)
- ```
+ This is the key distinction:

- ### `configure(opts?)`
+ - `exclusive` mode is for actual CPU parallelism
+ - `concurrent` mode is for lots of tasks that mostly `await`

- Optional global configuration. Must be called before the first `spawn()`.
+ ### More than a worker pool

- ```typescript
- configure({
- maxThreads: 4, // default: os.availableParallelism()
- concurrency: 64, // max concurrent tasks per shared worker (default: 64)
- idleTimeout: 30_000, // kill idle workers after 30s (default)
- adapter: 'auto', // 'auto' | 'node' | 'bun' | 'inline'
- })
- ```
+ `puru` is not just `spawn()`.

- ### `stats()` / `resize(n)`
+ - `chan()` for cross-thread coordination and backpressure
+ - `WaitGroup` for “run many, wait for all”
+ - `ErrGroup` for “fail fast, cancel the rest”
+ - `select()` for first-ready coordination
+ - `Mutex` for shared resource protection
+ - `Once` for one-time initialization under concurrency
+ - `task()` for reusable typed worker functions

- ```typescript
- const s = stats() // { totalWorkers, idleWorkers, busyWorkers, queuedTasks, ... }
- resize(8) // scale pool up/down at runtime
- ```
+ ## When To Use What

- ### `detectRuntime()` / `detectCapability()`
+ | Situation | Best tool |
+ | --- | --- |
+ | One heavy synchronous task | `spawn(fn)` |
+ | Same worker logic called many times with different inputs | `task(fn)` |
+ | Many async tasks that mostly wait on I/O | `spawn(fn, { concurrent: true })` |
+ | Parallel batch with “wait for everything” | `WaitGroup` |
+ | Parallel batch where the first failure should cancel the rest | `ErrGroup` |
+ | Producer/consumer or fan-out/fan-in pipeline | `chan()` |
+ | Non-blocking coordination between async operations | `select()` |

- ```typescript
- detectRuntime() // 'node' | 'bun' | 'deno' | 'browser'
- detectCapability() // 'full-threads' | 'single-thread'
- ```
+ ## Why Not Just Use...

- ## Benchmarks
+ ### `Promise.all()`

- Apple M1 Pro (8 cores), 16 GB RAM. Median of 5 runs after warmup.
+ Use `Promise.all()` when work is already cheap and async.

- ```bash
- npm run bench # all benchmarks (Node.js)
- npm run bench:bun # all benchmarks (Bun)
- ```
-
- ### CPU-Bound Parallelism
+ Use `puru` when:

- | Benchmark | Without puru | With puru | Speedup |
- | --- | --: | --: | --: |
- | Fibonacci (fib(38) x8) | 4,345 ms | 2,131 ms | **2.0x** |
- | Prime counting (2M range) | 335 ms | 77 ms | **4.4x** |
- | Matrix multiply (200x200 x8) | 140 ms | 39 ms | **3.6x** |
- | Data processing (100K items x8) | 221 ms | 67 ms | **3.3x** |
+ - work is CPU-heavy
+ - you need the main thread to stay responsive under load
+ - you want worker coordination primitives, not just promise aggregation

- <details>
- <summary>Bun results</summary>
+ ### `worker_threads`

- | Benchmark | Without puru | With puru | Speedup |
- | --- | --: | --: | --: |
- | Fibonacci (fib(38) x8) | 2,208 ms | 380 ms | **5.8x** |
- | Prime counting (2M range) | 201 ms | 50 ms | **4.0x** |
- | Matrix multiply (200x200 x8) | 197 ms | 57 ms | **3.5x** |
- | Data processing (100K items x8) | 214 ms | 109 ms | **2.0x** |
+ Raw `worker_threads` are powerful, but they are low-level:

- </details>
+ - separate worker entry files
+ - manual message passing
+ - manual pooling
+ - no built-in channels, `WaitGroup`, `ErrGroup`, or `select`

- ### Channels Fan-Out Pipeline
+ `puru` keeps the power and removes most of the ceremony.

- 200 items with CPU-heavy transform, 4 parallel transform workers:
+ ### Cluster

- | Approach | Time | vs Sequential |
- | --- | --: | --: |
- | Sequential (no channels) | 176 ms | baseline |
- | Main-thread channels only | 174 ms | 1.0x |
- | **puru fan-out (4 workers)** | **51 ms** | **3.4x faster** |
+ Cluster solves a different problem.

- <details>
- <summary>Bun results</summary>
+ - Cluster: more processes, better request throughput
+ - `puru`: offload heavy work inside each process

- | Approach | Time | vs Sequential |
- | --- | --: | --: |
- | Sequential (no channels) | 59 ms | baseline |
- | Main-thread channels only | 60 ms | 1.0x |
- | **puru fan-out (4 workers)** | **22 ms** | **2.7x faster** |
+ They work well together.

- </details>
+ ## Feature Snapshot

- ### Concurrent Async
+ | Feature | `puru` |
+ | --- | --- |
+ | Inline worker functions | Yes |
+ | Dedicated CPU workers | Yes |
+ | Shared-worker async mode | Yes |
+ | Channels across workers | Yes |
+ | WaitGroup / ErrGroup | Yes |
+ | `select` / timers | Yes |
+ | Mutex / Once | Yes |
+ | Bun support | Yes |
+ | TypeScript support | Yes |

- 100 async tasks with simulated I/O + CPU, running off the main thread:
+ ## Performance

- | Approach | Time | vs Sequential |
- | --- | --: | --: |
- | Sequential | 1,140 ms | baseline |
- | **puru concurrent** | **16 ms** | **73x faster** |
+ `puru` is designed for real work, not micro-bench tricks.

- <details>
- <summary>Bun results</summary>
+ - Spawn overhead is roughly `0.1-0.5ms`
+ - As a rule of thumb, use worker threads for tasks above `~5ms`
+ - CPU-bound benchmarks show real speedups from multi-core execution
+ - Concurrent async benchmarks show large gains when many tasks mostly wait on I/O off the main thread

- | Approach | Time | vs Sequential |
- | --- | --: | --: |
- | Sequential | 1,110 ms | baseline |
- | **puru concurrent** | **13 ms** | **87x faster** |
+ Full benchmark tables live in [docs/BENCHMARKS.md](docs/BENCHMARKS.md).

- </details>
+ ## Docs

- > Spawn overhead is ~0.1-0.5 ms. Use `spawn` for tasks > 5ms. For trivial operations, call directly.
+ - [API reference](docs/API.md)
+ - [Benchmarks](docs/BENCHMARKS.md)
+ - [Production use cases](USE-CASES.md)
+ - [Examples](examples)
+ - [AI assistant guide](AGENTS.md)
+ - [Full LLM reference](llms-full.txt)

  ## Runtimes

- | Runtime | Support | How |
+ | Runtime | Support | Notes |
  | --- | --- | --- |
- | Node.js >= 18 | Full | `worker_threads` |
- | Bun | Full | Web Workers (file-based) |
- | Deno | Planned | |
- | Cloudflare Workers | Error | No thread support |
- | Vercel Edge | Error | No thread support |
+ | Node.js >= 20 | Full | Uses `worker_threads` |
+ | Bun | Full | Uses Web Workers |
+ | Deno | Planned | Not yet implemented |

  ## Testing

- ```typescript
+ Use the inline adapter to run tasks on the main thread in tests:
+
+ ```ts
  import { configure } from '@dmop/puru'
- configure({ adapter: 'inline' }) // runs tasks in main thread, no real workers
+
+ configure({ adapter: 'inline' })
  ```

  ## Limitations

- - Functions passed to `spawn()` cannot capture variables from the enclosing scope
- - Channel values must be structured-cloneable (no functions, symbols, or WeakRefs)
- - `null` cannot be sent through a channel (it's the "closed" sentinel)
- - `register()`/`run()` args must be JSON-serializable
- - Channel operations from workers have ~0.1-0.5ms RPC overhead per send/recv (fine for coarse-grained coordination, not for per-item micro-operations)
+ - `spawn()` functions cannot capture outer variables
+ - Channel values must be structured-cloneable
+ - `null` is reserved as the channel closed sentinel
+ - `task()` arguments must be JSON-serializable
+ - Channel ops from workers have RPC overhead, so use them for coordination, not ultra-fine-grained inner loops

  ## License