@dmop/puru 0.1.4 → 0.1.10
- package/AGENTS.md +184 -0
- package/README.md +160 -372
- package/dist/index.cjs +175 -149
- package/dist/index.cjs.map +1 -1
- package/dist/index.d.cts +398 -17
- package/dist/index.d.ts +398 -17
- package/dist/index.js +175 -149
- package/dist/index.js.map +1 -1
- package/llms-full.txt +286 -0
- package/llms.txt +45 -0
- package/package.json +38 -5
package/README.md
CHANGED
|
@@ -1,454 +1,242 @@
|
|
|
1
1
|
# puru (プール)
|
|
2
2
|
|
|
3
|
-
A thread pool with Go-style concurrency primitives
|
|
3
|
+
> A thread pool for JavaScript with Go-style concurrency primitives.
|
|
4
|
+
>
|
|
5
|
+
> Run work off the main thread with inline functions, channels, `WaitGroup`, `ErrGroup`, `select`, `Mutex`, `Once`, and more. No worker files. No boilerplate.
|
|
4
6
|
|
|
5
|
-
|
|
7
|
+
`puru` is for the moment when `Promise.all()` is no longer enough, but raw `worker_threads` feels too low-level.
|
|
6
8
|
|
|
7
|
-
|
|
9
|
+
- CPU-heavy work: use dedicated worker threads
|
|
10
|
+
- Async / I/O-heavy work: share worker threads efficiently with `concurrent: true`
|
|
11
|
+
- Coordination: use channels, `WaitGroup`, `ErrGroup`, `select`, `Mutex`, `Once`, and `ticker`
|
|
12
|
+
- Ergonomics: write worker logic inline or define reusable typed tasks
|
|
8
13
|
|
|
9
|
-
|
|
14
|
+
Works on **Node.js >= 20** and **Bun**.
|
|
10
15
|
|
|
11
|
-
|
|
12
|
-
npm install @dmop/puru
|
|
13
|
-
# or
|
|
14
|
-
bun add @dmop/puru
|
|
15
|
-
```
|
|
16
|
+
## Why This Exists
|
|
16
17
|
|
|
17
|
-
|
|
18
|
+
JavaScript apps usually hit one of these walls:
|
|
18
19
|
|
|
19
|
-
|
|
20
|
-
|
|
20
|
+
- A request handler does 200ms of CPU work and stalls the event loop
|
|
21
|
+
- You want worker threads, but you do not want separate worker files and message plumbing
|
|
22
|
+
- You need more than raw parallelism: cancellation, fan-out, backpressure, coordination
|
|
23
|
+
- You like Go's concurrency model and want something similar in JavaScript
|
|
21
24
|
|
|
22
|
-
|
|
23
|
-
const { result } = spawn(() => fibonacci(40))
|
|
24
|
-
console.log(await result)
|
|
25
|
+
`puru` gives you a managed worker pool with a much nicer programming model.
|
|
25
26
|
|
|
26
|
-
|
|
27
|
-
const wg = new WaitGroup()
|
|
28
|
-
for (const url of urls) {
|
|
29
|
-
wg.spawn(() => fetch(url).then(r => r.json()), { concurrent: true })
|
|
30
|
-
}
|
|
31
|
-
const results = await wg.wait()
|
|
32
|
-
```
|
|
27
|
+
## Install
|
|
33
28
|
|
|
34
|
-
|
|
35
|
-
|
|
36
|
-
|
|
37
|
-
|
|
38
|
-
```text
|
|
39
|
-
puru thread pool
|
|
40
|
-
┌──────────────────────────────┐
|
|
41
|
-
│ │
|
|
42
|
-
│ Task 1 ─┐ │
|
|
43
|
-
│ Task 2 ─┤──► Thread 1 │
|
|
44
|
-
│ Task 3 ─┘ (shared) │
|
|
45
|
-
│ │
|
|
46
|
-
│ Task 4 ────► Thread 2 │ N threads
|
|
47
|
-
│ (exclusive) │ (os.availableParallelism)
|
|
48
|
-
│ │
|
|
49
|
-
│ Task 5 ─┐ │
|
|
50
|
-
│ Task 6 ─┤──► Thread 3 │
|
|
51
|
-
│ Task 7 ─┘ (shared) │
|
|
52
|
-
│ │
|
|
53
|
-
└──────────────────────────────┘
|
|
29
|
+
```bash
|
|
30
|
+
npm install @dmop/puru
|
|
31
|
+
# or
|
|
32
|
+
bun add @dmop/puru
|
|
54
33
|
```
|
|
55
34
|
|
|
56
|
-
|
|
57
|
-
|
|
58
|
-
| Mode | Flag | Best for | How it works |
|
|
59
|
-
| --- | --- | --- | --- |
|
|
60
|
-
| **Exclusive** (default) | `spawn(fn)` | CPU-bound work | 1 task per thread, full core usage |
|
|
61
|
-
| **Concurrent** | `spawn(fn, { concurrent: true })` | I/O-bound / async work | Many tasks share a thread's event loop |
|
|
62
|
-
|
|
63
|
-
CPU-bound work gets a dedicated thread. I/O-bound work shares threads efficiently. The API is inspired by Go's concurrency primitives (channels, WaitGroup, select), but the underlying mechanism is a thread pool — not a green thread scheduler.
|
|
35
|
+
## 30-Second Tour
|
|
64
36
|
|
|
65
|
-
|
|
37
|
+
```ts
|
|
38
|
+
import { spawn, task, WaitGroup, chan } from '@dmop/puru'
|
|
66
39
|
|
|
67
|
-
|
|
68
|
-
|
|
69
|
-
|
|
70
|
-
|
|
71
|
-
|
|
72
|
-
|
|
73
|
-
|
|
74
|
-
parentPort.on('message', (data) => {
|
|
75
|
-
parentPort.postMessage(heavyWork(data))
|
|
40
|
+
// 1. One CPU-heavy task on a dedicated worker
|
|
41
|
+
const { result: fib } = spawn(() => {
|
|
42
|
+
function fibonacci(n: number): number {
|
|
43
|
+
if (n <= 1) return n
|
|
44
|
+
return fibonacci(n - 1) + fibonacci(n - 2)
|
|
45
|
+
}
|
|
46
|
+
return fibonacci(40)
|
|
76
47
|
})
|
|
77
48
|
|
|
78
|
-
//
|
|
79
|
-
|
|
80
|
-
|
|
81
|
-
|
|
82
|
-
const w = new Worker('./worker.js')
|
|
83
|
-
w.postMessage(item)
|
|
84
|
-
w.on('message', resolve)
|
|
85
|
-
w.on('error', reject)
|
|
86
|
-
})
|
|
87
|
-
))
|
|
88
|
-
```
|
|
89
|
-
|
|
90
|
-
**Tinypool** — still needs a separate file:
|
|
91
|
-
|
|
92
|
-
```typescript
|
|
93
|
-
// worker.js (separate file required)
|
|
94
|
-
export default function(data) { return heavyWork(data) }
|
|
95
|
-
|
|
96
|
-
// main.js
|
|
97
|
-
import Tinypool from 'tinypool'
|
|
98
|
-
const pool = new Tinypool({ filename: './worker.js' })
|
|
99
|
-
const results = await Promise.all(items.map(item => pool.run(item)))
|
|
100
|
-
```
|
|
101
|
-
|
|
102
|
-
**Piscina** — same pattern, separate file:
|
|
103
|
-
|
|
104
|
-
```typescript
|
|
105
|
-
// worker.js (separate file required)
|
|
106
|
-
module.exports = function(data) { return heavyWork(data) }
|
|
107
|
-
|
|
108
|
-
// main.js
|
|
109
|
-
import Piscina from 'piscina'
|
|
110
|
-
const pool = new Piscina({ filename: './worker.js' })
|
|
111
|
-
const results = await Promise.all(items.map(item => pool.run(item)))
|
|
112
|
-
```
|
|
113
|
-
|
|
114
|
-
**puru** — one file, 4 lines:
|
|
49
|
+
// 2. Reusable typed worker function
|
|
50
|
+
const resize = task((width: number, height: number) => {
|
|
51
|
+
return { width, height, pixels: width * height }
|
|
52
|
+
})
|
|
115
53
|
|
|
116
|
-
|
|
117
|
-
import { WaitGroup } from '@dmop/puru'
|
|
54
|
+
// 3. Structured concurrency
|
|
118
55
|
const wg = new WaitGroup()
|
|
119
|
-
|
|
120
|
-
|
|
121
|
-
|
|
122
|
-
|
|
123
|
-
|
|
124
|
-
|
|
125
|
-
|
|
126
|
-
|
|
127
|
-
|
|
128
|
-
| Concurrent mode (I/O) | No | No | No | **Yes** |
|
|
129
|
-
| Channels (cross-thread) | No | No | No | **Yes** |
|
|
130
|
-
| Cancellation | No | No | No | **Yes** |
|
|
131
|
-
| WaitGroup / ErrGroup | No | No | No | **Yes** |
|
|
132
|
-
| select (with default) | No | No | No | **Yes** |
|
|
133
|
-
| Mutex / Once | No | No | No | **Yes** |
|
|
134
|
-
| Ticker | No | No | No | **Yes** |
|
|
135
|
-
| Backpressure | No | No | No | **Yes** |
|
|
136
|
-
| Priority scheduling | No | No | Yes | **Yes** |
|
|
137
|
-
| Pool management | Manual | Automatic | Automatic | **Automatic** |
|
|
138
|
-
| Bun support | No | No | No | **Yes** |
|
|
139
|
-
|
|
140
|
-
## API
|
|
141
|
-
|
|
142
|
-
### `spawn(fn, opts?)`
|
|
143
|
-
|
|
144
|
-
Run a function in a worker thread. Returns `{ result: Promise<T>, cancel: () => void }`.
|
|
145
|
-
|
|
146
|
-
```typescript
|
|
147
|
-
// CPU-bound — exclusive mode (default)
|
|
148
|
-
const { result } = spawn(() => fibonacci(40))
|
|
149
|
-
|
|
150
|
-
// I/O-bound — concurrent mode (many tasks per thread)
|
|
151
|
-
const { result } = spawn(() => fetch(url), { concurrent: true })
|
|
152
|
-
|
|
153
|
-
// With priority
|
|
154
|
-
const { result } = spawn(() => criticalWork(), { priority: 'high' })
|
|
155
|
-
|
|
156
|
-
// Cancel
|
|
157
|
-
const { result, cancel } = spawn(() => longTask())
|
|
158
|
-
setTimeout(cancel, 5000)
|
|
159
|
-
```
|
|
160
|
-
|
|
161
|
-
**Exclusive mode** (default): the function gets a dedicated thread. Use for CPU-heavy work.
|
|
162
|
-
|
|
163
|
-
**Concurrent mode** (`{ concurrent: true }`): multiple tasks share a thread's event loop. Use for async/I/O work where you want to run thousands of tasks without thousands of threads.
|
|
164
|
-
|
|
165
|
-
Functions must be self-contained — they cannot capture variables from the enclosing scope:
|
|
166
|
-
|
|
167
|
-
```typescript
|
|
168
|
-
const x = 42
|
|
169
|
-
spawn(() => x + 1) // ReferenceError: x is not defined
|
|
170
|
-
spawn(() => 42 + 1) // works
|
|
171
|
-
```
|
|
172
|
-
|
|
173
|
-
### `chan(capacity?)`
|
|
174
|
-
|
|
175
|
-
Create a channel for communicating between async tasks — including across worker threads.
|
|
176
|
-
|
|
177
|
-
```typescript
|
|
178
|
-
const ch = chan<number>(10) // buffered, capacity 10
|
|
179
|
-
const ch = chan<string>() // unbuffered, capacity 0
|
|
180
|
-
|
|
181
|
-
await ch.send(42)
|
|
182
|
-
const value = await ch.recv() // 42
|
|
56
|
+
wg.spawn(() => {
|
|
57
|
+
let sum = 0
|
|
58
|
+
for (let i = 0; i < 1_000_000; i++) sum += i
|
|
59
|
+
return sum
|
|
60
|
+
})
|
|
61
|
+
wg.spawn(
|
|
62
|
+
() => fetch('https://api.example.com/users/1').then((r) => r.json()),
|
|
63
|
+
{ concurrent: true },
|
|
64
|
+
)
|
|
183
65
|
|
|
184
|
-
|
|
185
|
-
|
|
66
|
+
// 4. Channels for coordination
|
|
67
|
+
const jobs = chan<number>(10)
|
|
68
|
+
spawn(async ({ jobs }) => {
|
|
69
|
+
for (let i = 0; i < 10; i++) await jobs.send(i)
|
|
70
|
+
jobs.close()
|
|
71
|
+
}, { channels: { jobs }, concurrent: true })
|
|
186
72
|
|
|
187
|
-
|
|
188
|
-
|
|
189
|
-
|
|
190
|
-
}
|
|
73
|
+
console.log(await fib)
|
|
74
|
+
console.log(await resize(800, 600))
|
|
75
|
+
console.log(await wg.wait())
|
|
191
76
|
```
|
|
192
77
|
|
|
193
|
-
|
|
194
|
-
|
|
195
|
-
```typescript
|
|
196
|
-
const ch = chan<number>(10)
|
|
197
|
-
|
|
198
|
-
// Producer worker
|
|
199
|
-
spawn(async ({ ch }) => {
|
|
200
|
-
for (let i = 0; i < 100; i++) await ch.send(i)
|
|
201
|
-
ch.close()
|
|
202
|
-
}, { channels: { ch } })
|
|
203
|
-
|
|
204
|
-
// Consumer worker
|
|
205
|
-
spawn(async ({ ch }) => {
|
|
206
|
-
for await (const item of ch) process(item)
|
|
207
|
-
}, { channels: { ch } })
|
|
208
|
-
|
|
209
|
-
// Fan-out: multiple workers pulling from the same channel
|
|
210
|
-
const input = chan<Job>(50)
|
|
211
|
-
const output = chan<Result>(50)
|
|
212
|
-
|
|
213
|
-
for (let i = 0; i < 4; i++) {
|
|
214
|
-
spawn(async ({ input, output }) => {
|
|
215
|
-
for await (const job of input) {
|
|
216
|
-
await output.send(processJob(job))
|
|
217
|
-
}
|
|
218
|
-
}, { channels: { input, output } })
|
|
219
|
-
}
|
|
220
|
-
```
|
|
78
|
+
## The Big Rule
|
|
221
79
|
|
|
222
|
-
|
|
80
|
+
Functions passed to `spawn()` are serialized with `.toString()` and executed in a worker.
|
|
223
81
|
|
|
224
|
-
|
|
82
|
+
That means they **cannot capture variables from the enclosing scope**.
|
|
225
83
|
|
|
226
|
-
```
|
|
227
|
-
const
|
|
228
|
-
wg.spawn(() => cpuWork()) // exclusive
|
|
229
|
-
wg.spawn(() => fetchData(), { concurrent: true }) // concurrent
|
|
84
|
+
```ts
|
|
85
|
+
const x = 42
|
|
230
86
|
|
|
231
|
-
|
|
232
|
-
const settled = await wg.waitSettled() // like Promise.allSettled
|
|
87
|
+
spawn(() => x + 1) // ReferenceError at runtime
|
|
233
88
|
|
|
234
|
-
|
|
89
|
+
spawn(() => {
|
|
90
|
+
const x = 42
|
|
91
|
+
return x + 1
|
|
92
|
+
}) // works
|
|
235
93
|
```
|
|
236
94
|
|
|
237
|
-
|
|
238
|
-
|
|
239
|
-
Like `WaitGroup`, but cancels all remaining tasks on first error. The Go standard for production code (`golang.org/x/sync/errgroup`).
|
|
95
|
+
If you need to pass arguments repeatedly, prefer `task(fn)`.
|
|
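You can see why without puru at all. A minimal sketch, using `new Function` to stand in for the worker boundary — this is not puru's actual mechanism, just the same "function travels as text" effect:

```ts
// The function is shipped as source text; text rebuilt in another scope
// (here via `new Function`) has no access to the original closure.
const x = 42
const captures = () => x + 1

const rebuilt = new Function(`return (${captures.toString()})()`)

let threw = false
try {
  rebuilt() // ReferenceError: x is not defined
} catch {
  threw = true
}
console.log(threw) // true
```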
240
96
|
|
|
241
|
-
|
|
242
|
-
const eg = new ErrGroup()
|
|
243
|
-
eg.spawn(() => fetchUser(id))
|
|
244
|
-
eg.spawn(() => fetchOrders(id))
|
|
245
|
-
eg.spawn(() => fetchAnalytics(id))
|
|
246
|
-
|
|
247
|
-
try {
|
|
248
|
-
const [user, orders, analytics] = await eg.wait()
|
|
249
|
-
} catch (err) {
|
|
250
|
-
// First error — all other tasks were cancelled
|
|
251
|
-
console.error('Failed:', err)
|
|
252
|
-
}
|
|
253
|
-
```
|
|
97
|
+
## Why People Reach for puru
|
|
254
98
|
|
|
255
|
-
###
|
|
99
|
+
### Inline worker code
|
|
256
100
|
|
|
257
|
-
|
|
101
|
+
No separate worker file in the normal case.
|
|
258
102
|
|
|
259
|
-
```
|
|
260
|
-
|
|
103
|
+
```ts
|
|
104
|
+
import { spawn } from '@dmop/puru'
|
|
261
105
|
|
|
262
|
-
|
|
263
|
-
|
|
264
|
-
|
|
106
|
+
const { result } = spawn(() => {
|
|
107
|
+
let sum = 0
|
|
108
|
+
for (let i = 0; i < 10_000_000; i++) sum += i
|
|
109
|
+
return sum
|
|
265
110
|
})
|
|
266
|
-
|
|
267
|
-
// Manual lock/unlock
|
|
268
|
-
await mu.lock()
|
|
269
|
-
try { /* critical section */ }
|
|
270
|
-
finally { mu.unlock() }
|
|
271
|
-
```
|
|
272
|
-
|
|
273
|
-
### `Once<T>`
|
|
274
|
-
|
|
275
|
-
Run a function exactly once, even if called concurrently. All callers get the same result.
|
|
276
|
-
|
|
277
|
-
```typescript
|
|
278
|
-
const once = new Once<DBConnection>()
|
|
279
|
-
const conn = await once.do(() => createExpensiveConnection())
|
|
280
|
-
// Subsequent calls return the cached result
|
|
281
|
-
```
|
|
282
|
-
|
|
283
|
-
### `select(cases, opts?)`
|
|
284
|
-
|
|
285
|
-
Wait for the first of multiple promises to resolve, like Go's `select`.
|
|
286
|
-
|
|
287
|
-
```typescript
|
|
288
|
-
// Blocking — waits for first ready
|
|
289
|
-
await select([
|
|
290
|
-
[ch.recv(), (value) => console.log('received', value)],
|
|
291
|
-
[after(5000), () => console.log('timeout')],
|
|
292
|
-
])
|
|
293
|
-
|
|
294
|
-
// Non-blocking — returns immediately if nothing is ready (Go's select with default)
|
|
295
|
-
await select(
|
|
296
|
-
[[ch.recv(), (value) => process(value)]],
|
|
297
|
-
{ default: () => console.log('channel not ready') },
|
|
298
|
-
)
|
|
299
|
-
```
|
|
300
|
-
|
|
301
|
-
### `after(ms)` / `ticker(ms)`
|
|
302
|
-
|
|
303
|
-
Timers for use with `select` and async iteration.
|
|
304
|
-
|
|
305
|
-
```typescript
|
|
306
|
-
await after(1000) // one-shot: resolves after 1 second
|
|
307
|
-
|
|
308
|
-
// Repeating: tick every 500ms
|
|
309
|
-
const t = ticker(500)
|
|
310
|
-
for await (const _ of t) {
|
|
311
|
-
console.log('tick')
|
|
312
|
-
if (shouldStop) t.stop()
|
|
313
|
-
}
|
|
314
111
|
```
|
|
315
112
|
|
|
316
|
-
###
|
|
317
|
-
|
|
318
|
-
Named task registry. Register functions by name, call them by name.
|
|
319
|
-
|
|
320
|
-
```typescript
|
|
321
|
-
register('resize', (buffer, w, h) => sharp(buffer).resize(w, h).toBuffer())
|
|
322
|
-
const resized = await run('resize', imageBuffer, 800, 600)
|
|
323
|
-
```
|
|
113
|
+
### Two execution modes
|
|
324
114
|
|
|
325
|
-
|
|
115
|
+
| Mode | Use it for | What happens |
|
|
116
|
+
| --- | --- | --- |
|
|
117
|
+
| `spawn(fn)` | CPU-bound work | The task gets a dedicated worker |
|
|
118
|
+
| `spawn(fn, { concurrent: true })` | Async / I/O-heavy work | Multiple tasks share a worker's event loop |
|
|
326
119
|
|
|
327
|
-
|
|
120
|
+
This is the key distinction:
|
|
328
121
|
|
|
329
|
-
|
|
330
|
-
|
|
331
|
-
maxThreads: 4, // default: os.availableParallelism()
|
|
332
|
-
concurrency: 64, // max concurrent tasks per shared worker (default: 64)
|
|
333
|
-
idleTimeout: 30_000, // kill idle workers after 30s (default)
|
|
334
|
-
adapter: 'auto', // 'auto' | 'node' | 'bun' | 'inline'
|
|
335
|
-
})
|
|
336
|
-
```
|
|
122
|
+
- `exclusive` mode is for actual CPU parallelism
|
|
123
|
+
- `concurrent` mode is for lots of tasks that mostly `await`
|
|
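The mechanics behind that distinction can be shown without puru — a self-contained sketch of one thread's event loop being starved by synchronous work:

```ts
// A synchronous loop starves every timer that shares its thread. This is
// why CPU-bound work wants a dedicated worker (exclusive mode), while
// await-heavy tasks can happily share one event loop (concurrent mode).
const start = Date.now()
let timerFiredAt: number | null = null

setTimeout(() => { timerFiredAt = Date.now() - start }, 10) // asked for 10ms

while (Date.now() - start < 200) {
  // ~200ms of synchronous "CPU work" on the same thread
}

setTimeout(() => {
  // The 10ms timer could only fire after the loop released the thread.
  console.log(timerFiredAt !== null && timerFiredAt >= 200) // true
}, 0)
```

In exclusive mode that busy loop runs on its own worker, so the main thread's timers fire on time.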
337
124
|
|
|
338
|
-
###
|
|
125
|
+
### More than a worker pool
|
|
339
126
|
|
|
340
|
-
|
|
341
|
-
const s = stats() // { totalWorkers, idleWorkers, busyWorkers, queuedTasks, ... }
|
|
342
|
-
resize(8) // scale pool up/down at runtime
|
|
343
|
-
```
|
|
127
|
+
`puru` is not just `spawn()`.
|
|
344
128
|
|
|
345
|
-
|
|
129
|
+
- `chan()` for cross-thread coordination and backpressure
|
|
130
|
+
- `WaitGroup` for “run many, wait for all”
|
|
131
|
+
- `ErrGroup` for “fail fast, cancel the rest”
|
|
132
|
+
- `select()` for first-ready coordination
|
|
133
|
+
- `Mutex` for shared resource protection
|
|
134
|
+
- `Once` for one-time initialization under concurrency
|
|
135
|
+
- `task()` for reusable typed worker functions
|
|
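To get a feel for the `ErrGroup` semantics in particular, here is a main-thread analogue sketched with `Promise.all` and `AbortController`. It mimics the behavior only — puru's real `ErrGroup` cancels worker tasks:

```ts
// "Fail fast, cancel the rest": the first rejection aborts a shared
// signal that every other task is expected to observe.
async function errGroupLike<T>(
  tasks: Array<(signal: AbortSignal) => Promise<T>>,
): Promise<T[]> {
  const ac = new AbortController()
  try {
    return await Promise.all(tasks.map((t) => t(ac.signal)))
  } catch (err) {
    ac.abort() // tell the survivors to stop
    throw err
  }
}

const events: string[] = []

const done = errGroupLike([
  async () => { throw new Error('boom') },  // fails first
  (signal) =>
    new Promise((_, reject) => {            // would otherwise hang forever
      signal.addEventListener('abort', () => {
        events.push('cancelled')
        reject(new Error('aborted'))
      })
    }),
]).catch((err: Error) => events.push(err.message))

done.then(() => console.log(events.join(' -> '))) // cancelled -> boom
```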
346
136
|
|
|
347
|
-
|
|
348
|
-
detectRuntime() // 'node' | 'bun' | 'deno' | 'browser'
|
|
349
|
-
detectCapability() // 'full-threads' | 'single-thread'
|
|
350
|
-
```
|
|
137
|
+
## When To Use What
|
|
351
138
|
|
|
352
|
-
|
|
139
|
+
| Situation | Best tool |
|
|
140
|
+
| --- | --- |
|
|
141
|
+
| One heavy synchronous task | `spawn(fn)` |
|
|
142
|
+
| Same worker logic called many times with different inputs | `task(fn)` |
|
|
143
|
+
| Many async tasks that mostly wait on I/O | `spawn(fn, { concurrent: true })` |
|
|
144
|
+
| Parallel batch with “wait for everything” | `WaitGroup` |
|
|
145
|
+
| Parallel batch where the first failure should cancel the rest | `ErrGroup` |
|
|
146
|
+
| Producer/consumer or fan-out/fan-in pipeline | `chan()` |
|
|
147
|
+
| Non-blocking coordination between async operations | `select()` |
|
|
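The select-with-default idea from the last row can be approximated on the main thread with `Promise.race` and a sentinel — a sketch of the semantics only, not puru's `select()` API:

```ts
const NOT_READY = Symbol('not-ready')

// Non-blocking probe: a pre-resolved sentinel wins the race unless the
// probed promise has already settled (its reaction job is queued first).
function pollOnce<T>(p: Promise<T>): Promise<T | typeof NOT_READY> {
  return Promise.race([p, Promise.resolve(NOT_READY)])
}

const pending = new Promise<number>(() => {}) // never settles
const settled = Promise.resolve(7)

pollOnce(pending).then((v) => console.log(v === NOT_READY)) // true
pollOnce(settled).then((v) => console.log(v))               // 7
```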
353
148
|
|
|
354
|
-
|
|
149
|
+
## Why Not Just Use...
|
|
355
150
|
|
|
356
|
-
|
|
357
|
-
npm run bench # all benchmarks (Node.js)
|
|
358
|
-
npm run bench:bun # all benchmarks (Bun)
|
|
359
|
-
```
|
|
151
|
+
### `Promise.all()`
|
|
360
152
|
|
|
361
|
-
|
|
153
|
+
Use `Promise.all()` when the work is already cheap and async.
|
|
362
154
|
|
|
363
|
-
|
|
364
|
-
| --- | --: | --: | --: |
|
|
365
|
-
| Fibonacci (fib(38) x8) | 4,345 ms | 2,131 ms | **2.0x** |
|
|
366
|
-
| Prime counting (2M range) | 335 ms | 77 ms | **4.4x** |
|
|
367
|
-
| Matrix multiply (200x200 x8) | 140 ms | 39 ms | **3.6x** |
|
|
368
|
-
| Data processing (100K items x8) | 221 ms | 67 ms | **3.3x** |
|
|
155
|
+
Use `puru` when:
|
|
369
156
|
|
|
370
|
-
|
|
371
|
-
|
|
157
|
+
- work is CPU-heavy
|
|
158
|
+
- you need the main thread to stay responsive under load
|
|
159
|
+
- you want worker coordination primitives, not just promise aggregation
|
|
372
160
|
|
|
373
|
-
|
|
374
|
-
| --- | --: | --: | --: |
|
|
375
|
-
| Fibonacci (fib(38) x8) | 2,208 ms | 380 ms | **5.8x** |
|
|
376
|
-
| Prime counting (2M range) | 201 ms | 50 ms | **4.0x** |
|
|
377
|
-
| Matrix multiply (200x200 x8) | 197 ms | 57 ms | **3.5x** |
|
|
378
|
-
| Data processing (100K items x8) | 214 ms | 109 ms | **2.0x** |
|
|
161
|
+
### `worker_threads`
|
|
379
162
|
|
|
380
|
-
|
|
163
|
+
Raw `worker_threads` is powerful, but it is low-level:
|
|
381
164
|
|
|
382
|
-
|
|
165
|
+
- separate worker entry files
|
|
166
|
+
- manual message passing
|
|
167
|
+
- manual pooling
|
|
168
|
+
- no built-in channels, `WaitGroup`, `ErrGroup`, or `select`
|
|
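For comparison, here is that ceremony in full with raw `worker_threads` (the worker source is inlined via `eval: true` here; normally it lives in a separate file):

```ts
import { Worker } from 'node:worker_threads'

// Manual worker source, manual message plumbing, manual promise wiring,
// manual teardown — all for one round-trip computation.
const workerSource = `
  const { parentPort } = require('node:worker_threads')
  parentPort.on('message', (n) => parentPort.postMessage(n * 2))
`

function runInWorker(n: number): Promise<number> {
  const worker = new Worker(workerSource, { eval: true })
  return new Promise<number>((resolve, reject) => {
    worker.once('message', resolve)
    worker.once('error', reject)
    worker.postMessage(n)
  }).finally(() => worker.terminate())
}

runInWorker(21).then((doubled) => console.log(doubled)) // 42
```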
383
169
|
|
|
384
|
-
|
|
170
|
+
`puru` keeps the power and removes most of the ceremony.
|
|
385
171
|
|
|
386
|
-
|
|
387
|
-
| --- | --: | --: |
|
|
388
|
-
| Sequential (no channels) | 176 ms | baseline |
|
|
389
|
-
| Main-thread channels only | 174 ms | 1.0x |
|
|
390
|
-
| **puru fan-out (4 workers)** | **51 ms** | **3.4x faster** |
|
|
172
|
+
### Cluster
|
|
391
173
|
|
|
392
|
-
|
|
393
|
-
<summary>Bun results</summary>
|
|
174
|
+
Cluster solves a different problem.
|
|
394
175
|
|
|
395
|
-
|
|
396
|
-
|
|
397
|
-
| Sequential (no channels) | 59 ms | baseline |
|
|
398
|
-
| Main-thread channels only | 60 ms | 1.0x |
|
|
399
|
-
| **puru fan-out (4 workers)** | **22 ms** | **2.7x faster** |
|
|
176
|
+
- Cluster: more processes, better request throughput
|
|
177
|
+
- `puru`: offload heavy work inside each process
|
|
400
178
|
|
|
401
|
-
|
|
179
|
+
They work well together.
|
|
402
180
|
|
|
403
|
-
|
|
181
|
+
## Feature Snapshot
|
|
404
182
|
|
|
405
|
-
|
|
183
|
+
| Feature | `puru` |
|
|
184
|
+
| --- | --- |
|
|
185
|
+
| Inline worker functions | Yes |
|
|
186
|
+
| Dedicated CPU workers | Yes |
|
|
187
|
+
| Shared-worker async mode | Yes |
|
|
188
|
+
| Channels across workers | Yes |
|
|
189
|
+
| WaitGroup / ErrGroup | Yes |
|
|
190
|
+
| `select` / timers | Yes |
|
|
191
|
+
| Mutex / Once | Yes |
|
|
192
|
+
| Bun support | Yes |
|
|
193
|
+
| TypeScript support | Yes |
|
|
406
194
|
|
|
407
|
-
|
|
408
|
-
| --- | --: | --: |
|
|
409
|
-
| Sequential | 1,140 ms | baseline |
|
|
410
|
-
| Promise.all (main thread) | 20 ms | 58x faster |
|
|
411
|
-
| **puru concurrent** | **16 ms** | **73x faster** |
|
|
195
|
+
## Performance
|
|
412
196
|
|
|
413
|
-
|
|
414
|
-
<summary>Bun results</summary>
|
|
197
|
+
`puru` is designed for real work, not micro-bench tricks.
|
|
415
198
|
|
|
416
|
-
|
|
417
|
-
|
|
418
|
-
|
|
419
|
-
|
|
420
|
-
| **puru concurrent** | **13 ms** | **87x faster** |
|
|
199
|
+
- Spawn overhead is roughly `0.1-0.5ms`
|
|
200
|
+
- As a rule of thumb, offload tasks that take longer than `~5ms`; below that, spawn overhead dominates
|
|
201
|
+
- CPU-bound benchmarks show real speedups from multi-core execution
|
|
202
|
+
- Concurrent async benchmarks show large gains when many tasks mostly wait on I/O off the main thread
|
|
421
203
|
|
|
422
|
-
|
|
204
|
+
Full benchmark tables live in [docs/BENCHMARKS.md](docs/BENCHMARKS.md).
|
|
423
205
|
|
|
424
|
-
|
|
206
|
+
## Docs
|
|
425
207
|
|
|
426
|
-
|
|
208
|
+
- [API reference](docs/API.md)
|
|
209
|
+
- [Benchmarks](docs/BENCHMARKS.md)
|
|
210
|
+
- [Production use cases](USE-CASES.md)
|
|
211
|
+
- [Examples](examples)
|
|
212
|
+
- [AI assistant guide](AGENTS.md)
|
|
213
|
+
- [Full LLM reference](llms-full.txt)
|
|
427
214
|
|
|
428
215
|
## Runtimes
|
|
429
216
|
|
|
430
|
-
| Runtime | Support |
|
|
217
|
+
| Runtime | Support | Notes |
|
|
431
218
|
| --- | --- | --- |
|
|
432
|
-
| Node.js >=
|
|
433
|
-
| Bun | Full | Web Workers
|
|
434
|
-
| Deno | Planned |
|
|
435
|
-
| Cloudflare Workers | Error | No thread support |
|
|
436
|
-
| Vercel Edge | Error | No thread support |
|
|
219
|
+
| Node.js >= 20 | Full | Uses `worker_threads` |
|
|
220
|
+
| Bun | Full | Uses Web Workers |
|
|
221
|
+
| Deno | Planned | Not yet implemented |
|
|
437
222
|
|
|
438
223
|
## Testing
|
|
439
224
|
|
|
440
|
-
|
|
225
|
+
Use the inline adapter to run tasks on the main thread in tests:
|
|
226
|
+
|
|
227
|
+
```ts
|
|
441
228
|
import { configure } from '@dmop/puru'
|
|
442
|
-
|
|
229
|
+
|
|
230
|
+
configure({ adapter: 'inline' })
|
|
443
231
|
```
|
|
444
232
|
|
|
445
233
|
## Limitations
|
|
446
234
|
|
|
447
|
-
-
|
|
448
|
-
- Channel values must be structured-cloneable
|
|
449
|
-
- `null`
|
|
450
|
-
- `
|
|
451
|
-
- Channel
|
|
235
|
+
- `spawn()` functions cannot capture outer variables
|
|
236
|
+
- Channel values must be structured-cloneable
|
|
237
|
+
- `null` is reserved as the closed-channel sentinel, so don't send `null` through a channel
|
|
238
|
+
- `task()` arguments must be JSON-serializable
|
|
239
|
+
- Channel ops from workers have RPC overhead, so use them for coordination, not ultra-fine-grained inner loops
|
|
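The structured-clone rule is easy to check up front. A small standalone helper (plain Node.js >= 17; not part of puru's API):

```ts
// structuredClone throws a DataCloneError on values that cannot cross a
// worker boundary, e.g. anything holding a function.
function isCloneable(value: unknown): boolean {
  try {
    structuredClone(value)
    return true
  } catch {
    return false
  }
}

console.log(isCloneable({ id: 1, tags: ['a', 'b'] })) // true
console.log(isCloneable(new Map([[1, 'one']])))       // true
console.log(isCloneable({ onDone: () => {} }))        // false
```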
452
240
|
|
|
453
241
|
## License
|
|
454
242
|
|