@dmop/puru 0.1.1 → 0.1.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +49 -12
  2. package/package.json +2 -2
package/README.md CHANGED
@@ -1,8 +1,8 @@
  # puru (プール)
 
- Goroutine-style concurrency for JavaScript with an **M:N scheduler** multiplexes thousands of tasks onto a small pool of OS threads, just like Go.
+ A thread pool with Go-style concurrency primitives for JavaScript: spawn tasks off the main thread with channels, WaitGroup, select, and more. No worker files, no boilerplate.
 
- Works on **Node.js** and **Bun**. No separate worker files, no manual message passing.
+ Works on **Node.js** and **Bun**. Deno support coming soon.
 
  *puru (プール) means "pool" in Japanese.*
 
@@ -10,6 +10,8 @@ Works on **Node.js** and **Bun**. No separate worker files, no manual message pa
 
  ```bash
  npm install @dmop/puru
+ # or
+ bun add @dmop/puru
  ```
 
  ## Quick Start
@@ -21,7 +23,7 @@ import { spawn, chan, WaitGroup, select, after } from '@dmop/puru'
  const { result } = spawn(() => fibonacci(40))
  console.log(await result)
 
- // I/O work — many tasks share worker threads (M:N scheduling)
+ // I/O work — many tasks share worker threads
  const wg = new WaitGroup()
  for (const url of urls) {
    wg.spawn(() => fetch(url).then(r => r.json()), { concurrent: true })
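The `WaitGroup` pattern in the hunk above (spawn tasks, then collect all results with `wait()`) can be modeled on the main thread with plain promises. The `ToyWaitGroup` class below is an invented sketch, not puru's implementation; puru runs each task on a worker thread.

```typescript
// Toy model of WaitGroup result collection: track each spawned task's
// promise, and resolve wait() with all results in spawn order.
// Invented illustration only; puru's real WaitGroup dispatches tasks
// to worker threads instead of the main-thread microtask queue.
class ToyWaitGroup<T> {
  private jobs: Promise<T>[] = []

  spawn(fn: () => T | Promise<T>): void {
    // Defer via the microtask queue so fn runs asynchronously, like a task.
    this.jobs.push(Promise.resolve().then(fn))
  }

  wait(): Promise<T[]> {
    return Promise.all(this.jobs)
  }
}

(async () => {
  const wg = new ToyWaitGroup<number>()
  for (const n of [1, 2, 3]) wg.spawn(() => n * n)
  console.log(await wg.wait()) // [ 1, 4, 9 ]
})()
```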
@@ -31,10 +33,10 @@ const results = await wg.wait()
 
  ## How It Works
 
- puru uses an **M:N scheduler** — M tasks are multiplexed onto N OS threads:
+ puru manages a **thread pool** — tasks are dispatched onto a fixed set of worker threads:
 
  ```text
- puru scheduler
+ puru thread pool
  ┌──────────────────────────────┐
  │                              │
  │ Task 1 ─┐                    │
@@ -58,7 +60,7 @@ puru uses an **M:N scheduler** — M tasks are multiplexed onto N OS threads:
  | **Exclusive** (default) | `spawn(fn)` | CPU-bound work | 1 task per thread, full core usage |
  | **Concurrent** | `spawn(fn, { concurrent: true })` | I/O-bound / async work | Many tasks share a thread's event loop |
 
- This is the same model Go uses: goroutines (M) are scheduled onto OS threads (N). CPU-bound work gets a dedicated thread. I/O-bound work shares threads efficiently.
+ CPU-bound work gets a dedicated thread. I/O-bound work shares threads efficiently. The API is inspired by Go's concurrency primitives (channels, WaitGroup, select), but the underlying mechanism is a thread pool — not a green thread scheduler.
 
  ## Why puru
 
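The two dispatch modes in the table above can be pictured with a toy slot model. Everything here (`ToyPool`, the method names, the least-loaded placement rule) is invented for illustration; it is not puru's actual scheduler code.

```typescript
// Toy slot model of the two spawn modes: exclusive tasks claim an idle
// thread outright, concurrent tasks share the least-loaded thread.
// Invented for illustration; puru's real pool dispatches onto actual
// worker threads and may use a different placement strategy.
type Slot = { exclusive: boolean; tasks: number }

class ToyPool {
  slots: Slot[]

  constructor(threads: number) {
    this.slots = Array.from({ length: threads }, () => ({ exclusive: false, tasks: 0 }))
  }

  // Exclusive mode: the task gets an idle thread all to itself (CPU-bound work).
  dispatchExclusive(): number {
    const i = this.slots.findIndex(s => s.tasks === 0)
    if (i === -1) throw new Error('no idle thread')
    this.slots[i] = { exclusive: true, tasks: 1 }
    return i
  }

  // Concurrent mode: the task joins the least-loaded shareable thread (I/O-bound work).
  dispatchConcurrent(): number {
    const shared = this.slots
      .map((slot, i) => ({ slot, i }))
      .filter(({ slot }) => !slot.exclusive)
    if (shared.length === 0) throw new Error('no shareable thread')
    shared.sort((a, b) => a.slot.tasks - b.slot.tasks)
    shared[0].slot.tasks++
    return shared[0].i
  }
}

const pool = new ToyPool(4)
pool.dispatchExclusive()                               // thread 0, alone
for (let i = 0; i < 6; i++) pool.dispatchConcurrent()  // spread over threads 1-3
console.log(pool.slots.map(s => s.tasks))              // [ 1, 2, 2, 2 ]
```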
@@ -122,7 +124,7 @@ const results = await wg.wait()
  | --- | --- | --- | --- | --- |
  | Separate worker file | Required | Required | Required | **Not needed** |
  | Inline functions | No | No | No | **Yes** |
- | M:N scheduler | No | No | No | **Yes** |
+ | Managed thread pool | No | No | No | **Yes** |
  | Concurrent mode (I/O) | No | No | No | **Yes** |
  | Channels (cross-thread) | No | No | No | **Yes** |
  | Cancellation | No | No | No | **Yes** |
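The "Channels (cross-thread)" row refers to puru's `chan` API with `for await` consumption, shown in the README's channel examples. Below is a rough single-threaded model of those semantics, with `ToyChan` as an invented stand-in (no backpressure, no cross-thread transport):

```typescript
// Single-threaded toy model of a closable channel with `for await`
// consumption. Invented illustration: puru's real channels cross worker
// threads and enforce the buffer capacity; this toy just queues values.
class ToyChan<T> implements AsyncIterable<T> {
  private buf: T[] = []
  private takers: ((r: IteratorResult<T>) => void)[] = []
  private closed = false

  send(value: T): void {
    if (this.closed) throw new Error('send on closed channel')
    const taker = this.takers.shift()
    if (taker) taker({ value, done: false })
    else this.buf.push(value)
  }

  close(): void {
    this.closed = true
    // Wake any pending readers with a done signal.
    for (const taker of this.takers.splice(0)) {
      taker({ value: undefined, done: true })
    }
  }

  [Symbol.asyncIterator](): AsyncIterator<T> {
    return {
      next: (): Promise<IteratorResult<T>> => {
        if (this.buf.length > 0) {
          return Promise.resolve({ value: this.buf.shift()!, done: false })
        }
        if (this.closed) {
          return Promise.resolve({ value: undefined, done: true })
        }
        // Nothing buffered and not closed: park until the next send().
        return new Promise(resolve => this.takers.push(resolve))
      },
    }
  }
}

(async () => {
  const ch = new ToyChan<number>()
  for (const n of [1, 2, 3]) ch.send(n)
  ch.close()
  for await (const value of ch) console.log(value) // 1, then 2, then 3
})()
```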
@@ -188,7 +190,7 @@ for await (const value of ch) {
  }
  ```
 
- **Channels in workers** — pass channels to `spawn()` and use them from worker threads, just like Go:
+ **Channels in workers** — pass channels to `spawn()` and use them across worker threads:
 
  ```typescript
  const ch = chan<number>(10)
@@ -356,7 +358,7 @@ npm run bench # all benchmarks (Node.js)
  npm run bench:bun # all benchmarks (Bun)
  ```
 
- ### CPU-Bound Parallelism (Node.js)
+ ### CPU-Bound Parallelism
 
  | Benchmark | Without puru | With puru | Speedup |
  | --- | --: | --: | --: |
@@ -365,7 +367,19 @@ npm run bench:bun # all benchmarks (Bun)
  | Matrix multiply (200x200 x8) | 140 ms | 39 ms | **3.6x** |
  | Data processing (100K items x8) | 221 ms | 67 ms | **3.3x** |
 
- ### Channels Fan-Out Pipeline (Node.js)
+ <details>
+ <summary>Bun results</summary>
+
+ | Benchmark | Without puru | With puru | Speedup |
+ | --- | --: | --: | --: |
+ | Fibonacci (fib(38) x8) | 2,208 ms | 380 ms | **5.8x** |
+ | Prime counting (2M range) | 201 ms | 50 ms | **4.0x** |
+ | Matrix multiply (200x200 x8) | 197 ms | 57 ms | **3.5x** |
+ | Data processing (100K items x8) | 214 ms | 109 ms | **2.0x** |
+
+ </details>
+
+ ### Channels Fan-Out Pipeline
 
  200 items with CPU-heavy transform, 4 parallel transform workers:
 
@@ -375,7 +389,18 @@ npm run bench:bun # all benchmarks (Bun)
  | Main-thread channels only | 174 ms | 1.0x |
  | **puru fan-out (4 workers)** | **51 ms** | **3.4x faster** |
 
- ### M:N Concurrent Async (Node.js)
+ <details>
+ <summary>Bun results</summary>
+
+ | Approach | Time | vs Sequential |
+ | --- | --: | --: |
+ | Sequential (no channels) | 59 ms | baseline |
+ | Main-thread channels only | 60 ms | 1.0x |
+ | **puru fan-out (4 workers)** | **22 ms** | **2.7x faster** |
+
+ </details>
+
+ ### Concurrent Async
 
  100 async tasks with simulated I/O + CPU:
 
@@ -383,7 +408,18 @@ npm run bench:bun # all benchmarks (Bun)
  | --- | --: | --: |
  | Sequential | 1,140 ms | baseline |
  | Promise.all (main thread) | 20 ms | 58x faster |
- | **puru concurrent (M:N)** | **16 ms** | **73x faster** |
+ | **puru concurrent** | **16 ms** | **73x faster** |
+
+ <details>
+ <summary>Bun results</summary>
+
+ | Approach | Time | vs Sequential |
+ | --- | --: | --: |
+ | Sequential | 1,110 ms | baseline |
+ | Promise.all (main thread) | 16 ms | 68x faster |
+ | **puru concurrent** | **13 ms** | **87x faster** |
+
+ </details>
 
  Both Promise.all and puru concurrent are fast — but puru runs everything **off the main thread**, keeping your server responsive under load.
 
@@ -395,6 +431,7 @@ Both Promise.all and puru concurrent are fast — but puru runs everything **off
  | --- | --- | --- |
  | Node.js >= 18 | Full | `worker_threads` |
  | Bun | Full | Web Workers (file-based) |
+ | Deno | Planned | — |
  | Cloudflare Workers | Error | No thread support |
  | Vercel Edge | Error | No thread support |
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
    "name": "@dmop/puru",
-   "version": "0.1.1",
-   "description": "puru (プール) — Goroutine-style concurrency for JavaScript",
+   "version": "0.1.3",
+   "description": "puru (プール) — A thread pool with Go-style concurrency primitives for JavaScript",
    "type": "module",
    "main": "./dist/index.cjs",
    "module": "./dist/index.js",