@camunda8/orchestration-cluster-api 8.9.0-alpha.13 → 8.9.0-alpha.14

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,9 +1,25 @@
- # [8.9.0-alpha.13](https://github.com/camunda/orchestration-cluster-api-js/compare/v8.9.0-alpha.12...v8.9.0-alpha.13) (2026-03-09)
+ # [8.9.0-alpha.14](https://github.com/camunda/orchestration-cluster-api-js/compare/v8.9.0-alpha.13...v8.9.0-alpha.14) (2026-03-18)
+
+
+ ### Bug Fixes
+
+ * address PR review comments for threaded worker ([3eabf5d](https://github.com/camunda/orchestration-cluster-api-js/commit/3eabf5dfc2424a875a498978acec7a865a0d151b))
+ * emit threadWorkerEntry to dist and resolve path in ESM ([3f00041](https://github.com/camunda/orchestration-cluster-api-js/commit/3f0004199040fbc904426520efccdd9528b2afcd))
+ * fix constraint validation for unicode regex ([8347376](https://github.com/camunda/orchestration-cluster-api-js/commit/834737664db9a76177cd9f7c652141bc20a45ddb))


  ### Features

- * add backoff-at-floor to backpressure algorithm ([a0504bc](https://github.com/camunda/orchestration-cluster-api-js/commit/a0504bcdfd3e24c47e168cddc59aeae957bfc8dd))
+ * add performance test, rebuild latest ([1984ad3](https://github.com/camunda/orchestration-cluster-api-js/commit/1984ad38a83f1c652e1f30b2e9751e9800b8cf2e))
+ * add threadedJobWorker ([1e6b049](https://github.com/camunda/orchestration-cluster-api-js/commit/1e6b049c09cd7078c70b020cc4f9370f527a444d))
+ * build from latest stable/8.9 ([2798004](https://github.com/camunda/orchestration-cluster-api-js/commit/279800491c202f46b0fea7efc8a7a3ced328a339))
+ * rebuild from latest stable/8.9 ([e2c8d04](https://github.com/camunda/orchestration-cluster-api-js/commit/e2c8d04280f4991eb5e2564688bc9dbce3c748a8))
+
+ # [8.9.0-alpha.13](https://github.com/camunda/orchestration-cluster-api-js/compare/v8.9.0-alpha.12...v8.9.0-alpha.13) (2026-03-09)
+
+ ### Features
+
+ - add backoff-at-floor to backpressure algorithm ([a0504bc](https://github.com/camunda/orchestration-cluster-api-js/commit/a0504bcdfd3e24c47e168cddc59aeae957bfc8dd))


  # [8.9.0-alpha.12](https://github.com/camunda/orchestration-cluster-api-js/compare/v8.9.0-alpha.11...v8.9.0-alpha.12) (2026-03-08)

package/README.md CHANGED
@@ -671,6 +671,107 @@ Call `client.getBackpressureState()` to obtain:
  }
  ```

+ ### Threaded Job Workers (Node.js Only)
+
+ For CPU-intensive job handlers, `createThreadedJobWorker` offloads handler execution to a pool of Node.js `worker_threads`. Polling and I/O remain on the main event loop, while handler logic runs in parallel threads — dramatically improving throughput when the handler does CPU-bound work (JSON processing, validation, transformation, cryptography).
+
+ #### When to use
+
+ - Your handler spends significant time on CPU work (not just waiting for HTTP responses)
+ - You observe that a single-threaded worker saturates one CPU core while throughput plateaus
+ - You need to process more jobs per second without deploying additional instances
+
+ If your handler is mostly I/O-bound (HTTP calls, database queries), the standard `createJobWorker` is sufficient.
+
+ #### Handler module
+
+ The handler must be a **separate file** (not an inline function) that exports a default async function:
+
+ ```ts
+ // my-handler.ts (or my-handler.js)
+ import type { ThreadedJobHandler } from '@camunda8/orchestration-cluster-api';
+
+ const handler: ThreadedJobHandler = async (job, client) => {
+   const { orderId } = job.variables;
+   // CPU-intensive work here...
+   const result = heavyComputation(orderId);
+   return job.complete({ result });
+ };
+ export default handler;
+ ```
+
+ Typing your handler as `ThreadedJobHandler` gives full IntelliSense for `job` (variables, action methods like `complete()`, `fail()`, `error()`) and `client` (every `CamundaClient` API method).
+
+ The handler receives two arguments:
+
+ 1. **`job`** — a proxy with the same shape as a regular job worker job (`variables`, `customHeaders`, `jobKey`, plus action methods: `complete()`, `fail()`, `error()`, `cancelWorkflow()`, `ignore()`)
+ 2. **`client`** — a proxy to the `CamundaClient` on the main thread. You can call any SDK method (e.g. `client.publishMessage(...)`, `client.createProcessInstance(...)`) and it will be forwarded to the main thread and executed there.
+
+ #### Minimal example
+
+ ```ts
+ import createCamundaClient from '@camunda8/orchestration-cluster-api';
+ import path from 'node:path';
+
+ const client = createCamundaClient();
+
+ const worker = client.createThreadedJobWorker({
+   jobType: 'cpu-heavy-task',
+   handlerModule: path.join(import.meta.dirname, 'my-handler.js'),
+   maxParallelJobs: 32,
+   jobTimeoutMs: 30_000,
+ });
+ ```
+
+ #### Configuration
+
+ `createThreadedJobWorker` accepts all the same options as `createJobWorker` (except `jobHandler`), plus:
+
+ | Option           | Type     | Default                     | Description                                                      |
+ | ---------------- | -------- | --------------------------- | ---------------------------------------------------------------- |
+ | `handlerModule`  | `string` | (required)                  | Path to handler module (absolute or relative to `process.cwd()`) |
+ | `threadPoolSize` | `number` | `os.availableParallelism()` | Number of worker threads in the pool                             |
+
+ Other familiar options: `jobType`, `maxParallelJobs`, `jobTimeoutMs`, `pollIntervalMs`, `pollTimeoutMs`, `fetchVariables`, `inputSchema`, `outputSchema`, `customHeadersSchema`, `validateSchemas`, `autoStart`, `startupJitterMaxSeconds`, `workerName`.
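As a quick sanity check of the `threadPoolSize` default in the table above, `os.availableParallelism()` reports the number of logical cores Node.js can use, so the default pool runs one thread per available core. A minimal sketch (names other than `os.availableParallelism` are illustrative):

```typescript
// Computes the documented default for `threadPoolSize`:
// one worker thread per logical core available to this process.
import os from 'node:os';

const defaultThreadPoolSize = os.availableParallelism();
console.log(defaultThreadPoolSize >= 1); // true on any machine
```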
+
+ #### Lifecycle
+
+ Threaded workers integrate with the same lifecycle as regular workers:
+
+ ```ts
+ // Returned by getWorkers()
+ const allWorkers = client.getWorkers();
+
+ // Stopped by stopAllWorkers()
+ client.stopAllWorkers();
+
+ // Graceful shutdown (waits for in-flight jobs to finish)
+ const { timedOut, remainingJobs } = await worker.stopGracefully({ waitUpToMs: 10_000 });
+ ```
+
+ #### Pool stats
+
+ ```ts
+ worker.poolSize;    // number of threads
+ worker.busyThreads; // threads currently processing a job
+ worker.activeJobs;  // total jobs dispatched but not yet completed
+ ```
+
+ #### How it works
+
+ 1. The main thread polls `activateJobs` using the same mechanism as `createJobWorker`
+ 2. Activated jobs are serialized and dispatched to an idle thread via `MessageChannel`
+ 3. The thread lazily loads the handler module on the first job and creates proxies for the `job` action methods and `client` API calls
+ 4. Action methods (`job.complete()`, `job.fail()`, etc.) and client calls are forwarded back to the main thread over the `MessagePort` and executed there
+ 5. The result is relayed back, and the thread is marked idle for the next job
+
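The dispatch loop above can be sketched with plain `worker_threads` primitives. This is a simplified editorial illustration, not the SDK's implementation: the "handler" is inlined as an eval'd string instead of a module loaded from disk, and the job literal is made up.

```typescript
import { Worker, MessageChannel } from 'node:worker_threads';

// Stand-in for a pool thread: receives a port plus a serialized job,
// runs the "handler", and relays the result back over the port.
const threadSource = `
  const { parentPort } = require('node:worker_threads');
  parentPort.on('message', ({ port, job }) => {
    const result = job.variables.x * 2; // CPU-bound work happens here
    port.postMessage({ jobKey: job.jobKey, result });
    port.close();
  });
`;

const worker = new Worker(threadSource, { eval: true });
const { port1, port2 } = new MessageChannel();

const reply = await new Promise<{ jobKey: string; result: number }>((resolve) => {
  port1.on('message', resolve);
  // The job must be serializable; port2 is transferred to the thread.
  worker.postMessage({ port: port2, job: { jobKey: '1', variables: { x: 21 } } }, [port2]);
});

console.log(reply); // { jobKey: '1', result: 42 }
await worker.terminate();
```

The per-job `MessageChannel` is what lets each result (and each forwarded `job`/`client` call) be correlated with the thread that produced it.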
+ #### Constraints
+
+ - **Node.js only**: `worker_threads` is not available in browsers or Deno
+ - **Handler must be a file module**: Inline functions cannot be transferred to threads
+ - **Job variables must be JSON-serializable**: Functions and class instances on the job are stripped during transfer
+ - **Client calls are async round-trips**: Each `client.xyz()` call crosses a thread boundary, adding a small amount of latency per call
+
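The serialization constraint can be seen with a plain JSON round-trip. (The SDK's exact transfer mechanism is not specified here; this only illustrates what "stripped" means for functions and class instances.)

```typescript
// Hypothetical job variables: the function property does not survive the
// round-trip to the worker thread, and a Date collapses to an ISO string.
const variables = { orderId: 42, notify: () => 'sent', createdAt: new Date(0) };
const transferred = JSON.parse(JSON.stringify(variables));

console.log(transferred); // { orderId: 42, createdAt: '1970-01-01T00:00:00.000Z' }
console.log('notify' in transferred); // false
```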
  ---

  ## Authentication