@redflow/client 0.0.3 → 0.0.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/INTERNALS.md +238 -0
- package/README.md +24 -3
- package/package.json +1 -1
- package/src/client.ts +23 -20
- package/src/types.ts +6 -0
- package/src/worker.ts +86 -14
- package/tests/bugfixes.test.ts +11 -11
- package/tests/fixtures/worker-crash.ts +1 -0
- package/tests/fixtures/worker-recover.ts +1 -0
- package/tests/redflow.e2e.test.ts +142 -73
package/INTERNALS.md
ADDED
@@ -0,0 +1,238 @@

# redflow internals

This document describes how `@redflow/client` works internally, in production terms.

## Design model

- Durable state lives in Redis.
- Handlers and workflow code live in process memory (per worker process).
- The runtime is queue-based and crash-recoverable.
- Delivery semantics are at-least-once at the run level.
- The step API provides deterministic replay/caching to avoid repeating completed work.
## Main components

- **Workflow registry (in-memory):** built via `defineWorkflow(...)`.
- **Client (`RedflowClient`):** enqueue runs, inspect state, cancel runs, sync metadata.
- **Worker runtime:** executes queued runs, retries failures, promotes scheduled runs.
- **Cron scheduler:** leader-elected loop that creates cron runs.
## Registry and metadata sync

`startWorker({ app, ... })` always calls `syncRegistry(registry, { app })` before its loops start.

What `syncRegistry` writes per workflow:

- `workflow:<name>` hash:
  - `name`
  - `queue`
  - `maxConcurrency` (default `1`)
  - `app` (required ownership scope for cleanup)
  - `updatedAt`
  - `cronJson`
  - `retriesJson`
  - `cronIdsJson`
- `workflows` set (all known workflow names)
- cron definitions in `cron:def` and schedule in `cron:next`
### Stale cleanup

Before writing new metadata, sync removes stale workflow metadata when all of the following are true:

- the workflow exists in Redis,
- the workflow is missing from the current registry,
- the workflow's `app` equals the current `app`,
- the workflow is older than the grace period (`30s`).

Cleanup removes:

- the `workflow:<name>` metadata hash,
- `workflows` set membership,
- associated cron entries (`cron:def`, `cron:next`).

It does **not** delete historical runs.
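The four cleanup conditions above can be sketched as a pure predicate. This is a hypothetical helper for illustration, not the package's actual code; the field names in `StaleCheckInput` are assumptions.

```typescript
// Hypothetical predicate mirroring the stale-cleanup conditions above.
// Metadata is pruned only when every condition holds.
type StaleCheckInput = {
  existsInRedis: boolean;      // workflow:<name> hash is present
  inCurrentRegistry: boolean;  // workflow is registered by this process
  workflowApp: string;         // `app` field stored in the metadata hash
  currentApp: string;          // `app` passed to syncRegistry
  updatedAt: number;           // last sync timestamp (ms)
  now: number;
};

const GRACE_PERIOD_MS = 30_000; // the 30s grace period described above

function isStaleWorkflow(input: StaleCheckInput): boolean {
  return (
    input.existsInRedis &&
    !input.inCurrentRegistry &&
    input.workflowApp === input.currentApp &&
    input.now - input.updatedAt > GRACE_PERIOD_MS
  );
}
```

The `app` equality check is what makes cleanup safe when several services share one Redis prefix: a worker can only prune metadata it previously wrote itself.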
## Redis keyspace

Key builders are in `src/internal/keys.ts`.

- `workflows`
- `workflow:<name>`
- `workflow-runs:<name>`
- `runs:created`
- `runs:status:<status>`
- `run:<runId>`
- `run:<runId>:steps`
- `run:<runId>:lease`
- `q:<queue>:ready`
- `q:<queue>:processing`
- `q:<queue>:scheduled`
- `cron:def`
- `cron:next`
- `lock:cron`
- `idempo:<encodedWorkflow>:<encodedKey>`
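A minimal sketch of what such builders might look like, assuming keys are joined under a configurable prefix with `:` (the pattern visible elsewhere in this diff, e.g. `${prefix}:runs:status:queued` in the tests). The actual `src/internal/keys.ts` may differ.

```typescript
// Hypothetical key builders matching the keyspace listed above.
// Every key is namespaced under a per-environment prefix.
const keys = {
  workflows: (prefix: string) => `${prefix}:workflows`,
  workflow: (prefix: string, name: string) => `${prefix}:workflow:${name}`,
  workflowRuns: (prefix: string, name: string) => `${prefix}:workflow-runs:${name}`,
  run: (prefix: string, runId: string) => `${prefix}:run:${runId}`,
  runSteps: (prefix: string, runId: string) => `${prefix}:run:${runId}:steps`,
  runLease: (prefix: string, runId: string) => `${prefix}:run:${runId}:lease`,
  queueReady: (prefix: string, queue: string) => `${prefix}:q:${queue}:ready`,
  queueProcessing: (prefix: string, queue: string) => `${prefix}:q:${queue}:processing`,
  queueScheduled: (prefix: string, queue: string) => `${prefix}:q:${queue}:scheduled`,
  runsStatus: (prefix: string, status: string) => `${prefix}:runs:status:${status}`,
};
```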
## Run lifecycle

Statuses:

- `scheduled`
- `queued`
- `running`
- terminal: `succeeded`, `failed`, `canceled`
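The lifecycle can be summarized as an allowed-transition map. This is illustrative only, inferred from the sections below (e.g. retries move a `running` run back to `scheduled` via the `scheduleRetry` script); the runtime's actual checks live in its Lua scripts.

```typescript
type RunStatus = "scheduled" | "queued" | "running" | "succeeded" | "failed" | "canceled";

// Illustrative transition map; terminal states have no exits.
// "running" -> "scheduled" covers retry backoff (inferred, see Retry model below).
const ALLOWED: Record<RunStatus, RunStatus[]> = {
  scheduled: ["queued", "canceled"],
  queued: ["running", "canceled"],
  running: ["succeeded", "failed", "canceled", "scheduled"],
  succeeded: [],
  failed: [],
  canceled: [],
};

function canTransition(from: RunStatus, to: RunStatus): boolean {
  return ALLOWED[from].includes(to);
}
```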
### Enqueue

Enqueue uses `ENQUEUE_RUN_LUA` to atomically:

- create the run hash,
- write indexes (`runs:created`, `runs:status:*`, `workflow-runs:*`),
- push to the ready queue or the scheduled ZSET,
- apply the idempotency mapping if a key was provided.

The idempotency key TTL defaults to `7 days`.
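The `<encodedWorkflow>` and `<encodedKey>` parts of `idempo:` keys use length-prefixed encoding; `encodeCompositePart` below is quoted from `src/client.ts` in this diff, while `idempotencyKey` is an illustrative assembly of the key shape listed in the keyspace section.

```typescript
// encodeCompositePart as it appears in src/client.ts: prefixing each part
// with its length keeps composite keys unambiguous even when values contain ":".
function encodeCompositePart(value: string): string {
  return `${value.length}:${value}`;
}

// Illustrative assembly of the `idempo:<encodedWorkflow>:<encodedKey>` shape.
function idempotencyKey(prefix: string, workflow: string, key: string): string {
  return `${prefix}:idempo:${encodeCompositePart(workflow)}:${encodeCompositePart(key)}`;
}
```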
### Processing

The worker loop uses `LMOVE`/`BLMOVE` to move run ids from `ready` -> `processing`.

For each claimed run:

1. Acquire the lease (`run:<id>:lease`) with periodic renewal.
2. Validate the current run status.
3. If `queued`, enforce `maxConcurrency` for that workflow.
4. Transition `queued -> running` atomically.
5. Execute the handler with the step engine.
6. Finalize to a terminal status atomically.
7. Remove from `processing`.

If the lease is lost, the current worker aborts and does not finalize.
### Reaper

The reaper scans `processing` lists. For runs without an active lease, it:

- removes them from `processing`,
- pushes them back to `ready`.

This recovers from worker crashes.
### Scheduled promoter

The promoter pops due items from `q:<queue>:scheduled` (`ZPOPMIN` batch), then:

- transitions `scheduled -> queued`,
- pushes to `ready`.

Items that are not yet due are put back.
## maxConcurrency

`maxConcurrency` is per workflow and defaults to `1`.

### For regular queued runs

When a worker picks up a `queued` run:

- it counts the current `running` runs for the same workflow,
- if the count >= `maxConcurrency`, the run is atomically moved from `processing` back to the end of `ready`.

So non-cron runs are delayed, not dropped.

### For cron runs

The cron loop also checks the running count before enqueueing:

- if the count >= `maxConcurrency`, that cron tick is skipped,
- the next cron tick is still scheduled normally.
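Both checks normalize the configured value the same way. `normalizeMaxConcurrency` below is quoted verbatim from `src/client.ts` and `src/worker.ts` in this diff: anything non-numeric, non-finite, or non-positive falls back to the default of `1`, and fractions are floored.

```typescript
// normalizeMaxConcurrency as it appears in src/client.ts and src/worker.ts.
function normalizeMaxConcurrency(value: unknown): number {
  if (typeof value !== "number" || !Number.isFinite(value) || value <= 0) return 1;
  return Math.floor(value);
}
```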
## Cron scheduler

- Leader election via the Redis lock `lock:cron`.
- Only the lock holder schedules cron runs.
- The loop pops the earliest `cronId` from `cron:next`.
- If due, it:
  - parses the `cron:def` payload,
  - enforces `maxConcurrency`,
  - enqueues the run via `runByName` (or skips),
  - computes the next fire time and stores it in `cron:next`.

Cron uses "reschedule from now" behavior: there is no catch-up burst if a stale timestamp was in the past.
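"Reschedule from now" can be illustrated with a fixed-interval trigger. This is a simplification for illustration; the real scheduler parses cron expressions and may compute next fire times differently.

```typescript
// No catch-up burst: the next tick is computed relative to `now`,
// regardless of how far in the past the stored timestamp was.
function nextFireTimeFromNow(intervalMs: number, now: number): number {
  return now + intervalMs;
}

// Under catch-up semantics all of these would fire; redflow skips them.
function missedTicks(staleAt: number, intervalMs: number, now: number): number {
  return Math.max(0, Math.floor((now - staleAt) / intervalMs));
}
```

If the leader was down for 100 seconds with a 5-second schedule, 20 ticks are skipped and exactly one next fire time is stored.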
## Step engine semantics

Inside a handler, the `step` API has three primitives.

### `step.run(...)`

- Step state is persisted in the `run:<id>:steps` hash under `step.name`.
- If a step already `succeeded`, its cached output is returned.
- Duplicate step names within one execution are rejected.
- Step timeout and cancellation are supported.

### `step.runWorkflow(...)`

- Enqueues a child workflow with deterministic idempotency by default:
  - `parentRunId + stepName + childWorkflowName`.
- Waits for child completion.
- Waiting is bounded by the step `timeoutMs` (if set); otherwise it is unbounded until cancellation.
- Inline assist: if the child is queued on a queue this worker handles, the worker may execute the child inline to avoid self-deadlock under low concurrency.

### `step.emitWorkflow(...)`

- Enqueues a child workflow and returns the child `runId`.
- Accepts the child as a workflow object or a workflow name string.
- Uses a deterministic idempotency default based on the parent run and step name.
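The `step.run` caching decision can be sketched as follows. This is a hypothetical, synchronous model for illustration: the real engine is async and persists step records in the `run:<id>:steps` hash, not in a `Map`.

```typescript
type StepRecord = { status: "succeeded" | "failed"; outputJson?: string };

// Hypothetical model of step.run replay semantics:
// - a step that already succeeded returns its cached output, skipping the callback;
// - duplicate step names within one execution throw.
function runStep<T>(
  steps: Map<string, StepRecord>,   // stands in for the run:<id>:steps hash
  seenThisExecution: Set<string>,   // step names used in this execution
  name: string,
  fn: () => T,
): T {
  if (seenThisExecution.has(name)) {
    throw new Error(`duplicate step name in one execution: ${name}`);
  }
  seenThisExecution.add(name);

  const cached = steps.get(name);
  if (cached?.status === "succeeded" && cached.outputJson !== undefined) {
    return JSON.parse(cached.outputJson) as T; // replay from cache
  }

  const output = fn();
  steps.set(name, { status: "succeeded", outputJson: JSON.stringify(output) });
  return output;
}
```

This is what makes at-least-once delivery tolerable: when a crashed run is re-executed, completed steps replay from cache instead of re-running their side effects.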
## Retry model

- `maxAttempts` is workflow-level (`retries.maxAttempts`), default `1`.
- Retry delay uses exponential backoff + jitter.
- Non-retryable classes:
  - input validation errors,
  - unknown workflow,
  - output serialization errors,
  - cancellation,
  - explicit `NonRetriableError`.
- Retry scheduling is atomic (the `scheduleRetry` Lua script): the status/index update and the scheduled ZSET write happen in one script.
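A common shape for "exponential backoff + jitter" is full jitter over an exponentially growing window. The constants and the jitter variant here are assumptions for illustration; redflow's internal values may differ.

```typescript
// Exponential backoff with full jitter (illustrative constants):
// the window doubles per attempt, capped, and the delay is uniform in [0, window).
function retryDelayMs(attempt: number, baseMs = 1_000, capMs = 60_000): number {
  const windowMs = Math.min(capMs, baseMs * 2 ** (attempt - 1));
  return Math.floor(Math.random() * windowMs);
}
```

Jitter matters here because many runs can fail at once (e.g. a downstream outage); randomized delays spread the retry load instead of re-queueing everything at the same instant.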
## Cancellation

`cancelRun(runId)`:

- sets `cancelRequestedAt` and an optional reason,
- if the run is `queued`/`scheduled`, attempts an immediate transition to `canceled` and cleanup,
- if the run is `running`, cancellation is cooperative via `AbortSignal` polling in the worker.

The terminal finalize script ensures consistent indexes and a terminal status.
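Cooperative cancellation means a `running` handler is never killed mid-instruction; it stops at the next point where it checks the signal. A minimal sketch (the chunked-work shape is hypothetical; in redflow the checks happen between steps):

```typescript
// Illustration of cooperative cancellation: the worker aborts a controller
// when it observes cancelRequestedAt, and handler code checks the signal
// between units of work.
function runUntilAborted(signal: AbortSignal, chunks: number): number {
  let done = 0;
  for (let i = 0; i < chunks; i++) {
    if (signal.aborted) break; // cooperative check between units of work
    done++;
  }
  return done;
}
```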
## Idempotency vs step cache

- **Idempotency:** deduplicates run creation (`key -> runId`) with a TTL.
- **Step cache:** deduplicates completed step execution within one parent run.

They solve different failure windows and are intentionally both used.
## Multi-worker behavior

- Many workers can process the same prefix/queues.
- Cron scheduling is single-leader.
- Processing/recovery is shared via Redis lists + leases.
- `maxConcurrency` is enforced globally against the Redis `running` index.
## Operational notes

Recommended for production:

- Use a stable `prefix` per environment.
- Use an explicit `app` per service role for safe metadata cleanup.
- Set `maxConcurrency` intentionally for long workflows.
- Keep queue ownership clear (avoid workers consuming queues for workflows they do not register).
- Use idempotency keys for external trigger endpoints.
## Current guarantees and limitations

- Run execution is at-least-once.
- The step cache reduces replay but cannot provide globally exactly-once side effects.
- `maxConcurrency` is enforced via runtime checks against Redis state; it is robust in practice but not a strict distributed semaphore proof.
- The `handle.result({ timeoutMs })` timeout affects only the caller's waiting, not run execution itself.
package/README.md
CHANGED

@@ -2,6 +2,8 @@
 
 Redis-backed workflow runtime for Bun.
 
+Deep internal details: `INTERNALS.md`
+
 ## Warning
 
 This project is still in early alpha stage.

@@ -147,13 +149,14 @@ const output = await handle.result({ timeoutMs: 90_000 });
 
 ## Start a worker
 
-Import workflows, then run `startWorker()`.
+Import workflows, then run `startWorker({ app: ... })`.
 
 ```ts
 import { startWorker } from "@redflow/client";
 import "./workflows";
 
 const worker = await startWorker({
+  app: "billing-worker",
   url: process.env.REDIS_URL,
   prefix: "redflow:prod",
   concurrency: 4,

@@ -164,6 +167,7 @@ Explicit queues + runtime tuning:
 
 ```ts
 const worker = await startWorker({
+  app: "billing-worker",
   url: process.env.REDIS_URL,
   prefix: "redflow:prod",
   queues: ["critical", "io", "analytics"],

@@ -178,6 +182,21 @@ const worker = await startWorker({
 
 ## Workflow options examples
 
+### maxConcurrency
+
+`maxConcurrency` limits concurrent `running` runs per workflow. Default is `1`.
+
+```ts
+defineWorkflow(
+  "heavy-sync",
+  {
+    queue: "ops",
+    maxConcurrency: 1,
+  },
+  async () => ({ ok: true }),
+);
+```
+
 ### Cron
 
 ```ts

@@ -194,6 +213,8 @@ defineWorkflow(
 );
 ```
 
+Cron respects `maxConcurrency`: if the limit is reached, that cron tick is skipped.
+
 ### onFailure
 
 ```ts

@@ -281,10 +302,10 @@ const output = await handle.result({ timeoutMs: 30_000 });
 console.log(output);
 ```
 
-### Registry sync
+### Registry sync app id
 
 ```ts
 import { getDefaultRegistry } from "@redflow/client";
 
-await client.syncRegistry(getDefaultRegistry(), {
+await client.syncRegistry(getDefaultRegistry(), { app: "billing-service" });
 ```
package/package.json
CHANGED
package/src/client.ts
CHANGED

@@ -37,10 +37,10 @@ export type CreateClientOptions = {
 
 export type SyncRegistryOptions = {
   /**
-   *
-   *
+   * Stable application id used for stale workflow metadata cleanup.
+   * Workflows are pruned only when they were last synced by the same app.
    */
-
+  app: string;
 };
 
 export function defaultPrefix(): string {

@@ -249,16 +249,6 @@ function encodeCompositePart(value: string): string {
   return `${value.length}:${value}`;
 }
 
-function defaultRegistryOwner(): string {
-  const envOwner = process.env.REDFLOW_SYNC_OWNER?.trim();
-  if (envOwner) return envOwner;
-
-  const argvOwner = process.argv[1]?.trim();
-  if (argvOwner) return argvOwner;
-
-  return "redflow:unknown-owner";
-}
-
 function parseEnqueueScriptResult(value: unknown): { kind: "created" | "existing"; runId: string } | null {
   if (Array.isArray(value) && value.length === 1 && Array.isArray(value[0])) {
     return parseEnqueueScriptResult(value[0]);

@@ -309,6 +299,11 @@ function isValidDate(value: Date): boolean {
   return value instanceof Date && Number.isFinite(value.getTime());
 }
 
+function normalizeMaxConcurrency(value: unknown): number {
+  if (typeof value !== "number" || !Number.isFinite(value) || value <= 0) return 1;
+  return Math.floor(value);
+}
+
 export class RedflowClient {
   constructor(
     public readonly redis: RedisClient,

@@ -356,9 +351,11 @@ export class RedflowClient {
     const retries = safeJsonTryParse<any>(data.retriesJson ?? null) as any;
     const updatedAt = Number(data.updatedAt ?? "0");
     const queue = data.queue ?? "default";
+    const maxConcurrency = normalizeMaxConcurrency(Number(data.maxConcurrency ?? "1"));
     return {
       name,
       queue,
+      maxConcurrency,
       cron: Array.isArray(cron) && cron.length > 0 ? cron : undefined,
       retries,
       updatedAt,

@@ -606,17 +603,21 @@ export class RedflowClient {
     }
   }
 
-  async syncRegistry(registry: WorkflowRegistry, options
+  async syncRegistry(registry: WorkflowRegistry, options: SyncRegistryOptions): Promise<void> {
     const defs = registry.list();
     const syncStartedAt = nowMs();
-    const
+    const app = options.app.trim();
+    if (!app) {
+      throw new Error("syncRegistry requires a non-empty options.app");
+    }
     const registeredNames = new Set(defs.map((def) => def.options.name));
 
-    await this.cleanupStaleWorkflows(registeredNames, syncStartedAt,
+    await this.cleanupStaleWorkflows(registeredNames, syncStartedAt, app);
 
     for (const def of defs) {
       const name = def.options.name;
       const queue = def.options.queue ?? "default";
+      const maxConcurrency = normalizeMaxConcurrency(def.options.maxConcurrency);
       const cron = def.options.cron ?? [];
       const retries = def.options.retries ?? {};
       const updatedAt = nowMs();

@@ -653,6 +654,7 @@ export class RedflowClient {
         id: cronId,
         workflow: name,
         queue,
+        maxConcurrency,
         expression: c.expression,
         timezone: c.timezone,
         inputJson: safeJsonStringify(cronInput),

@@ -671,7 +673,8 @@ export class RedflowClient {
       const meta: Record<string, string> = {
         name,
         queue,
-
+        maxConcurrency: String(maxConcurrency),
+        app,
         updatedAt: String(updatedAt),
         cronJson: safeJsonStringify(cron),
         retriesJson: safeJsonStringify(retries),

@@ -722,7 +725,7 @@ export class RedflowClient {
   private async cleanupStaleWorkflows(
     registeredNames: Set<string>,
     syncStartedAt: number,
-
+    app: string,
   ): Promise<void> {
     const existingNames = await this.redis.smembers(keys.workflows(this.prefix));
 
@@ -730,8 +733,8 @@ export class RedflowClient {
       if (registeredNames.has(existingName)) continue;
 
       const workflowKey = keys.workflow(this.prefix, existingName);
-      const
-      if (!
+      const workflowApp = (await this.redis.hget(workflowKey, "app")) ?? "";
+      if (!workflowApp || workflowApp !== app) {
        continue;
      }
 
package/src/types.ts
CHANGED

@@ -32,6 +32,11 @@ export type OnFailureContext = {
 export type DefineWorkflowOptions<TSchema extends ZodTypeAny | undefined = ZodTypeAny | undefined> = {
   name: string;
   queue?: string;
+  /**
+   * Maximum concurrently running runs for this workflow.
+   * Default: 1.
+   */
+  maxConcurrency?: number;
   schema?: TSchema;
   cron?: CronTrigger[];
   retries?: WorkflowRetries;

@@ -167,6 +172,7 @@ export type ListedRun = {
 export type WorkflowMeta = {
   name: string;
   queue: string;
+  maxConcurrency: number;
   cron?: CronTrigger[];
   retries?: WorkflowRetries;
   updatedAt: number;
package/src/worker.ts
CHANGED

@@ -19,6 +19,8 @@ import { getDefaultRegistry, type WorkflowRegistry } from "./registry";
 import type { OnFailureContext, RunStatus, StepApi, StepStatus } from "./types";
 
 export type StartWorkerOptions = {
+  /** Stable application id used for registry sync stale-cleanup scoping. */
+  app: string;
   redis?: RedisClient;
   url?: string;
   prefix?: string;

@@ -74,18 +76,32 @@ redis.call("lpush", KEYS[2], ARGV[1])
 return 1
 `;
 
-
-
-
-
-
+const REQUEUE_DUE_TO_CONCURRENCY_LUA = `
+if redis.call("lrem", KEYS[1], 1, ARGV[1]) <= 0 then
+  return 0
+end
+
+redis.call("rpush", KEYS[2], ARGV[1])
+return 1
+`;
+
+export async function startWorker(options: StartWorkerOptions): Promise<WorkerHandle> {
+  const app = options.app.trim();
+  if (!app) {
+    throw new Error("startWorker requires a non-empty options.app");
+  }
+
+  const registry = options.registry ?? getDefaultRegistry();
+  const prefix = options.prefix ?? defaultPrefix();
+  const ownsBaseRedis = !options.redis && !!options.url;
+  const baseRedis = options.redis ?? (options.url ? new BunRedisClient(options.url) : defaultRedis);
   const syncClient = createClient({ redis: baseRedis, prefix });
 
-  const queues = options
-  const concurrency = Math.max(1, options
-  const leaseMs = Math.max(100, options
-  const blmoveTimeoutSec = options
-  const reaperIntervalMs = options
+  const queues = options.queues ?? deriveQueuesFromRegistry(registry);
+  const concurrency = Math.max(1, options.concurrency ?? 1);
+  const leaseMs = Math.max(100, options.runtime?.leaseMs ?? 5000);
+  const blmoveTimeoutSec = options.runtime?.blmoveTimeoutSec ?? 1;
+  const reaperIntervalMs = options.runtime?.reaperIntervalMs ?? 500;
 
   const abort = new AbortController();
   const tasks: Promise<void>[] = [];

@@ -111,7 +127,7 @@ export async function startWorker(options?: StartWorkerOptions): Promise<WorkerH
   };
 
   try {
-    await syncClient.syncRegistry(registry);
+    await syncClient.syncRegistry(registry, { app });
 
     // Worker loops (blocking BLMOVE). Use dedicated connections per slot.
     for (let i = 0; i < concurrency; i++) {

@@ -222,6 +238,11 @@ function encodeIdempotencyPart(value: string): string {
   return `${value.length}:${value}`;
 }
 
+function normalizeMaxConcurrency(value: unknown): number {
+  if (typeof value !== "number" || !Number.isFinite(value) || value <= 0) return 1;
+  return Math.floor(value);
+}
+
 function defaultStepWorkflowIdempotencyKey(parentRunId: string, stepName: string, childWorkflowName: string): string {
   return `stepwf:${encodeIdempotencyPart(parentRunId)}:${encodeIdempotencyPart(stepName)}:${encodeIdempotencyPart(childWorkflowName)}`;
 }

@@ -396,6 +417,8 @@ async function processRun(args: {
   }
 
   const workflowName = run.workflow ?? "";
+  const def = workflowName ? registry.get(workflowName) : undefined;
+  const maxConcurrency = normalizeMaxConcurrency(def?.options.maxConcurrency);
   const maxAttempts = Number(run.maxAttempts ?? "1");
   const cancelRequestedAt = run.cancelRequestedAt ? Number(run.cancelRequestedAt) : 0;
   if (cancelRequestedAt > 0) {

@@ -406,7 +429,26 @@ async function processRun(args: {
 
   const startedAt = run.startedAt && run.startedAt !== "" ? Number(run.startedAt) : nowMs();
 
-  if (currentStatus === "queued") {
+  if (currentStatus === "queued" && def) {
+    const runningCount = await countRunningRunsForWorkflow({
+      redis,
+      prefix,
+      workflowName,
+      stopAt: maxConcurrency,
+    });
+
+    if (runningCount >= maxConcurrency) {
+      await redis.send("EVAL", [
+        REQUEUE_DUE_TO_CONCURRENCY_LUA,
+        "2",
+        processingKey,
+        keys.queueReady(prefix, queue),
+        runId,
+      ]);
+      await sleep(25);
+      return;
+    }
+
     const movedToRunning = await client.transitionRunStatusIfCurrent(runId, "queued", "running", startedAt);
     if (!movedToRunning) {
       // Most likely canceled between dequeue and start transition.

@@ -433,7 +475,6 @@ async function processRun(args: {
     return;
   }
 
-  const def = registry.get(workflowName);
   if (!def) {
     const errorJson = makeErrorJson(new UnknownWorkflowError(workflowName));
     await client.finalizeRun(runId, { status: "failed", errorJson, finishedAt: nowMs() });

@@ -902,6 +943,27 @@ async function reaperLoop(args: {
   }
 }
 
+async function countRunningRunsForWorkflow(args: {
+  redis: RedisClient;
+  prefix: string;
+  workflowName: string;
+  stopAt?: number;
+}): Promise<number> {
+  const { redis, prefix, workflowName, stopAt } = args;
+  const runningRunIds = await redis.zrevrange(keys.runsStatus(prefix, "running"), 0, -1);
+  let count = 0;
+
+  for (const runId of runningRunIds) {
+    const runWorkflow = await redis.hget(keys.run(prefix, runId), "workflow");
+    if (runWorkflow !== workflowName) continue;
+
+    count += 1;
+    if (typeof stopAt === "number" && count >= stopAt) return count;
+  }
+
+  return count;
+}
+
 async function cronSchedulerLoop(args: {
   redis: RedisClient;
   client: RedflowClient;

@@ -976,7 +1038,17 @@ async function cronSchedulerLoop(args: {
       continue;
     }
 
-
+    const cronMaxConcurrency = normalizeMaxConcurrency(def.maxConcurrency);
+    const runningCount = await countRunningRunsForWorkflow({
+      redis,
+      prefix,
+      workflowName: workflow,
+      stopAt: cronMaxConcurrency,
+    });
+
+    if (runningCount < cronMaxConcurrency) {
+      await client.runByName(workflow, input, { queueOverride: queue });
+    }
 
     // Schedule next run.
     let nextAt: number | null = null;
package/tests/bugfixes.test.ts
CHANGED

@@ -219,7 +219,7 @@ test("crash recovery: attempt is not double-incremented", async () => {
     },
   );
 
-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],

@@ -282,9 +282,9 @@ test("crash recovery: re-processing a running run does not bump attempt", async
   await redis.lpush(keys.queueReady(prefix, queue), runId);
 
   // Sync registry before starting worker
-  await client.syncRegistry(getDefaultRegistry());
+  await client.syncRegistry(getDefaultRegistry(), { app: "test-app" });
 
-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],

@@ -325,7 +325,7 @@ test("retry: run is atomically transitioned to scheduled with queue entry", asyn
     },
   );
 
-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],

@@ -426,7 +426,7 @@ test(
   const queue = "q_batch_promote";
   const countKey = `${prefix}:t:batchPromoteCount`;
 
-  const wf = defineWorkflow("batch-promote-wf", {queue }, async ({ step }) => {
+  const wf = defineWorkflow("batch-promote-wf", {queue, maxConcurrency: 5 }, async ({ step }) => {
     await step.run({ name: "do" }, async () => {
       await redis.incr(countKey);
       return true;

@@ -440,7 +440,7 @@ test(
     handles.push(await wf.run({}, { runAt: new Date(Date.now() + 200) }));
   }
 
-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],

@@ -527,7 +527,7 @@ test("duplicate step name: throws a clear error", async () => {
     return { ok: true };
   });
 
-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],

@@ -567,7 +567,7 @@ test("unique step names: work normally", async () => {
     return { sum: a + b + c };
   });
 
-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],

@@ -610,7 +610,7 @@ test(
     },
   );
 
-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],

@@ -661,7 +661,7 @@ test(
     },
   );
 
-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],

@@ -719,7 +719,7 @@ test("status indexes: consistent through full lifecycle", async () => {
   let queuedMembers = await redis.zrange(`${prefix}:runs:status:queued`, 0, -1);
   expect(queuedMembers.includes(handle.id)).toBe(true);
 
-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],
@@ -52,7 +52,7 @@ test("manual run: succeeds and records steps", async () => {
|
|
|
52
52
|
},
|
|
53
53
|
);
|
|
54
54
|
|
|
55
|
-
-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q1"],
@@ -82,7 +82,7 @@ test("runAt: scheduled -> promoted -> executed", async () => {
     return { ok: true };
   });

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q2"],
@@ -120,7 +120,7 @@ test("runAt: cleared after retry exhaustion final failure", async () => {
     },
   );

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q2_retry_fail"],
@@ -153,7 +153,7 @@ test("NonRetriableError: skips retries and fails immediately on attempt 1", asyn
     },
   );

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q_non_retriable"],
@@ -196,7 +196,7 @@ test("onFailure: called after retry exhaustion with correct context", async () =
     },
   );

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q_on_failure_1"],
@@ -239,7 +239,7 @@ test("onFailure: called immediately with NonRetriableError", async () => {
     },
   );

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q_on_failure_2"],
@@ -283,7 +283,7 @@ test("onFailure: NOT called on cancellation", async () => {
     },
   );

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q_on_failure_3"],
@@ -333,7 +333,7 @@ test("cron: creates runs and executes workflow", async () => {
     },
   );

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q4"],
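Every hunk above makes the same mechanical change: `startWorker` now takes a required `app` field. As an illustration only, the 0.0.4 options shape implied by these call sites can be sketched as below; the interface and the `withApp` helper are inferred from the diff, not the actual declarations in `src/types.ts`.

```typescript
// Hypothetical model of the 0.0.4 startWorker options, inferred from the
// call sites in this diff; the real declaration lives in src/types.ts.
interface StartWorkerOptions {
  app: string;                    // new in 0.0.4: owning application name
  redis: unknown;                 // Redis client instance
  prefix: string;                 // key namespace prefix
  queues: string[];               // queues this worker consumes
  concurrency?: number;
  runtime?: { leaseMs?: number };
}

// Upgrade helper for call sites written against the 0.0.3 shape (no `app`).
function withApp(
  opts: Omit<StartWorkerOptions, "app">,
  app: string,
): StartWorkerOptions {
  return { app, ...opts };
}

const upgraded = withApp({ redis: {}, prefix: "test", queues: ["q1"] }, "test-app");
```

This models why the change is purely additive at each call site: no other option moves, the `app` name is simply prepended.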
@@ -352,6 +352,75 @@ test("cron: creates runs and executes workflow", async () => {
   redis.close();
 });

+test("cron: skips when workflow already running", async () => {
+  const prefix = testPrefix();
+  const redis = new RedisClient(redisServer.url);
+  const client = createClient({ redis, prefix });
+  setDefaultClient(client);
+
+  const manualCountKey = `${prefix}:t:cronSkipManualCount`;
+  const cronCountKey = `${prefix}:t:cronSkipCronCount`;
+
+  defineWorkflow(
+    "cron-skip-running-wf",
+    {
+      queue: "q4_skip",
+      cron: [{ expression: "*/5 * * * * *", input: { source: "cron" } }],
+    },
+    async ({ input, step }) => {
+      await step.run({ name: "record-source" }, async () => {
+        const source = (input as { source?: string } | null | undefined)?.source;
+        if (source === "cron") {
+          await redis.incr(cronCountKey);
+        } else {
+          await redis.incr(manualCountKey);
+        }
+        return true;
+      });
+
+      await step.run({ name: "hold" }, async () => {
+        await new Promise((resolve) => setTimeout(resolve, 6500));
+        return true;
+      });
+
+      return { ok: true };
+    },
+  );
+
+  const worker = await startWorker({ app: "test-app",
+    redis,
+    prefix,
+    queues: ["q4_skip"],
+    concurrency: 2,
+    runtime: { leaseMs: 500 },
+  });
+
+  try {
+    const manualHandle = await client.emitWorkflow("cron-skip-running-wf", { source: "manual" });
+
+    await waitFor(
+      async () => Number((await redis.get(manualCountKey)) ?? "0") >= 1,
+      { timeoutMs: 5000, label: "manual run started" },
+    );
+
+    // Wait long enough for one cron tick while manual run is still executing.
+    await new Promise((resolve) => setTimeout(resolve, 5500));
+    const cronWhileManualRunning = Number((await redis.get(cronCountKey)) ?? "0");
+    expect(cronWhileManualRunning).toBe(0);
+
+    await manualHandle.result({ timeoutMs: 15_000 });
+
+    // Cron should resume once the running instance is gone.
+    await waitFor(
+      async () => Number((await redis.get(cronCountKey)) ?? "0") >= 1,
+      { timeoutMs: 10_000, label: "cron resumed after manual run" },
+    );
+  } finally {
+    await worker.stop();
+    redis.close();
+  }
+}, { timeout: 30_000 });
+
 test("step.runWorkflow: auto idempotency and override are forwarded to child runs", async () => {
   const prefix = testPrefix();
   const redis = new RedisClient(redisServer.url);
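The new test above asserts the scheduling behavior, not its implementation. A toy model of the skip rule it checks — a cron tick must not create a run while an instance of the same workflow is still running — can be written in a few lines (all names below are hypothetical; redflow's real scheduler is Redis-backed and leader-elected):

```typescript
// Toy model of the behavior asserted by "cron: skips when workflow already
// running": a cron tick is suppressed while any run of that workflow is in
// the "running" state, and fires again once the run finishes.
type RunState = "running" | "completed";

class CronGate {
  private runs = new Map<string, RunState>();

  start(workflow: string, runId: string): void {
    this.runs.set(`${workflow}/${runId}`, "running");
  }

  finish(workflow: string, runId: string): void {
    this.runs.set(`${workflow}/${runId}`, "completed");
  }

  // True when the scheduler may create a cron run for `workflow`.
  cronTick(workflow: string): boolean {
    for (const [key, state] of this.runs) {
      if (key.startsWith(`${workflow}/`) && state === "running") return false;
    }
    return true;
  }
}

const gate = new CronGate();
gate.start("cron-skip-running-wf", "manual-1");
const skippedWhileRunning = !gate.cronTick("cron-skip-running-wf");
gate.finish("cron-skip-running-wf", "manual-1");
const firesAfterFinish = gate.cronTick("cron-skip-running-wf");
```

In the test this shows up as `cronCountKey` staying at 0 while the manual run holds, then incrementing once the manual run completes.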
@@ -388,7 +457,7 @@ test("step.runWorkflow: auto idempotency and override are forwarded to child run
     return { auto, custom };
   });

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q_rw_parent", "q_rw_child"],
@@ -451,7 +520,7 @@ test("step.runWorkflow: child workflow executes once across parent retries", asy
     },
   );

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q_rw_retry_parent", "q_rw_retry_child"],
@@ -508,7 +577,7 @@ test("step.runWorkflow: same queue avoids self-deadlock with concurrency 1", asy
     return await step.runWorkflow({ name: "call-child" }, child, {});
   });

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q_rw_single"],
@@ -552,7 +621,7 @@ test("step.emitWorkflow: supports workflow name strings", async () => {
     return { childRunId };
   });

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q_emit_name_parent", "q_emit_name_child"],
@@ -606,7 +675,7 @@ test("retries: step results are cached across attempts", async () => {
     },
   );

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q5"],
@@ -654,7 +723,7 @@ test("retries: run queued before worker start keeps workflow maxAttempts", async
   const queuedState = await client.getRun(handle.id);
   expect(queuedState?.maxAttempts).toBe(2);

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q5_before_sync"],
@@ -687,7 +756,7 @@ test("step timeout: run fails and error is recorded", async () => {
     return { ok: true };
   });

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q6"],
@@ -737,7 +806,7 @@ test("cancellation: scheduled/queued/running", async () => {

   // Scheduled cancel.
   {
-    const worker = await startWorker({
+    const worker = await startWorker({ app: "test-app",
       redis,
       prefix,
       queues: ["q7"],
@@ -773,7 +842,7 @@ test("cancellation: scheduled/queued/running", async () => {

   // Running cancel.
   {
-    const worker = await startWorker({
+    const worker = await startWorker({ app: "test-app",
       redis,
       prefix,
       queues: ["q7"],
@@ -856,7 +925,7 @@ test("cancel race: queued cancel before start does not execute handler", async (

   let worker: Awaited<ReturnType<typeof startWorker>> | null = null;
   try {
-    worker = await startWorker({
+    worker = await startWorker({ app: "test-app",
       redis,
       prefix,
       queues: [queue],
@@ -977,7 +1046,7 @@ test("idempotencyKey: same key returns same run id and executes once", async ()
     return { ok: true };
   });

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q9"],
@@ -1015,7 +1084,7 @@ test("idempotencyKey: delayed TTL refresh cannot fork duplicate runs", async ()
     return { ok: true };
   });

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q9_race_fix"],
@@ -1063,7 +1132,7 @@ test("enqueue: producer-side zadd failure does not leave runs orphaned", async (
   setDefaultClient(client);

   const wf = defineWorkflow("enqueue-atomic", { queue: "q_enqueue_atomic" }, async () => ({ ok: true }));
-  await client.syncRegistry(getDefaultRegistry());
+  await client.syncRegistry(getDefaultRegistry(), { app: "test-app" });

   const originalZadd = redis.zadd.bind(redis);
   let injected = false;
@@ -1082,7 +1151,7 @@ test("enqueue: producer-side zadd failure does not leave runs orphaned", async (
     redis.zadd = originalZadd;
   }

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q_enqueue_atomic"],
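The next hunk touches the delimiter-collision test for idempotency keys. The hazard it guards against, and one standard fix, can be shown in isolation (both helpers below are illustrative only; this diff does not show redflow's actual key encoding):

```typescript
// Why delimiter-like workflow names are a hazard: a naive
// `workflow + ":" + key` join maps distinct (workflow, key) pairs to the
// same Redis member. Length-prefixing each component is one common fix,
// because the component boundaries become unambiguous.
function naiveKey(workflow: string, idem: string): string {
  return `${workflow}:${idem}`;
}

function safeKey(workflow: string, idem: string): string {
  return `${workflow.length}:${workflow}:${idem.length}:${idem}`;
}

// ("idem:a", "b:x") and ("idem:a:b", "x") both become "idem:a:b:x" naively.
const collides = naiveKey("idem:a", "b:x") === naiveKey("idem:a:b", "x");
// With length prefixes the two pairs stay distinct.
const distinct = safeKey("idem:a", "b:x") !== safeKey("idem:a:b", "x");
```

The test below exercises exactly this property with the workflow names `idem:a` and `idem:a:b`.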
@@ -1107,7 +1176,7 @@ test("idempotencyKey: delimiter-like workflow/key pairs do not collide", async (
   const wfA = defineWorkflow("idem:a", { queue: "q9a" }, async () => ({ workflow: "idem:a" }));
   const wfB = defineWorkflow("idem:a:b", { queue: "q9b" }, async () => ({ workflow: "idem:a:b" }));

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q9a", "q9b"],
@@ -1150,7 +1219,7 @@ test("idempotencyTtl: short TTL expires and allows a new run with same key", asy
     return { ok: true };
   });

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q_idem_ttl"],
@@ -1199,7 +1268,7 @@ test("input validation: invalid input fails once and is not retried", async () =
     },
   );

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q11"],
@@ -1234,7 +1303,7 @@ test("unknown workflow: run fails and is not retried", async () => {
   const redis = new RedisClient(redisServer.url);
   const client = createClient({ redis, prefix });

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["qx"],
@@ -1287,7 +1356,7 @@ test("cancel during step: run becomes canceled and step error kind is canceled",
     return { ok: true };
   });

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q12"],
@@ -1345,7 +1414,7 @@ test("terminal run re-queued is ignored (no re-execution)", async () => {
     return { ok: true };
   });

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],
@@ -1393,7 +1462,7 @@ test("lease+reaper: long running step is not duplicated", async () => {
     return { ok: true };
   });

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],
@@ -1438,8 +1507,8 @@ test(
     },
   );

-  const w1 = await startWorker({ redis, prefix, queues: [queue], concurrency: 1, runtime: { leaseMs: 500 } });
-  const w2 = await startWorker({ redis, prefix, queues: [queue], concurrency: 1, runtime: { leaseMs: 500 } });
+  const w1 = await startWorker({ app: "test-app", redis, prefix, queues: [queue], concurrency: 1, runtime: { leaseMs: 500 } });
+  const w2 = await startWorker({ app: "test-app", redis, prefix, queues: [queue], concurrency: 1, runtime: { leaseMs: 500 } });

   await new Promise((r) => setTimeout(r, 3500));
   const count = Number((await redis.get(counterKey)) ?? "0");
@@ -1505,7 +1574,7 @@ test(
     },
   );

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q_parent", "q_child"],
@@ -1560,7 +1629,7 @@ test(
   const queuedMembers = await redis.zrange(queuedIndex, 0, -1);
   expect(queuedMembers.includes(handle.id)).toBe(true);

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],
@@ -1606,7 +1675,7 @@ test(
     cron: [{ expression: "*/1 * * * * *" }],
   }, async () => ({ ok: true }));

-  await client.syncRegistry(getDefaultRegistry());
+  await client.syncRegistry(getDefaultRegistry(), { app: "test-app" });

   const workflowKey = `${prefix}:workflow:${name}`;
   const cronIdsJson = await redis.hget(workflowKey, "cronIdsJson");
@@ -1624,7 +1693,7 @@ test(
     queue: "q_update",
   }, async () => ({ ok: true }));

-  await client.syncRegistry(getDefaultRegistry());
+  await client.syncRegistry(getDefaultRegistry(), { app: "test-app" });

   expect(await redis.hget(`${prefix}:cron:def`, cronId)).toBeNull();
   const cronNext2 = await redis.zrange(`${prefix}:cron:next`, 0, -1);
@@ -1658,7 +1727,7 @@ test(
     },
   );

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],
@@ -1708,8 +1777,8 @@ test(
     },
   );

-  const w1 = await startWorker({ redis, prefix, queues: [queue], concurrency: 1, runtime: { leaseMs: 500 } });
-  const w2 = await startWorker({ redis, prefix, queues: [queue], concurrency: 1, runtime: { leaseMs: 500 } });
+  const w1 = await startWorker({ app: "test-app", redis, prefix, queues: [queue], concurrency: 1, runtime: { leaseMs: 500 } });
+  const w2 = await startWorker({ app: "test-app", redis, prefix, queues: [queue], concurrency: 1, runtime: { leaseMs: 500 } });

   await waitFor(
     async () => Number((await redis.get(counterKey)) ?? "0") >= 1,
@@ -1744,7 +1813,7 @@ test("listRuns: status + workflow filters are applied together", async () => {
     throw new Error("expected failure");
   });

-  const worker = await startWorker({ redis, prefix, queues: [queue], runtime: { leaseMs: 500 } });
+  const worker = await startWorker({ app: "test-app", redis, prefix, queues: [queue], runtime: { leaseMs: 500 } });

   const successHandle = await succeeds.run({});
   const failHandle = await fails.run({});
@@ -1795,7 +1864,7 @@ test("cron trigger ids: same custom id in two workflows does not collide", async
     async () => ({ ok: true }),
   );

-  await client.syncRegistry(getDefaultRegistry());
+  await client.syncRegistry(getDefaultRegistry(), { app: "test-app" });

   const idsA = JSON.parse((await redis.hget(`${prefix}:workflow:cron-id-a`, "cronIdsJson")) ?? "[]") as string[];
   const idsB = JSON.parse((await redis.hget(`${prefix}:workflow:cron-id-b`, "cronIdsJson")) ?? "[]") as string[];
@@ -1823,7 +1892,7 @@ test("syncRegistry: cron trigger with explicit null input preserves null payload
     async () => ({ ok: true }),
   );

-  await client.syncRegistry(getDefaultRegistry());
+  await client.syncRegistry(getDefaultRegistry(), { app: "test-app" });

   const workflowKey = `${prefix}:workflow:${workflowName}`;
   const cronIds = JSON.parse((await redis.hget(workflowKey, "cronIdsJson")) ?? "[]") as string[];
@@ -1854,7 +1923,7 @@ test("syncRegistry: invalid cron expression removes existing next schedule for s
     async () => ({ ok: true }),
   );

-  await client.syncRegistry(getDefaultRegistry());
+  await client.syncRegistry(getDefaultRegistry(), { app: "test-app" });

   const cronIds = JSON.parse((await redis.hget(workflowKey, "cronIdsJson")) ?? "[]") as string[];
   expect(cronIds.length).toBe(1);
@@ -1870,7 +1939,7 @@ test("syncRegistry: invalid cron expression removes existing next schedule for s
     async () => ({ ok: true }),
   );

-  await client.syncRegistry(getDefaultRegistry());
+  await client.syncRegistry(getDefaultRegistry(), { app: "test-app" });

   const after = await redis.zrange(cronNextKey, 0, -1);
   expect(after.includes(cronId)).toBe(false);
@@ -1891,7 +1960,7 @@ test("syncRegistry: removes stale workflow metadata (cron)", async () => {
     async () => ({ ok: true }),
   );

-  await client.syncRegistry(getDefaultRegistry());
+  await client.syncRegistry(getDefaultRegistry(), { app: "test-app" });

   const workflowKey = `${prefix}:workflow:${workflowName}`;
   const cronIds = JSON.parse((await redis.hget(workflowKey, "cronIdsJson")) ?? "[]") as string[];
@@ -1903,7 +1972,7 @@ test("syncRegistry: removes stale workflow metadata (cron)", async () => {
   // Force stale age beyond grace period and sync an empty registry.
   await redis.hset(workflowKey, { updatedAt: "1" });
   __unstableResetDefaultRegistryForTests();
-  await client.syncRegistry(getDefaultRegistry());
+  await client.syncRegistry(getDefaultRegistry(), { app: "test-app" });

   expect((await redis.smembers(`${prefix}:workflows`)).includes(workflowName)).toBe(false);
   expect(await redis.hget(workflowKey, "queue")).toBeNull();
@@ -1914,37 +1983,37 @@ test("syncRegistry: removes stale workflow metadata (cron)", async () => {
   redis.close();
 });

-test("syncRegistry: stale cleanup is isolated by
+test("syncRegistry: stale cleanup is isolated by app", async () => {
   const prefix = testPrefix();
   const redis = new RedisClient(redisServer.url);
   const client = createClient({ redis, prefix });

   __unstableResetDefaultRegistryForTests();
-  defineWorkflow("
+  defineWorkflow("app-a-wf", { queue: "qa",
     cron: [{ id: "a", expression: "*/30 * * * * *" }],
   },
   async () => ({ ok: true }),
   );
-  await client.syncRegistry(getDefaultRegistry(), {
+  await client.syncRegistry(getDefaultRegistry(), { app: "svc-a" });

   __unstableResetDefaultRegistryForTests();
-  defineWorkflow("
+  defineWorkflow("app-b-wf", { queue: "qb",
     cron: [{ id: "b", expression: "*/30 * * * * *" }],
   },
   async () => ({ ok: true }),
   );
-  await client.syncRegistry(getDefaultRegistry(), {
+  await client.syncRegistry(getDefaultRegistry(), { app: "svc-b" });

-  await redis.hset(`${prefix}:workflow:
-  await redis.hset(`${prefix}:workflow:
+  await redis.hset(`${prefix}:workflow:app-a-wf`, { updatedAt: "1" });
+  await redis.hset(`${prefix}:workflow:app-b-wf`, { updatedAt: "1" });

   __unstableResetDefaultRegistryForTests();
-  await client.syncRegistry(getDefaultRegistry(), {
+  await client.syncRegistry(getDefaultRegistry(), { app: "svc-a" });

-  expect(await redis.hget(`${prefix}:workflow:
-  expect(await redis.hget(`${prefix}:workflow:
-  expect((await redis.smembers(`${prefix}:workflows`)).includes("
-  expect((await redis.smembers(`${prefix}:workflows`)).includes("
+  expect(await redis.hget(`${prefix}:workflow:app-a-wf`, "queue")).toBeNull();
+  expect(await redis.hget(`${prefix}:workflow:app-b-wf`, "queue")).not.toBeNull();
+  expect((await redis.smembers(`${prefix}:workflows`)).includes("app-a-wf")).toBe(false);
+  expect((await redis.smembers(`${prefix}:workflows`)).includes("app-b-wf")).toBe(true);

   redis.close();
 });
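The rewritten test above checks that stale cleanup during `syncRegistry` only touches workflows registered under the same `app`. A toy in-memory model of that ownership rule is sketched below; the field names and function signature are assumptions, and only the asserted behavior (svc-a's stale entry removed, svc-b's untouched) mirrors the test:

```typescript
// In-memory model of app-scoped stale cleanup: a sync for one app may only
// delete stale workflows owned by that app, never another app's metadata.
interface WorkflowMeta {
  app: string;       // owning application, as synced by syncRegistry
  queue: string;
  updatedAt: number; // epoch millis of last sync
}

function cleanupStale(
  store: Map<string, WorkflowMeta>,
  app: string,
  registered: Set<string>, // workflow names in this app's current registry
  now: number,
  graceMs: number,
): void {
  for (const [name, meta] of store) {
    const stale =
      meta.app === app && !registered.has(name) && now - meta.updatedAt > graceMs;
    if (stale) store.delete(name); // other apps' entries are never touched
  }
}

const store = new Map<string, WorkflowMeta>([
  ["app-a-wf", { app: "svc-a", queue: "qa", updatedAt: 1 }],
  ["app-b-wf", { app: "svc-b", queue: "qb", updatedAt: 1 }],
]);
// Sync svc-a with an empty registry: only its own stale workflow is removed.
cleanupStale(store, "svc-a", new Set(), Date.now(), 60_000);
```

Without the `app` scope, the final empty-registry sync would have swept away `app-b-wf` too, which is exactly the regression the test rules out.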
@@ -1957,7 +2026,7 @@ test("cancelRun: terminal runs are unchanged", async () => {

   const wf = defineWorkflow("cancel-terminal", { queue: "q_cancel_terminal" }, async () => ({ ok: true }));

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q_cancel_terminal"],
@@ -1992,7 +2061,7 @@ test("output serialization: non-JSON output fails once and is not stuck", async
     async () => ({ bad: 1n }),
   );

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q_serial"],
@@ -2047,7 +2116,7 @@ test("retry step sequence: new steps keep monotonic order after cached steps", a
     },
   );

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q_seq"],
@@ -2078,7 +2147,7 @@ test("listRuns: negative offset is clamped to zero", async () => {
   setDefaultClient(client);

   const wf = defineWorkflow("offset-wf", { queue: "q_offset" }, async () => ({ ok: true }));
-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q_offset"],
@@ -2142,7 +2211,7 @@ test("error serialization: non-serializable thrown values do not wedge a run", a
     },
   );

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: ["q_bigint"],
@@ -2185,7 +2254,7 @@ test("cron trigger ids: delimiter-like custom ids do not collide", async () => {
     async () => ({ ok: true }),
   );

-  await client.syncRegistry(getDefaultRegistry());
+  await client.syncRegistry(getDefaultRegistry(), { app: "test-app" });

   const idsA = JSON.parse((await redis.hget(`${prefix}:workflow:wf:a`, "cronIdsJson")) ?? "[]") as string[];
   const idsB = JSON.parse((await redis.hget(`${prefix}:workflow:wf:a:b`, "cronIdsJson")) ?? "[]") as string[];
@@ -2219,7 +2288,7 @@ test("idempotencyKey: stale pointer is recovered instead of returning missing ru

   const wf = defineWorkflow(workflowName, { queue }, async () => ({ ok: true }));

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],
@@ -2262,7 +2331,7 @@ test("idempotencyKey: partial existing run is repaired and executed", async () =
     return { ok: true };
   });

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],
@@ -2314,7 +2383,7 @@ test("syncRegistry: duplicate custom cron ids in one workflow are rejected", asy
     async () => ({ ok: true }),
   );

-  await expect(client.syncRegistry(getDefaultRegistry())).rejects.toThrow("Duplicate cron trigger id");
+  await expect(client.syncRegistry(getDefaultRegistry(), { app: "test-app" })).rejects.toThrow("Duplicate cron trigger id");

   redis.close();
 });
@@ -2331,7 +2400,7 @@ test("scheduled promoter: stale scheduled entry does not resurrect terminal run"
     return { ok: true };
   });

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],
@@ -2398,7 +2467,7 @@ test(

   let worker: Awaited<ReturnType<typeof startWorker>> | null = null;
   try {
-    worker = await startWorker({
+    worker = await startWorker({ app: "test-app",
       redis,
       prefix,
       queues: [queue],
@@ -2449,7 +2518,7 @@ test(

   let worker: Awaited<ReturnType<typeof startWorker>> | null = null;
   try {
-    worker = await startWorker({
+    worker = await startWorker({ app: "test-app",
       redis,
       prefix,
       queues: [queue],
@@ -2492,7 +2561,7 @@ test("workflow names ending with ':runs' do not collide with workflow run indexe

   expect(keys.workflow(prefix, "keyspace-base:runs")).not.toBe(keys.workflowRuns(prefix, "keyspace-base"));

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],
@@ -2537,7 +2606,7 @@ test("cancelRun race: near-finish cancellation settles as canceled", async () =>
     return await originalFinalizeRun.call(this, runId, args);
   }) as RedflowClient["finalizeRun"];

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],
@@ -2600,7 +2669,7 @@ test("cancelRun race: cancellation requested during finalize wins over success",
     return await originalFinalizeRun.call(this, runId, args);
   }) as RedflowClient["finalizeRun"];

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],
@@ -2650,7 +2719,7 @@ test(
     await new Promise<never>(() => {});
   });

-  const worker = await startWorker({
+  const worker = await startWorker({ app: "test-app",
     redis,
     prefix,
     queues: [queue],