@nicnocquee/dataqueue 1.24.0 → 1.26.0-beta.20260223195940

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (72)
  1. package/README.md +44 -0
  2. package/ai/build-docs-content.ts +96 -0
  3. package/ai/build-llms-full.ts +42 -0
  4. package/ai/docs-content.json +278 -0
  5. package/ai/rules/advanced.md +132 -0
  6. package/ai/rules/basic.md +159 -0
  7. package/ai/rules/react-dashboard.md +83 -0
  8. package/ai/skills/dataqueue-advanced/SKILL.md +320 -0
  9. package/ai/skills/dataqueue-core/SKILL.md +234 -0
  10. package/ai/skills/dataqueue-react/SKILL.md +189 -0
  11. package/dist/cli.cjs +1149 -14
  12. package/dist/cli.cjs.map +1 -1
  13. package/dist/cli.d.cts +66 -1
  14. package/dist/cli.d.ts +66 -1
  15. package/dist/cli.js +1146 -13
  16. package/dist/cli.js.map +1 -1
  17. package/dist/index.cjs +4630 -928
  18. package/dist/index.cjs.map +1 -1
  19. package/dist/index.d.cts +1033 -15
  20. package/dist/index.d.ts +1033 -15
  21. package/dist/index.js +4626 -929
  22. package/dist/index.js.map +1 -1
  23. package/dist/mcp-server.cjs +186 -0
  24. package/dist/mcp-server.cjs.map +1 -0
  25. package/dist/mcp-server.d.cts +32 -0
  26. package/dist/mcp-server.d.ts +32 -0
  27. package/dist/mcp-server.js +175 -0
  28. package/dist/mcp-server.js.map +1 -0
  29. package/migrations/1751131910825_add_timeout_seconds_to_job_queue.sql +2 -2
  30. package/migrations/1751186053000_add_job_events_table.sql +12 -8
  31. package/migrations/1751984773000_add_tags_to_job_queue.sql +1 -1
  32. package/migrations/1765809419000_add_force_kill_on_timeout_to_job_queue.sql +1 -1
  33. package/migrations/1771100000000_add_idempotency_key_to_job_queue.sql +7 -0
  34. package/migrations/1781200000000_add_wait_support.sql +12 -0
  35. package/migrations/1781200000001_create_waitpoints_table.sql +18 -0
  36. package/migrations/1781200000002_add_performance_indexes.sql +34 -0
  37. package/migrations/1781200000003_add_progress_to_job_queue.sql +7 -0
  38. package/migrations/1781200000004_create_cron_schedules_table.sql +33 -0
  39. package/migrations/1781200000005_add_retry_config_to_job_queue.sql +17 -0
  40. package/package.json +40 -23
  41. package/src/backend.ts +328 -0
  42. package/src/backends/postgres.ts +2040 -0
  43. package/src/backends/redis-scripts.ts +865 -0
  44. package/src/backends/redis.test.ts +1906 -0
  45. package/src/backends/redis.ts +1792 -0
  46. package/src/cli.test.ts +82 -6
  47. package/src/cli.ts +73 -10
  48. package/src/cron.test.ts +126 -0
  49. package/src/cron.ts +40 -0
  50. package/src/db-util.ts +4 -2
  51. package/src/index.test.ts +688 -1
  52. package/src/index.ts +277 -39
  53. package/src/init-command.test.ts +449 -0
  54. package/src/init-command.ts +709 -0
  55. package/src/install-mcp-command.test.ts +216 -0
  56. package/src/install-mcp-command.ts +185 -0
  57. package/src/install-rules-command.test.ts +218 -0
  58. package/src/install-rules-command.ts +233 -0
  59. package/src/install-skills-command.test.ts +176 -0
  60. package/src/install-skills-command.ts +124 -0
  61. package/src/mcp-server.test.ts +162 -0
  62. package/src/mcp-server.ts +231 -0
  63. package/src/processor.test.ts +559 -18
  64. package/src/processor.ts +456 -49
  65. package/src/queue.test.ts +682 -6
  66. package/src/queue.ts +135 -944
  67. package/src/supervisor.test.ts +340 -0
  68. package/src/supervisor.ts +162 -0
  69. package/src/test-util.ts +32 -0
  70. package/src/types.ts +726 -17
  71. package/src/wait.test.ts +698 -0
  72. package/LICENSE +0 -21
@@ -0,0 +1,132 @@ package/ai/rules/advanced.md

# DataQueue — Advanced Rules

## Step Memoization (ctx.run)

Wrap side-effectful work in `ctx.run(stepName, fn)` for durability. Cached results replay on re-invocation after a wait.

```typescript
const data = await ctx.run('fetch', async () => fetchFromAPI(url));
await ctx.waitFor({ hours: 1 });
await ctx.run('notify', async () => sendNotification(data));
```

Step names must be unique within a handler and stable across deployments.

## Waits

- `ctx.waitFor({ hours: 24 })` — pause for a duration (seconds, minutes, hours, days, weeks, months, years).
- `ctx.waitUntil(date)` — pause until a specific date.
- `ctx.waitForToken(tokenId)` — pause until an external actor completes the token.

Waiting jobs release their worker lock and concurrency slot. They consume no resources.

Wait calls use a positional counter internally. Do not add/remove waits conditionally between re-invocations.
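The duration object's fields are additive. A minimal sketch of that arithmetic for the fixed-length units (`durationToMs` is a hypothetical helper, not a library export; months and years are calendar-dependent, so they are left out of the sketch):

```typescript
// Sketch of additive duration arithmetic; not DataQueue's implementation.
type Duration = {
  seconds?: number;
  minutes?: number;
  hours?: number;
  days?: number;
  weeks?: number;
};

const MS = {
  seconds: 1_000,
  minutes: 60_000,
  hours: 3_600_000,
  days: 86_400_000,
  weeks: 604_800_000,
} as const;

function durationToMs(d: Duration): number {
  // Fields are additive: { hours: 1, minutes: 30 } means 90 minutes.
  return (Object.keys(MS) as Array<keyof typeof MS>).reduce(
    (total, unit) => total + (d[unit] ?? 0) * MS[unit],
    0,
  );
}

const wakeAt = new Date(Date.now() + durationToMs({ hours: 1, minutes: 30 }));
```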
## Token System

```typescript
const token = await ctx.createToken({ timeout: '48h', tags: ['approval'] });
const result = await ctx.waitForToken<{ approved: boolean }>(token.id);
if (result.ok) {
  /* result.output.approved */
}
```

Complete externally: `await queue.completeToken(tokenId, { approved: true })`.
Expire timed-out tokens: `await queue.expireTimedOutTokens()`.
## Cron Scheduling

```typescript
await queue.addCronJob({
  scheduleName: 'daily-cleanup',
  cronExpression: '0 2 * * *',
  jobType: 'cleanup',
  payload: { days: 30 },
  timezone: 'UTC',
  allowOverlap: false,
});
```

The processor auto-enqueues due cron jobs before each batch. Manage with `pauseCronJob`, `resumeCronJob`, `editCronJob`, `removeCronJob`, `listCronJobs`.
## Timeout Management

- `ctx.prolong(ms)` — proactively reset deadline. `ctx.prolong()` resets to original `timeoutMs`.
- `ctx.onTimeout(() => ms)` — reactive; return ms to extend, or nothing to let timeout proceed.
- `forceKillOnTimeout: true` — terminates handler via Worker Thread. Requires Node.js, serializable handler, and disables `ctx.run`/waits/`prolong`/`onTimeout`.

## Tags and Filtering

```typescript
await queue.addJob({ jobType: 'email', payload, tags: ['welcome', 'user'] });
const jobs = await queue.getJobsByTags(['welcome'], 'any');
await queue.cancelAllUpcomingJobs({ tags: { values: ['user'], mode: 'all' } });
```

Modes: `exact` (exact set), `all` (superset), `any` (intersection), `none` (exclusion).
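The four modes can be read as set predicates over a job's tag set. A minimal sketch of the matching semantics as described above (`matchesTags` is a hypothetical helper, not a library export):

```typescript
type TagMode = 'exact' | 'all' | 'any' | 'none';

// Sketch of the documented tag-matching semantics; not DataQueue's implementation.
function matchesTags(jobTags: string[], query: string[], mode: TagMode): boolean {
  const job = new Set(jobTags);
  if (mode === 'exact')
    // job's tags are exactly the queried set
    return job.size === new Set(query).size && query.every((t) => job.has(t));
  if (mode === 'all') return query.every((t) => job.has(t)); // superset
  if (mode === 'any') return query.some((t) => job.has(t)); // intersection non-empty
  return !query.some((t) => job.has(t)); // 'none': exclusion
}
```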
## Idempotency

```typescript
await queue.addJob({
  jobType: 'email',
  payload,
  idempotencyKey: `welcome-${userId}`,
});
```

Returns existing job ID if key already exists. Key persists until `cleanupOldJobs` removes the job.
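The insert-or-return-existing behavior can be sketched with an in-memory map (illustrative only; the library enforces key uniqueness in the backend, not in process memory, and `addJobSketch` is not a real API):

```typescript
// Toy model of idempotent enqueueing; not DataQueue's implementation.
const byKey = new Map<string, number>();
let nextId = 1;

function addJobSketch(idempotencyKey?: string): number {
  if (idempotencyKey !== undefined) {
    const existing = byKey.get(idempotencyKey);
    if (existing !== undefined) return existing; // duplicate: return the prior job id
  }
  const id = nextId++;
  if (idempotencyKey !== undefined) byKey.set(idempotencyKey, id);
  return id;
}

const first = addJobSketch('welcome-42'); // creates a job
const second = addJobSketch('welcome-42'); // same key: same id, no second job
```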
## Transactional Job Creation (PostgreSQL Only)

Pass a `pg.PoolClient` inside a transaction via the `{ db }` option to enqueue a job atomically with other writes:

```typescript
const client = await pool.connect();
try {
  await client.query('BEGIN');
  await client.query('INSERT INTO users (email) VALUES ($1)', [email]);
  await queue.addJob(
    {
      jobType: 'send_email',
      payload: { to: email, subject: 'Welcome!', body: '...' },
    },
    { db: client },
  );
  await client.query('COMMIT');
} catch (error) {
  await client.query('ROLLBACK');
  throw error;
} finally {
  client.release();
}
```

If the transaction rolls back, the job and its event are never persisted. The `db` option accepts any object with a `.query(text, values)` method matching `pg`'s signature. Using `{ db }` with the Redis backend throws an error.
## Retry Strategy

```typescript
await queue.addJob({
  jobType: 'email',
  payload,
  retryDelay: 10, // base 10s
  retryBackoff: true, // exponential (default)
  retryDelayMax: 300, // cap at 5 min
});
```

- `retryBackoff: false` — fixed delay of `retryDelay` seconds.
- `retryBackoff: true` (default) — `retryDelay * 2^attempts` with jitter, capped by `retryDelayMax`.
- No config — legacy `2^attempts * 60s` formula (backward compatible).
- Cron schedules propagate retry config to enqueued jobs.
## Scaling

- Increase `batchSize` and `concurrency` for higher throughput.
- Run multiple processor instances with unique `workerId` values — `FOR UPDATE SKIP LOCKED` (PostgreSQL) or Lua scripts (Redis) prevent double-claiming.
- Use the `jobType` filter for specialized workers.
- Use `createSupervisor()` to automate maintenance (reclaim stuck jobs, cleanup, token expiry). Safe to run across multiple instances.

## Progress Tracking

```typescript
await ctx.setProgress(50); // 0–100, persisted to DB
```

Read via `queue.getJob(id)` (`progress` field) or the React SDK's `useJob` hook.
@@ -0,0 +1,159 @@ package/ai/rules/basic.md

# DataQueue — Basic Rules

## Imports

Always import from `@nicnocquee/dataqueue`. There is no subpath like `/v2` or `/v3`.

```typescript
import { initJobQueue, JobHandlers } from '@nicnocquee/dataqueue';
```

## PayloadMap Pattern

Define an object type where keys are job type strings and values are payload shapes. This powers type-safe `addJob`, `createProcessor`, and handler completeness checking.

```typescript
type JobPayloadMap = {
  send_email: { to: string; subject: string; body: string };
  generate_report: { reportId: string; userId: string };
};
```
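The typing technique behind this can be sketched with a hypothetical mini-queue (illustrative only; these are not the library's actual types, just a demonstration of how a payload map constrains `jobType` and `payload` together):

```typescript
type JobPayloadMap = {
  send_email: { to: string; subject: string; body: string };
  generate_report: { reportId: string; userId: string };
};

// Hypothetical mini-queue showing the generic-constraint technique.
function createMiniQueue<M>() {
  const jobs: Array<{ jobType: keyof M; payload: M[keyof M] }> = [];
  return {
    addJob<K extends keyof M>(job: { jobType: K; payload: M[K] }): number {
      jobs.push(job);
      return jobs.length; // stand-in for a job id
    },
    count: () => jobs.length,
  };
}

const q = createMiniQueue<JobPayloadMap>();
q.addJob({
  jobType: 'send_email',
  payload: { to: 'a@x.com', subject: 'Hi', body: '...' },
});
// A payload that doesn't match the job type fails to compile:
// q.addJob({ jobType: 'send_email', payload: { reportId: 'x', userId: 'y' } }); // ✗
```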
## Initialization (Singleton)

Never call `initJobQueue` per request — each call creates a new database connection pool. Use a module-level singleton:

```typescript
import { initJobQueue } from '@nicnocquee/dataqueue';

let jobQueue: ReturnType<typeof initJobQueue<JobPayloadMap>> | null = null;

export const getJobQueue = () => {
  if (!jobQueue) {
    jobQueue = initJobQueue<JobPayloadMap>({
      databaseConfig: { connectionString: process.env.PG_DATAQUEUE_DATABASE },
    });
  }
  return jobQueue;
};
```

For Redis, set `backend: 'redis'` and use `redisConfig` with `url` or `host`/`port`/`password`. Install `ioredis` as a peer dependency.

### Bring Your Own Pool / Client

Pass an existing `pg.Pool` or `ioredis` client instead of connection config:

```typescript
import { Pool } from 'pg';
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
jobQueue = initJobQueue<JobPayloadMap>({ pool });
```

```typescript
import IORedis from 'ioredis';
const redis = new IORedis(process.env.REDIS_URL);
jobQueue = initJobQueue<JobPayloadMap>({
  backend: 'redis',
  client: redis,
  keyPrefix: 'myapp:',
});
```

The library will **not** close externally provided connections on shutdown.
## Adding Jobs

Use `addJob` for a single job, `addJobs` for bulk inserts (single DB round-trip).

```typescript
const id = await queue.addJob({
  jobType: 'send_email',
  payload: { to: 'a@x.com', subject: 'Hi', body: '...' },
});

const ids = await queue.addJobs([
  {
    jobType: 'send_email',
    payload: { to: 'a@x.com', subject: 'Hi', body: '...' },
  },
  {
    jobType: 'send_email',
    payload: { to: 'b@x.com', subject: 'Hi', body: '...' },
    priority: 10,
  },
]);
// ids[i] corresponds to the i-th input job
```

Both support `idempotencyKey`, `priority`, `runAt`, `tags`, and `{ db }` for transactional inserts (PostgreSQL only).
## Handlers

Type handlers as `JobHandlers<PayloadMap>` so TypeScript enforces a handler for every job type.

```typescript
export const jobHandlers: JobHandlers<JobPayloadMap> = {
  send_email: async (payload, signal, ctx) => {
    await sendEmail(payload.to, payload.subject, payload.body);
  },
  generate_report: async (payload) => {
    await generateReport(payload.reportId, payload.userId);
  },
};
```

Handler signature: `(payload: T, signal: AbortSignal, ctx: JobContext) => Promise<void>`. You can omit arguments you don't need.
## Processing

**Serverless** — call `processor.start()`, which processes one batch and stops:

```typescript
const processor = queue.createProcessor(handlers, {
  batchSize: 10,
  concurrency: 3,
});
await processor.start();
```

**Long-running** — call `processor.startInBackground()`, which polls continuously, and `createSupervisor()` to automate maintenance:

```typescript
processor.startInBackground();

const supervisor = queue.createSupervisor({
  intervalMs: 60_000,
  stuckJobsTimeoutMinutes: 10,
  cleanupJobsDaysToKeep: 30,
});
supervisor.startInBackground();

process.on('SIGTERM', async () => {
  await Promise.all([
    processor.stopAndDrain(30000),
    supervisor.stopAndDrain(30000),
  ]);
  await queue.getPool().end(); // or queue.getRedisClient().quit() for Redis
  process.exit(0);
});
```
## Retry Configuration

Control retry behavior per-job with optional fields on `addJob`:

- `retryDelay` (seconds, default 60) — base delay between retries.
- `retryBackoff` (boolean, default true) — enable exponential backoff with jitter.
- `retryDelayMax` (seconds, optional) — cap the maximum delay.

When none are set, the legacy `2^attempts * 60s` formula is used.
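Under these rules, the delay schedule can be sketched as a pure function (illustrative only; `nextRetryDelaySeconds` is not a library export, and jitter is omitted here so the sketch stays deterministic):

```typescript
type RetryOpts = {
  retryDelay?: number;
  retryBackoff?: boolean;
  retryDelayMax?: number;
};

// Sketch of the documented retry-delay rules; not DataQueue's implementation.
// `attempts` is the number of failures so far.
function nextRetryDelaySeconds(attempts: number, opts: RetryOpts = {}): number {
  const configured =
    opts.retryDelay !== undefined ||
    opts.retryBackoff !== undefined ||
    opts.retryDelayMax !== undefined;
  if (!configured) return 2 ** attempts * 60; // legacy formula, backward compatible
  const base = opts.retryDelay ?? 60;
  if (opts.retryBackoff === false) return base; // fixed delay
  const exponential = base * 2 ** attempts; // doubles each attempt
  // The library additionally applies jitter to the backoff delay.
  return opts.retryDelayMax !== undefined
    ? Math.min(exponential, opts.retryDelayMax)
    : exponential;
}
```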
## Common Mistakes

1. Creating `initJobQueue` per request — use a singleton.
2. Missing handler for a job type — fails with `NoHandler`. Type as `JobHandlers<PayloadMap>`.
3. Not checking `signal.aborted` in long handlers — timed-out jobs keep running.
4. Skipping maintenance — use `createSupervisor()` to automate reclaim, cleanup, and token expiry. Without it, stuck jobs and old data accumulate.
5. Skipping migrations (PostgreSQL) — run `dataqueue-cli migrate` first. Redis needs none.
6. Using `stop()` instead of `stopAndDrain()` — leaves in-flight jobs stuck.
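Mistake 3 is worth a sketch: a long handler should check `signal.aborted` between units of work so a timeout or cancellation actually stops it. The handler shape below is illustrative, using the standard `AbortSignal` the handler receives:

```typescript
// Hypothetical long-running handler body showing cooperative cancellation.
async function processAll(items: string[], signal: AbortSignal): Promise<number> {
  let done = 0;
  for (const item of items) {
    if (signal.aborted) break; // stop promptly on timeout or cancellation
    await Promise.resolve(item); // stand-in for real per-item work
    done++;
  }
  return done;
}
```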
@@ -0,0 +1,83 @@ package/ai/rules/react-dashboard.md

# DataQueue — React & Dashboard Rules

## React SDK (@nicnocquee/dataqueue-react)

Install: `npm install @nicnocquee/dataqueue-react` (requires React 18+).

### useJob Hook

```tsx
'use client';
import { useJob } from '@nicnocquee/dataqueue-react';

const { status, progress, data, isLoading, error } = useJob(jobId, {
  fetcher: (id) =>
    fetch(`/api/jobs/${id}`)
      .then((r) => r.json())
      .then((d) => d.job),
  pollingInterval: 1000,
  onComplete: (job) => {
    /* job completed */
  },
  onFailed: (job) => {
    /* job failed */
  },
});
```

Polling auto-stops on terminal statuses (`completed`, `failed`, `cancelled`).
### DataqueueProvider

Wrap the app in `DataqueueProvider` to share `fetcher` and `pollingInterval`:

```tsx
<DataqueueProvider fetcher={fetcher} pollingInterval={2000}>
  {children}
</DataqueueProvider>
```

### API Route (Next.js)

```typescript
// app/api/jobs/[id]/route.ts
import { NextResponse } from 'next/server';
import { getJobQueue } from '@/lib/queue'; // your queue singleton module

export async function GET(
  _req: Request,
  { params }: { params: Promise<{ id: string }> },
) {
  const { id } = await params;
  const job = await getJobQueue().getJob(Number(id));
  if (!job) return NextResponse.json({ error: 'Not found' }, { status: 404 });
  return NextResponse.json({ job });
}
```
## Dashboard (@nicnocquee/dataqueue-dashboard)

Install: `npm install @nicnocquee/dataqueue-dashboard`.

### Setup (Next.js App Router)

```typescript
// app/admin/dataqueue/[[...path]]/route.ts
import { createDataqueueDashboard } from '@nicnocquee/dataqueue-dashboard/next';
import { getJobQueue, jobHandlers } from '@/lib/queue';

const { GET, POST } = createDataqueueDashboard({
  jobQueue: getJobQueue(),
  jobHandlers,
  basePath: '/admin/dataqueue',
});

export { GET, POST };
```

`basePath` must match the route directory path.

### Protection

Wrap handlers with your auth middleware before exporting GET/POST.
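A minimal sketch of such a wrapper, using a bearer-token check as a stand-in for real auth (the `withAuth` helper and token scheme are hypothetical, not part of the dashboard package; substitute your session or middleware check):

```typescript
// Hypothetical guard for route handlers; replace the token check with real auth.
type RouteHandler = (req: Request) => Promise<Response>;

const withAuth =
  (handler: RouteHandler, expectedToken: string): RouteHandler =>
  async (req) => {
    if (req.headers.get('authorization') !== `Bearer ${expectedToken}`) {
      return new Response('Unauthorized', { status: 401 });
    }
    return handler(req);
  };

// Usage sketch: export const GET = withAuth(dashboardGET, process.env.DASHBOARD_TOKEN!);
```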
## Progress Tracking

Use `ctx.setProgress(percent)` in handlers (0–100). The value appears in `useJob`'s `progress` field and the dashboard detail view.
@@ -0,0 +1,320 @@ package/ai/skills/dataqueue-advanced/SKILL.md

---
name: dataqueue-advanced
description: Advanced DataQueue patterns — step memoization, waits, tokens, cron, timeouts, tags, idempotency.
---

# DataQueue Advanced Patterns

## Step Memoization with ctx.run()

Wrap side-effectful work in `ctx.run(stepName, fn)`. Results are cached in the database — when the handler re-runs after a wait, completed steps replay from cache without re-executing.

```typescript
const handler = async (payload, signal, ctx) => {
  const data = await ctx.run('fetch-data', async () => {
    return await fetchFromAPI(payload.url);
  });

  await ctx.run('send-notification', async () => {
    await notify(data.userId, data.message);
  });
};
```

**Rules:**

- Step names must be unique within a handler.
- Step names must be stable across deployments while jobs are waiting.
- Step order must not change conditionally between re-invocations.
+ ## Time-Based Waits
31
+
32
+ ### waitFor (duration)
33
+
34
+ ```typescript
35
+ const handler = async (payload, signal, ctx) => {
36
+ await ctx.run('step-1', async () => {
37
+ /* ... */
38
+ });
39
+ await ctx.waitFor({ hours: 24 });
40
+ await ctx.run('step-2', async () => {
41
+ /* ... */
42
+ });
43
+ };
44
+ ```
45
+
46
+ Duration fields: `seconds`, `minutes`, `hours`, `days`, `weeks`, `months`, `years` (additive).
47
+
48
+ ### waitUntil (date)
49
+
50
+ ```typescript
51
+ await ctx.waitUntil(new Date('2025-03-01T09:00:00Z'));
52
+ ```
53
+
54
+ ### How waits work internally
55
+
56
+ 1. Handler throws a `WaitSignal` internally.
57
+ 2. Job moves to `'waiting'` status — worker lock is released.
58
+ 3. After the wait expires, job becomes `'pending'` again.
59
+ 4. Handler re-runs from top; `ctx.run()` replays cached steps.
60
+
61
+ Waiting jobs are idle — they hold no lock, no concurrency slot, no resources.
62
+
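The suspend-and-replay cycle above can be modeled with a toy in-memory engine. This is purely an illustration of the replay semantics; the real implementation persists the step cache and wait state in the backend, and none of these names are library exports:

```typescript
// Toy model of suspend-and-replay; not DataQueue's implementation.
class WaitSignal extends Error {}

type Ctx = {
  run<T>(name: string, fn: () => Promise<T>): Promise<T>;
  waitFor(): Promise<void>;
};

async function invokeUntilDone(handler: (ctx: Ctx) => Promise<void>) {
  const stepCache = new Map<string, unknown>();
  const completedWaits = new Set<number>();
  let invocations = 0;
  let executions = 0;

  for (;;) {
    invocations++;
    let waitCounter = 0; // positional counter: waits must stay stable across runs
    const ctx: Ctx = {
      async run<T>(name: string, fn: () => Promise<T>): Promise<T> {
        if (stepCache.has(name)) return stepCache.get(name) as T; // replay from cache
        const result = await fn();
        executions++;
        stepCache.set(name, result);
        return result;
      },
      async waitFor() {
        const position = waitCounter++;
        if (completedWaits.has(position)) return; // this wait already elapsed
        completedWaits.add(position); // pretend the duration passes instantly
        throw new WaitSignal(); // suspend: the job would go back to 'pending' later
      },
    };
    try {
      await handler(ctx);
      return { invocations, executions };
    } catch (e) {
      if (!(e instanceof WaitSignal)) throw e; // real errors propagate
      // WaitSignal caught: handler suspended; loop simulates the re-invocation
    }
  }
}
```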
## Token-Based Waits (Human-in-the-Loop)

Create a token, send it to an external actor, and wait for them to complete it.

```typescript
const handler = async (payload, signal, ctx) => {
  const token = await ctx.run('create-token', async () => {
    return await ctx.createToken({ timeout: '48h', tags: ['approval'] });
  });

  await ctx.run('notify', async () => {
    await sendSlack(`Approve: ${token.id}`);
  });

  const result = await ctx.waitForToken<{ approved: boolean }>(token.id);
  if (result.ok) {
    await ctx.run('process', async () => {
      if (result.output.approved) await approve(payload.id);
    });
  }
};
```

Complete tokens externally:

```typescript
await queue.completeToken(tokenId, { approved: true });
```

Expire timed-out tokens periodically:

```typescript
await queue.expireTimedOutTokens();
```
## Cron Scheduling

```typescript
const cronId = await queue.addCronJob({
  scheduleName: 'daily-report',
  cronExpression: '0 9 * * *',
  jobType: 'generate_report',
  payload: { reportId: 'daily', userId: 'system' },
  timezone: 'America/New_York',
  allowOverlap: false,
});
```

The processor automatically enqueues due cron jobs before each batch — no manual triggering needed.

Manage schedules:

```typescript
await queue.pauseCronJob(cronId);
await queue.resumeCronJob(cronId);
await queue.editCronJob(cronId, { cronExpression: '0 */2 * * *' });
await queue.removeCronJob(cronId);
const schedules = await queue.listCronJobs('active');
```
## Timeout Management

### Proactive — ctx.prolong()

```typescript
const handler = async (payload, signal, ctx) => {
  ctx.prolong(60_000); // set deadline to 60s from now
  await doHeavyWork();
  ctx.prolong(); // reset to original timeoutMs
};
```

### Reactive — ctx.onTimeout()

```typescript
const handler = async (payload, signal, ctx) => {
  let step = 0;
  ctx.onTimeout(() => {
    if (step < 3) return 30_000; // extend 30s
  });
  step = 1;
  await doStep1();
  step = 2;
  await doStep2();
  step = 3;
  await doStep3();
};
```

Both update `locked_at` in the DB, preventing premature reclamation.
### Force Kill on Timeout

```typescript
await queue.addJob({
  jobType: 'task',
  payload: {
    /* ... */
  },
  timeoutMs: 5000,
  forceKillOnTimeout: true,
});
```

**Limitations of forceKillOnTimeout:**

- Requires Node.js (not Bun).
- Handler must be serializable (no closures over external variables).
- `prolong`, `onTimeout`, `ctx.run`, and waits are NOT available.
## Tags

```typescript
await queue.addJob({
  jobType: 'email',
  payload: {
    /* ... */
  },
  tags: ['welcome', 'onboarding'],
});

const jobs = await queue.getJobsByTags(['welcome'], 'any');
await queue.cancelAllUpcomingJobs({
  tags: { values: ['onboarding'], mode: 'all' },
});
```

Tag query modes: `'exact'`, `'all'`, `'any'`, `'none'`.
## Idempotency

```typescript
const jobId = await queue.addJob({
  jobType: 'email',
  payload: { to: 'user@example.com', subject: 'Welcome', body: '...' },
  idempotencyKey: `welcome-${userId}`,
});
```

If a job with the same key exists, returns the existing job ID. The key is unique across all statuses until `cleanupOldJobs` removes it.
## Transactional Job Creation (PostgreSQL Only)

Insert a job within an existing database transaction so the job is enqueued **atomically** with other writes:

```typescript
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function registerUser(email: string, name: string) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');

    await client.query('INSERT INTO users (email, name) VALUES ($1, $2)', [
      email,
      name,
    ]);

    const queue = getJobQueue();
    await queue.addJob(
      {
        jobType: 'send_email',
        payload: { to: email, subject: 'Welcome!', body: `Hi ${name}!` },
      },
      { db: client },
    );

    await client.query('COMMIT');
  } catch (error) {
    await client.query('ROLLBACK');
    throw error;
  } finally {
    client.release();
  }
}
```

The `db` option accepts any object matching `DatabaseClient { query(text, values): Promise<{ rows, rowCount }> }` — it works with `pg.PoolClient`, `pg.Client`, or compatible ORM query runners.

The job event (`'added'`) is also inserted within the same transaction.
## Retry Strategy

Configure how failed jobs are retried with `retryDelay`, `retryBackoff`, and `retryDelayMax`.

### Fixed delay

```typescript
await queue.addJob({
  jobType: 'email',
  payload: {
    /* ... */
  },
  maxAttempts: 5,
  retryDelay: 30, // 30 seconds between each retry
  retryBackoff: false,
});
```

### Exponential backoff with cap

```typescript
await queue.addJob({
  jobType: 'email',
  payload: {
    /* ... */
  },
  maxAttempts: 10,
  retryDelay: 5, // base: 5 seconds
  retryBackoff: true, // default — delay doubles each attempt with jitter
  retryDelayMax: 300, // never wait more than 5 minutes
});
```

### Cron schedules with retry config

```typescript
await queue.addCronJob({
  scheduleName: 'daily-sync',
  cronExpression: '0 * * * *',
  jobType: 'sync',
  payload: { source: 'api' },
  retryDelay: 60,
  retryBackoff: true,
  retryDelayMax: 600,
});
```

Every job enqueued by the schedule inherits the retry settings.

### Default behavior

When no retry options are set, the legacy formula `2^attempts * 60 seconds` is used. This is fully backward compatible.
## Maintenance

Use `createSupervisor()` to automate all maintenance tasks in a long-running server:

```typescript
const supervisor = queue.createSupervisor({
  intervalMs: 60_000,
  stuckJobsTimeoutMinutes: 10,
  cleanupJobsDaysToKeep: 30,
  cleanupEventsDaysToKeep: 30,
});
supervisor.startInBackground();
```

For serverless or one-off scripts, call `supervisor.start()` (runs once) or use the manual methods:

```typescript
await queue.reclaimStuckJobs(10); // reclaim jobs stuck > 10 min
await queue.cleanupOldJobs(30); // delete completed jobs > 30 days
await queue.cleanupOldJobEvents(30); // delete old events > 30 days
await queue.expireTimedOutTokens(); // expire overdue tokens
```