@nicnocquee/dataqueue 1.34.0 → 1.35.0-beta.20260224110011

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -78,12 +78,69 @@ await queue.addJob({
 
  Returns existing job ID if key already exists. Key persists until `cleanupOldJobs` removes the job.
 
+ ## Transactional Job Creation (PostgreSQL Only)
+
+ Pass a `pg.PoolClient` inside a transaction via the `{ db }` option to enqueue a job atomically with other writes:
+
+ ```typescript
+ const client = await pool.connect();
+ await client.query('BEGIN');
+ await client.query('INSERT INTO users (email) VALUES ($1)', [email]);
+ await queue.addJob(
+   {
+     jobType: 'send_email',
+     payload: { to: email, subject: 'Welcome!', body: '...' },
+   },
+   { db: client },
+ );
+ await client.query('COMMIT');
+ client.release();
+ ```
+
+ If the transaction rolls back, the job and its event are never persisted. The `db` option accepts any object with a `.query(text, values)` method matching `pg`'s signature. Using `{ db }` with the Redis backend throws an error.
+
+ ## Retry Strategy
+
+ ```typescript
+ await queue.addJob({
+   jobType: 'email',
+   payload,
+   retryDelay: 10, // base 10s
+   retryBackoff: true, // exponential (default)
+   retryDelayMax: 300, // cap at 5 min
+ });
+ ```
+
+ - `retryBackoff: false` — fixed delay of `retryDelay` seconds.
+ - `retryBackoff: true` (default) — `retryDelay * 2^attempts` with jitter, capped by `retryDelayMax`.
+ - No config — legacy `2^attempts * 60s` formula (backward compatible).
+ - Cron schedules propagate retry config to enqueued jobs.
+
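The bullet rules above can be condensed into a single delay function. This is an illustrative sketch only, not the library's source; in particular the jitter distribution (50–100% of the capped delay) is an assumption:

```typescript
// Illustrative sketch of the retry rules above (not the library's implementation).
// Assumption: jitter picks a value between 50% and 100% of the capped delay.
function nextRetryDelaySeconds(
  attempts: number,
  opts: { retryDelay?: number; retryBackoff?: boolean; retryDelayMax?: number } = {},
): number {
  const { retryDelay, retryBackoff = true, retryDelayMax } = opts;
  if (retryDelay === undefined) return 2 ** attempts * 60; // legacy formula
  if (!retryBackoff) return retryDelay; // fixed delay
  const exponential = retryDelay * 2 ** attempts;
  const capped =
    retryDelayMax === undefined ? exponential : Math.min(exponential, retryDelayMax);
  return capped * (0.5 + Math.random() * 0.5); // assumed jitter shape
}
```

With `retryDelay: 10` and `retryDelayMax: 300`, the pre-jitter delay grows 10, 20, 40, … and never exceeds 300 seconds.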
+ ## Event Hooks
+
+ Subscribe to real-time lifecycle events via `on`, `once`, `off`, `removeAllListeners`. Works with both Postgres and Redis.
+
+ ```typescript
+ queue.on('job:completed', ({ jobId, jobType }) => {
+   metrics.increment('job.completed', { jobType });
+ });
+ queue.on('job:failed', ({ jobId, jobType, error, willRetry }) => {
+   if (!willRetry) alertOps(`Permanent failure: ${jobId}`);
+ });
+ queue.on('error', (error) => Sentry.captureException(error));
+ ```
+
+ Events: `job:added`, `job:processing`, `job:completed`, `job:failed` (with `willRetry`), `job:cancelled`, `job:retried`, `job:waiting`, `job:progress`, `job:output`, `error`.
+
+ `error` events fire alongside `onError` callbacks in `ProcessorOptions` / `SupervisorOptions` — both mechanisms work independently.
+
  ## Scaling
 
  - Increase `batchSize` and `concurrency` for higher throughput.
+ - Use `group: { id }` on jobs with `groupConcurrency` on processors when you need global per-tenant/per-account fairness.
  - Run multiple processor instances with unique `workerId` values — `FOR UPDATE SKIP LOCKED` (PostgreSQL) or Lua scripts (Redis) prevent double-claiming.
  - Use `jobType` filter for specialized workers.
- - Call `cleanupOldJobs` and `reclaimStuckJobs` on intervals.
+ - Use `createSupervisor()` to automate maintenance (reclaim stuck jobs, cleanup, token expiry). Safe to run across multiple instances.
 
  ## Progress Tracking
 
@@ -92,3 +149,23 @@ await ctx.setProgress(50); // 0–100, persisted to DB
  ```
 
  Read via `queue.getJob(id)` (`progress` field) or React SDK's `useJob` hook.
+
+ ## Job Output
+
+ Store results via `ctx.setOutput(data)` or by returning a value from the handler:
+
+ ```typescript
+ // Option 1: return a value
+ const handler = async (payload, signal, ctx) => {
+   const result = await doWork(payload);
+   return { url: result.downloadUrl };
+ };
+
+ // Option 2: ctx.setOutput (takes precedence over return value)
+ const handler = async (payload, signal, ctx) => {
+   const result = await doWork(payload);
+   await ctx.setOutput({ url: result.downloadUrl });
+ };
+ ```
+
+ Read via `queue.getJob(id)` (`output` field) or React SDK's `useJob` hook (`output` property).
package/ai/rules/basic.md CHANGED
@@ -40,6 +40,54 @@ export const getJobQueue = () => {
 
  For Redis, set `backend: 'redis'` and use `redisConfig` with `url` or `host`/`port`/`password`. Install `ioredis` as a peer dependency.
 
+ ### Bring Your Own Pool / Client
+
+ Pass an existing `pg.Pool` or `ioredis` client instead of connection config:
+
+ ```typescript
+ import { Pool } from 'pg';
+ const pool = new Pool({ connectionString: process.env.DATABASE_URL });
+ jobQueue = initJobQueue<JobPayloadMap>({ pool });
+ ```
+
+ ```typescript
+ import IORedis from 'ioredis';
+ const redis = new IORedis(process.env.REDIS_URL);
+ jobQueue = initJobQueue<JobPayloadMap>({
+   backend: 'redis',
+   client: redis,
+   keyPrefix: 'myapp:',
+ });
+ ```
+
+ The library will **not** close externally provided connections on shutdown.
+
+ ## Adding Jobs
+
+ Use `addJob` for a single job, `addJobs` for bulk inserts (single DB round-trip).
+
+ ```typescript
+ const id = await queue.addJob({
+   jobType: 'send_email',
+   payload: { to: 'a@x.com', subject: 'Hi', body: '...' },
+ });
+
+ const ids = await queue.addJobs([
+   {
+     jobType: 'send_email',
+     payload: { to: 'a@x.com', subject: 'Hi', body: '...' },
+   },
+   {
+     jobType: 'send_email',
+     payload: { to: 'b@x.com', subject: 'Hi', body: '...' },
+     priority: 10,
+   },
+ ]);
+ // ids[i] corresponds to the i-th input job
+ ```
+
+ Both support `idempotencyKey`, `priority`, `runAt`, `tags`, optional `group: { id, tier? }`, and `{ db }` for transactional inserts (PostgreSQL only).
+
  ## Handlers
 
  Type handlers as `JobHandlers<PayloadMap>` so TypeScript enforces a handler for every job type.
@@ -65,26 +113,48 @@ Handler signature: `(payload: T, signal: AbortSignal, ctx: JobContext) => Promis
  const processor = queue.createProcessor(handlers, {
    batchSize: 10,
    concurrency: 3,
+   groupConcurrency: 2, // optional global cap per group.id
  });
  await processor.start();
  ```
 
- **Long-running** — call `processor.startInBackground()` which polls continuously:
+ **Long-running** — call `processor.startInBackground()` which polls continuously, and `createSupervisor()` to automate maintenance:
 
  ```typescript
  processor.startInBackground();
+
+ const supervisor = queue.createSupervisor({
+   intervalMs: 60_000,
+   stuckJobsTimeoutMinutes: 10,
+   cleanupJobsDaysToKeep: 30,
+ });
+ supervisor.startInBackground();
+
  process.on('SIGTERM', async () => {
-   await processor.stopAndDrain(30000);
+   await Promise.all([
+     processor.stopAndDrain(30000),
+     supervisor.stopAndDrain(30000),
+   ]);
    queue.getPool().end(); // or queue.getRedisClient().quit() for Redis
    process.exit(0);
  });
  ```
 
+ ## Retry Configuration
+
+ Control retry behavior per-job with optional fields on `addJob`:
+
+ - `retryDelay` (seconds, default 60) — base delay between retries.
+ - `retryBackoff` (boolean, default true) — enable exponential backoff with jitter.
+ - `retryDelayMax` (seconds, optional) — cap the maximum delay.
+
+ When none are set, the legacy `2^attempts * 60s` formula is used.
+
  ## Common Mistakes
 
  1. Creating `initJobQueue` per request — use a singleton.
  2. Missing handler for a job type — fails with `NoHandler`. Type as `JobHandlers<PayloadMap>`.
  3. Not checking `signal.aborted` in long handlers — timed-out jobs keep running.
- 4. Forgetting `reclaimStuckJobs()` — crashed workers leave jobs stuck.
+ 4. Skipping maintenance — use `createSupervisor()` to automate reclaim, cleanup, and token expiry. Without it, stuck jobs and old data accumulate.
  5. Skipping migrations (PostgreSQL) — run `dataqueue-cli migrate` first. Redis needs none.
  6. Using `stop()` instead of `stopAndDrain()` — leaves in-flight jobs stuck.
@@ -10,7 +10,7 @@ Install: `npm install @nicnocquee/dataqueue-react` (requires React 18+).
  'use client';
  import { useJob } from '@nicnocquee/dataqueue-react';
 
- const { status, progress, data, isLoading, error } = useJob(jobId, {
+ const { status, progress, output, data, isLoading, error } = useJob(jobId, {
    fetcher: (id) =>
      fetch(`/api/jobs/${id}`)
        .then((r) => r.json())
@@ -81,3 +81,7 @@ Wrap handlers with your auth middleware before exporting GET/POST.
  ## Progress Tracking
 
  Use `ctx.setProgress(percent)` in handlers (0–100). The value appears in `useJob`'s `progress` field and the dashboard detail view.
+
+ ## Job Output
+
+ Store results via `ctx.setOutput(data)` or by returning a value from the handler. The value appears in `useJob`'s `output` field and the dashboard detail view. If both are used, `ctx.setOutput()` takes precedence.
@@ -170,6 +170,56 @@ await queue.addJob({
  - Handler must be serializable (no closures over external variables).
  - `prolong`, `onTimeout`, `ctx.run`, waits are NOT available.
 
+ ## Event Hooks
+
+ Subscribe to real-time job lifecycle events. Works identically with PostgreSQL and Redis.
+
+ ```typescript
+ const queue = initJobQueue<MyPayloadMap>(config);
+
+ queue.on('job:completed', ({ jobId, jobType }) => {
+   console.log(`Job ${jobId} (${jobType}) completed`);
+ });
+
+ queue.on('job:failed', ({ jobId, jobType, error, willRetry }) => {
+   console.error(`Job ${jobId} failed: ${error.message}`);
+   if (!willRetry) {
+     alertOps(`Permanent failure for job ${jobId}`);
+   }
+ });
+
+ queue.on('error', (error) => {
+   Sentry.captureException(error);
+ });
+ ```
+
+ ### Available events
+
+ | Event            | Payload                                |
+ | ---------------- | -------------------------------------- |
+ | `job:added`      | `{ jobId, jobType }`                   |
+ | `job:processing` | `{ jobId, jobType }`                   |
+ | `job:completed`  | `{ jobId, jobType }`                   |
+ | `job:failed`     | `{ jobId, jobType, error, willRetry }` |
+ | `job:cancelled`  | `{ jobId }`                            |
+ | `job:retried`    | `{ jobId }`                            |
+ | `job:waiting`    | `{ jobId, jobType }`                   |
+ | `job:progress`   | `{ jobId, progress }`                  |
+ | `error`          | `Error`                                |
+
+ ### Listener management
+
+ ```typescript
+ const listener = ({ jobId }) => console.log(jobId);
+ queue.on('job:completed', listener);
+ queue.off('job:completed', listener);
+ queue.once('job:added', ({ jobId }) => console.log('First job:', jobId));
+ queue.removeAllListeners('job:completed');
+ queue.removeAllListeners(); // all events
+ ```
+
+ The `error` event fires alongside `onError` callbacks in `ProcessorOptions` and `SupervisorOptions` — both mechanisms work independently.
+
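That dual reporting can be pictured with a plain Node `EventEmitter`. A conceptual sketch, not library code; `OnErrorOptions` below is a reduced stand-in for the real options types:

```typescript
import { EventEmitter } from 'node:events';

// Conceptual sketch (not library source): report a failure through both an
// onError callback and an 'error' event, so either mechanism alone observes it.
type OnErrorOptions = { onError?: (error: Error) => void }; // reduced stand-in

function reportError(emitter: EventEmitter, options: OnErrorOptions, error: Error): void {
  options.onError?.(error); // callback mechanism
  // Guard: in Node, emitting 'error' with no listener registered throws.
  if (emitter.listenerCount('error') > 0) emitter.emit('error', error);
}
```

Because neither path gates the other, registering only a listener, only a callback, or both all work.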
  ## Tags
 
  ```typescript
@@ -189,6 +239,28 @@ await queue.cancelAllUpcomingJobs({
 
  Tag query modes: `'exact'`, `'all'`, `'any'`, `'none'`.
 
+ ## Group-Based Concurrency
+
+ Use job `group.id` plus processor `groupConcurrency` to enforce a global cap per group across all workers/instances (PostgreSQL and Redis).
+
+ ```typescript
+ await queue.addJob({
+   jobType: 'email',
+   payload: {
+     /* ... */
+   },
+   group: { id: 'tenant_abc', tier: 'gold' }, // tier is optional/reserved
+ });
+
+ const processor = queue.createProcessor(handlers, {
+   batchSize: 20,
+   concurrency: 10,
+   groupConcurrency: 2,
+ });
+ ```
+
+ Ungrouped jobs are unaffected by `groupConcurrency`.
+
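The gating rule can be illustrated with a small claim filter. This is a conceptual sketch of the semantics only, not the library's actual SQL/Lua claim path:

```typescript
// Conceptual sketch: a job is claimable unless its group already has
// `groupConcurrency` jobs in flight. Ungrouped jobs are never gated.
type PendingJob = { id: number; group?: { id: string } };

function claimable(
  jobs: PendingJob[],
  inFlightPerGroup: Map<string, number>,
  groupConcurrency: number,
): PendingJob[] {
  const counts = new Map(inFlightPerGroup); // copy so claims within this batch count too
  const result: PendingJob[] = [];
  for (const job of jobs) {
    const gid = job.group?.id;
    if (gid === undefined) {
      result.push(job); // ungrouped: always claimable
      continue;
    }
    const n = counts.get(gid) ?? 0;
    if (n < groupConcurrency) {
      counts.set(gid, n + 1);
      result.push(job);
    }
  }
  return result;
}
```

With `groupConcurrency: 2` and one `tenant_abc` job already running, a batch of three `tenant_abc` jobs yields only one more claim, while ungrouped jobs pass through.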
  ## Idempotency
 
  ```typescript
@@ -201,8 +273,117 @@ const jobId = await queue.addJob({
 
  If a job with the same key exists, returns the existing job ID. Key is unique across all statuses until `cleanupOldJobs` removes it.
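The contract reads as insert-or-return-existing. An in-memory sketch of that behavior (the real library enforces it with a unique key in the backing store, and the key is freed when cleanup deletes the job):

```typescript
// In-memory sketch of the idempotency contract (not the storage-backed logic).
class IdempotentQueueSketch {
  private byKey = new Map<string, number>();
  private nextId = 1;

  addJob(opts: { jobType: string; idempotencyKey?: string }): number {
    const key = opts.idempotencyKey;
    const existing = key === undefined ? undefined : this.byKey.get(key);
    if (existing !== undefined) return existing; // same key: return existing job ID
    const id = this.nextId++;
    if (key !== undefined) this.byKey.set(key, id);
    return id;
  }

  // Stand-in for the key becoming reusable once cleanup removes the job.
  removeJob(key: string): void {
    this.byKey.delete(key);
  }
}
```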
 
+ ## Transactional Job Creation (PostgreSQL Only)
+
+ Insert a job within an existing database transaction so the job is enqueued **atomically** with other writes:
+
+ ```typescript
+ import { Pool } from 'pg';
+
+ const pool = new Pool({ connectionString: process.env.DATABASE_URL });
+
+ async function registerUser(email: string, name: string) {
+   const client = await pool.connect();
+   try {
+     await client.query('BEGIN');
+
+     await client.query('INSERT INTO users (email, name) VALUES ($1, $2)', [
+       email,
+       name,
+     ]);
+
+     const queue = getJobQueue();
+     await queue.addJob(
+       {
+         jobType: 'send_email',
+         payload: { to: email, subject: 'Welcome!', body: `Hi ${name}!` },
+       },
+       { db: client },
+     );
+
+     await client.query('COMMIT');
+   } catch (error) {
+     await client.query('ROLLBACK');
+     throw error;
+   } finally {
+     client.release();
+   }
+ }
+ ```
+
+ The `db` option accepts any object matching `DatabaseClient { query(text, values): Promise<{ rows, rowCount }> }` — works with `pg.PoolClient`, `pg.Client`, or compatible ORM query runners.
+
+ The job event (`'added'`) is also inserted within the same transaction.
+
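Since `db` is duck-typed, anything satisfying that shape works. A minimal sketch of a conforming client, e.g. a recording fake for unit tests (the interface below restates the shape quoted above; it is not imported from the package):

```typescript
// Restates the duck type quoted above; illustrative, not imported from the package.
type QueryResult = { rows: unknown[]; rowCount: number };

interface DatabaseClient {
  query(text: string, values?: unknown[]): Promise<QueryResult>;
}

// A recording fake that satisfies the shape, useful for asserting what SQL ran.
class RecordingClient implements DatabaseClient {
  readonly queries: Array<{ text: string; values?: unknown[] }> = [];

  async query(text: string, values?: unknown[]): Promise<QueryResult> {
    this.queries.push({ text, values });
    return { rows: [], rowCount: 0 };
  }
}
```

An ORM's raw query runner can be adapted to this shape the same way, as long as it returns `rows` and `rowCount`.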
+ ## Retry Strategy
+
+ Configure how failed jobs are retried with `retryDelay`, `retryBackoff`, and `retryDelayMax`.
+
+ ### Fixed delay
+
+ ```typescript
+ await queue.addJob({
+   jobType: 'email',
+   payload: {
+     /* ... */
+   },
+   maxAttempts: 5,
+   retryDelay: 30, // 30 seconds between each retry
+   retryBackoff: false,
+ });
+ ```
+
+ ### Exponential backoff with cap
+
+ ```typescript
+ await queue.addJob({
+   jobType: 'email',
+   payload: {
+     /* ... */
+   },
+   maxAttempts: 10,
+   retryDelay: 5, // base: 5 seconds
+   retryBackoff: true, // default — delay doubles each attempt with jitter
+   retryDelayMax: 300, // never wait more than 5 minutes
+ });
+ ```
+
+ ### Cron schedules with retry config
+
+ ```typescript
+ await queue.addCronJob({
+   scheduleName: 'daily-sync',
+   cronExpression: '0 0 * * *', // daily at midnight
+   jobType: 'sync',
+   payload: { source: 'api' },
+   retryDelay: 60,
+   retryBackoff: true,
+   retryDelayMax: 600,
+ });
+ ```
+
+ Every job enqueued by the schedule inherits the retry settings.
+
+ ### Default behavior
+
+ When no retry options are set, the legacy formula `2^attempts * 60 seconds` is used. This is fully backward compatible.
+
  ## Maintenance
 
+ Use `createSupervisor()` to automate all maintenance tasks in a long-running server:
+
+ ```typescript
+ const supervisor = queue.createSupervisor({
+   intervalMs: 60_000,
+   stuckJobsTimeoutMinutes: 10,
+   cleanupJobsDaysToKeep: 30,
+   cleanupEventsDaysToKeep: 30,
+ });
+ supervisor.startInBackground();
+ ```
+
+ For serverless or one-off scripts, call `supervisor.start()` (runs once) or use the manual methods:
+
  ```typescript
  await queue.reclaimStuckJobs(10); // reclaim jobs stuck > 10 min
  await queue.cleanupOldJobs(30); // delete completed jobs > 30 days
@@ -38,7 +38,8 @@ export const jobHandlers: JobHandlers<JobPayloadMap> = {
   },
   generate_report: async (payload, signal) => {
     if (signal.aborted) return;
-     await generateReport(payload.reportId, payload.userId);
+     const url = await generateReport(payload.reportId, payload.userId);
+     return { url }; // stored as job output, readable via getJob()
   },
 };
 ```
@@ -79,6 +80,30 @@ jobQueue = initJobQueue<JobPayloadMap>({
  });
  ```
 
+ ### Bring Your Own Pool / Client
+
+ You can pass an existing `pg.Pool` or `ioredis` client instead of connection config:
+
+ ```typescript
+ import { Pool } from 'pg';
+ const pool = new Pool({ connectionString: process.env.DATABASE_URL });
+
+ jobQueue = initJobQueue<JobPayloadMap>({ pool });
+ ```
+
+ ```typescript
+ import IORedis from 'ioredis';
+ const redis = new IORedis(process.env.REDIS_URL);
+
+ jobQueue = initJobQueue<JobPayloadMap>({
+   backend: 'redis',
+   client: redis,
+   keyPrefix: 'myapp:',
+ });
+ ```
+
+ When you provide your own pool/client, the library will **not** close it on shutdown — you manage its lifecycle.
+
  ## Step 4: Add Jobs
 
  ```typescript
@@ -89,9 +114,75 @@ const jobId = await queue.addJob({
    runAt: new Date(Date.now() + 5000),
    tags: ['welcome'],
    idempotencyKey: 'welcome-user-123',
+   group: { id: 'tenant_123' }, // optional: for global per-group concurrency limits
  });
  ```
 
+ ### Batch Insert
+
+ Use `addJobs` to insert many jobs in a single database round-trip. Returns IDs in the same order as the input array.
+
+ ```typescript
+ const jobIds = await queue.addJobs([
+   {
+     jobType: 'send_email',
+     payload: { to: 'a@example.com', subject: 'Hi', body: '...' },
+   },
+   {
+     jobType: 'send_email',
+     payload: { to: 'b@example.com', subject: 'Hi', body: '...' },
+     priority: 10,
+   },
+   {
+     jobType: 'generate_report',
+     payload: { reportId: '1', userId: '2' },
+     tags: ['monthly'],
+   },
+ ]);
+ ```
+
+ Each job can independently have its own `idempotencyKey`, `priority`, `runAt`, `tags`, etc. The `{ db }` transactional option is also supported (PostgreSQL only).
+
+ ### Transactional Job Creation (PostgreSQL only)
+
+ Pass an external `pg.PoolClient` inside a transaction via `{ db: client }`:
+
+ ```typescript
+ const client = await pool.connect();
+ await client.query('BEGIN');
+ await client.query('INSERT INTO users (email) VALUES ($1)', [email]);
+ await queue.addJob(
+   {
+     jobType: 'send_email',
+     payload: { to: email, subject: 'Welcome!', body: '...' },
+   },
+   { db: client },
+ );
+ await client.query('COMMIT');
+ client.release();
+ ```
+
+ If the transaction rolls back, the job is never enqueued.
+
+ ### Retry configuration
+
+ Control retry behavior per-job with `retryDelay`, `retryBackoff`, and `retryDelayMax`:
+
+ ```typescript
+ await queue.addJob({
+   jobType: 'send_email',
+   payload: { to: 'user@example.com', subject: 'Hi', body: 'Hello' },
+   maxAttempts: 5,
+   retryDelay: 10, // base delay: 10 seconds
+   retryBackoff: true, // exponential backoff (default)
+   retryDelayMax: 300, // cap at 5 minutes
+ });
+ ```
+
+ - **Fixed delay**: set `retryBackoff: false` for constant delay between retries.
+ - **Exponential backoff** (default): delay doubles each attempt with jitter.
+ - **Default**: when no retry options are set, legacy `2^attempts * 60s` is used.
+
  ## Step 5: Process Jobs
 
  ### Serverless (one-shot)
@@ -100,6 +191,7 @@ const jobId = await queue.addJob({
  const processor = queue.createProcessor(handlers, {
    batchSize: 10,
    concurrency: 3,
+   groupConcurrency: 2, // optional global cap per group.id across all workers
  });
  const processed = await processor.start();
  ```
@@ -114,8 +206,20 @@ const processor = queue.createProcessor(handlers, {
  });
  processor.startInBackground();
 
+ // Automate maintenance (reclaim stuck jobs, cleanup old data, expire tokens)
+ const supervisor = queue.createSupervisor({
+   intervalMs: 60_000,
+   stuckJobsTimeoutMinutes: 10,
+   cleanupJobsDaysToKeep: 30,
+   cleanupEventsDaysToKeep: 30,
+ });
+ supervisor.startInBackground();
+
  process.on('SIGTERM', async () => {
-   await processor.stopAndDrain(30000);
+   await Promise.all([
+     processor.stopAndDrain(30000),
+     supervisor.stopAndDrain(30000),
+   ]);
    queue.getPool().end();
    process.exit(0);
  });
@@ -126,6 +230,8 @@ process.on('SIGTERM', async () => {
  1. **Creating a new queue per request** — always use a singleton. Each `initJobQueue` creates a DB pool.
  2. **Missing handler for a job type** — the job fails with `FailureReason.NoHandler`. Let TypeScript enforce completeness by typing handlers as `JobHandlers<PayloadMap>`.
  3. **Not checking `signal.aborted`** — timed-out jobs keep running in the background. Always check the signal in long-running handlers.
- 4. **Forgetting `reclaimStuckJobs`** — crashed workers leave jobs stuck in `processing`. Call `reclaimStuckJobs()` periodically.
+ 4. **Skipping maintenance** — use `createSupervisor()` to automate reclaiming stuck jobs, cleaning up old data, and expiring tokens. Without it, crashed workers leave jobs stuck in `processing` and tables grow unbounded.
  5. **Forgetting to run migrations** — PostgreSQL requires `dataqueue-cli migrate` before use. Redis needs no migrations.
  6. **Not calling `stopAndDrain` on shutdown** — use `stopAndDrain()` (not `stop()`) for graceful shutdown to avoid stuck jobs.
+ 7. **Forgetting to commit/rollback when using `db` option** — the `addJob` INSERT sits in an open transaction. If you never `COMMIT` or `ROLLBACK`, the connection leaks and the job is invisible to other sessions.
+ 8. **Using `db` option with Redis** — transactional job creation is PostgreSQL only. The Redis backend throws if `db` is provided.
@@ -102,13 +102,14 @@ export async function GET(
 
  ### useJob Return Value
 
- | Field       | Type                | Description                     |
- | ----------- | ------------------- | ------------------------------- |
- | `data`      | `JobData \| null`   | Latest job data from fetcher    |
- | `status`    | `JobStatus \| null` | Current job status              |
- | `progress`  | `number \| null`    | Progress percentage (0–100)     |
- | `isLoading` | `boolean`           | True until first fetch resolves |
- | `error`     | `Error \| null`     | Last fetch error                |
+ | Field       | Type                | Description                                            |
+ | ----------- | ------------------- | ------------------------------------------------------ |
+ | `data`      | `JobData \| null`   | Latest job data from fetcher                           |
+ | `status`    | `JobStatus \| null` | Current job status                                     |
+ | `progress`  | `number \| null`    | Progress percentage (0–100)                            |
+ | `output`    | `unknown \| null`   | Handler output from `ctx.setOutput()` or return value  |
+ | `isLoading` | `boolean`           | True until first fetch resolves                        |
+ | `error`     | `Error \| null`     | Last fetch error                                       |
 
  ## Dashboard — @nicnocquee/dataqueue-dashboard
 
@@ -187,3 +188,14 @@ const handler = async (payload, signal, ctx) => {
    }
  };
  ```
+
+ ### Job Output from Handlers
+
+ Store results via `ctx.setOutput(data)` or by returning a value from the handler. Exposed via `getJob()` (`output` field) and the `useJob` hook's `output` property. If both are used, `ctx.setOutput()` takes precedence.
+
+ ```typescript
+ const handler = async (payload, signal, ctx) => {
+   const result = await doWork(payload);
+   return { url: result.downloadUrl }; // stored as output
+ };
+ ```