@nicnocquee/dataqueue 1.38.0 → 1.39.0-beta.20260322125514
- package/ai/docs-content.json +11 -5
- package/ai/rules/advanced.md +27 -0
- package/ai/rules/basic.md +1 -1
- package/ai/skills/dataqueue-advanced/SKILL.md +40 -1
- package/ai/skills/dataqueue-core/SKILL.md +9 -0
- package/dist/index.cjs +563 -61
- package/dist/index.cjs.map +1 -1
- package/dist/index.d.cts +93 -2
- package/dist/index.d.ts +93 -2
- package/dist/index.js +558 -62
- package/dist/index.js.map +1 -1
- package/migrations/1781200000009_add_depends_on_to_job_queue.sql +10 -0
- package/package.json +1 -1
- package/src/backends/postgres.ts +254 -29
- package/src/backends/redis-scripts.ts +100 -4
- package/src/backends/redis.ts +194 -24
- package/src/index.ts +8 -0
- package/src/job-dependencies.test.ts +129 -0
- package/src/job-dependencies.ts +140 -0
- package/src/types.ts +36 -0
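
The headline change in this release is job dependencies: `JobOptions` gains an optional `dependsOn` field (`jobIds` and/or `tags`), backed by a new migration and a new `job-dependencies` module. Below is a minimal sketch of chaining two jobs, based only on the documented `addJob` signature and the `dependsOn: { jobIds }` example in the docs diff that follows; the job types and payloads are hypothetical.

```ts
import { initJobQueue } from '@nicnocquee/dataqueue';

// Any valid PostgresJobQueueConfig works; this mirrors the docs' example env var.
const jobQueue = initJobQueue({
  databaseConfig: { connectionString: process.env.PG_DATAQUEUE_DATABASE },
});

// addJob resolves to the new job's ID (Promise<number>).
const extractId = await jobQueue.addJob({
  jobType: 'extract', // hypothetical job type, for illustration only
  payload: { source: 'report.csv' },
});

// This job is not runnable until job `extractId` completes (dependsOn.jobIds).
await jobQueue.addJob({
  jobType: 'transform', // hypothetical job type
  payload: { source: 'report.csv' },
  dependsOn: { jobIds: [extractId] },
});
```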
package/ai/docs-content.json
CHANGED
@@ -15,7 +15,7 @@
 "slug": "api",
 "title": "API Reference",
 "description": "",
-"content": "This section documents the main classes, types, and functions available for managing job queues, processing jobs, and interacting with the database.\n\n## API Surface\n\n- [JobQueue](/api/job-queue)\n- [JobOptions](/api/job-options)\n- [JobRecord](/api/job-record)\n- [JobEvent](/api/job-event)\n- [Processor](/api/processor)\n- [ProcessorOptions](/api/processor-options)\n- [Supervisor](/api/job-queue#background-supervisor)\n- [SupervisorOptions](/api/job-queue#supervisoroptions)\n- [SupervisorRunResult](/api/job-queue#supervisorrunresult)\n- [JobHandlers](/api/job-handlers)\n- [Database Utility](/api/db-util)\n- [Tags](/api/tags)"
+"content": "This section documents the main classes, types, and functions available for managing job queues, processing jobs, and interacting with the database.\n\n## API Surface\n\n- [JobQueue](/api/job-queue)\n- [JobOptions](/api/job-options)\n- [JobRecord](/api/job-record)\n- [JobEvent](/api/job-event)\n- [Processor](/api/processor)\n- [ProcessorOptions](/api/processor-options)\n- [Supervisor](/api/job-queue#background-supervisor)\n- [SupervisorOptions](/api/job-queue#supervisoroptions)\n- [SupervisorRunResult](/api/job-queue#supervisorrunresult)\n- [JobHandlers](/api/job-handlers)\n- [Database Utility](/api/db-util)\n- [Tags](/api/tags)\n\n## Guides\n\n- [Job dependencies](/usage/job-dependencies) — prerequisites via `dependsOn.jobIds`, tag drain via `dependsOn.tags`, and `batchDepRef` for `addJobs`"
 },
 {
 "slug": "api/job-event",
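
The Guides entry above also mentions a tag-drain barrier via `dependsOn.tags`. The guide page itself is not part of this diff, so the sketch below assumes "tag drain" means waiting for jobs carrying the listed tags to finish; the tag, job types, and payloads are hypothetical.

```ts
import { initJobQueue } from '@nicnocquee/dataqueue';

const jobQueue = initJobQueue({
  databaseConfig: { connectionString: process.env.PG_DATAQUEUE_DATABASE },
});

const chunks = ['chunk-1', 'chunk-2', 'chunk-3']; // hypothetical work items

// Fan-out: every chunk job carries the same tag.
await jobQueue.addJobs(
  chunks.map((chunk) => ({
    jobType: 'import_chunk', // hypothetical job type
    payload: { chunk },
    tags: ['import:2024-06'],
  })),
);

// Barrier: assuming "tag drain" means "wait until jobs tagged
// 'import:2024-06' have finished", this job runs only afterwards.
await jobQueue.addJob({
  jobType: 'import_finalize', // hypothetical job type
  payload: { month: '2024-06' },
  dependsOn: { tags: ['import:2024-06'] },
});
```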
@@ -33,19 +33,19 @@
 "slug": "api/job-options",
 "title": "JobOptions",
 "description": "",
-"content": "The `JobOptions` interface defines the options for creating a new job in the queue.\n\n## Fields\n\n- `jobType`: _string_ — The type of the job.\n- `payload`: _any_ — The payload for the job, type-safe per job\n type.\n- `maxAttempts?`: _number_ — Maximum number of attempts for\n this job (default: 3).\n- `priority?`: _number_ — Priority of the job (higher runs\n first, default: 0).\n- `runAt?`: _Date | null_ — When to run the job (default: now).\n- `timeoutMs?`: _number_ — Timeout for this job in milliseconds.\n If not set, uses the processor default or unlimited.\n- `forceKillOnTimeout?`: _boolean_ — If true, the job will be forcefully terminated (using Worker Threads) when timeout is reached. If false (default), the job will only receive an AbortSignal and must handle the abort gracefully.\n\n **⚠️ Runtime Requirements**: This option requires **Node.js** and will **not work** in Bun or other runtimes without worker thread support. See [Force Kill on Timeout](/usage/force-kill-timeout) for details.\n\n- `tags?`: _string[]_ — Tags for this job. Used for grouping, searching, or batch operations.\n- `idempotencyKey?`: _string_ — Optional idempotency key. When provided, ensures that only one job exists for a given key. If a job with the same key already exists, `addJob` returns the existing job's ID instead of creating a duplicate. See [Idempotency](/usage/add-job#idempotency) for details.\n- `deadLetterJobType?`: _string_ — Optional dead-letter destination job type. When the job exhausts retries, DataQueue creates a new pending job in this job type with an envelope payload containing source metadata, original payload, and failure context.\n\n## Example\n\n```ts\nconst job = {\n jobType: 'email',\n payload: { to: 'user@example.com', subject: 'Hello' },\n maxAttempts: 5,\n priority: 10,\n runAt: new Date(Date.now() + 60000), // run in 1 minute\n timeoutMs: 30000, // 30 seconds\n forceKillOnTimeout: false, // Use graceful shutdown (default)\n tags: ['welcome', 'user'], // tags for grouping/searching\n idempotencyKey: 'welcome-email-user-123', // prevent duplicate jobs\n deadLetterJobType: 'email_dead_letter', // route exhausted failures\n};\n```"
+"content": "The `JobOptions` interface defines the options for creating a new job in the queue.\n\n## Fields\n\n- `jobType`: _string_ — The type of the job.\n- `payload`: _any_ — The payload for the job, type-safe per job\n type.\n- `maxAttempts?`: _number_ — Maximum number of attempts for\n this job (default: 3).\n- `priority?`: _number_ — Priority of the job (higher runs\n first, default: 0).\n- `runAt?`: _Date | null_ — When to run the job (default: now).\n- `timeoutMs?`: _number_ — Timeout for this job in milliseconds.\n If not set, uses the processor default or unlimited.\n- `forceKillOnTimeout?`: _boolean_ — If true, the job will be forcefully terminated (using Worker Threads) when timeout is reached. If false (default), the job will only receive an AbortSignal and must handle the abort gracefully.\n\n **⚠️ Runtime Requirements**: This option requires **Node.js** and will **not work** in Bun or other runtimes without worker thread support. See [Force Kill on Timeout](/usage/force-kill-timeout) for details.\n\n- `tags?`: _string[]_ — Tags for this job. Used for grouping, searching, or batch operations.\n- `idempotencyKey?`: _string_ — Optional idempotency key. When provided, ensures that only one job exists for a given key. If a job with the same key already exists, `addJob` returns the existing job's ID instead of creating a duplicate. See [Idempotency](/usage/add-job#idempotency) for details.\n- `deadLetterJobType?`: _string_ — Optional dead-letter destination job type. When the job exhausts retries, DataQueue creates a new pending job in this job type with an envelope payload containing source metadata, original payload, and failure context.\n- `dependsOn?`: _JobDependsOn_ — Optional prerequisites. Wait for listed jobs to complete (`jobIds`) and/or for a tag-drain barrier (`tags`). See [Job dependencies](/usage/job-dependencies).\n\n## Example\n\n```ts\nconst job = {\n jobType: 'email',\n payload: { to: 'user@example.com', subject: 'Hello' },\n maxAttempts: 5,\n priority: 10,\n runAt: new Date(Date.now() + 60000), // run in 1 minute\n timeoutMs: 30000, // 30 seconds\n forceKillOnTimeout: false, // Use graceful shutdown (default)\n tags: ['welcome', 'user'], // tags for grouping/searching\n idempotencyKey: 'welcome-email-user-123', // prevent duplicate jobs\n deadLetterJobType: 'email_dead_letter', // route exhausted failures\n dependsOn: { jobIds: [42] }, // run only after job 42 completes\n};\n```"
 },
 {
 "slug": "api/job-queue",
 "title": "JobQueue",
 "description": "",
-"content": "## Initialization\n\n### initJobQueue\n\n```ts\ninitJobQueue(config: JobQueueConfig): JobQueue\n```\n\nInitializes the job queue system with the provided configuration. The `JobQueueConfig` is a discriminated union -- you provide either a PostgreSQL or Redis configuration.\n\n#### PostgresJobQueueConfig\n\nProvide either `databaseConfig` (the library creates a pool) or `pool` (bring your own `pg.Pool`). At least one must be set.\n\n```ts\ninterface PostgresJobQueueConfig {\n backend?: 'postgres'; // Optional, defaults to 'postgres'\n databaseConfig?: {\n connectionString?: string;\n host?: string;\n port?: number;\n database?: string;\n user?: string;\n password?: string;\n ssl?: DatabaseSSLConfig;\n };\n pool?: import('pg').Pool; // Bring your own pool\n verbose?: boolean;\n}\n```\n\n#### RedisJobQueueConfig\n\nProvide either `redisConfig` (the library creates an ioredis client) or `client` (bring your own). At least one must be set.\n\n```ts\ninterface RedisJobQueueConfig {\n backend: 'redis'; // Required\n redisConfig?: {\n url?: string;\n host?: string;\n port?: number;\n password?: string;\n db?: number;\n tls?: RedisTLSConfig;\n keyPrefix?: string; // Default: 'dq:'\n };\n client?: unknown; // Bring your own ioredis client\n keyPrefix?: string; // Key prefix when using external client (default: 'dq:')\n verbose?: boolean;\n}\n```\n\n#### JobQueueConfig\n\n```ts\ntype JobQueueConfig = PostgresJobQueueConfig | RedisJobQueueConfig;\n```\n\n#### DatabaseSSLConfig\n\n```ts\ninterface DatabaseSSLConfig {\n ca?: string;\n cert?: string;\n key?: string;\n rejectUnauthorized?: boolean;\n}\n```\n\n- `ca` - Client certificate authority (CA) as PEM string or file path. If the value starts with 'file://', it will be loaded from file, otherwise treated as PEM string.\n- `cert` - Client certificate as PEM string or file path. If the value starts with 'file://', it will be loaded from file, otherwise treated as PEM string.\n- `key` - Client private key as PEM string or file path. If the value starts with 'file://', it will be loaded from file, otherwise treated as PEM string.\n- `rejectUnauthorized` - Whether to reject unauthorized certificates (default: true)\n\n#### RedisTLSConfig\n\n```ts\ninterface RedisTLSConfig {\n ca?: string;\n cert?: string;\n key?: string;\n rejectUnauthorized?: boolean;\n}\n```\n\n---\n\n## Adding Jobs\n\n### addJob\n\n```ts\naddJob(job: JobOptions, options?: AddJobOptions): Promise<number>\n```\n\nAdds a job to the queue. Returns the job ID.\n\n#### JobOptions\n\n```ts\ninterface JobOptions {\n jobType: string;\n payload: any;\n maxAttempts?: number;\n priority?: number;\n runAt?: Date | null;\n timeoutMs?: number;\n tags?: string[];\n idempotencyKey?: string;\n retryDelay?: number; // Base delay between retries in seconds (default: 60)\n retryBackoff?: boolean; // Use exponential backoff (default: true)\n retryDelayMax?: number; // Max delay cap in seconds (default: none)\n deadLetterJobType?: string; // Route exhausted failures to this job type\n group?: { id: string; tier?: string }; // Optional group for global concurrency limits\n}\n```\n\n- `retryDelay` - Base delay between retries in seconds. When `retryBackoff` is true, this is the base for exponential backoff (`retryDelay * 2^attempts`). When false, retries use this fixed delay. Default: `60`.\n- `retryBackoff` - Whether to use exponential backoff. When true, delay doubles with each attempt and includes jitter. Default: `true`.\n- `retryDelayMax` - Maximum delay cap in seconds. Only meaningful when `retryBackoff` is true. No limit when omitted.\n- `deadLetterJobType` - Optional dead-letter destination. When retries are exhausted, a new pending job is created in this job type with an envelope payload (`originalJob`, `originalPayload`, `failure`).\n- `group` - Optional grouping metadata. Use `group.id` to enforce global per-group limits with `ProcessorOptions.groupConcurrency`. `group.tier` is reserved for future policies.\n\n#### AddJobOptions\n\n```ts\ninterface AddJobOptions {\n db?: DatabaseClient;\n}\n```\n\n- `db` — An external database client (e.g., a `pg.PoolClient` inside a transaction). When provided, the INSERT runs on this client instead of the internal pool. **PostgreSQL only.** Throws if used with the Redis backend.\n\n### addJobs\n\n```ts\naddJobs(jobs: JobOptions[], options?: AddJobOptions): Promise<number[]>\n```\n\nAdds multiple jobs to the queue in a single operation. More efficient than calling `addJob` in a loop because it batches the INSERT into a single database round-trip (PostgreSQL) or a single atomic Lua script (Redis).\n\nReturns an array of job IDs in the same order as the input array.\n\nEach job can independently have its own `priority`, `runAt`, `tags`, `idempotencyKey`, and other options. Idempotency keys are handled per-job — duplicates resolve to the existing job's ID without creating a new row.\n\nPassing an empty array returns `[]` immediately without touching the database.\n\n```ts\nconst jobIds = await jobQueue.addJobs([\n {\n jobType: 'email',\n payload: { to: 'a@example.com', subject: 'Hi', body: '...' },\n },\n {\n jobType: 'email',\n payload: { to: 'b@example.com', subject: 'Hi', body: '...' },\n priority: 10,\n },\n {\n jobType: 'report',\n payload: { reportId: '123', userId: '456' },\n tags: ['monthly'],\n },\n]);\n// jobIds = [1, 2, 3]\n```\n\nThe `{ db }` option works the same as `addJob` — pass a transactional client to batch-insert within an existing transaction (PostgreSQL only).\n\n#### DatabaseClient\n\n```ts\ninterface DatabaseClient {\n query(\n text: string,\n values?: any[],\n ): Promise<{ rows: any[]; rowCount: number | null }>;\n}\n```\n\nAny object matching this interface works — `pg.Pool`, `pg.PoolClient`, `pg.Client`, or ORM query runners that expose a raw `query()` method.\n\n---\n\n## Retrieving Jobs\n\n### getJob\n\n```ts\ngetJob(id: number): Promise<JobRecord | null>\n```\n\nRetrieves a job by its ID.\n\n### getJobs\n\n```ts\ngetJobs(\n filters?: {\n jobType?: string;\n priority?: number;\n runAt?: Date | { gt?: Date; gte?: Date; lt?: Date; lte?: Date; eq?: Date };\n tags?: { values: string[]; mode?: 'all' | 'any' | 'none' | 'exact' };\n },\n limit?: number,\n offset?: number\n): Promise<JobRecord[]>\n```\n\nRetrieves jobs matching the provided filters, with optional pagination.\n\n### getJobsByStatus\n\n```ts\ngetJobsByStatus(status: string, limit?: number, offset?: number): Promise<JobRecord[]>\n```\n\nRetrieves jobs by their status, with pagination.\n\n### getAllJobs\n\n```ts\ngetAllJobs(limit?: number, offset?: number): Promise<JobRecord[]>\n```\n\nRetrieves all jobs, with optional pagination.\n\n### getJobsByTags\n\n```ts\ngetJobsByTags(tags: string[], mode?: TagQueryMode, limit?: number, offset?: number): Promise<JobRecord[]>\n```\n\nRetrieves jobs by tag(s).\n\n---\n\n## Managing Jobs\n\n### retryJob\n\n```ts\nretryJob(jobId: number): Promise<void>\n```\n\nRetries a job given its ID.\n\n### cancelJob\n\n```ts\ncancelJob(jobId: number): Promise<void>\n```\n\nCancels a job given its ID.\n\n### editJob\n\n```ts\neditJob(jobId: number, updates: EditJobOptions): Promise<void>\n```\n\nEdits a pending job given its ID. Only works for jobs with status 'pending'. Silently fails for other statuses (processing, completed, failed, cancelled).\n\n#### EditJobOptions\n\n```ts\ninterface EditJobOptions {\n payload?: any;\n maxAttempts?: number;\n priority?: number;\n runAt?: Date | null;\n timeoutMs?: number;\n tags?: string[];\n retryDelay?: number | null;\n retryBackoff?: boolean | null;\n retryDelayMax?: number | null;\n deadLetterJobType?: string | null;\n}\n```\n\nAll fields are optional - only provided fields will be updated. Note that `jobType` cannot be changed. Set retry fields to `null` to revert to legacy default behavior. Set `deadLetterJobType` to `null` to clear dead-letter routing for pending jobs.\n\n#### Example\n\n```ts\n// Edit a pending job's payload and priority\nawait jobQueue.editJob(jobId, {\n payload: { to: 'newemail@example.com', subject: 'Updated' },\n priority: 10,\n});\n\n// Edit only the scheduled run time\nawait jobQueue.editJob(jobId, {\n runAt: new Date(Date.now() + 60000), // Run in 1 minute\n});\n\n// Edit multiple fields at once\nawait jobQueue.editJob(jobId, {\n payload: { to: 'updated@example.com' },\n priority: 5,\n maxAttempts: 10,\n timeoutMs: 30000,\n tags: ['urgent', 'priority'],\n});\n```\n\n### editAllPendingJobs\n\n```ts\neditAllPendingJobs(\n filters?: {\n jobType?: string;\n priority?: number;\n runAt?: Date | { gt?: Date; gte?: Date; lt?: Date; lte?: Date; eq?: Date };\n tags?: { values: string[]; mode?: 'all' | 'any' | 'none' | 'exact' };\n },\n updates: EditJobOptions\n): Promise<number>\n```\n\nEdits all pending jobs that match the filters. Only works for jobs with status 'pending'. Non-pending jobs are not affected. Returns the number of jobs that were edited.\n\n#### Parameters\n\n- `filters` (optional): Filters to select which jobs to edit. If not provided, all pending jobs are edited.\n - `jobType`: Filter by job type\n - `priority`: Filter by priority\n - `runAt`: Filter by scheduled run time (supports `gt`, `gte`, `lt`, `lte`, `eq` operators or exact Date match)\n - `tags`: Filter by tags with mode ('all', 'any', 'none', 'exact')\n- `updates`: The fields to update (same as `EditJobOptions`). All fields are optional - only provided fields will be updated.\n\n#### Returns\n\nThe number of jobs that were successfully edited.\n\n#### Examples\n\n```ts\n// Edit all pending jobs\nconst editedCount = await jobQueue.editAllPendingJobs(undefined, {\n priority: 10,\n});\n\n// Edit all pending email jobs\nconst editedCount = await jobQueue.editAllPendingJobs(\n { jobType: 'email' },\n {\n priority: 5,\n },\n);\n\n// Edit all pending jobs with 'urgent' tag\nconst editedCount = await jobQueue.editAllPendingJobs(\n { tags: { values: ['urgent'], mode: 'any' } },\n {\n priority: 10,\n maxAttempts: 5,\n },\n);\n\n// Edit all pending jobs scheduled in the future\nconst editedCount = await jobQueue.editAllPendingJobs(\n { runAt: { gte: new Date() } },\n {\n priority: 10,\n },\n);\n\n// Edit with combined filters\nconst editedCount = await jobQueue.editAllPendingJobs(\n {\n jobType: 'email',\n tags: { values: ['urgent'], mode: 'any' },\n },\n {\n priority: 10,\n maxAttempts: 5,\n },\n);\n```\n\n**Note:** Only pending jobs are edited. Jobs with other statuses (processing, completed, failed, cancelled) are not affected. Edit events are recorded for each affected job, just like single job edits.\n\n### cancelAllUpcomingJobs\n\n```ts\ncancelAllUpcomingJobs(filters?: {\n jobType?: string;\n priority?: number;\n runAt?: Date | { gt?: Date; gte?: Date; lt?: Date; lte?: Date; eq?: Date };\n tags?: { values: string[]; mode?: 'all' | 'any' | 'none' | 'exact' };\n}): Promise<number>\n```\n\nCancels all upcoming jobs that match the filters. Returns the number of jobs cancelled.\n\n### cleanupOldJobs\n\n```ts\ncleanupOldJobs(daysToKeep?: number): Promise<number>\n```\n\nCleans up jobs older than the specified number of days. Returns the number of jobs removed.\n\n### reclaimStuckJobs\n\n```ts\nreclaimStuckJobs(maxProcessingTimeMinutes?: number): Promise<number>\n```\n\nReclaims jobs stuck in 'processing' for too long. Returns the number of jobs reclaimed. If a job has a `timeoutMs` that is longer than the `maxProcessingTimeMinutes` threshold, the job's own timeout is used instead, preventing premature reclamation of long-running jobs.\n\n---\n\n## Job Events\n\n### getJobEvents\n\n```ts\ngetJobEvents(jobId: number): Promise<JobEvent[]>\n```\n\nRetrieves the job events for a job.\n\n#### JobEvent\n\n```ts\ninterface JobEvent {\n id: number;\n jobId: number;\n eventType: JobEventType;\n createdAt: Date;\n metadata: any;\n}\n```\n\n#### JobEventType\n\n```ts\nenum JobEventType {\n Added = 'added',\n Processing = 'processing',\n Completed = 'completed',\n Failed = 'failed',\n Cancelled = 'cancelled',\n Retried = 'retried',\n Edited = 'edited',\n}\n```\n\n---\n\n## Event Hooks\n\nDataQueue emits real-time events for job lifecycle transitions. Register listeners using `on`, `once`, `off`, and `removeAllListeners`. Works identically with both PostgreSQL and Redis backends.\n\n### QueueEventMap\n\n```ts\ninterface QueueEventMap {\n 'job:added': { jobId: number; jobType: string };\n 'job:processing': { jobId: number; jobType: string };\n 'job:completed': { jobId: number; jobType: string };\n 'job:failed': {\n jobId: number;\n jobType: string;\n error: Error;\n willRetry: boolean;\n };\n 'job:cancelled': { jobId: number };\n 'job:retried': { jobId: number };\n 'job:waiting': { jobId: number; jobType: string };\n 'job:progress': { jobId: number; progress: number };\n error: Error;\n}\n```\n\n### on\n\n```ts\non(event: QueueEventName, listener: (data) => void): void\n```\n\nRegister a listener that fires every time the event is emitted.\n\n### once\n\n```ts\nonce(event: QueueEventName, listener: (data) => void): void\n```\n\nRegister a one-time listener that auto-removes after the first invocation.\n\n### off\n\n```ts\noff(event: QueueEventName, listener: (data) => void): void\n```\n\nRemove a previously registered listener. Pass the exact function reference used with `on` or `once`.\n\n### removeAllListeners\n\n```ts\nremoveAllListeners(event?: QueueEventName): void\n```\n\nRemove all listeners for a specific event, or all listeners for all events when called without arguments.\n\nSee [Event Hooks](/usage/event-hooks) for detailed usage examples.\n\n---\n\n## Processing Jobs\n\n### createProcessor\n\n```ts\ncreateProcessor(\n handlers: JobHandlers,\n options?: ProcessorOptions\n): Processor\n```\n\nCreates a job processor with the provided handlers and options.\n\n#### ProcessorOptions\n\n```ts\ninterface ProcessorOptions {\n workerId?: string;\n batchSize?: number;\n concurrency?: number;\n groupConcurrency?: number;\n pollInterval?: number;\n onError?: (error: Error) => void;\n verbose?: boolean;\n jobType?: string | string[];\n}\n```\n\n- `groupConcurrency` - Optional global per-group concurrency limit (positive integer). Applies only to jobs with `group.id`; ungrouped jobs are unaffected.\n\n---\n\n## Background Supervisor\n\n### createSupervisor\n\n```ts\ncreateSupervisor(options?: SupervisorOptions): Supervisor\n```\n\nCreates a background supervisor that automatically runs maintenance tasks on a configurable interval: reclaiming stuck jobs, cleaning up old completed jobs/events, and expiring timed-out waitpoint tokens.\n\n#### SupervisorOptions\n\n```ts\ninterface SupervisorOptions {\n intervalMs?: number; // default: 60000\n stuckJobsTimeoutMinutes?: number; // default: 10\n cleanupJobsDaysToKeep?: number; // default: 30 (0 to disable)\n cleanupEventsDaysToKeep?: number; // default: 30 (0 to disable)\n cleanupBatchSize?: number; // default: 1000\n reclaimStuckJobs?: boolean; // default: true\n expireTimedOutTokens?: boolean; // default: true\n onError?: (error: Error) => void; // default: console.error\n verbose?: boolean;\n}\n```\n\n#### Supervisor\n\n```ts\ninterface Supervisor {\n start(): Promise<SupervisorRunResult>;\n startInBackground(): void;\n stop(): void;\n stopAndDrain(timeoutMs?: number): Promise<void>;\n isRunning(): boolean;\n}\n```\n\n- `start()` runs all tasks once and returns the results (serverless-friendly).\n- `startInBackground()` starts a background loop that runs every `intervalMs`.\n- `stopAndDrain()` stops the loop and waits for the current run to finish.\n\n#### SupervisorRunResult\n\n```ts\ninterface SupervisorRunResult {\n reclaimedJobs: number;\n cleanedUpJobs: number;\n cleanedUpEvents: number;\n expiredTokens: number;\n}\n```\n\nSee [Long-Running Server](/usage/long-running-server#background-supervisor) for usage examples.\n\n---\n\n## Accessing the Underlying Client\n\n### getPool\n\n```ts\ngetPool(): Pool\n```\n\nReturns the PostgreSQL connection pool instance. Only available when using the PostgreSQL backend.\n\n> **Note:** Throws an error if called when using the Redis backend.\n\n### getRedisClient\n\n```ts\ngetRedisClient(): Redis\n```\n\nReturns the `ioredis` client instance. Only available when using the Redis backend.\n\n> **Note:** Throws an error if called when using the PostgreSQL backend."
+"content": "## Initialization\n\n### initJobQueue\n\n```ts\ninitJobQueue(config: JobQueueConfig): JobQueue\n```\n\nInitializes the job queue system with the provided configuration. The `JobQueueConfig` is a discriminated union -- you provide either a PostgreSQL or Redis configuration.\n\n#### PostgresJobQueueConfig\n\nProvide either `databaseConfig` (the library creates a pool) or `pool` (bring your own `pg.Pool`). At least one must be set.\n\n```ts\ninterface PostgresJobQueueConfig {\n backend?: 'postgres'; // Optional, defaults to 'postgres'\n databaseConfig?: {\n connectionString?: string;\n host?: string;\n port?: number;\n database?: string;\n user?: string;\n password?: string;\n ssl?: DatabaseSSLConfig;\n };\n pool?: import('pg').Pool; // Bring your own pool\n verbose?: boolean;\n}\n```\n\n#### RedisJobQueueConfig\n\nProvide either `redisConfig` (the library creates an ioredis client) or `client` (bring your own). At least one must be set.\n\n```ts\ninterface RedisJobQueueConfig {\n backend: 'redis'; // Required\n redisConfig?: {\n url?: string;\n host?: string;\n port?: number;\n password?: string;\n db?: number;\n tls?: RedisTLSConfig;\n keyPrefix?: string; // Default: 'dq:'\n };\n client?: unknown; // Bring your own ioredis client\n keyPrefix?: string; // Key prefix when using external client (default: 'dq:')\n verbose?: boolean;\n}\n```\n\n#### JobQueueConfig\n\n```ts\ntype JobQueueConfig = PostgresJobQueueConfig | RedisJobQueueConfig;\n```\n\n#### DatabaseSSLConfig\n\n```ts\ninterface DatabaseSSLConfig {\n ca?: string;\n cert?: string;\n key?: string;\n rejectUnauthorized?: boolean;\n}\n```\n\n- `ca` - Client certificate authority (CA) as PEM string or file path. If the value starts with 'file://', it will be loaded from file, otherwise treated as PEM string.\n- `cert` - Client certificate as PEM string or file path. If the value starts with 'file://', it will be loaded from file, otherwise treated as PEM string.\n- `key` - Client private key as PEM string or file path. If the value starts with 'file://', it will be loaded from file, otherwise treated as PEM string.\n- `rejectUnauthorized` - Whether to reject unauthorized certificates (default: true)\n\n#### RedisTLSConfig\n\n```ts\ninterface RedisTLSConfig {\n ca?: string;\n cert?: string;\n key?: string;\n rejectUnauthorized?: boolean;\n}\n```\n\n---\n\n## Adding Jobs\n\n### addJob\n\n```ts\naddJob(job: JobOptions, options?: AddJobOptions): Promise<number>\n```\n\nAdds a job to the queue. Returns the job ID.\n\n#### JobOptions\n\n```ts\ninterface JobOptions {\n jobType: string;\n payload: any;\n maxAttempts?: number;\n priority?: number;\n runAt?: Date | null;\n timeoutMs?: number;\n tags?: string[];\n idempotencyKey?: string;\n retryDelay?: number; // Base delay between retries in seconds (default: 60)\n retryBackoff?: boolean; // Use exponential backoff (default: true)\n retryDelayMax?: number; // Max delay cap in seconds (default: none)\n deadLetterJobType?: string; // Route exhausted failures to this job type\n group?: { id: string; tier?: string }; // Optional group for global concurrency limits\n dependsOn?: { jobIds?: number[]; tags?: string[] }; // Prerequisites — see Job dependencies\n}\n```\n\n- `retryDelay` - Base delay between retries in seconds. When `retryBackoff` is true, this is the base for exponential backoff (`retryDelay * 2^attempts`). When false, retries use this fixed delay. Default: `60`.\n- `retryBackoff` - Whether to use exponential backoff. When true, delay doubles with each attempt and includes jitter. Default: `true`.\n- `retryDelayMax` - Maximum delay cap in seconds. Only meaningful when `retryBackoff` is true. No limit when omitted.\n- `deadLetterJobType` - Optional dead-letter destination. When retries are exhausted, a new pending job is created in this job type with an envelope payload (`originalJob`, `originalPayload`, `failure`).\n- `group` - Optional grouping metadata. Use `group.id` to enforce global per-group limits with `ProcessorOptions.groupConcurrency`. `group.tier` is reserved for future policies.\n- `dependsOn` - Optional prerequisites (`jobIds` and/or `tags`). See [Job dependencies](/usage/job-dependencies).\n\n#### AddJobOptions\n\n```ts\ninterface AddJobOptions {\n db?: DatabaseClient;\n}\n```\n\n- `db` — An external database client (e.g., a `pg.PoolClient` inside a transaction). When provided, the INSERT runs on this client instead of the internal pool. **PostgreSQL only.** Throws if used with the Redis backend.\n\n### addJobs\n\n```ts\naddJobs(jobs: JobOptions[], options?: AddJobOptions): Promise<number[]>\n```\n\nAdds multiple jobs to the queue in a single operation. More efficient than calling `addJob` in a loop because it batches the INSERT into a single database round-trip (PostgreSQL) or a single atomic Lua script (Redis).\n\nReturns an array of job IDs in the same order as the input array.\n\nEach job can independently have its own `priority`, `runAt`, `tags`, `idempotencyKey`, `dependsOn`, and other options. Idempotency keys are handled per-job — duplicates resolve to the existing job's ID without creating a new row.\n\nPassing an empty array returns `[]` immediately without touching the database.\n\n```ts\nconst jobIds = await jobQueue.addJobs([\n {\n jobType: 'email',\n payload: { to: 'a@example.com', subject: 'Hi', body: '...' },\n },\n {\n jobType: 'email',\n payload: { to: 'b@example.com', subject: 'Hi', body: '...' },\n priority: 10,\n },\n {\n jobType: 'report',\n payload: { reportId: '123', userId: '456' },\n tags: ['monthly'],\n },\n]);\n// jobIds = [1, 2, 3]\n```\n\nThe `{ db }` option works the same as `addJob` — pass a transactional client to batch-insert within an existing transaction (PostgreSQL only).\n\n#### DatabaseClient\n\n```ts\ninterface DatabaseClient {\n query(\n text: string,\n values?: any[],\n ): Promise<{ rows: any[]; rowCount: number | null }>;\n}\n```\n\nAny object matching this interface works — `pg.Pool`, `pg.PoolClient`, `pg.Client`, or ORM query runners that expose a raw `query()` method.\n\n---\n\n## Retrieving Jobs\n\n### getJob\n\n```ts\ngetJob(id: number): Promise<JobRecord | null>\n```\n\nRetrieves a job by its ID.\n\n### getJobs\n\n```ts\ngetJobs(\n filters?: {\n jobType?: string;\n priority?: number;\n runAt?: Date | { gt?: Date; gte?: Date; lt?: Date; lte?: Date; eq?: Date };\n tags?: { values: string[]; mode?: 'all' | 'any' | 'none' | 'exact' };\n },\n limit?: number,\n offset?: number\n): Promise<JobRecord[]>\n```\n\nRetrieves jobs matching the provided filters, with optional pagination.\n\n### getJobsByStatus\n\n```ts\ngetJobsByStatus(status: string, limit?: number, offset?: number): Promise<JobRecord[]>\n```\n\nRetrieves jobs by their status, with pagination.\n\n### getAllJobs\n\n```ts\ngetAllJobs(limit?: number, offset?: number): Promise<JobRecord[]>\n```\n\nRetrieves all jobs, with optional pagination.\n\n### getJobsByTags\n\n```ts\ngetJobsByTags(tags: string[], mode?: TagQueryMode, limit?: number, offset?: number): Promise<JobRecord[]>\n```\n\nRetrieves jobs by tag(s).\n\n---\n\n## Managing Jobs\n\n### retryJob\n\n```ts\nretryJob(jobId: number): Promise<void>\n```\n\nRetries a job given its ID.\n\n### cancelJob\n\n```ts\ncancelJob(jobId: number): Promise<void>\n```\n\nCancels a job given its ID.\n\n### editJob\n\n```ts\neditJob(jobId: number, updates: EditJobOptions): Promise<void>\n```\n\nEdits a pending job given its ID. Only works for jobs with status 'pending'. Silently fails for other statuses (processing, completed, failed, cancelled).\n\n#### EditJobOptions\n\n```ts\ninterface EditJobOptions {\n payload?: any;\n maxAttempts?: number;\n priority?: number;\n runAt?: Date | null;\n timeoutMs?: number;\n tags?: string[];\n retryDelay?: number | null;\n retryBackoff?: boolean | null;\n retryDelayMax?: number | null;\n deadLetterJobType?: string | null;\n}\n```\n\nAll fields are optional - only provided fields will be updated. Note that `jobType` cannot be changed. Set retry fields to `null` to revert to legacy default behavior. Set `deadLetterJobType` to `null` to clear dead-letter routing for pending jobs.\n\n#### Example\n\n```ts\n// Edit a pending job's payload and priority\nawait jobQueue.editJob(jobId, {\n payload: { to: 'newemail@example.com', subject: 'Updated' },\n priority: 10,\n});\n\n// Edit only the scheduled run time\nawait jobQueue.editJob(jobId, {\n runAt: new Date(Date.now() + 60000), // Run in 1 minute\n});\n\n// Edit multiple fields at once\nawait jobQueue.editJob(jobId, {\n payload: { to: 'updated@example.com' },\n priority: 5,\n maxAttempts: 10,\n timeoutMs: 30000,\n tags: ['urgent', 'priority'],\n});\n```\n\n### editAllPendingJobs\n\n```ts\neditAllPendingJobs(\n filters?: {\n jobType?: string;\n priority?: number;\n runAt?: Date | { gt?: Date; gte?: Date; lt?: Date; lte?: Date; eq?: Date };\n tags?: { values: string[]; mode?: 'all' | 'any' | 'none' | 'exact' };\n },\n updates: EditJobOptions\n): Promise<number>\n```\n\nEdits all pending jobs that match the filters. Only works for jobs with status 'pending'. Non-pending jobs are not affected. Returns the number of jobs that were edited.\n\n#### Parameters\n\n- `filters` (optional): Filters to select which jobs to edit. If not provided, all pending jobs are edited.\n - `jobType`: Filter by job type\n - `priority`: Filter by priority\n - `runAt`: Filter by scheduled run time (supports `gt`, `gte`, `lt`, `lte`, `eq` operators or exact Date match)\n - `tags`: Filter by tags with mode ('all', 'any', 'none', 'exact')\n- `updates`: The fields to update (same as `EditJobOptions`). All fields are optional - only provided fields will be updated.\n\n#### Returns\n\nThe number of jobs that were successfully edited.\n\n#### Examples\n\n```ts\n// Edit all pending jobs\nconst editedCount = await jobQueue.editAllPendingJobs(undefined, {\n priority: 10,\n});\n\n// Edit all pending email jobs\nconst editedCount = await jobQueue.editAllPendingJobs(\n { jobType: 'email' },\n {\n priority: 5,\n },\n);\n\n// Edit all pending jobs with 'urgent' tag\nconst editedCount = await jobQueue.editAllPendingJobs(\n { tags: { values: ['urgent'], mode: 'any' } },\n {\n priority: 10,\n maxAttempts: 5,\n },\n);\n\n// Edit all pending jobs scheduled in the future\nconst editedCount = await jobQueue.editAllPendingJobs(\n { runAt: { gte: new Date() } },\n {\n priority: 10,\n },\n);\n\n// Edit with combined filters\nconst editedCount = await jobQueue.editAllPendingJobs(\n {\n jobType: 'email',\n tags: { values: ['urgent'], mode: 'any' },\n },\n {\n priority: 10,\n maxAttempts: 5,\n },\n);\n```\n\n**Note:** Only pending jobs are edited. Jobs with other statuses (processing, completed, failed, cancelled) are not affected. Edit events are recorded for each affected job, just like single job edits.\n\n### cancelAllUpcomingJobs\n\n```ts\ncancelAllUpcomingJobs(filters?: {\n jobType?: string;\n priority?: number;\n runAt?: Date | { gt?: Date; gte?: Date; lt?: Date; lte?: Date; eq?: Date };\n tags?: { values: string[]; mode?: 'all' | 'any' | 'none' | 'exact' };\n}): Promise<number>\n```\n\nCancels all upcoming jobs that match the filters. Returns the number of jobs cancelled.\n\n### cleanupOldJobs\n\n```ts\ncleanupOldJobs(daysToKeep?: number): Promise<number>\n```\n\nCleans up jobs older than the specified number of days. Returns the number of jobs removed.\n\n### reclaimStuckJobs\n\n```ts\nreclaimStuckJobs(maxProcessingTimeMinutes?: number): Promise<number>\n```\n\nReclaims jobs stuck in 'processing' for too long. Returns the number of jobs reclaimed. If a job has a `timeoutMs` that is longer than the `maxProcessingTimeMinutes` threshold, the job's own timeout is used instead, preventing premature reclamation of long-running jobs.\n\n---\n\n## Job Events\n\n### getJobEvents\n\n```ts\ngetJobEvents(jobId: number): Promise<JobEvent[]>\n```\n\nRetrieves the job events for a job.\n\n#### JobEvent\n\n```ts\ninterface JobEvent {\n id: number;\n jobId: number;\n eventType: JobEventType;\n createdAt: Date;\n metadata: any;\n}\n```\n\n#### JobEventType\n\n```ts\nenum JobEventType {\n Added = 'added',\n Processing = 'processing',\n Completed = 'completed',\n Failed = 'failed',\n Cancelled = 'cancelled',\n Retried = 'retried',\n Edited = 'edited',\n}\n```\n\n---\n\n## Event Hooks\n\nDataQueue emits real-time events for job lifecycle transitions. Register listeners using `on`, `once`, `off`, and `removeAllListeners`. Works identically with both PostgreSQL and Redis backends.\n\n### QueueEventMap\n\n```ts\ninterface QueueEventMap {\n 'job:added': { jobId: number; jobType: string };\n 'job:processing': { jobId: number; jobType: string };\n 'job:completed': { jobId: number; jobType: string };\n 'job:failed': {\n jobId: number;\n jobType: string;\n error: Error;\n willRetry: boolean;\n };\n 'job:cancelled': { jobId: number };\n 'job:retried': { jobId: number };\n 'job:waiting': { jobId: number; jobType: string };\n 'job:progress': { jobId: number; progress: number };\n error: Error;\n}\n```\n\n### on\n\n```ts\non(event: QueueEventName, listener: (data) => void): void\n```\n\nRegister a listener that fires every time the event is emitted.\n\n### once\n\n```ts\nonce(event: QueueEventName, listener: (data) => void): void\n```\n\nRegister a one-time listener that auto-removes after the first invocation.\n\n### off\n\n```ts\noff(event: QueueEventName, listener: (data) => void): void\n```\n\nRemove a previously registered listener. Pass the exact function reference used with `on` or `once`.\n\n### removeAllListeners\n\n```ts\nremoveAllListeners(event?: QueueEventName): void\n```\n\nRemove all listeners for a specific event, or all listeners for all events when called without arguments.\n\nSee [Event Hooks](/usage/event-hooks) for detailed usage examples.\n\n---\n\n## Processing Jobs\n\n### createProcessor\n\n```ts\ncreateProcessor(\n handlers: JobHandlers,\n options?: ProcessorOptions\n): Processor\n```\n\nCreates a job processor with the provided handlers and options.\n\n#### ProcessorOptions\n\n```ts\ninterface ProcessorOptions {\n workerId?: string;\n batchSize?: number;\n concurrency?: number;\n groupConcurrency?: number;\n pollInterval?: number;\n onError?: (error: Error) => void;\n verbose?: boolean;\n jobType?: string | string[];\n}\n```\n\n- `groupConcurrency` - Optional global per-group concurrency limit (positive integer). Applies only to jobs with `group.id`; ungrouped jobs are unaffected.\n\n---\n\n## Background Supervisor\n\n### createSupervisor\n\n```ts\ncreateSupervisor(options?: SupervisorOptions): Supervisor\n```\n\nCreates a background supervisor that automatically runs maintenance tasks on a configurable interval: reclaiming stuck jobs, cleaning up old completed jobs/events, and expiring timed-out waitpoint tokens.\n\n#### SupervisorOptions\n\n```ts\ninterface SupervisorOptions {\n intervalMs?: number; // default: 60000\n stuckJobsTimeoutMinutes?: number; // default: 10\n cleanupJobsDaysToKeep?: number; // default: 30 (0 to disable)\n cleanupEventsDaysToKeep?: number; // default: 30 (0 to disable)\n cleanupBatchSize?: number; // default: 1000\n reclaimStuckJobs?: boolean; // default: true\n expireTimedOutTokens?: boolean; // default: true\n onError?: (error: Error) => void; // default: console.error\n verbose?: boolean;\n}\n```\n\n#### Supervisor\n\n```ts\ninterface Supervisor {\n start(): Promise<SupervisorRunResult>;\n startInBackground(): void;\n stop(): void;\n stopAndDrain(timeoutMs?: number): Promise<void>;\n isRunning(): boolean;\n}\n```\n\n- `start()` runs all tasks once and returns the results (serverless-friendly).\n- `startInBackground()` starts a background loop that runs every `intervalMs`.\n- `stopAndDrain()` stops the loop and waits for the current run to finish.\n\n#### SupervisorRunResult\n\n```ts\ninterface SupervisorRunResult {\n reclaimedJobs: number;\n cleanedUpJobs: number;\n cleanedUpEvents: number;\n expiredTokens: number;\n}\n```\n\nSee [Long-Running Server](/usage/long-running-server#background-supervisor) for usage examples.\n\n---\n\n## Accessing the Underlying Client\n\n### getPool\n\n```ts\ngetPool(): Pool\n```\n\nReturns the PostgreSQL connection pool instance. Only available when using the PostgreSQL backend.\n\n> **Note:** Throws an error if called when using the Redis backend.\n\n### getRedisClient\n\n```ts\ngetRedisClient(): Redis\n```\n\nReturns the `ioredis` client instance. Only available when using the Redis backend.\n\n> **Note:** Throws an error if called when using the PostgreSQL backend."
 },
 {
 "slug": "api/job-record",
 "title": "JobRecord",
 "description": "",
-"content": "The `JobRecord` interface represents a job stored in the queue, including its status, attempts, and metadata.\n\n## Fields\n\n- `id`: _number_ — Unique job ID.\n- `jobType`: _string_ — The type of the job.\n- `payload`: _any_ — The job payload.\n- `status`:\n _'pending' | 'processing' | 'completed' | 'failed' | 'cancelled' | 'waiting'_ —\n Current job status.\n- `createdAt`: _Date_ — When the job was created.\n- `updated_at`: _Date_ — When the job was last updated.\n- `locked_at`: _Date | null_ — When the job was locked for\n processing.\n- `locked_by`: _string | null_ — Worker that locked the job.\n- `attempts`: _number_ — Number of attempts so far.\n- `maxAttempts`: _number_ — Maximum allowed attempts.\n- `nextAttemptAt`: _Date | null_ — When the next attempt is\n scheduled.\n- `priority`: _number_ — Job priority.\n- `runAt`: _Date_ — When the job is scheduled to run.\n- `pendingReason?`: _string | null_ — Reason for pending\n status.\n- `errorHistory?`: _\{ message: string; timestamp: string \}[]_ — Error history for the job.\n- `timeoutMs?`: _number | null_ — Timeout for this job in\n milliseconds.\n- `failureReason?`: _FailureReason | null_ — Reason for last\n failure, if any.\n- `completedAt`: _Date | null_ — When the job was completed.\n- `startedAt`: _Date | null_ — When the job was first picked up\n for processing.\n- `lastRetriedAt`: _Date | null_ — When the job was last\n retried.\n- `lastFailedAt`: _Date | null_ — When the job last failed.\n- `lastCancelledAt`: _Date | null_ — When the job was last\n cancelled.\n- `tags?`: _string[]_ — Tags for this job. Used for grouping, searching, or batch operations.\n- `idempotencyKey?`: _string | null_ — The idempotency key for this job, if one was provided when the job was created.\n- `progress?`: _number | null_ — Progress percentage (0–100) reported by the handler via `ctx.setProgress()`. `null` if no progress has been reported. See [Progress Tracking](/usage/progress-tracking).\n- `output?`: _unknown_ — Handler output stored via `ctx.setOutput(data)` or by returning a value from the handler. `null` if no output has been stored. See [Job Output](/usage/job-output).\n- `deadLetterJobType?`: _string | null_ — Configured dead-letter destination job type for this job.\n- `deadLetteredAt?`: _Date | null_ — Timestamp when this job was routed to a dead-letter job.\n- `deadLetterJobId?`: _number | null_ — Linked dead-letter job ID created when retries were exhausted.\n\n## Example\n\n```json\n{\n \"id\": 1,\n \"jobType\": \"email\",\n \"payload\": { \"to\": \"user@example.com\", \"subject\": \"Hello\" },\n \"status\": \"completed\",\n \"createdAt\": \"2024-06-01T12:00:00Z\",\n \"tags\": [\"welcome\", \"user\"],\n \"idempotencyKey\": \"welcome-email-user-123\",\n \"progress\": 100,\n \"output\": { \"messageId\": \"abc-123\", \"sentAt\": \"2024-06-01T12:00:05Z\" }\n}\n```"
+"content": "The `JobRecord` interface represents a job stored in the queue, including its status, attempts, and metadata.\n\n## Fields\n\n- `id`: _number_ — Unique job ID.\n- `jobType`: _string_ — The type of the job.\n- `payload`: _any_ — The job payload.\n- `status`:\n _'pending' | 'processing' | 'completed' | 'failed' | 'cancelled' | 'waiting'_ —\n Current job status.\n- `createdAt`: _Date_ — When the job was created.\n- `updated_at`: _Date_ — When the job was last updated.\n- `locked_at`: _Date | null_ — When the job was locked for\n processing.\n- `locked_by`: _string | null_ — Worker that locked the job.\n- `attempts`: _number_ — Number of attempts so far.\n- `maxAttempts`: _number_ — Maximum allowed attempts.\n- `nextAttemptAt`: _Date | null_ — When the next attempt is\n scheduled.\n- `priority`: _number_ — Job priority.\n- `runAt`: _Date_ — When the job is scheduled to run.\n- `pendingReason?`: _string | null_ — Reason for pending\n status.\n- `errorHistory?`: _\{ message: string; timestamp: string \}[]_ — Error history for the job.\n- `timeoutMs?`: _number | null_ — Timeout for this job in\n milliseconds.\n- `failureReason?`: _FailureReason | null_ — Reason for last\n failure, if any.\n- `completedAt`: _Date | null_ — When the job was completed.\n- `startedAt`: _Date | null_ — When the job was first picked up\n for processing.\n- `lastRetriedAt`: _Date | null_ — When the job was last\n retried.\n- `lastFailedAt`: _Date | null_ — When the job last failed.\n- `lastCancelledAt`: _Date | null_ — When the job was last\n cancelled.\n- `tags?`: _string[]_ — Tags for this job. Used for grouping, searching, or batch operations.\n- `idempotencyKey?`: _string | null_ — The idempotency key for this job, if one was provided when the job was created.\n- `progress?`: _number | null_ — Progress percentage (0–100) reported by the handler via `ctx.setProgress()`. `null` if no progress has been reported. See [Progress Tracking](/usage/progress-tracking).\n- `output?`: _unknown_ — Handler output stored via `ctx.setOutput(data)` or by returning a value from the handler. `null` if no output has been stored. See [Job Output](/usage/job-output).\n- `deadLetterJobType?`: _string | null_ — Configured dead-letter destination job type for this job.\n- `deadLetteredAt?`: _Date | null_ — Timestamp when this job was routed to a dead-letter job.\n- `deadLetterJobId?`: _number | null_ — Linked dead-letter job ID created when retries were exhausted.\n- `dependsOnJobIds?`: _number[] | null_ — Prerequisite job ids set at enqueue time, if any. See [Job dependencies](/usage/job-dependencies).\n- `dependsOnTags?`: _string[] | null_ — Tag-drain prerequisite tags set at enqueue time, if any. See [Job dependencies](/usage/job-dependencies).\n\n## Example\n\n```json\n{\n \"id\": 1,\n \"jobType\": \"email\",\n \"payload\": { \"to\": \"user@example.com\", \"subject\": \"Hello\" },\n \"status\": \"completed\",\n \"createdAt\": \"2024-06-01T12:00:00Z\",\n \"tags\": [\"welcome\", \"user\"],\n \"idempotencyKey\": \"welcome-email-user-123\",\n \"progress\": 100,\n \"output\": { \"messageId\": \"abc-123\", \"sentAt\": \"2024-06-01T12:00:05Z\" }\n}\n```"
 },
 {
 "slug": "api/processor",
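
The JobRecord hunk above adds read-only `dependsOnJobIds` and `dependsOnTags` fields. Here is a small sketch of inspecting them on a fetched job, using the documented `getJob` signature; the job ID is a placeholder.

```ts
import { initJobQueue } from '@nicnocquee/dataqueue';

const jobQueue = initJobQueue({
  databaseConfig: { connectionString: process.env.PG_DATAQUEUE_DATABASE },
});

const job = await jobQueue.getJob(42); // 42 is a placeholder job ID
if (job) {
  // Both fields are null/absent for jobs enqueued without prerequisites.
  console.log(job.status);          // 'pending' | 'processing' | 'completed' | 'failed' | 'cancelled' | 'waiting'
  console.log(job.dependsOnJobIds); // e.g. [41]
  console.log(job.dependsOnTags);   // e.g. ['import:2024-06']
}
```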
@@ -141,7 +141,7 @@
|
|
|
141
141
|
"slug": "usage/add-job",
|
|
142
142
|
"title": "Add Job",
|
|
143
143
|
"description": "",
|
|
144
|
-
"content": "You can add jobs to the queue from your application logic, such as in a [server function](https://react.dev/reference/rsc/server-functions):\n\n```typescript title=\"@/app/actions/send-email.ts\"\n'use server';\n\nimport { getJobQueue } from '@/lib/queue';\nimport { revalidatePath } from 'next/cache';\n\nexport const sendEmail = async ({\n name,\n email,\n}: {\n name: string;\n email: string;\n}) => {\n // Add a welcome email job\n const jobQueue = getJobQueue();try {\n const runAt = new Date(Date.now() + 5 * 1000); // Run 5 seconds from nowconst job = await jobQueue.addJob({\n jobType: 'send_email',\n payload: {\n to: email,\n subject: 'Welcome to our platform!',\n body: `Hi ${name}, welcome to our platform!`,\n },\n priority: 10, // Higher number = higher priority\n runAt: runAt,\n tags: ['welcome', 'user'], // Add tags for grouping/searching\n });\n\n revalidatePath('/');\n return { job };\n } catch (error) {\n console.error('Error adding job:', error);\n throw error;\n }\n};\n```\n\nIn the example above, a job is added to the queue to send an email. The job type is `send_email`, and the payload includes the recipient's email, subject, and body.\n\nWhen adding a job, you can set its `priority`, schedule when it should run using `runAt`, and specify a timeout in milliseconds with `timeoutMs`.\n\nYou can also add `tags` (an array of strings) to group, search, or batch jobs by category. See [Tags](/api/tags) for more details.\n\n## Batch Insert\n\nWhen you need to enqueue many jobs at once, use `addJobs` instead of calling `addJob` in a loop. It batches the inserts into a single database round-trip (PostgreSQL) or a single atomic Lua script (Redis), which is significantly faster.\n\n```typescript title=\"@/app/actions/send-bulk.ts\"\n'use server';\n\nimport { getJobQueue } from '@/lib/queue';\n\nexport const sendBulkEmails = async (\n recipients: { email: string; name: string }[],\n) => {\n const jobQueue = getJobQueue();const jobIds = await jobQueue.addJobs(\n recipients.map((r) => ({\n jobType: 'send_email' as const,\n payload: {\n to: r.email,\n subject: 'Newsletter',\n body: `Hi ${r.name}, here's your update!`,\n },\n tags: ['newsletter'],\n })),\n );\n // jobIds[i] corresponds to recipients[i]\n return { jobIds };\n};\n```\n\n`addJobs` returns an array of job IDs in the **same order** as the input array. Each job can independently have its own `priority`, `runAt`, `tags`, `idempotencyKey`, and other options.\n\n- **Empty array**: `addJobs([])` returns `[]` immediately without touching the database.\n- **Idempotency**: Each job's `idempotencyKey` is handled independently. Duplicate keys resolve to the existing job's ID.\n- **Transactional**: The `{ db }` option works with `addJobs` the same way as `addJob` (PostgreSQL only).\n\n## Idempotency\n\nYou can provide an `idempotencyKey` when adding a job to prevent duplicate jobs. 
If a job with the same key already exists in the queue, `addJob` returns the existing job's ID instead of creating a new one.\n\nThis is useful for preventing duplicates caused by retries, double-clicks, webhook replays, or serverless function re-invocations.\n\n```typescript title=\"@/app/actions/send-welcome.ts\"\n'use server';\n\nimport { getJobQueue } from '@/lib/queue';\n\nexport const sendWelcomeEmail = async (userId: string, email: string) => {\n const jobQueue = getJobQueue();const jobId = await jobQueue.addJob({\n jobType: 'send_email',\n payload: {\n to: email,\n subject: 'Welcome!',\n body: `Welcome to our platform!`,\n },\n idempotencyKey: `welcome-email-${userId}`, // prevents duplicate welcome emails\n });\n\n return { jobId };\n};\n```\n\nIn the example above, calling `sendWelcomeEmail` multiple times for the same `userId` will only create one job. Subsequent calls return the existing job's ID.\n\n### Behavior\n\n- **No key provided**: Works exactly as before, no uniqueness check is performed.\n- **Key provided, no conflict**: The job is inserted and its new ID is returned.\n- **Key provided, conflict**: The existing job's ID is returned. The existing job is **not** updated.\n- **Scope**: The key is unique across the entire `job_queue` table regardless of job status. Once a key exists, it cannot be reused until the job is cleaned up via [`cleanupOldJobs`](/usage/cleanup-jobs).\n\n## Transactional Job Creation\n\n> **Note:** Transactional job creation is only available with the **PostgreSQL** backend.\n\nYou can insert a job within an existing database transaction by passing an external database client via the `db` option. This guarantees that the job is enqueued **atomically** with your other database writes — if the transaction rolls back, the job is never enqueued.\n\nThis is useful when you need to ensure that a job is only created when a related database operation succeeds (e.g., creating a user and enqueuing a welcome email in the same transaction).\n\n```typescript title=\"@/app/actions/register.ts\"\n'use server';\n\nimport { Pool } from 'pg';\nimport { getJobQueue } from '@/lib/queue';\n\nconst pool = new Pool({ connectionString: process.env.DATABASE_URL });\n\nexport const registerUser = async (email: string, name: string) => {\n const client = await pool.connect();\n try {\n await client.query('BEGIN');\n\n // Insert the user\n await client.query('INSERT INTO users (email, name) VALUES ($1, $2)', [\n email,\n name,\n ]);\n\n // Enqueue the welcome email in the same transactionconst jobQueue = getJobQueue();\n await jobQueue.addJob(\n {\n jobType: 'send_email',\n payload: { to: email, subject: 'Welcome!', body: `Hi ${name}!` },\n },\n { db: client }, // Use the transaction client\n );\n\n await client.query('COMMIT');\n } catch (error) {\n await client.query('ROLLBACK');\n throw error;\n } finally {\n client.release();\n }\n};\n```\n\n### How it works\n\n- When `db` is provided, the `INSERT` into the `job_queue` table and the associated job event are both executed on the supplied client.\n- The library does **not** call `client.release()` — you are responsible for managing the client lifecycle.\n- If the transaction is rolled back, both the job and its event are discarded.\n- When `db` is **not** provided, `addJob` behaves exactly as before (gets a connection from the internal pool)."
|
|
144
|
+
"content": "You can add jobs to the queue from your application logic, such as in a [server function](https://react.dev/reference/rsc/server-functions):\n\n```typescript title=\"@/app/actions/send-email.ts\"\n'use server';\n\nimport { getJobQueue } from '@/lib/queue';\nimport { revalidatePath } from 'next/cache';\n\nexport const sendEmail = async ({\n name,\n email,\n}: {\n name: string;\n email: string;\n}) => {\n // Add a welcome email job\n const jobQueue = getJobQueue();try {\n const runAt = new Date(Date.now() + 5 * 1000); // Run 5 seconds from nowconst job = await jobQueue.addJob({\n jobType: 'send_email',\n payload: {\n to: email,\n subject: 'Welcome to our platform!',\n body: `Hi ${name}, welcome to our platform!`,\n },\n priority: 10, // Higher number = higher priority\n runAt: runAt,\n tags: ['welcome', 'user'], // Add tags for grouping/searching\n });\n\n revalidatePath('/');\n return { job };\n } catch (error) {\n console.error('Error adding job:', error);\n throw error;\n }\n};\n```\n\nIn the example above, a job is added to the queue to send an email. The job type is `send_email`, and the payload includes the recipient's email, subject, and body.\n\nWhen adding a job, you can set its `priority`, schedule when it should run using `runAt`, and specify a timeout in milliseconds with `timeoutMs`.\n\nYou can also add `tags` (an array of strings) to group, search, or batch jobs by category. See [Tags](/api/tags) for more details.\n\n## Batch Insert\n\nWhen you need to enqueue many jobs at once, use `addJobs` instead of calling `addJob` in a loop. It batches the inserts into a single database round-trip (PostgreSQL) or a single atomic Lua script (Redis), which is significantly faster.\n\n```typescript title=\"@/app/actions/send-bulk.ts\"\n'use server';\n\nimport { getJobQueue } from '@/lib/queue';\n\nexport const sendBulkEmails = async (\n recipients: { email: string; name: string }[],\n) => {\n const jobQueue = getJobQueue();const jobIds = await jobQueue.addJobs(\n recipients.map((r) => ({\n jobType: 'send_email' as const,\n payload: {\n to: r.email,\n subject: 'Newsletter',\n body: `Hi ${r.name}, here's your update!`,\n },\n tags: ['newsletter'],\n })),\n );\n // jobIds[i] corresponds to recipients[i]\n return { jobIds };\n};\n```\n\n`addJobs` returns an array of job IDs in the **same order** as the input array. Each job can independently have its own `priority`, `runAt`, `tags`, `idempotencyKey`, and other options.\n\n- **Empty array**: `addJobs([])` returns `[]` immediately without touching the database.\n- **Idempotency**: Each job's `idempotencyKey` is handled independently. Duplicate keys resolve to the existing job's ID.\n- **Transactional**: The `{ db }` option works with `addJobs` the same way as `addJob` (PostgreSQL only).\n\n## Job dependencies\n\nUse `dependsOn` to wait for other jobs to finish (`jobIds`) and/or for a [tag drain](/usage/job-dependencies#dependsontags-tag-drain) (`tags`). In `addJobs`, use `batchDepRef` to point at other jobs in the same batch. See [Job dependencies](/usage/job-dependencies).\n\n## Idempotency\n\nYou can provide an `idempotencyKey` when adding a job to prevent duplicate jobs. 
If a job with the same key already exists in the queue, `addJob` returns the existing job's ID instead of creating a new one.\n\nThis is useful for preventing duplicates caused by retries, double-clicks, webhook replays, or serverless function re-invocations.\n\n```typescript title=\"@/app/actions/send-welcome.ts\"\n'use server';\n\nimport { getJobQueue } from '@/lib/queue';\n\nexport const sendWelcomeEmail = async (userId: string, email: string) => {\n const jobQueue = getJobQueue();const jobId = await jobQueue.addJob({\n jobType: 'send_email',\n payload: {\n to: email,\n subject: 'Welcome!',\n body: `Welcome to our platform!`,\n },\n idempotencyKey: `welcome-email-${userId}`, // prevents duplicate welcome emails\n });\n\n return { jobId };\n};\n```\n\nIn the example above, calling `sendWelcomeEmail` multiple times for the same `userId` will only create one job. Subsequent calls return the existing job's ID.\n\n### Behavior\n\n- **No key provided**: Works exactly as before, no uniqueness check is performed.\n- **Key provided, no conflict**: The job is inserted and its new ID is returned.\n- **Key provided, conflict**: The existing job's ID is returned. The existing job is **not** updated.\n- **Scope**: The key is unique across the entire `job_queue` table regardless of job status. Once a key exists, it cannot be reused until the job is cleaned up via [`cleanupOldJobs`](/usage/cleanup-jobs).\n\n## Transactional Job Creation\n\n> **Note:** Transactional job creation is only available with the **PostgreSQL** backend.\n\nYou can insert a job within an existing database transaction by passing an external database client via the `db` option. This guarantees that the job is enqueued **atomically** with your other database writes — if the transaction rolls back, the job is never enqueued.\n\nThis is useful when you need to ensure that a job is only created when a related database operation succeeds (e.g., creating a user and enqueuing a welcome email in the same transaction).\n\n```typescript title=\"@/app/actions/register.ts\"\n'use server';\n\nimport { Pool } from 'pg';\nimport { getJobQueue } from '@/lib/queue';\n\nconst pool = new Pool({ connectionString: process.env.DATABASE_URL });\n\nexport const registerUser = async (email: string, name: string) => {\n const client = await pool.connect();\n try {\n await client.query('BEGIN');\n\n // Insert the user\n await client.query('INSERT INTO users (email, name) VALUES ($1, $2)', [\n email,\n name,\n ]);\n\n // Enqueue the welcome email in the same transactionconst jobQueue = getJobQueue();\n await jobQueue.addJob(\n {\n jobType: 'send_email',\n payload: { to: email, subject: 'Welcome!', body: `Hi ${name}!` },\n },\n { db: client }, // Use the transaction client\n );\n\n await client.query('COMMIT');\n } catch (error) {\n await client.query('ROLLBACK');\n throw error;\n } finally {\n client.release();\n }\n};\n```\n\n### How it works\n\n- When `db` is provided, the `INSERT` into the `job_queue` table and the associated job event are both executed on the supplied client.\n- The library does **not** call `client.release()` — you are responsible for managing the client lifecycle.\n- If the transaction is rolled back, both the job and its event are discarded.\n- When `db` is **not** provided, `addJob` behaves exactly as before (gets a connection from the internal pool)."
 },
 {
 "slug": "usage/building-with-ai",
@@ -215,6 +215,12 @@
 "description": "",
"content": "After defining your job types, payloads, and handlers, you need to initialize the job queue which sets up the connection to your database backend.\n\n## PostgreSQL\n\n```typescript title=\"@lib/queue.ts\"\nimport { initJobQueue } from '@nicnocquee/dataqueue';\nimport { type JobPayloadMap } from './types/job-payload-map';\n\nlet jobQueue: ReturnType<typeof initJobQueue<JobPayloadMap>> | null = null;\n\nexport const getJobQueue = () => {\n if (!jobQueue) {jobQueue = initJobQueue<JobPayloadMap>({\n databaseConfig: {\n connectionString: process.env.PG_DATAQUEUE_DATABASE, // Set this in your environment\n },\n verbose: process.env.NODE_ENV === 'development',\n });\n }\n return jobQueue;\n};\n```\n\n> **Note:** The value of `connectionString` must be a [valid Postgres connection\n string](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING-URIS).\n For example:\n\n```dotenv\nPG_DATAQUEUE_DATABASE=postgresql://postgres:password@localhost:5432/my_database?search_path=my_schema\n```\n\n\n## Redis\n\nTo use Redis as the backend, set `backend: 'redis'` and provide `redisConfig` instead of `databaseConfig`:\n\n```typescript title=\"@lib/queue.ts\"\nimport { initJobQueue } from '@nicnocquee/dataqueue';\nimport { type JobPayloadMap } from './types/job-payload-map';\n\nlet jobQueue: ReturnType<typeof initJobQueue<JobPayloadMap>> | null = null;\n\nexport const getJobQueue = () => {\n if (!jobQueue) {jobQueue = initJobQueue<JobPayloadMap>({\n backend: 'redis',\n redisConfig: {\n url: process.env.REDIS_URL, // e.g. redis://localhost:6379\n },\n verbose: process.env.NODE_ENV === 'development',\n });\n }\n return jobQueue;\n};\n```\n\nYou can also connect using individual connection options instead of a URL:\n\n```typescript title=\"@lib/queue.ts\"\njobQueue = initJobQueue<JobPayloadMap>({\n backend: 'redis',\n redisConfig: {\n host: 'localhost',\n port: 6379,\n password: process.env.REDIS_PASSWORD,\n db: 0,\n keyPrefix: 'myapp:', // Optional, defaults to 'dq:'\n },\n verbose: process.env.NODE_ENV === 'development',\n});\n```\n\n> **Note:** The `keyPrefix` option lets you namespace all Redis keys. This is useful when\n sharing a Redis instance between multiple applications or multiple queues. The\n default prefix is `dq:`.\n\n---\n\n## Bring Your Own Pool / Client\n\nInstead of providing connection configuration, you can pass an existing connection instance. This is useful when your application already manages its own connection pool and you want to share it with dataqueue.\n\n### PostgreSQL — External Pool\n\n```typescript title=\"@lib/queue.ts\"\nimport { Pool } from 'pg';\nimport { initJobQueue } from '@nicnocquee/dataqueue';\nimport { type JobPayloadMap } from './types/job-payload-map';\n\nconst pool = new Pool({ connectionString: process.env.DATABASE_URL });\n\nconst jobQueue = initJobQueue<JobPayloadMap>({\n pool,verbose: process.env.NODE_ENV === 'development',\n});\n```\n\n### Redis — External Client\n\n```typescript title=\"@lib/queue.ts\"\nimport IORedis from 'ioredis';\nimport { initJobQueue } from '@nicnocquee/dataqueue';\nimport { type JobPayloadMap } from './types/job-payload-map';\n\nconst redis = new IORedis(process.env.REDIS_URL);\n\nconst jobQueue = initJobQueue<JobPayloadMap>({\n backend: 'redis',\n client: redis,keyPrefix: 'myapp:',\n verbose: process.env.NODE_ENV === 'development',\n});\n```\n\n> **Note:** **Connection ownership:** When you provide your own `pool` or `client`, the\n library will **not** close it on shutdown. 
You are responsible for calling\n `pool.end()` or `client.quit()` when your application exits.\n\n---\n\n## Using the Queue\n\nOnce initialized, you use the queue instance identically regardless of backend. The API is the same for both PostgreSQL and Redis.\n\n```typescript title=\"@/app/actions/send-email.ts\"\nimport { getJobQueue } from '@/lib/queue';\n\nconst sendEmail = async () => {const jobQueue = getJobQueue();\n await jobQueue.addJob({\n jobType: 'send_email',\n payload: {\n to: 'test@example.com',\n subject: 'Hello',\n body: 'Hello, world!',\n },\n });\n};\n```\n\n---\n\n## SSL Configuration (PostgreSQL)\n\nMost managed Postgres providers (like DigitalOcean, Supabase, etc.) require SSL connections and use their own CA certificate (.crt file) to sign the server's certificate. To securely verify the server's identity, you must configure your client to trust this CA certificate.\n\nYou can configure SSL for your database connection in several ways, depending on your environment and security requirements.\n\n### Using PEM Strings from Environment Variables\n\nThis is ideal for serverless environments where you cannot mount files. Store your CA certificate, and optionally client certificate and key, as environment variables then pass them to the `ssl` property of the `databaseConfig` object.\n\n```typescript title=\"@lib/queue.ts\"\nimport { initJobQueue } from '@nicnocquee/dataqueue';\nimport { type JobPayloadMap } from './types/job-payload-map';\n\nlet jobQueue: ReturnType<typeof initJobQueue<JobPayloadMap>> | null = null;\n\nexport const getJobQueue = () => {\n if (!jobQueue) {\n jobQueue = initJobQueue<JobPayloadMap>({\n databaseConfig: {\n connectionString: process.env.PG_DATAQUEUE_DATABASE, // Set this in your environment\n ssl: {\n ca: process.env.PGSSLROOTCERT, // PEM string: the content of your .crt file\n cert: process.env.PGSSLCERT, // PEM string (optional, for client authentication)\n key: process.env.PGSSLKEY, // PEM string (optional, for client authentication)\n rejectUnauthorized: true, // Always true for CA-signed certs\n },\n },\n verbose: process.env.NODE_ENV === 'development',\n });\n }\n return jobQueue;\n};\n```\n\n> **Note:** When using a custom CA certificate and `connectionString`, you must remove the\n `sslmode` parameter from the connection string. Otherwise, the connection will\n fail.\n\n### Using File Paths\n\nIf you have the CA certificate, client certificate, or key on disk, provide their absolute paths using the `file://` prefix. 
Only values starting with `file://` will be loaded from the file system; all others are treated as PEM strings.\n\n```typescript title=\"@lib/queue.ts\"\nimport { initJobQueue } from '@nicnocquee/dataqueue';\nimport { type JobPayloadMap } from './types/job-payload-map';\n\nlet jobQueue: ReturnType<typeof initJobQueue<JobPayloadMap>> | null = null;\n\nexport const getJobQueue = () => {\n if (!jobQueue) {\n jobQueue = initJobQueue<JobPayloadMap>({\n databaseConfig: {\n connectionString: process.env.PG_DATAQUEUE_DATABASE,\n ssl: {\n ca: 'file:///absolute/path/to/ca.crt', // Path to your provider's CA cert\n cert: 'file:///absolute/path/to/client.crt', // optional, for client authentication\n key: 'file:///absolute/path/to/client.key', // optional, for client authentication\n rejectUnauthorized: true,\n },\n },\n verbose: process.env.NODE_ENV === 'development',\n });\n }\n return jobQueue;\n};\n```\n\n> **Note:** When using a custom CA certificate and `connectionString`, you must remove the\n `sslmode` parameter from the connection string. Otherwise, the connection will\n fail.\n\n### Skipping Certificate Validation\n\nFor convenience, you can skip certificate validation (not recommended for production) by setting `rejectUnauthorized` to `false` and without providing a custom CA certificate.\n\n```typescript title=\"@lib/queue.ts\"\nimport { initJobQueue } from '@nicnocquee/dataqueue';\nimport { type JobPayloadMap } from './types/job-payload-map';\n\nlet jobQueue: ReturnType<typeof initJobQueue<JobPayloadMap>> | null = null;\n\nexport const getJobQueue = () => {\n if (!jobQueue) {\n jobQueue = initJobQueue<JobPayloadMap>({\n databaseConfig: {\n connectionString: process.env.PG_DATAQUEUE_DATABASE,\n ssl: {\n rejectUnauthorized: false,\n },\n },\n verbose: process.env.NODE_ENV === 'development',\n });\n }\n return jobQueue;\n};\n```\n\n> **Note:** When using `rejectUnauthorized: false` and `connectionString`, you must remove\n the `sslmode` parameter from the connection string. Otherwise, the connection\n will fail.\n\n---\n\n## TLS Configuration (Redis)\n\nIf your Redis server requires TLS (common with managed services like AWS ElastiCache, Redis Cloud, etc.), provide TLS options in the `redisConfig`:\n\n```typescript title=\"@lib/queue.ts\"\njobQueue = initJobQueue<JobPayloadMap>({\n backend: 'redis',\n redisConfig: {\n url: process.env.REDIS_URL,\n tls: {\n ca: process.env.REDIS_CA_CERT, // PEM string\n rejectUnauthorized: true,\n },\n },\n});\n```"
 },
+{
+"slug": "usage/job-dependencies",
+"title": "Job Dependencies",
+"description": "",
"content": "You can defer a job until prerequisites are satisfied by setting `dependsOn` on [`JobOptions`](/api/job-options). Both dimensions use **logical AND** when both are present.\n\n## `dependsOn.jobIds`\n\nThe job stays **pending** until **every** listed prerequisite has status `completed`.\n\n- **Invalid ids**: Enqueue fails if any id does not exist in the queue.\n- **Self-dependency**: A job cannot list its own id in `dependsOn.jobIds`.\n- **Cycles**: DataQueue rejects inserts that would create a dependency cycle between jobs.\n- **Failure or cancellation**: If any prerequisite ends as `failed` or `cancelled`, pending jobs that depend on it (transitively) are **cancelled**.\n\nUse this for explicit chains: _job B runs only after job A completes successfully_.\n\n```typescript title=\"Linear chain\"\nimport { getJobQueue } from '@/lib/queue';\n\nconst jobQueue = getJobQueue();\n\nconst a = await jobQueue.addJob({\n jobType: 'ingest',\n payload: { fileId: 'f1' },\n});\n\nawait jobQueue.addJob({\n jobType: 'transform',\n payload: { fileId: 'f1' },\n dependsOn: { jobIds: [a] },\n});\n```\n\n## `dependsOn.tags` (tag drain)\n\nThe job stays **pending** while **another** job (not itself) is **active** — `pending`, `processing`, or `waiting` — and that job’s `tags` are a **superset** of **every** tag listed in `dependsOn.tags` (same semantics as Postgres `tags @> depends_on_tags`).\n\nWhen no such blocking job exists, the dependent job becomes eligible to run (subject to `runAt`, workers, etc.).\n\n- **Failure or cancellation**: If a job that matches the tag barrier fails or is cancelled, pending jobs that listed those tags are **cancelled** (transitively).\n\nUse this for _drain_ patterns: _wait until no in-flight work is tagged in a certain way_ (for example a “wave” or “tenant” tag).\n\n```typescript title=\"Tag drain\"\nawait jobQueue.addJob({\n jobType: 'finalize_wave',\n payload: { wave: 2 },\n tags: ['wave:2'],\n dependsOn: { tags: ['wave:1'] },\n});\n```\n\n## Combining `jobIds` and `tags`\n\nIf you set both, **all** job-id prerequisites must be `completed` **and** the tag-drain condition must be clear before the job runs.\n\n## `addJob` vs `addJobs`\n\n### Single `addJob`\n\n`dependsOn.jobIds` must contain **positive** database ids only. Negative placeholders are **not** allowed (they are reserved for batch inserts).\n\n### Batch `addJobs` and `batchDepRef`\n\nWhen you enqueue several related jobs in one `addJobs` call, you can reference earlier jobs in the **same batch** using negative placeholders: `-(index + 1)` for the job at `index` in the array. Use the helper **`batchDepRef`** from `@nicnocquee/dataqueue` instead of hard-coding negatives.\n\n```typescript title=\"Batch dependencies\"\nimport { batchDepRef } from '@nicnocquee/dataqueue';\nimport { getJobQueue } from '@/lib/queue';\n\nconst jobQueue = getJobQueue();\n\nconst [idA, idB, idC] = await jobQueue.addJobs([\n { jobType: 'step_a', payload: {} },\n {\n jobType: 'step_b',\n payload: {},\n dependsOn: { jobIds: [batchDepRef(0)] },\n },\n {\n jobType: 'step_c',\n payload: {},\n dependsOn: { jobIds: [batchDepRef(0), batchDepRef(1)] },\n },\n]);\n```\n\nThis enqueues **three jobs in one round-trip**. 
Array indices are **0-based**: `batchDepRef(0)` means “the job at index 0 in this same array,” and `batchDepRef(1)` means “the job at index 1.” After insert, those placeholders become real ids.\n\n- **`step_a`** (index `0`) has no prerequisites; it can run as soon as a worker picks it up.\n- **`step_b`** waits for **`step_a`** only (`dependsOn: { jobIds: [batchDepRef(0)] }`).\n- **`step_c`** waits for **both** earlier jobs (`batchDepRef(0)` and `batchDepRef(1)`), so it runs only after `step_a` and `step_b` have reached `completed`.\n\n`[idA, idB, idC]` are the final database ids in the same order as the input array — the same ids that were written into each row’s dependency list when placeholders were resolved.\n\n> **Note:** `batchDepRef` is exported from `@nicnocquee/dataqueue`. Re-export it from your\n queue module if you prefer a single import path.\n\n## Persisted fields on `JobRecord`\n\nPrerequisites are stored on the row as [`dependsOnJobIds`](/api/job-record) and [`dependsOnTags`](/api/job-record) for inspection and debugging.\n\n## See also\n\n- [`JobOptions`](/api/job-options) — full `dependsOn` / `JobDependsOn` shape\n- [`JobRecord`](/api/job-record) — persisted prerequisite columns\n- [Add job](/usage/add-job) — `addJob` / `addJobs` batch behavior"
+},
 {
 "slug": "usage/job-events",
 "title": "Job Events",
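
The job-dependencies page above states that `jobIds` and `tags` combine with logical AND but shows no combined example. A minimal illustrative sketch, not part of the package diff itself; the job types, the tag value, and the `getJobQueue` helper are borrowed from the surrounding docs as assumptions:

```typescript
import { getJobQueue } from '@/lib/queue';

const jobQueue = getJobQueue();

// Prerequisite job, tagged so it also participates in the tag barrier.
// 'export_data' and 'notify_tenant' are hypothetical job types.
const exportId = await jobQueue.addJob({
  jobType: 'export_data',
  payload: { tenantId: 't1' },
  tags: ['tenant:t1'],
});

// Per the page above, this job waits for exportId to reach `completed`
// AND for no other active job tagged 'tenant:t1' to remain (logical AND).
await jobQueue.addJob({
  jobType: 'notify_tenant',
  payload: { tenantId: 't1' },
  dependsOn: { jobIds: [exportId], tags: ['tenant:t1'] },
});
```
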
package/ai/rules/advanced.md
CHANGED
@@ -56,6 +56,33 @@ The processor auto-enqueues due cron jobs before each batch. Manage with `pauseC
 - `ctx.onTimeout(() => ms)` — reactive; return ms to extend, or nothing to let timeout proceed.
 - `forceKillOnTimeout: true` — terminates handler via Worker Thread. Requires Node.js, serializable handler, and disables `ctx.run`/waits/`prolong`/`onTimeout`.
 
+## Job Dependencies
+
+Defer enqueue eligibility with `dependsOn` on `addJob` / `addJobs`. PostgreSQL and Redis both support this. If both `jobIds` and `tags` are set, **all** conditions must pass (logical AND).
+
+### `dependsOn.jobIds`
+
+The job stays **pending** until **every** listed prerequisite has status `completed`. Enqueue fails if any id is missing, if a job depends on itself, or if the graph would contain a cycle. If a prerequisite ends `failed` or `cancelled`, dependent pending jobs are **cancelled** (transitively).
+
+Single `addJob` calls must use **positive** database ids only.
+
+### `dependsOn.tags` (tag drain)
+
+The job stays **pending** while **another** job (not itself) is **active** (`pending`, `processing`, or `waiting`) and that job’s `tags` are a **superset** of every tag in `dependsOn.tags`. When no such blocker exists, the job becomes eligible. Matching jobs that fail or cancel also cancel dependents waiting on those tags.
+
+### Same-batch `addJobs` — `batchDepRef`
+
+Use `batchDepRef(batchIndex)` from `@nicnocquee/dataqueue` to reference the job at `batchIndex` in the **same** `addJobs` array (negative placeholders resolved after insert). Hard-coding negative ids is discouraged.
+
+```typescript
+import { batchDepRef } from '@nicnocquee/dataqueue';
+
+await queue.addJobs([
+  { jobType: 'a', payload: {} },
+  { jobType: 'b', payload: {}, dependsOn: { jobIds: [batchDepRef(0)] } },
+]);
+```
+
 ## Tags and Filtering
 
 ```typescript
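
The timeout rules above describe `ctx.onTimeout` without an example in this hunk. A minimal sketch under stated assumptions: the `(payload, ctx)` handler shape and the loose `ctx` typing are inferences, not confirmed by this diff:

```typescript
// Sketch of the reactive timeout hook named above: return ms to extend,
// return nothing to let the timeout proceed.
const handlers = {
  // 'send_report' is a hypothetical job type; ctx is typed loosely here
  // because the exact context typings are not shown in this diff.
  send_report: async (payload: { reportId: string }, ctx: any) => {
    let extendedOnce = false;
    ctx.onTimeout(() => {
      if (!extendedOnce) {
        extendedOnce = true;
        return 15_000; // ask for 15 more seconds, once
      }
      // no return value: allow the timeout to proceed
    });
    // ... long-running work ...
  },
};
```
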
package/ai/rules/basic.md
CHANGED
@@ -86,7 +86,7 @@ const ids = await queue.addJobs([
 // ids[i] corresponds to the i-th input job
 ```
 
-Both support `idempotencyKey`, `priority`, `runAt`, `tags`, optional `group: { id, tier? }`, and `{ db }` for transactional inserts (PostgreSQL only).
+Both support `idempotencyKey`, `priority`, `runAt`, `tags`, optional `group: { id, tier? }`, optional `dependsOn` for prerequisite jobs or tag-drain barriers, and `{ db }` for transactional inserts (PostgreSQL only).
 
 ## Handlers
 
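
For illustration of the option list in the changed line above, a hedged sketch; the `queue` instance, job type, key, tags, and group id are hypothetical placeholders:

```typescript
// One batch entry exercising the per-job options named in the rule above.
const ids = await queue.addJobs([
  {
    jobType: 'send_email',
    payload: { to: 'a@example.com' },
    idempotencyKey: 'welcome-a', // de-duplicates repeat enqueues
    priority: 5, // higher runs first
    runAt: new Date(Date.now() + 60_000), // delay one minute
    tags: ['welcome'],
    group: { id: 'tenant-a' }, // optional group: { id, tier? } per the rule
    dependsOn: { tags: ['migration'] }, // optional tag-drain barrier
  },
]);
```
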
package/ai/skills/dataqueue-advanced/SKILL.md
CHANGED
@@ -1,10 +1,49 @@
 ---
 name: dataqueue-advanced
-description: Advanced DataQueue patterns — step memoization, waits, tokens, cron, timeouts, tags, idempotency.
+description: Advanced DataQueue patterns — job dependencies, step memoization, waits, tokens, cron, timeouts, tags, idempotency.
 ---
 
 # DataQueue Advanced Patterns
 
+## Job Dependencies
+
+Use `dependsOn` on `addJob` or `addJobs` so a job stays **pending** until prerequisites are satisfied (PostgreSQL and Redis). Combining `jobIds` and `tags` requires **both** to be clear (logical AND).
+
+### Prerequisites by job id (`dependsOn.jobIds`)
+
+The job runs only after **every** listed job has reached `completed`. DataQueue validates ids, rejects self-dependencies and cycles, and cancels dependents (transitively) if a prerequisite ends `failed` or `cancelled`.
+
+- **`addJob`**: use only **positive** existing job ids.
+- **`addJobs`**: import `batchDepRef` from `@nicnocquee/dataqueue` to point at another entry in the **same** batch — e.g. `dependsOn: { jobIds: [batchDepRef(0)] }` waits for the job at index `0`.
+
+```typescript
+import { batchDepRef } from '@nicnocquee/dataqueue';
+
+const [idA, idB] = await queue.addJobs([
+  { jobType: 'ingest', payload: { fileId: '1' } },
+  {
+    jobType: 'transform',
+    payload: { fileId: '1' },
+    dependsOn: { jobIds: [batchDepRef(0)] },
+  },
+]);
+```
+
+### Tag drain (`dependsOn.tags`)
+
+Wait until there is **no** other active job (`pending`, `processing`, or `waiting`) whose `tags` are a **superset** of every tag in `dependsOn.tags`. Use for “wave” or tenant barriers. If a matching job fails or is cancelled, dependent jobs waiting on those tags are cancelled (transitively).
+
+```typescript
+await queue.addJob({
+  jobType: 'finalize_wave',
+  payload: { wave: 2 },
+  tags: ['wave:2'],
+  dependsOn: { tags: ['wave:1'] },
+});
+```
+
+Persisted fields on `JobRecord`: `dependsOnJobIds`, `dependsOnTags`.
+
 ## Step Memoization with ctx.run()
 
 Wrap side-effectful work in `ctx.run(stepName, fn)`. Results are cached in the database — when the handler re-runs after a wait, completed steps replay from cache without re-executing.
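
The skill above documents `batchDepRef` only for hand-written chains. As a supplementary sketch built from the documented index-based placeholder semantics (the `queue` instance and the 'scrape'/'merge' job types are assumptions), a fan-out/fan-in batch in one `addJobs` call:

```typescript
import { batchDepRef } from '@nicnocquee/dataqueue';

const pages = ['p1', 'p2', 'p3'];

await queue.addJobs([
  // Fan out: one 'scrape' job per page, occupying indices 0..pages.length-1.
  ...pages.map((page) => ({ jobType: 'scrape' as const, payload: { page } })),
  // Fan in: the merge job lists every scrape index as a prerequisite,
  // so it runs only after all scrapes reach `completed`.
  {
    jobType: 'merge' as const,
    payload: { pages },
    dependsOn: { jobIds: pages.map((_, i) => batchDepRef(i)) },
  },
]);
```
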
package/ai/skills/dataqueue-core/SKILL.md
CHANGED
@@ -143,6 +143,15 @@ const jobIds = await queue.addJobs([
 
 Each job can independently have its own `idempotencyKey`, `priority`, `runAt`, `tags`, etc. The `{ db }` transactional option is also supported (PostgreSQL only).
 
+### Job dependencies
+
+Optional `dependsOn` defers a job until prerequisites are satisfied:
+
+- `dependsOn.jobIds` — wait until every listed job is `completed` (ids must exist; cycles and self-deps are rejected).
+- `dependsOn.tags` — tag-drain: wait while another active job’s tags are a superset of every listed tag.
+
+For multiple jobs in one `addJobs` call, import `batchDepRef` and pass `batchDepRef(0)`, `batchDepRef(1)`, etc., to depend on earlier entries in the same array. See the **dataqueue-advanced** skill for failure/cancellation propagation and full semantics.
+
 ### Transactional Job Creation (PostgreSQL only)
 
 Pass an external `pg.PoolClient` inside a transaction via `{ db: client }`: