@nicnocquee/dataqueue 1.33.0 → 1.34.0

@@ -0,0 +1,278 @@
1
+ [
2
+ {
3
+ "slug": "api/db-util",
4
+ "title": "Database Utility",
5
+ "description": "",
6
+ "content": "The `createPool` function creates a PostgreSQL connection pool for use with the job queue system.\n\n> **Note:** This utility is only relevant for the **PostgreSQL backend**. If you're using\n the Redis backend, you don't need this function.\n\n## Function\n\n```ts\ncreatePool(config: PostgresJobQueueConfig['databaseConfig']): Pool\n```\n\n- `config`: The database connection configuration (connection string, host, port, database, user, password, ssl).\n- Returns a `Pool` instance from `pg`.\n\n## Example\n\n```ts\nimport { createPool } from '@nicnocquee/dataqueue';\n\nconst pool = createPool({\n host: 'localhost',\n port: 5432,\n database: 'mydb',\n user: 'postgres',\n password: 'secret',\n});\n```"
7
+ },
8
+ {
9
+ "slug": "api/failure-reason",
10
+ "title": "FailureReason",
11
+ "description": "",
12
+ "content": "The `FailureReason` enum represents the possible reasons for a job failure.\n\n## Enum\n\n```ts\nenum FailureReason {\n Timeout = 'timeout',\n HandlerError = 'handler_error',\n NoHandler = 'no_handler',\n}\n```\n\n## Values\n\n- `Timeout`: The job timed out.\n- `HandlerError`: The job handler threw an error.\n- `NoHandler`: The job handler was not found."
13
+ },
14
+ {
15
+ "slug": "api",
16
+ "title": "API Reference",
17
+ "description": "",
18
+ "content": "This section documents the main classes, types, and functions available for managing job queues, processing jobs, and interacting with the database.\n\n## API Surface\n\n- [JobQueue](/api/job-queue)\n- [JobOptions](/api/job-options)\n- [JobRecord](/api/job-record)\n- [JobEvent](/api/job-event)\n- [Processor](/api/processor)\n- [ProcessorOptions](/api/processor-options)\n- [JobHandlers](/api/job-handlers)\n- [Database Utility](/api/db-util)\n- [Tags](/api/tags)"
19
+ },
20
+ {
21
+ "slug": "api/job-event",
22
+ "title": "JobEvent",
23
+ "description": "",
24
+ "content": "The `JobEvent` interface represents an event in the lifecycle of a job, such as when it is added, processed, completed, failed, cancelled, or retried.\n\n## Fields\n\n- `id`: _number_ — Unique event ID.\n- `jobId`: _number_ — The job this event is associated with.\n- `eventType`: _JobEventType_ — The type of event (see below).\n- `createdAt`: _Date_ — When the event was created.\n- `metadata`: _any_ — Additional metadata for the event.\n\n## JobEventType\n\n`JobEventType` is a union of the possible job event types:\n\n```ts\ntype JobEventType =\n  | 'added'\n  | 'processing'\n  | 'completed'\n  | 'failed'\n  | 'cancelled'\n  | 'retried'\n  | 'edited'\n  | 'prolonged';\n```\n\nThe `prolonged` event is recorded when a running job extends its timeout via `prolong()` or `onTimeout()`. See [Job Timeout](/usage/job-timeout) for details."
25
+ },
26
+ {
27
+ "slug": "api/job-handlers",
28
+ "title": "JobHandlers",
29
+ "description": "",
30
+ "content": "The `JobHandlers` type defines a map of job types to their handler functions. Each handler processes a job's payload and receives an `AbortSignal` for cancellation and a `JobContext` for timeout extension.\n\n## Type\n\n```ts\ntype OnTimeoutCallback = () => number | void | undefined;\n\ninterface JobContext {\n /** Proactively reset the timeout deadline.\n * If ms is provided, sets deadline to ms from now.\n * If omitted, resets to the original timeoutMs. */\n prolong: (ms?: number) => void;\n\n /** Register a callback invoked when timeout fires (before abort).\n * Return a number (ms) to extend, or nothing to let timeout proceed.\n * The callback may be called multiple times if the job keeps extending. */\n onTimeout: (callback: OnTimeoutCallback) => void;\n}\n\ntype JobHandler<PayloadMap, T extends keyof PayloadMap> = (\n payload: PayloadMap[T],\n signal: AbortSignal,\n ctx: JobContext,\n) => Promise<void>;\n\n// Map of job types to handlers\n\nexport type JobHandlers<PayloadMap> = {\n [K in keyof PayloadMap]: JobHandler<PayloadMap, K>;\n};\n```\n\n## Example\n\n```ts\nconst handlers = {\n email: async (payload, signal) => {\n // send email\n },\n generateReport: async (payload, signal, { prolong }) => {\n // prolong the timeout before a heavy step\n prolong(60_000);\n // generate report\n },\n processData: async (payload, signal, { onTimeout }) => {\n let progress = 0;\n onTimeout(() => {\n if (progress < 100) return 30_000; // extend if still working\n });\n // process data in chunks, updating progress\n },\n};\n```"
31
+ },
32
+ {
33
+ "slug": "api/job-options",
34
+ "title": "JobOptions",
35
+ "description": "",
36
+ "content": "The `JobOptions` interface defines the options for creating a new job in the queue.\n\n## Fields\n\n- `jobType`: _string_ — The type of the job.\n- `payload`: _any_ — The payload for the job, type-safe per job\n type.\n- `maxAttempts?`: _number_ — Maximum number of attempts for\n this job (default: 3).\n- `priority?`: _number_ — Priority of the job (higher runs\n first, default: 0).\n- `runAt?`: _Date | null_ — When to run the job (default: now).\n- `timeoutMs?`: _number_ — Timeout for this job in milliseconds.\n If not set, uses the processor default or unlimited.\n- `forceKillOnTimeout?`: _boolean_ — If true, the job will be forcefully terminated (using Worker Threads) when timeout is reached. If false (default), the job will only receive an AbortSignal and must handle the abort gracefully.\n\n **⚠️ Runtime Requirements**: This option requires **Node.js** and will **not work** in Bun or other runtimes without worker thread support. See [Force Kill on Timeout](/usage/force-kill-timeout) for details.\n\n- `tags?`: _string[]_ — Tags for this job. Used for grouping, searching, or batch operations.\n- `idempotencyKey?`: _string_ — Optional idempotency key. When provided, ensures that only one job exists for a given key. If a job with the same key already exists, `addJob` returns the existing job's ID instead of creating a duplicate. See [Idempotency](/usage/add-job#idempotency) for details.\n\n## Example\n\n```ts\nconst job = {\n jobType: 'email',\n payload: { to: 'user@example.com', subject: 'Hello' },\n maxAttempts: 5,\n priority: 10,\n runAt: new Date(Date.now() + 60000), // run in 1 minute\n timeoutMs: 30000, // 30 seconds\n forceKillOnTimeout: false, // Use graceful shutdown (default)\n tags: ['welcome', 'user'], // tags for grouping/searching\n idempotencyKey: 'welcome-email-user-123', // prevent duplicate jobs\n};\n```"
37
+ },
38
+ {
39
+ "slug": "api/job-queue",
40
+ "title": "JobQueue",
41
+ "description": "",
42
+ "content": "## Initialization\n\n### initJobQueue\n\n```ts\ninitJobQueue(config: JobQueueConfig): JobQueue\n```\n\nInitializes the job queue system with the provided configuration. The `JobQueueConfig` is a discriminated union -- you provide either a PostgreSQL or Redis configuration.\n\n#### PostgresJobQueueConfig\n\n```ts\ninterface PostgresJobQueueConfig {\n backend?: 'postgres'; // Optional, defaults to 'postgres'\n databaseConfig: {\n connectionString?: string;\n host?: string;\n port?: number;\n database?: string;\n user?: string;\n password?: string;\n ssl?: DatabaseSSLConfig;\n };\n verbose?: boolean;\n}\n```\n\n#### RedisJobQueueConfig\n\n```ts\ninterface RedisJobQueueConfig {\n backend: 'redis'; // Required\n redisConfig: {\n url?: string;\n host?: string;\n port?: number;\n password?: string;\n db?: number;\n tls?: RedisTLSConfig;\n keyPrefix?: string; // Default: 'dq:'\n };\n verbose?: boolean;\n}\n```\n\n#### JobQueueConfig\n\n```ts\ntype JobQueueConfig = PostgresJobQueueConfig | RedisJobQueueConfig;\n```\n\n#### DatabaseSSLConfig\n\n```ts\ninterface DatabaseSSLConfig {\n ca?: string;\n cert?: string;\n key?: string;\n rejectUnauthorized?: boolean;\n}\n```\n\n- `ca` - Client certificate authority (CA) as PEM string or file path. If the value starts with 'file://', it will be loaded from file, otherwise treated as PEM string.\n- `cert` - Client certificate as PEM string or file path. If the value starts with 'file://', it will be loaded from file, otherwise treated as PEM string.\n- `key` - Client private key as PEM string or file path. 
If the value starts with 'file://', it will be loaded from file, otherwise treated as PEM string.\n- `rejectUnauthorized` - Whether to reject unauthorized certificates (default: true)\n\n#### RedisTLSConfig\n\n```ts\ninterface RedisTLSConfig {\n  ca?: string;\n  cert?: string;\n  key?: string;\n  rejectUnauthorized?: boolean;\n}\n```\n\n---\n\n## Adding Jobs\n\n### addJob\n\n```ts\naddJob(job: JobOptions): Promise<number>\n```\n\nAdds a job to the queue. Returns the job ID.\n\n#### JobOptions\n\n```ts\ninterface JobOptions {\n  jobType: string;\n  payload: any;\n  maxAttempts?: number;\n  priority?: number;\n  runAt?: Date | null;\n  timeoutMs?: number;\n  forceKillOnTimeout?: boolean;\n  tags?: string[];\n  idempotencyKey?: string;\n}\n```\n\nSee [JobOptions](/api/job-options) for field details.\n\n---\n\n## Retrieving Jobs\n\n### getJob\n\n```ts\ngetJob(id: number): Promise<JobRecord | null>\n```\n\nRetrieves a job by its ID.\n\n### getJobs\n\n```ts\ngetJobs(\n  filters?: {\n    jobType?: string;\n    priority?: number;\n    runAt?: Date | { gt?: Date; gte?: Date; lt?: Date; lte?: Date; eq?: Date };\n    tags?: { values: string[]; mode?: 'all' | 'any' | 'none' | 'exact' };\n  },\n  limit?: number,\n  offset?: number\n): Promise<JobRecord[]>\n```\n\nRetrieves jobs matching the provided filters, with optional pagination.\n\n### getJobsByStatus\n\n```ts\ngetJobsByStatus(status: string, limit?: number, offset?: number): Promise<JobRecord[]>\n```\n\nRetrieves jobs by their status, with pagination.\n\n### getAllJobs\n\n```ts\ngetAllJobs(limit?: number, offset?: number): Promise<JobRecord[]>\n```\n\nRetrieves all jobs, with optional pagination.\n\n### getJobsByTags\n\n```ts\ngetJobsByTags(tags: string[], mode?: TagQueryMode, limit?: number, offset?: number): Promise<JobRecord[]>\n```\n\nRetrieves jobs by tag(s).\n\n---\n\n## Managing Jobs\n\n### retryJob\n\n```ts\nretryJob(jobId: number): Promise<void>\n```\n\nRetries a job given its ID.\n\n### cancelJob\n\n```ts\ncancelJob(jobId: number): Promise<void>\n```\n\nCancels a job given its ID.\n\n### editJob\n\n```ts\neditJob(jobId: number, updates: EditJobOptions): 
Promise<void>\n```\n\nEdits a pending job given its ID. Only works for jobs with status 'pending'. Silently fails for other statuses (processing, completed, failed, cancelled).\n\n#### EditJobOptions\n\n```ts\ninterface EditJobOptions {\n payload?: any;\n maxAttempts?: number;\n priority?: number;\n runAt?: Date | null;\n timeoutMs?: number;\n tags?: string[];\n}\n```\n\nAll fields are optional - only provided fields will be updated. Note that `jobType` cannot be changed.\n\n#### Example\n\n```ts\n// Edit a pending job's payload and priority\nawait jobQueue.editJob(jobId, {\n payload: { to: 'newemail@example.com', subject: 'Updated' },\n priority: 10,\n});\n\n// Edit only the scheduled run time\nawait jobQueue.editJob(jobId, {\n runAt: new Date(Date.now() + 60000), // Run in 1 minute\n});\n\n// Edit multiple fields at once\nawait jobQueue.editJob(jobId, {\n payload: { to: 'updated@example.com' },\n priority: 5,\n maxAttempts: 10,\n timeoutMs: 30000,\n tags: ['urgent', 'priority'],\n});\n```\n\n### editAllPendingJobs\n\n```ts\neditAllPendingJobs(\n filters?: {\n jobType?: string;\n priority?: number;\n runAt?: Date | { gt?: Date; gte?: Date; lt?: Date; lte?: Date; eq?: Date };\n tags?: { values: string[]; mode?: 'all' | 'any' | 'none' | 'exact' };\n },\n updates: EditJobOptions\n): Promise<number>\n```\n\nEdits all pending jobs that match the filters. Only works for jobs with status 'pending'. Non-pending jobs are not affected. Returns the number of jobs that were edited.\n\n#### Parameters\n\n- `filters` (optional): Filters to select which jobs to edit. If not provided, all pending jobs are edited.\n - `jobType`: Filter by job type\n - `priority`: Filter by priority\n - `runAt`: Filter by scheduled run time (supports `gt`, `gte`, `lt`, `lte`, `eq` operators or exact Date match)\n - `tags`: Filter by tags with mode ('all', 'any', 'none', 'exact')\n- `updates`: The fields to update (same as `EditJobOptions`). 
All fields are optional - only provided fields will be updated.\n\n#### Returns\n\nThe number of jobs that were successfully edited.\n\n#### Examples\n\n```ts\n// Edit all pending jobs\nconst editedCount = await jobQueue.editAllPendingJobs(undefined, {\n priority: 10,\n});\n\n// Edit all pending email jobs\nconst editedCount = await jobQueue.editAllPendingJobs(\n { jobType: 'email' },\n {\n priority: 5,\n },\n);\n\n// Edit all pending jobs with 'urgent' tag\nconst editedCount = await jobQueue.editAllPendingJobs(\n { tags: { values: ['urgent'], mode: 'any' } },\n {\n priority: 10,\n maxAttempts: 5,\n },\n);\n\n// Edit all pending jobs scheduled in the future\nconst editedCount = await jobQueue.editAllPendingJobs(\n { runAt: { gte: new Date() } },\n {\n priority: 10,\n },\n);\n\n// Edit with combined filters\nconst editedCount = await jobQueue.editAllPendingJobs(\n {\n jobType: 'email',\n tags: { values: ['urgent'], mode: 'any' },\n },\n {\n priority: 10,\n maxAttempts: 5,\n },\n);\n```\n\n**Note:** Only pending jobs are edited. Jobs with other statuses (processing, completed, failed, cancelled) are not affected. Edit events are recorded for each affected job, just like single job edits.\n\n### cancelAllUpcomingJobs\n\n```ts\ncancelAllUpcomingJobs(filters?: {\n jobType?: string;\n priority?: number;\n runAt?: Date | { gt?: Date; gte?: Date; lt?: Date; lte?: Date; eq?: Date };\n tags?: { values: string[]; mode?: 'all' | 'any' | 'none' | 'exact' };\n}): Promise<number>\n```\n\nCancels all upcoming jobs that match the filters. Returns the number of jobs cancelled.\n\n### cleanupOldJobs\n\n```ts\ncleanupOldJobs(daysToKeep?: number): Promise<number>\n```\n\nCleans up jobs older than the specified number of days. Returns the number of jobs removed.\n\n### reclaimStuckJobs\n\n```ts\nreclaimStuckJobs(maxProcessingTimeMinutes?: number): Promise<number>\n```\n\nReclaims jobs stuck in 'processing' for too long. Returns the number of jobs reclaimed. 
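\n\nA minimal usage sketch (assuming a `jobQueue` instance created with `initJobQueue`):\n\n```ts\n// Reclaim jobs stuck in 'processing' for more than 30 minutes\nconst reclaimed = await jobQueue.reclaimStuckJobs(30);\nconsole.log('Reclaimed jobs:', reclaimed);\n```\n\n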
If a job has a `timeoutMs` that is longer than the `maxProcessingTimeMinutes` threshold, the job's own timeout is used instead, preventing premature reclamation of long-running jobs.\n\n---\n\n## Job Events\n\n### getJobEvents\n\n```ts\ngetJobEvents(jobId: number): Promise<JobEvent[]>\n```\n\nRetrieves the job events for a job.\n\n#### JobEvent\n\n```ts\ninterface JobEvent {\n  id: number;\n  jobId: number;\n  eventType: JobEventType;\n  createdAt: Date;\n  metadata: any;\n}\n```\n\n#### JobEventType\n\n```ts\nenum JobEventType {\n  Added = 'added',\n  Processing = 'processing',\n  Completed = 'completed',\n  Failed = 'failed',\n  Cancelled = 'cancelled',\n  Retried = 'retried',\n  Edited = 'edited',\n  Prolonged = 'prolonged',\n}\n```\n\n---\n\n## Processing Jobs\n\n### createProcessor\n\n```ts\ncreateProcessor(\n  handlers: JobHandlers,\n  options?: ProcessorOptions\n): Processor\n```\n\nCreates a job processor with the provided handlers and options.\n\n#### ProcessorOptions\n\n```ts\ninterface ProcessorOptions {\n  workerId?: string;\n  batchSize?: number;\n  concurrency?: number;\n  pollInterval?: number;\n  onError?: (error: Error) => void;\n  verbose?: boolean;\n  jobType?: string | string[];\n}\n```\n\n---\n\n## Accessing the Underlying Client\n\n### getPool\n\n```ts\ngetPool(): Pool\n```\n\nReturns the PostgreSQL connection pool instance. Only available when using the PostgreSQL backend.\n\n> **Note:** Throws an error if called when using the Redis backend.\n\n### getRedisClient\n\n```ts\ngetRedisClient(): Redis\n```\n\nReturns the `ioredis` client instance. Only available when using the Redis backend.\n\n> **Note:** Throws an error if called when using the PostgreSQL backend."
43
+ },
44
+ {
45
+ "slug": "api/job-record",
46
+ "title": "JobRecord",
47
+ "description": "",
48
+ "content": "The `JobRecord` interface represents a job stored in the queue, including its status, attempts, and metadata.\n\n## Fields\n\n- `id`: _number_ — Unique job ID.\n- `jobType`: _string_ — The type of the job.\n- `payload`: _any_ — The job payload.\n- `status`:\n  _'pending' | 'processing' | 'completed' | 'failed' | 'cancelled'_ —\n  Current job status.\n- `createdAt`: _Date_ — When the job was created.\n- `updatedAt`: _Date_ — When the job was last updated.\n- `lockedAt`: _Date | null_ — When the job was locked for\n  processing.\n- `lockedBy`: _string | null_ — Worker that locked the job.\n- `attempts`: _number_ — Number of attempts so far.\n- `maxAttempts`: _number_ — Maximum allowed attempts.\n- `nextAttemptAt`: _Date | null_ — When the next attempt is\n  scheduled.\n- `priority`: _number_ — Job priority.\n- `runAt`: _Date_ — When the job is scheduled to run.\n- `pendingReason?`: _string | null_ — Reason for pending\n  status.\n- `errorHistory?`: _\{ message: string; timestamp: string \}[]_ — Error history for the job.\n- `timeoutMs?`: _number | null_ — Timeout for this job in\n  milliseconds.\n- `failureReason?`: _FailureReason | null_ — Reason for last\n  failure, if any.\n- `completedAt`: _Date | null_ — When the job was completed.\n- `startedAt`: _Date | null_ — When the job was first picked up\n  for processing.\n- `lastRetriedAt`: _Date | null_ — When the job was last\n  retried.\n- `lastFailedAt`: _Date | null_ — When the job last failed.\n- `lastCancelledAt`: _Date | null_ — When the job was last\n  cancelled.\n- `tags?`: _string[]_ — Tags for this job. Used for grouping, searching, or batch operations.\n- `idempotencyKey?`: _string | null_ — The idempotency key for this job, if one was provided when the job was created.\n- `progress?`: _number | null_ — Progress percentage (0–100) reported by the handler via `ctx.setProgress()`. `null` if no progress has been reported. 
See [Progress Tracking](/usage/progress-tracking).\n\n## Example\n\n```json\n{\n \"id\": 1,\n \"jobType\": \"email\",\n \"payload\": { \"to\": \"user@example.com\", \"subject\": \"Hello\" },\n \"status\": \"pending\",\n \"createdAt\": \"2024-06-01T12:00:00Z\",\n \"tags\": [\"welcome\", \"user\"],\n \"idempotencyKey\": \"welcome-email-user-123\",\n \"progress\": null\n}\n```"
49
+ },
50
+ {
51
+ "slug": "api/processor",
52
+ "title": "Processor",
53
+ "description": "",
54
+ "content": "The `Processor` interface represents a job processor that can process jobs from the queue, either in the background or synchronously.\n\n## Creating a processor\n\nCreate a processor by calling `createProcessor` on the queue.\n\n```ts\nconst jobQueue = getJobQueue();\nconst processor = jobQueue.createProcessor(handlers, options);\n```\n\n### ProcessorOptions\n\n```ts\ninterface ProcessorOptions {\n  workerId?: string;\n  batchSize?: number;\n  concurrency?: number;\n  pollInterval?: number;\n  onError?: (error: Error) => void;\n  verbose?: boolean;\n  jobType?: string | string[];\n}\n```\n\n## Methods\n\n### startInBackground\n\n```ts\nstartInBackground(): void\n```\n\nStart the job processor in the background. This will run continuously and process jobs as they become available. It polls for new jobs every `pollInterval` milliseconds (default: 5 seconds).\n\n### stop\n\n```ts\nstop(): void\n```\n\nStop the job processor that runs in the background. Does not wait for in-flight jobs to finish.\n\n### stopAndDrain\n\n```ts\nstopAndDrain(timeoutMs?: number): Promise<void>\n```\n\nStop the processor and wait for the current in-flight batch to finish before resolving. Accepts an optional timeout in milliseconds (default: `30000`). If the batch does not complete within the timeout, the promise resolves anyway so your process is not stuck indefinitely. Useful for graceful shutdown (e.g., SIGTERM handling). See [Long-Running Server](/usage/long-running-server) for a full example.\n\n### isRunning\n\n```ts\nisRunning(): boolean\n```\n\nCheck if the job processor is running.\n\n### start\n\n```ts\nstart(): Promise<number>\n```\n\nStart the job processor synchronously. This will process jobs immediately and then stop. Returns the number of jobs processed."
55
+ },
56
+ {
57
+ "slug": "api/tags",
58
+ "title": "Tags",
59
+ "description": "",
60
+ "content": "The tags feature lets you group, search, and batch jobs using arbitrary string tags. Tags can be set when adding a job and used in various JobQueue methods.\n\n## Tags in JobOptions\n\nYou can assign tags to a job when adding it:\n\n```typescript\nawait jobQueue.addJob({\n jobType: 'email',\n payload: { to: 'user@example.com', subject: 'Hello' },\n tags: ['welcome', 'user'],\n});\n```\n\n## Tags in JobRecord\n\nThe `tags` field is available on JobRecord objects:\n\n```json\n{\n \"id\": 1,\n \"jobType\": \"email\",\n \"tags\": [\"welcome\", \"user\"]\n}\n```\n\n## Tag Query Methods\n\n### getJobsByTags\n\n```typescript\nconst jobs = await jobQueue.getJobsByTags(['welcome', 'user'], 'all');\n```\n\n### Cancel jobs by tags\n\nYou can cancel jobs by their tags using the `cancelAllUpcomingJobs` method with the `tags` filter (an object with `values` and `mode`):\n\n```typescript\n// Cancel all jobs with both 'welcome' and 'user' tags\nawait jobQueue.cancelAllUpcomingJobs({\n tags: { values: ['welcome', 'user'], mode: 'all' },\n});\n\n// Cancel all jobs with any of the tags\nawait jobQueue.cancelAllUpcomingJobs({\n tags: { values: ['foo', 'bar'], mode: 'any' },\n});\n\n// Cancel all jobs with exactly the given tags\nawait jobQueue.cancelAllUpcomingJobs({\n tags: { values: ['foo', 'bar'], mode: 'exact' },\n});\n\n// Cancel all jobs with none of the given tags\nawait jobQueue.cancelAllUpcomingJobs({\n tags: { values: ['foo', 'bar'], mode: 'none' },\n});\n```\n\n## TagQueryMode\n\nThe `mode` parameter controls how tags are matched:\n\n- `'exact'`: Jobs with exactly the same tags (no more, no less)\n- `'all'`: Jobs that have all the given tags (can have more)\n- `'any'`: Jobs that have at least one of the given tags\n- `'none'`: Jobs that have none of the given tags\n\nThe default mode is `'all'`."
61
+ },
62
+ {
63
+ "slug": "cli",
64
+ "title": "CLI",
65
+ "description": "Command-line tools for managing DataQueue migrations, project scaffolding, and AI integrations.",
66
+ "content": "DataQueue ships a CLI tool called `dataqueue-cli` that you can run directly with `npx`:\n\n```bash\nnpx dataqueue-cli <command> [options]\n```\n\n## Commands\n\n| Command | Description |\n| --------------------------------------- | ------------------------------------------------- |\n| [`migrate`](/cli/migrate) | Run PostgreSQL database migrations |\n| [`init`](/cli/init) | Scaffold a Next.js project for DataQueue |\n| [`install-skills`](/cli/install-skills) | Install AI skill files for coding assistants |\n| [`install-rules`](/cli/install-rules) | Install agent rule sets for AI clients |\n| [`install-mcp`](/cli/install-mcp) | Configure the DataQueue MCP server for AI clients |\n| [`mcp`](/cli/mcp) | Start the DataQueue MCP server over stdio |\n\n## Usage\n\nRunning `dataqueue-cli` without a command (or with an unrecognized command) prints the help output:\n\n```\nUsage:\n dataqueue-cli migrate [--envPath <path>] [-s <schema> | --schema <schema>]\n dataqueue-cli init\n dataqueue-cli install-skills\n dataqueue-cli install-rules\n dataqueue-cli install-mcp\n dataqueue-cli mcp\n\nOptions for migrate:\n --envPath <path> Path to a .env file to load environment variables\n -s, --schema <schema> Set the schema to use\n\nAI tooling commands:\n install-skills Install DataQueue skill files for AI assistants\n install-rules Install DataQueue agent rules for AI clients\n install-mcp Configure the DataQueue MCP server for AI clients\n mcp Start the DataQueue MCP server (stdio)\n```"
67
+ },
68
+ {
69
+ "slug": "cli/init",
70
+ "title": "init",
71
+ "description": "Scaffold a Next.js project for DataQueue with a single command.",
72
+ "content": "Scaffolds your Next.js project with everything needed to start using DataQueue — API routes, a job queue singleton, a cron script, and all required dependencies.\n\n```bash\nnpx dataqueue-cli init\n```\n\n## What It Does\n\nThe `init` command auto-detects your project structure (App Router vs Pages Router, `src/` directory vs root) and creates the following:\n\n### Files Created\n\n| File | Purpose |\n| ----------------------------------------------- | ------------------------------------------------------ |\n| `app/api/dataqueue/manage/[[...task]]/route.ts` | API route for queue management (App Router) |\n| `pages/api/dataqueue/manage/[[...task]].ts` | API route for queue management (Pages Router) |\n| `lib/dataqueue/queue.ts` | Job queue singleton with a sample `send_email` handler |\n| `cron.sh` | Shell script for local development cron jobs |\n\n> **Note:** Only the API route matching your detected router is created. Existing files\n are never overwritten.\n\n### Dependencies Added\n\n**Production:**\n\n- `@nicnocquee/dataqueue`\n- `@nicnocquee/dataqueue-dashboard`\n- `@nicnocquee/dataqueue-react`\n\n**Development:**\n\n- `dotenv-cli`\n- `ts-node`\n- `node-pg-migrate`\n\n### Scripts Added\n\n| Script | Command |\n| ------------------- | ----------------------------------------------- |\n| `cron` | `bash cron.sh` |\n| `migrate-dataqueue` | `dotenv -e .env.local -- dataqueue-cli migrate` |\n\n## After Running\n\n1. Install the newly added dependencies:\n\n```bash\nnpm install\n```\n\n2. Set up your environment variables in `.env.local`:\n\n```bash\nPG_DATAQUEUE_DATABASE=postgresql://user:password@localhost:5432/mydb\nCRON_SECRET=your-secret-here\n```\n\n3. Run database migrations:\n\n```bash\nnpm run migrate-dataqueue\n```\n\n4. 
Start your Next.js dev server and the cron script:\n\n```bash\nnpm run dev\nnpm run cron\n```\n\n## Requirements\n\n- Must be run in a Next.js project directory (looks for `next` in `package.json` dependencies)\n- Must have either an `app/` or `pages/` directory"
73
+ },
74
+ {
75
+ "slug": "cli/install-mcp",
76
+ "title": "install-mcp",
77
+ "description": "Configure the DataQueue MCP server for AI coding clients.",
78
+ "content": "Configures the DataQueue [MCP](https://modelcontextprotocol.io/) (Model Context Protocol) server in your AI client's configuration. This gives your AI assistant direct access to DataQueue documentation.\n\n```bash\nnpx dataqueue-cli install-mcp\n```\n\n## Interactive Prompt\n\nThe command prompts you to select your AI client:\n\n```\nDataQueue MCP Server Installer\n\nSelect your AI client:\n\n 1) Cursor\n 2) Claude Code\n 3) VS Code (Copilot)\n 4) Windsurf\n\nEnter choice (1-4):\n```\n\n## What It Configures\n\nThe installer adds a `\"dataqueue\"` server entry to your client's MCP config file:\n\n```json\n{\n \"mcpServers\": {\n \"dataqueue\": {\n \"command\": \"npx\",\n \"args\": [\"dataqueue-cli\", \"mcp\"]\n }\n }\n}\n```\n\nIf the config file already exists, the `dataqueue` entry is merged in without affecting other servers.\n\n## Install Locations\n\n| Client | Config File |\n| ----------------- | ------------------------------------- |\n| Cursor | `.cursor/mcp.json` |\n| Claude Code | `.mcp.json` |\n| VS Code (Copilot) | `.vscode/mcp.json` |\n| Windsurf | `~/.codeium/windsurf/mcp_config.json` |\n\n> **Note:** After installing, your AI client will automatically start the MCP server when\n needed. See the [`mcp`](/cli/mcp) command for details on what the server\n exposes."
79
+ },
80
+ {
81
+ "slug": "cli/install-rules",
82
+ "title": "install-rules",
83
+ "description": "Install DataQueue agent rules for AI coding clients.",
84
+ "content": "Installs comprehensive DataQueue rule sets into your AI client's configuration. Rules give AI assistants detailed guidance for generating correct DataQueue code.\n\n```bash\nnpx dataqueue-cli install-rules\n```\n\n## Interactive Prompt\n\nThe command prompts you to select your AI client:\n\n```\nDataQueue Agent Rules Installer\n\nSelect your AI client:\n\n 1) Cursor\n 2) Claude Code\n 3) AGENTS.md (Codex, Jules, OpenCode)\n 4) GitHub Copilot\n 5) Windsurf\n\nEnter choice (1-5):\n```\n\n## Rules Installed\n\nThree rule files are installed, covering the full surface area of DataQueue:\n\n| Rule File | What It Covers |\n| -------------------- | ------------------------------------------------------------- |\n| `basic.md` | Core API — initialization, adding jobs, processing, handlers |\n| `advanced.md` | Advanced features — waits, cron, tokens, cancellation, events |\n| `react-dashboard.md` | React SDK and Dashboard components |\n\n## Install Locations\n\n| Client | Installs To |\n| -------------- | -------------------------------------------------------------------------------------------------------------------------- |\n| Cursor | `.cursor/rules/dataqueue-basic.mdc`, `.cursor/rules/dataqueue-advanced.mdc`, `.cursor/rules/dataqueue-react-dashboard.mdc` |\n| Claude Code | `CLAUDE.md` (appended between markers) |\n| AGENTS.md | `AGENTS.md` (appended between markers) |\n| GitHub Copilot | `.github/copilot-instructions.md` (appended between markers) |\n| Windsurf | `CONVENTIONS.md` (appended between markers) |\n\n> **Note:** For Cursor, each rule file is written separately. For all other clients, the\n rules are combined and appended to a single file between `&lt;!-- DATAQUEUE\n RULES START --&gt;` and `&lt;!-- DATAQUEUE RULES END --&gt;` markers.\n Re-running the command updates the content between the markers without\n duplicating it."
85
+ },
86
+ {
87
+ "slug": "cli/install-skills",
88
+ "title": "install-skills",
89
+ "description": "Install DataQueue skill files for AI coding assistants.",
90
+ "content": "Copies DataQueue skill files (`SKILL.md`) into your AI coding assistant's skills directory. Skills teach AI assistants DataQueue patterns and best practices.\n\n```bash\nnpx dataqueue-cli install-skills\n```\n\n## Skills Installed\n\n| Skill | What It Covers |\n| -------------------- | ------------------------------------------------------------------ |\n| `dataqueue-core` | Core patterns — initialization, adding jobs, processing, handlers |\n| `dataqueue-advanced` | Advanced features — waits, cron jobs, tokens, cancellation, events |\n| `dataqueue-react` | React SDK and Dashboard integration |\n\n## Auto-Detection\n\nThe command automatically detects which AI tools are present by checking for their config directories:\n\n| AI Tool | Detected By | Skills Installed To |\n| -------------- | ----------- | ------------------- |\n| Cursor | `.cursor/` | `.cursor/skills/` |\n| Claude Code | `.claude/` | `.claude/skills/` |\n| GitHub Copilot | `.github/` | `.github/skills/` |\n\nIf no AI tool directories are detected, it defaults to `.cursor/skills/`.\n\n## Example Output\n\n```\nInstalling skills for Cursor...\n ✓ dataqueue-core\n ✓ dataqueue-advanced\n ✓ dataqueue-react\n\nDone! Installed 3 skill(s) for Cursor.\n```"
91
+ },
92
+ {
93
+ "slug": "cli/mcp",
94
+ "title": "mcp",
95
+ "description": "Start the DataQueue MCP server for AI-powered documentation access.",
96
+ "content": "Starts the DataQueue MCP (Model Context Protocol) server over stdio. This server gives AI coding assistants live access to the full DataQueue documentation.\n\n```bash\nnpx dataqueue-cli mcp\n```\n\n> **Note:** You typically don't run this command directly. Use\n [`install-mcp`](/cli/install-mcp) to configure your AI client, which will\n start the server automatically.\n\n## Tools Exposed\n\nThe MCP server exposes three tools that AI assistants can call:\n\n| Tool | Description |\n| ---------------- | ------------------------------------------------------------------------------ |\n| `list-doc-pages` | Lists all available documentation pages with titles and descriptions |\n| `get-doc-page` | Fetches a specific page by slug (e.g., `\"usage/add-job\"` or `\"api/job-queue\"`) |\n| `search-docs` | Full-text search across all documentation pages with term matching |\n\n## Resources\n\nThe server also exposes a resource:\n\n| URI | Description |\n| ---------------------- | ------------------------------------------------------- |\n| `dataqueue://llms.txt` | Machine-readable DataQueue overview for LLM consumption |\n\n## How It Works\n\nThe server loads a bundled `docs-content.json` file containing all DataQueue documentation pages. Search uses simple term matching across page titles, descriptions, and content, returning the top 5 results with relevant excerpts.\n\nCommunication happens over stdio using the [Model Context Protocol](https://modelcontextprotocol.io/), so it works with any MCP-compatible client."
97
+ },
98
+ {
99
+ "slug": "cli/migrate",
100
+ "title": "migrate",
101
+ "description": "Run PostgreSQL database migrations for DataQueue.",
102
+ "content": "Runs the DataQueue database migrations against your PostgreSQL database using [node-pg-migrate](https://github.com/salsita/node-pg-migrate).\n\n```bash\nnpx dataqueue-cli migrate [options]\n```\n\n## Options\n\n| Option | Description |\n| ----------------------- | ------------------------------------------------------------------------------ |\n| `--envPath <path>` | Path to a `.env` file to load environment variables from |\n| `-s, --schema <schema>` | PostgreSQL schema to use. Automatically creates the schema if it doesn't exist |\n\n## Environment Variables\n\nThe migration reads the connection string from the `PG_DATAQUEUE_DATABASE` environment variable. You can set it directly or load it from a `.env` file using `--envPath`.\n\n## Examples\n\nRun migrations using environment variables already set in your shell:\n\n```bash\nnpx dataqueue-cli migrate\n```\n\nLoad environment variables from a specific `.env` file:\n\n```bash\nnpx dataqueue-cli migrate --envPath .env.local\n```\n\nRun migrations in a custom PostgreSQL schema:\n\n```bash\nnpx dataqueue-cli migrate --schema my_schema\n```\n\nCombine both options:\n\n```bash\nnpx dataqueue-cli migrate --envPath .env.local --schema my_schema\n```\n\n## How It Works\n\nUnder the hood, `dataqueue-cli migrate` runs:\n\n```bash\nnpx node-pg-migrate up \\\n -t dataqueuedev_migrations \\\n -d PG_DATAQUEUE_DATABASE \\\n -m <bundled-migrations-dir> \\\n [--envPath <path>] \\\n [-s <schema> --create-schema]\n```\n\nThe migrations directory is bundled with the `@nicnocquee/dataqueue` package, so you don't need to manage migration files yourself.\n\n> **Note:** This command is only needed for the PostgreSQL backend. The Redis backend\n requires no migrations."
103
+ },
104
+ {
105
+ "slug": "example",
106
+ "title": "Next.js Demo App",
107
+ "description": "",
108
+ "content": "You can see a working example of a Next.js app using DataQueue [here](https://dataqueue-demo.netlify.app/) and the code in [apps/demo](https://github.com/nicnocquee/dataqueue/tree/main/apps/demo) folder of this repository."
109
+ },
110
+ {
111
+ "slug": "index",
112
+ "title": "DataQueue Docs",
113
+ "description": "Documentation for DataQueue, a lightweight job queue for Node.js/TypeScript projects, backed by PostgreSQL or Redis.",
114
+ "content": "Welcome to the DataQueue docs! Start from the [Introduction](/intro) to learn more about DataQueue.\n\nDataQueue supports **PostgreSQL** and **Redis (beta)** as storage backends. Choose the one that fits your stack -- the API is identical regardless of which backend you use.\n\n## Packages\n\n| Package | Description |\n| --------------------------------- | --------------------------------------------------------------------------------------- |\n| `@nicnocquee/dataqueue` | Core job queue library (PostgreSQL & Redis (beta)) |\n| `@nicnocquee/dataqueue-react` | [React hooks](/usage/react-sdk) for subscribing to job status and progress |\n| `@nicnocquee/dataqueue-dashboard` | [Admin dashboard](/usage/dashboard) — plug-and-play UI for monitoring and managing jobs |\n\n## Source Code\n\nThe source code for DataQueue is available on [GitHub](https://github.com/nicnocquee/dataqueue).\n\n## Demo App\n\nA demo Next.js app that showcases all features of DataQueue is available [here](https://dataqueue-demo.netlify.app) and the source code is available on [GitHub](https://github.com/nicnocquee/dataqueue/tree/main/apps/demo)."
115
+ },
116
+ {
117
+ "slug": "intro/comparison",
118
+ "title": "Comparison",
119
+ "description": "How DataQueue compares to BullMQ and Trigger.dev",
120
+ "content": "Choosing a job queue depends on your stack, infrastructure preferences, and the features you need. Here is a side-by-side comparison of **DataQueue**, **BullMQ**, and **Trigger.dev**.\n\n| Feature | DataQueue | BullMQ | Trigger.dev |\n| ----------------------- | ----------------------------------------------- | ------------------------------------------- | --------------------------------------- |\n| **Backend** | PostgreSQL or Redis | Redis only | Cloud or self-hosted (Postgres + Redis) |\n| **Type Safety** | Full generic `PayloadMap` | Basic types | Full TypeScript tasks |\n| **Scheduling** | `runAt`, Cron | Cron, delayed, recurring | Cron, delayed |\n| **Retries** | Exponential backoff, configurable `maxAttempts` | Exponential backoff, custom strategies, DLQ | Auto retries, bulk replay, DLQ |\n| **Priority** | Integer priority | Priority levels | Queue-based priority |\n| **Concurrency Control** | `batchSize` + `concurrency` | Built-in | Per-task + shared limits |\n| **Rate Limiting** | - | Yes | Via concurrency limits |\n| **Job Flows / DAGs** | - | Parent-child flows | Workflows |\n| **Dashboard** | Built-in Next.js package | Third-party (Bull Board, etc.) | Built-in web dashboard |\n| **Wait / Pause Jobs** | `waitFor`, `waitUntil`, token system | - | Durable execution |\n| **Human-in-the-Loop** | Token system | - | Yes |\n| **Progress Tracking** | Yes (0-100%) | Yes | Yes (realtime) |\n| **Serverless-First** | Yes | No (needs long-running process) | Yes (cloud) |\n| **Self-Hosted** | Yes | Yes (your Redis) | Yes (containers) |\n| **Cloud Option** | - | - | Yes |\n| **License** | MIT | MIT | Apache-2.0 |\n| **Pricing** | Free (OSS) | Free (OSS) | Free tier + paid plans |\n| **Infrastructure** | Your own Postgres or Redis | Your own Redis | Their cloud or your infra |\n\n## Where DataQueue shines\n\n- **Serverless-first** — designed from the ground up for Vercel, AWS Lambda, and other serverless platforms. 
No long-running process required.\n- **Use your existing database** — back your queue with PostgreSQL or Redis. No additional infrastructure to provision or pay for.\n- **Wait and token system** — pause jobs with `waitFor`, `waitUntil`, or token-based waits for human-in-the-loop workflows, all within a single handler function.\n- **Type-safe PayloadMap** — a generic `PayloadMap` gives you compile-time validation of every job type and its payload, catching bugs before they reach production.\n- **Built-in Next.js dashboard** — add a full admin UI to your Next.js app with a single route file. No separate service to deploy."
121
+ },
122
+ {
123
+ "slug": "intro",
124
+ "title": "About",
125
+ "description": "",
126
+ "content": "DataQueue is an open source lightweight job queue for Node.js/TypeScript projects, backed by **PostgreSQL** or **Redis**. It lets you easily schedule, process, and manage background jobs. It's ideal for serverless environments like Vercel, AWS Lambda, and more.\n\n## Features\n\n- Simple API for adding and processing jobs\n- Strong typing for job types and payloads, preventing you from adding jobs with the wrong payload and ensuring handlers receive the correct type\n- Works in serverless environments\n- Supports job priorities, scheduling, canceling, and retries\n- Reclaims stuck jobs: No job will remain in the `processing` state indefinitely\n- Cleans up old jobs: Keeps only jobs from the last xxx days\n- **Choose your backend**: Use PostgreSQL or Redis -- same API, same features, your choice\n\n## Who is this for?\n\nThis package is for you if all of the following apply:\n\n| | |\n| --- | ---------------------------------------------------------------------------------------------- |\n| ☁️ | You deploy web apps to serverless platforms like Vercel, AWS Lambda, etc. |\n| 📝 | You use TypeScript |\n| ⚡ | You want your app to stay fast and responsive by offloading heavy tasks to the background |\n| 💾 | You use PostgreSQL or Redis |\n| 💸 | You're on a budget and want to avoid paying for a job queue service or running your own server |\n\n## Backend Options\n\n### PostgreSQL\n\nIf you already use PostgreSQL, it makes sense to use it for job queues, thanks to [SKIP LOCKED](https://www.postgresql.org/docs/current/sql-select.html).\n\nThe update process in DataQueue uses `FOR UPDATE SKIP LOCKED` to avoid race conditions and improve performance. If two jobs are scheduled at the same time, one will skip any jobs that are already being processed and work on other available jobs instead. 
This lets multiple workers handle different jobs at once without waiting or causing conflicts, making PostgreSQL a great choice for job queues and similar tasks.\n\nThe PostgreSQL backend requires running [database migrations](/usage/database-migration) before use.\n\n### Redis\n\nIf you already have a Redis instance in your stack, you can use it as the backend instead. The Redis backend uses Lua scripts for atomic operations and sorted sets for priority-based job claiming, providing the same guarantees as the PostgreSQL backend.\n\nThe Redis backend requires **no migrations** -- it automatically creates the necessary keys when jobs are added.\n\n> **Note:** Both backends provide **full feature parity**. Tags, filters, idempotency\n keys, job events, priority ordering, scheduling, and all other features work\n identically regardless of which backend you choose. You can switch backends at\n any time by changing a single configuration option."
127
+ },
128
+ {
129
+ "slug": "intro/install",
130
+ "title": "Installation",
131
+ "description": "",
132
+ "content": "Before you begin, make sure you have Node.js and TypeScript installed in your environment.\n\n## PostgreSQL Backend\n\nIf you're using PostgreSQL as your backend, you need a Postgres database. Install the required libraries:\n\n```bash\nnpm install @nicnocquee/dataqueue\nnpm install -D node-pg-migrate ts-node\n```\n\nYou need to install <code>node-pg-migrate</code> and <code>ts-node</code> as development dependencies to run [database migrations](/usage/database-migration).\n\n## Redis Backend\n\nIf you're using Redis as your backend, you need a Redis server (v6+). Install the required libraries:\n\n```bash\nnpm install @nicnocquee/dataqueue ioredis\n```\n\n> **Note:** The `ioredis` package is an optional peer dependency. You only need to install\n it if you choose Redis as your backend. No database migrations are required\n for Redis."
133
+ },
134
+ {
135
+ "slug": "intro/overview",
136
+ "title": "Overview",
137
+ "description": "",
138
+ "content": "DataQueue is a lightweight library that helps you manage your job queue using a **PostgreSQL** or **Redis** backend. It has three main components: the processor, the queue, and the job. It is not an external tool or service. You install DataQueue in your project and use it to add jobs to the queue, process them, and more, using your own existing database.\n\n\n\n## Processor\n\nThe processor has these responsibilities:\n\n- retrieve a certain number of unclaimed, pending jobs from the database\n- run the defined job handlers for each job\n- update the job status accordingly\n- retry failed jobs\n\nIn a serverless environment, you can initiate and start the processor for example in an API route and use cron job to periodically call it. In a long running process environment, you can start the processor when your application starts, and it will periodically check for jobs to process.\n\nFor more information, see [Processor](/api/processor).\n\n## Queue\n\nThe queue is an abstraction over the database. It has these responsibilities:\n\n- add jobs to the database\n- retrieve jobs from the database\n- cancel pending jobs\n- edit pending jobs\n\nThe API is identical whether you're using PostgreSQL or Redis (beta) as your backend. You select the backend when [initializing the queue](/usage/init-queue).\n\nFor more information, see [Queue](/api/queue).\n\n## Job\n\nA job that you add to the queue needs to have a type and a payload. The type is a string that identifies the job, and the payload is the data that will be passed to the job handler of that job type.\n\nOnce a job is added to the queue, it can be in one of these states:\n\n- `pending`: The job is waiting in the queue to be processed.\n- `processing`: The job is currently being worked on.\n- `completed`: The job finished successfully.\n- `failed`: The job did not finish successfully. 
It can be retried up to `maxAttempts` times.\n- `cancelled`: The job was cancelled before it finished.\n\nFor more information, see [Job](/api/job)."
139
+ },
140
+ {
141
+ "slug": "usage/add-job",
142
+ "title": "Add Job",
143
+ "description": "",
144
+ "content": "You can add jobs to the queue from your application logic, such as in a [server function](https://react.dev/reference/rsc/server-functions):\n\n```typescript title=\"@/app/actions/send-email.ts\"\n'use server';\n\nimport { getJobQueue } from '@/lib/queue';\nimport { revalidatePath } from 'next/cache';\n\nexport const sendEmail = async ({\n name,\n email,\n}: {\n name: string;\n email: string;\n}) => {\n // Add a welcome email job\n const jobQueue = getJobQueue();try {\n const runAt = new Date(Date.now() + 5 * 1000); // Run 5 seconds from nowconst job = await jobQueue.addJob({\n jobType: 'send_email',\n payload: {\n to: email,\n subject: 'Welcome to our platform!',\n body: `Hi ${name}, welcome to our platform!`,\n },\n priority: 10, // Higher number = higher priority\n runAt: runAt,\n tags: ['welcome', 'user'], // Add tags for grouping/searching\n });\n\n revalidatePath('/');\n return { job };\n } catch (error) {\n console.error('Error adding job:', error);\n throw error;\n }\n};\n```\n\nIn the example above, a job is added to the queue to send an email. The job type is `send_email`, and the payload includes the recipient's email, subject, and body.\n\nWhen adding a job, you can set its `priority`, schedule when it should run using `runAt`, and specify a timeout in milliseconds with `timeoutMs`.\n\nYou can also add `tags` (an array of strings) to group, search, or batch jobs by category. See [Tags](/api/tags) for more details.\n\n## Idempotency\n\nYou can provide an `idempotencyKey` when adding a job to prevent duplicate jobs. 
If a job with the same key already exists in the queue, `addJob` returns the existing job's ID instead of creating a new one.\n\nThis is useful for preventing duplicates caused by retries, double-clicks, webhook replays, or serverless function re-invocations.\n\n```typescript title=\"@/app/actions/send-welcome.ts\"\n'use server';\n\nimport { getJobQueue } from '@/lib/queue';\n\nexport const sendWelcomeEmail = async (userId: string, email: string) => {\n const jobQueue = getJobQueue();const jobId = await jobQueue.addJob({\n jobType: 'send_email',\n payload: {\n to: email,\n subject: 'Welcome!',\n body: `Welcome to our platform!`,\n },\n idempotencyKey: `welcome-email-${userId}`, // prevents duplicate welcome emails\n });\n\n return { jobId };\n};\n```\n\nIn the example above, calling `sendWelcomeEmail` multiple times for the same `userId` will only create one job. Subsequent calls return the existing job's ID.\n\n### Behavior\n\n- **No key provided**: Works exactly as before, no uniqueness check is performed.\n- **Key provided, no conflict**: The job is inserted and its new ID is returned.\n- **Key provided, conflict**: The existing job's ID is returned. The existing job is **not** updated.\n- **Scope**: The key is unique across the entire `job_queue` table regardless of job status. Once a key exists, it cannot be reused until the job is cleaned up via [`cleanupOldJobs`](/usage/cleanup-jobs)."
145
+ },
146
+ {
147
+ "slug": "usage/building-with-ai",
148
+ "title": "Building with AI",
149
+ "description": "Tools and resources for building DataQueue projects with AI coding assistants.",
150
+ "content": "We provide multiple tools to help AI coding assistants write correct DataQueue code. Use one or all of them for the best developer experience.\n\n## Quick Setup\n\n### 1. Install Skills\n\nPortable instruction sets that teach any AI coding assistant DataQueue best practices.\n\n```bash\nnpx dataqueue-cli install-skills\n```\n\nSkills are installed as `SKILL.md` files into your AI tool's skills directory (`.cursor/skills/`, `.claude/skills/`, etc.). They cover core patterns, advanced features (waits, cron, tokens), and React/Dashboard integration.\n\n### 2. Install Agent Rules\n\nComprehensive rule sets installed directly into your AI client's config files.\n\n```bash\nnpx dataqueue-cli install-rules\n```\n\nThe installer prompts you to choose your AI client and writes rules to the appropriate location:\n\n| Client | Installs to |\n| -------------- | --------------------------------- |\n| Cursor | `.cursor/rules/dataqueue-*.mdc` |\n| Claude Code | `CLAUDE.md` |\n| AGENTS.md | `AGENTS.md` |\n| GitHub Copilot | `.github/copilot-instructions.md` |\n| Windsurf | `CONVENTIONS.md` |\n\n### 3. Install MCP Server\n\nGive your AI assistant direct access to DataQueue documentation — search docs, fetch specific pages, and list all available topics.\n\n```bash\nnpx dataqueue-cli install-mcp\n```\n\nThe installer prompts you to choose your AI client and writes the MCP config to the appropriate location. Currently supported clients:\n\n| Client | Installs to |\n| ----------------- | ------------------------------------- |\n| Cursor | `.cursor/mcp.json` |\n| Claude Code | `.mcp.json` |\n| VS Code (Copilot) | `.vscode/mcp.json` |\n| Windsurf | `~/.codeium/windsurf/mcp_config.json` |\n\nThe MCP server runs via `npx dataqueue-cli mcp` and communicates over stdio. 
It exposes three tools:\n\n| Tool | Description |\n| ---------------- | ---------------------------------------- |\n| `search-docs` | Full-text search across all doc pages |\n| `get-doc-page` | Fetch a specific doc page by slug |\n| `list-doc-pages` | List all available doc pages with titles |\n\n## Skills vs Agent Rules vs MCP\n\n| | **Skills** | **Agent Rules** | **MCP Server** |\n| :---------------- | :----------------------------------- | :------------------------------------- | :------------------------------------- |\n| **What it does** | Drops skill files into your project | Installs rule sets into client config | Runs a live server your AI connects to |\n| **Installs to** | `.cursor/skills/`, `.claude/skills/` | `.cursor/rules/`, `CLAUDE.md`, etc. | `.cursor/mcp.json`, `.mcp.json`, etc. |\n| **Best for** | Teaching patterns and best practices | Comprehensive code generation guidance | Live documentation search |\n| **Works offline** | Yes | Yes | Yes (runs locally) |\n\n**Recommendation:** Install all three. Skills and Agent Rules teach your AI _how_ to write code. 
The MCP Server lets it _look up_ the docs when it needs specifics.\n\n## llms.txt\n\nWe publish machine-readable documentation for LLM consumption:\n\n- [docs.dataqueue.dev/llms.txt](https://docs.dataqueue.dev/llms.txt) — concise overview\n- [docs.dataqueue.dev/llms-full.txt](https://docs.dataqueue.dev/llms-full.txt) — full documentation\n\nThese follow the [llms.txt standard](https://llmstxt.org) and can be fed directly into any LLM context window.\n\n## Project-Level Context Snippet\n\nIf you prefer a lightweight approach, paste this snippet into a context file at the root of your project:\n\n| File | Read by |\n| :-------------------------------- | :---------------------------- |\n| `CLAUDE.md` | Claude Code |\n| `AGENTS.md` | OpenAI Codex, Jules, OpenCode |\n| `.cursor/rules/*.md` | Cursor |\n| `.github/copilot-instructions.md` | GitHub Copilot |\n| `CONVENTIONS.md` | Windsurf, Cline, and others |\n\n```markdown\n# DataQueue rules\n\n## Imports\n\nAlways import from `@nicnocquee/dataqueue`.\n\n## PayloadMap pattern\n\nDefine a type map of job types to payload shapes for full type safety:\n\n\\`\\`\\`ts\ntype JobPayloadMap = {\nsend_email: { to: string; subject: string; body: string };\n};\n\\`\\`\\`\n\n## Initialization (singleton)\n\nNever call initJobQueue per request — use a module-level singleton:\n\n\\`\\`\\`ts\nimport { initJobQueue } from '@nicnocquee/dataqueue';\nlet queue: ReturnType<typeof initJobQueue<JobPayloadMap>> | null = null;\nexport const getJobQueue = () => {\nif (!queue) {\nqueue = initJobQueue<JobPayloadMap>({\ndatabaseConfig: { connectionString: process.env.PG_DATAQUEUE_DATABASE },\n});\n}\nreturn queue;\n};\n\\`\\`\\`\n\n## Handler pattern\n\nType handlers as `JobHandlers<PayloadMap>` — TypeScript enforces a handler for every job type.\n\n## Processing\n\n- Serverless: `processor.start()` (one-shot)\n- Long-running: `processor.startInBackground()` + `stopAndDrain()` on SIGTERM\n\n## Common mistakes\n\n1. 
Creating initJobQueue per request (creates a DB pool each time)\n2. Missing handler for a job type (fails with NoHandler)\n3. Not checking signal.aborted in long handlers\n4. Forgetting reclaimStuckJobs() — crashed workers leave jobs stuck\n5. Skipping migrations (PostgreSQL requires `dataqueue-cli migrate`)\n```"
151
+ },
152
+ {
153
+ "slug": "usage/cancel-jobs",
154
+ "title": "Cancel Jobs",
155
+ "description": "",
156
+ "content": "You can cancel a job by its ID, but only if it is still pending (not yet started or scheduled for the future).\n\n```typescript title=\"@/app/api/cancel-job/route.ts\"\nimport { NextRequest, NextResponse } from 'next/server';\nimport { getJobQueue } from '@/lib/queue';\n\nexport async function POST(request: NextRequest) {\n try {\n const { jobId } = await request.json();const jobQueue = getJobQueue();\n await jobQueue.cancelJob(jobId);\n return NextResponse.json({ message: 'Job cancelled' });\n } catch (error) {\n console.error('Error cancelling job:', error);\n return NextResponse.json(\n { message: 'Failed to cancel job' },\n { status: 500 },\n );\n }\n}\n```\n\n### Cancel All Pending Jobs\n\nDataQueue also lets you cancel all pending jobs at once. This is useful if you want to stop all jobs that haven't started yet or are scheduled for the future.\n\n```typescript title=\"@/app/api/cancel-all-jobs/route.ts\"\nimport { NextRequest, NextResponse } from 'next/server';\nimport { getJobQueue } from '@/lib/queue';\n\nexport async function POST(request: NextRequest) {\n try {const jobQueue = getJobQueue();\n const cancelledCount = await jobQueue.cancelAllUpcomingJobs();\n return NextResponse.json({ message: `Cancelled ${cancelledCount} jobs` });\n } catch (error) {\n console.error('Error cancelling jobs:', error);\n return NextResponse.json(\n { message: 'Failed to cancel jobs' },\n { status: 500 },\n );\n }\n}\n```\n\n#### Cancel Jobs by Filter\n\nYou can also cancel only the pending jobs that match certain criteria:\n\n```typescript\n// Cancel only email jobs\nawait jobQueue.cancelAllUpcomingJobs({ jobType: 'email' });\n\n// Cancel only jobs with priority 2\nawait jobQueue.cancelAllUpcomingJobs({ priority: 2 });\n\n// Cancel only jobs scheduled for a specific time (exact match)\nconst runAt = new Date('2024-06-01T12:00:00Z');\nawait jobQueue.cancelAllUpcomingJobs({ runAt });\n\n// Cancel jobs scheduled after a certain time\nawait 
jobQueue.cancelAllUpcomingJobs({\n runAt: { gt: new Date('2024-06-01T12:00:00Z') },\n});\n\n// Cancel jobs scheduled on or after a certain time\nawait jobQueue.cancelAllUpcomingJobs({\n runAt: { gte: new Date('2024-06-01T12:00:00Z') },\n});\n\n// Cancel jobs scheduled before a certain time\nawait jobQueue.cancelAllUpcomingJobs({\n runAt: { lt: new Date('2024-06-01T12:00:00Z') },\n});\n\n// Cancel jobs scheduled on or before a certain time\nawait jobQueue.cancelAllUpcomingJobs({\n runAt: { lte: new Date('2024-06-01T12:00:00Z') },\n});\n\n// Cancel jobs scheduled exactly at a certain time\nawait jobQueue.cancelAllUpcomingJobs({\n runAt: { eq: new Date('2024-06-01T12:00:00Z') },\n});\n\n// Cancel jobs scheduled between two times (inclusive)\nawait jobQueue.cancelAllUpcomingJobs({\n runAt: {\n gte: new Date('2024-06-01T00:00:00Z'),\n lte: new Date('2024-06-01T23:59:59Z'),\n },\n});\n\n// Combine runAt with other filters\nawait jobQueue.cancelAllUpcomingJobs({\n jobType: 'email',\n runAt: { gt: new Date('2024-06-01T12:00:00Z') },\n});\n\n// Cancel all jobs with both 'welcome' and 'user' tags. The jobs can have other tags.\nawait jobQueue.cancelAllUpcomingJobs({\n tags: { values: ['welcome', 'user'], mode: 'all' },\n});\n\n// Cancel all jobs with any of the tags. The jobs can have other tags.\nawait jobQueue.cancelAllUpcomingJobs({\n tags: { values: ['foo', 'bar'], mode: 'any' },\n});\n\n// Cancel all jobs with exactly the given tags. 
The jobs cannot have other tags.\nawait jobQueue.cancelAllUpcomingJobs({\n tags: { values: ['foo', 'bar'], mode: 'exact' },\n});\n\n// Cancel all jobs with none of the given tags\nawait jobQueue.cancelAllUpcomingJobs({\n tags: { values: ['foo', 'bar'], mode: 'none' },\n});\n\n// Combine filters\nawait jobQueue.cancelAllUpcomingJobs({\n jobType: 'email',\n tags: { values: ['welcome', 'user'], mode: 'all' },\n runAt: { lt: new Date('2024-06-01T12:00:00Z') },\n});\n```\n\n**runAt filter details:**\n\n- You can pass a single `Date` for an exact match, or an object with any of the following keys:\n - `gt`: Greater than\n - `gte`: Greater than or equal to\n - `lt`: Less than\n - `lte`: Less than or equal to\n - `eq`: Equal to\n- All filters (`jobType`, `priority`, `runAt`, `tags`) can be combined for precise cancellation.\n\nThis will set the status of all jobs that are still pending (not yet started or scheduled for the future) and match the filters to `cancelled`."
157
+ },
158
+ {
159
+ "slug": "usage/cleanup-jobs",
160
+ "title": "Cleanup Jobs",
161
+ "description": "",
162
+ "content": "If you have a lot of jobs, you may want to clean up old ones—for example, keeping only jobs from the last 30 days. You can do this by calling the `cleanupOldJobs` method. The example below shows an API route (`/api/cron/cleanup`) that can be triggered by a cron job:\n\n```typescript title=\"@/app/api/cron/cleanup.ts\"\nimport { getJobQueue } from '@/lib/queue';\nimport { NextResponse } from 'next/server';\n\nexport async function GET(request: Request) {\n const authHeader = request.headers.get('authorization');\n if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {\n return NextResponse.json({ message: 'Unauthorized' }, { status: 401 });\n }\n\n try {const jobQueue = getJobQueue();\n\n // Clean up old jobs (keep only the last 30 days)\n const deleted = await jobQueue.cleanupOldJobs(30);\n console.log(`Deleted ${deleted} old jobs`);\n\n return NextResponse.json({\n message: 'Old jobs cleaned up',\n deleted,\n });\n } catch (error) {\n console.error('Error cleaning up jobs:', error);\n return NextResponse.json(\n { message: 'Failed to clean up jobs' },\n { status: 500 },\n );\n }\n}\n```\n\n#### Scheduling the Cleanup Job with Cron\n\nAdd the following to your `vercel.json` to call the cleanup route every day at midnight:\n\n```json title=\"vercel.json\"\n{\n \"crons\": [\n {\n \"path\": \"/api/cron/cleanup\",\n \"schedule\": \"0 0 * * *\"\n }\n ]\n}\n```"
163
+ },
164
+ {
165
+ "slug": "usage/cron-jobs",
166
+ "title": "Cron Jobs (Recurring Schedules)",
167
+ "description": "Define recurring jobs that automatically enqueue on a cron schedule.",
168
+ "content": "DataQueue supports recurring cron schedules. Define a schedule with a cron expression, and the processor will **automatically enqueue** job instances before each batch — no extra code required.\n\n## Add a Cron Schedule\n\n```typescript title=\"@/app/api/cron-schedules/route.ts\"\nimport { NextRequest, NextResponse } from 'next/server';\nimport { getJobQueue } from '@/lib/queue';\n\nexport async function POST(request: NextRequest) {\n const jobQueue = getJobQueue();const id = await jobQueue.addCronJob({\n scheduleName: 'daily-report', // must be unique!\n cronExpression: '0 9 * * *', // every day at 9:00 AM\n jobType: 'generate_report',\n payload: { reportId: 'daily', userId: 'system' },\n timezone: 'America/New_York', // default: 'UTC'\n });\n return NextResponse.json({ id });\n}\n```\n\n### Options\n\n| Option | Type | Default | Description |\n| ---------------- | ---------- | ---------- | -------------------------------------------------- |\n| `scheduleName` | `string` | _required_ | Unique name for the schedule |\n| `cronExpression` | `string` | _required_ | Standard 5-field cron expression |\n| `jobType` | `string` | _required_ | Job type from your PayloadMap |\n| `payload` | `object` | _required_ | Payload for each job instance |\n| `timezone` | `string` | `'UTC'` | IANA timezone for cron evaluation |\n| `allowOverlap` | `boolean` | `false` | Allow new instance while previous is still running |\n| `maxAttempts` | `number` | `3` | Max retry attempts per job instance |\n| `priority` | `number` | `0` | Priority for each job instance |\n| `timeoutMs` | `number` | — | Timeout per job instance |\n| `tags` | `string[]` | — | Tags for each job instance |\n\n## Automatic Enqueueing\n\nWhen you call `processor.start()` or `processor.startInBackground()`, DataQueue automatically checks all active cron schedules and enqueues jobs whose next run time has passed — **before** processing the batch.\n\n```typescript 
title=\"@/app/api/cron/process/route.ts\"\nimport { NextRequest, NextResponse } from 'next/server';\nimport { getJobQueue, jobHandlers } from '@/lib/queue';\n\nexport async function GET(request: NextRequest) {\n const authHeader = request.headers.get('authorization');\n if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {\n return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });\n }\n\n const jobQueue = getJobQueue();// Cron jobs are automatically enqueued before each batch\n const processor = jobQueue.createProcessor(jobHandlers, {\n batchSize: 10,\n concurrency: 3,\n });\n const processed = await processor.start();\n\n return NextResponse.json({ processed });\n}\n```\n\n### Vercel Cron Example\n\n```json title=\"vercel.json\"\n{\n \"crons\": [\n {\n \"path\": \"/api/cron/process\",\n \"schedule\": \"* * * * *\"\n }\n ]\n}\n```\n\n### Manual Trigger\n\nIf you need to enqueue due cron jobs outside the processor (e.g., in tests or one-off scripts), you can still call `enqueueDueCronJobs()` directly:\n\n```typescript\nconst enqueued = await jobQueue.enqueueDueCronJobs();\n```\n\n## Overlap Protection\n\nBy default, `allowOverlap` is `false`. 
This means if a previous job instance from the same schedule is still **pending**, **processing**, or **waiting**, a new instance will **not** be enqueued — even if the cron expression says it's time.\n\n```typescript\n// Allow overlapping instances (e.g., for idempotent jobs)\nawait jobQueue.addCronJob({\n scheduleName: 'heartbeat',\n cronExpression: '* * * * *',\n jobType: 'send_email',\n payload: { to: 'admin@example.com', subject: 'heartbeat', body: 'ping' },allowOverlap: true,\n});\n```\n\n## Manage Schedules\n\n### Pause and Resume\n\n```typescript\n// Pause — skipped during automatic enqueueing\nawait jobQueue.pauseCronJob(scheduleId);\n\n// Resume\nawait jobQueue.resumeCronJob(scheduleId);\n```\n\n### Edit a Schedule\n\n```typescript\nawait jobQueue.editCronJob(scheduleId, {\n cronExpression: '0 */2 * * *', // change to every 2 hours\n payload: { reportId: 'bi-hourly', userId: 'system' },\n});\n```\n\nWhen `cronExpression` or `timezone` changes, `nextRunAt` is automatically recalculated.\n\n### Remove a Schedule\n\n```typescript\n// Deletes the schedule definition. Already-enqueued jobs are not cancelled.\nawait jobQueue.removeCronJob(scheduleId);\n```\n\n### List and Query\n\n```typescript\n// List all schedules\nconst all = await jobQueue.listCronJobs();\n\n// List only active / paused\nconst active = await jobQueue.listCronJobs('active');\nconst paused = await jobQueue.listCronJobs('paused');\n\n// Get by ID or name\nconst byId = await jobQueue.getCronJob(id);\nconst byName = await jobQueue.getCronJobByName('daily-report');\n```\n\n## Database Migration\n\nThe cron feature requires the `cron_schedules` table. Run the DataQueue migrations to create it:\n\n```bash\nnpx dataqueue-cli migrate up\n```\n\nIf you're already using DataQueue, just run migrations again — the new table will be added alongside existing ones."
169
+ },
170
+ {
171
+ "slug": "usage/dashboard",
172
+ "title": "Dashboard",
173
+ "description": "Plug-and-play admin dashboard for monitoring and managing jobs",
174
+ "content": "The `@nicnocquee/dataqueue-dashboard` package provides a self-contained admin dashboard that you can add to any Next.js application with a single file. It lets you view jobs, inspect details, manually trigger processing, and cancel or retry jobs.\n\n## Installation\n\n```bash\nnpm install @nicnocquee/dataqueue-dashboard\n```\n\n## Setup (Next.js)\n\nCreate a single catch-all route file in your app:\n\n```typescript title=\"app/admin/dataqueue/[[...path]]/route.ts\"\nimport { createDataqueueDashboard } from '@nicnocquee/dataqueue-dashboard/next';\nimport { getJobQueue, jobHandlers } from '@/lib/queue';\n\nconst { GET, POST } = createDataqueueDashboard({\n jobQueue: getJobQueue(),\n jobHandlers,\n basePath: '/admin/dataqueue',\n});\n\nexport { GET, POST };\n```\n\nThat's it. Visit `/admin/dataqueue` to open the dashboard.\n\n> **Note:** The `basePath` must match the directory where you placed the route file. If\n you put it at `app/jobs/dashboard/[[...path]]/route.ts`, use `basePath:\n '/jobs/dashboard'`.\n\n## Features\n\n### Jobs List\n\nThe main page shows all jobs in a table with:\n\n- **Status filter tabs** — All, Pending, Processing, Completed, Failed, Cancelled, Waiting\n- **Pagination** — Navigate through pages of jobs\n- **Auto-refresh** — Toggle automatic polling every 3 seconds\n- **Inline actions** — Cancel pending/waiting jobs or retry failed/cancelled jobs directly from the table\n\n### Job Detail\n\nClick any job ID to see the full detail view:\n\n- **Properties** — Status, type, priority, attempts, all timestamps, tags, progress bar\n- **Payload** — Formatted JSON display of the job's payload\n- **Error History** — All errors with timestamps (if the job has failed)\n- **Step Data** — Completed step results for jobs using `ctx.run()` (if any)\n- **Events Timeline** — Chronological history of all job events\n\n### Process Jobs\n\nThe **Process Jobs** button in the header triggers one-shot job processing. 
It creates a temporary processor, runs a single batch, and returns the count of jobs processed. This is useful for:\n\n- Debugging job handlers during development\n- Manually processing jobs in environments without a background worker\n- Testing job behavior before deploying a cron-based processor\n\n## Configuration\n\n### DashboardConfig\n\n```typescript\ninterface DashboardConfig<PayloadMap> {\n /** The initialized JobQueue instance. */\n jobQueue: JobQueue<PayloadMap>;\n\n /** Job handlers used when triggering processing from the dashboard. */\n jobHandlers: JobHandlers<PayloadMap>;\n\n /** Base path where the dashboard is mounted (e.g., '/admin/dataqueue'). */\n basePath: string;\n\n /** Options for the processor when manually triggering processing. */\n processorOptions?: {\n batchSize?: number; // default: 10\n concurrency?: number;\n pollInterval?: number;\n onError?: (error: Error) => void;\n verbose?: boolean;\n jobType?: string | string[];\n };\n}\n```\n\n### Customizing the Processor\n\nPass `processorOptions` to control how jobs are processed when using the \"Process Jobs\" button:\n\n```typescript title=\"app/admin/dataqueue/[[...path]]/route.ts\"\nconst { GET, POST } = createDataqueueDashboard({\n jobQueue: getJobQueue(),\n jobHandlers,\n basePath: '/admin/dataqueue',\n processorOptions: {\n batchSize: 5,\n concurrency: 2,\n verbose: true,\n },\n});\n```\n\n## Protecting the Dashboard\n\nSince you own the route file, you can protect the dashboard with any authentication strategy your app already uses.\n\n### Using Next.js Middleware\n\n```typescript title=\"middleware.ts\"\nimport { NextResponse } from 'next/server';\nimport type { NextRequest } from 'next/server';\n\nexport function middleware(request: NextRequest) {\n if (request.nextUrl.pathname.startsWith('/admin/dataqueue')) {\n const session = request.cookies.get('session');\n if (!session) {\n return NextResponse.redirect(new URL('/login', request.url));\n }\n }\n return 
NextResponse.next();\n}\n```\n\n### Wrapping the Handler\n\n```typescript title=\"app/admin/dataqueue/[[...path]]/route.ts\"\nimport { createDataqueueDashboard } from '@nicnocquee/dataqueue-dashboard/next';\nimport { getJobQueue, jobHandlers } from '@/lib/queue';\nimport { auth } from '@/lib/auth';\n\nconst dashboard = createDataqueueDashboard({\n jobQueue: getJobQueue(),\n jobHandlers,\n basePath: '/admin/dataqueue',\n});\n\nexport async function GET(req: Request, ctx: any) {\n const session = await auth();\n if (!session?.user?.isAdmin) {\n return new Response('Unauthorized', { status: 401 });\n }\n return dashboard.GET(req, ctx);\n}\n\nexport async function POST(req: Request, ctx: any) {\n const session = await auth();\n if (!session?.user?.isAdmin) {\n return new Response('Unauthorized', { status: 401 });\n }\n return dashboard.POST(req, ctx);\n}\n```\n\n## API Endpoints\n\nThe dashboard exposes these API endpoints under the configured `basePath`:\n\n| Method | Path | Description |\n| ------ | ---------------------- | ------------------------------------------------------------------------ |\n| GET | `/` | Dashboard HTML page |\n| GET | `/api/jobs` | List jobs (supports `status`, `jobType`, `limit`, `offset` query params) |\n| GET | `/api/jobs/:id` | Get a single job |\n| GET | `/api/jobs/:id/events` | Get job event history |\n| POST | `/api/jobs/:id/cancel` | Cancel a pending or waiting job |\n| POST | `/api/jobs/:id/retry` | Retry a failed or cancelled job |\n| POST | `/api/process` | Trigger one-shot job processing |\n\n## Architecture\n\nThe package is designed with future framework support in mind. The core logic uses Web Standard `Request` and `Response` objects, making it framework-agnostic. 
The Next.js adapter is a thin wrapper that maps App Router conventions to the core handler:\n\n```\n@nicnocquee/dataqueue-dashboard\n├── core/ Framework-agnostic handlers (Web Request/Response)\n├── next.ts Next.js App Router adapter\n└── index.ts Type exports\n```\n\nAdding support for other frameworks (Express, Hono, etc.) would only require a new adapter file — the core handlers and dashboard UI remain unchanged."
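As a hypothetical illustration of that adapter split: the core only needs the request path relative to `basePath`, so an adapter can reduce to a tiny path-normalizing helper like the one below (invented for illustration, not the package's actual code):

```typescript
// Hypothetical helper an adapter might use: strip the configured
// basePath so the framework-agnostic core sees '/', '/api/jobs', etc.
const toDashboardPath = (pathname: string, basePath: string): string => {
  const rest = pathname.startsWith(basePath) ? pathname.slice(basePath.length) : pathname;
  return rest === '' ? '/' : rest;
};
```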
175
+ },
176
+ {
177
+ "slug": "usage/database-migration",
178
+ "title": "Database Migration",
179
+ "description": "",
180
+ "content": "> **Note:** Database migrations are **only required for the PostgreSQL backend**. If\n you're using the Redis backend, you can skip this page entirely -- Redis\n requires no schema setup.\n\nAfter installing the package, add the following script to your `package.json` to apply [the migrations](https://github.com/nicnocquee/dataqueue/tree/main/packages/dataqueue/migrations):\n\n```json title=\"package.json\"\n\"scripts\": {\n \"migrate-dataqueue\": \"dataqueue-cli migrate\"\n}\n```\n\nNext, run this command to apply the migrations:\n\n```bash\nnpm run migrate-dataqueue\n```\n\nThis will apply all the necessary schema migrations so your Postgres database is ready to use with DataQueue.\n\n> **Note:** **Make sure the `PG_DATAQUEUE_DATABASE` environment variable is set to your\n Postgres connection string.** The CLI uses this environment variable to\n connect to your database. For example:\n\n```dotenv\nPG_DATAQUEUE_DATABASE=postgresql://postgres:password@localhost:5432/my_database\n```\n\n\n> **Note:** **You must run these migrations before using the job queue.** For example, if\n you are deploying your app to Vercel, run this command before deploying in\n Vercel's pipeline. If you have used Prisma or other ORMs, you may be familiar\n with this process.\n\n### Using a custom .env file\n\nYou can use the `--envPath` option to specify a custom path to your environment file. For example:\n\n```bash\nnpm run migrate-dataqueue -- --envPath .env.local\n```\n\nThis will load environment variables from `.env.local` before running the migration.\n\n### Schema selection\n\nYou can explicitly set the schema for migrations using the `-s` or `--schema` CLI option. 
This option is passed directly to `node-pg-migrate` and will ensure the schema is created if it does not exist.\n\n**Example CLI usage with explicit schema:**\n\n```bash\nnpm run migrate-dataqueue -- --envPath .env.local --schema dataqueue\n```\n\n> **Note:** Specifying the schema is optional but **recommended** when you're using the\n same database as your main application. If you don't specify the schema, the\n CLI will use the default schema, which is `public`. If you use\n [Prisma](https://www.prisma.io) in that same schema, Prisma's migrations will\n fail because of the additional tables added by DataQueue.\n\n> **Note:** You have to use the `--schema` option even when `PG_DATAQUEUE_DATABASE`\n contains the schema name in `search_path`.\n\n### Other options\n\nYou can pass other options supported by `node-pg-migrate` to the migration command. For example:\n\n```bash\nnpm run migrate-dataqueue -- --envPath .env.local --schema dataqueue --verbose\n```\n\nFor more information, see the [node-pg-migrate documentation](https://salsita.github.io/node-pg-migrate/cli).\n\n### Running migrations with SSL and a custom CA\n\nMost managed Postgres providers (like DigitalOcean, Supabase, etc.) require SSL connections and provide a CA certificate (`.crt` file). You can use the CA certificate to validate the server's identity. To successfully run the migration with a custom CA, you must set the `NODE_EXTRA_CA_CERTS` environment variable to the path of your CA certificate. 
This tells Node.js to trust your provider's CA for outgoing TLS connections, including Postgres.\n\n```bash\nNODE_EXTRA_CA_CERTS=/absolute/path/to/ca.crt \\\nPG_DATAQUEUE_DATABASE=your_connection_string \\\nnpm run migrate-dataqueue\n```\n\n#### Migration without Certificate Validation\n\nFor convenience, you can run the migration without certificate validation by adding the `--no-reject-unauthorized` flag to the command.\n\n```bash\nnpm run migrate-dataqueue -- --no-reject-unauthorized\n```\n\n#### Using a CA certificate in environments where you cannot upload files\n\nIn some serverless or cloud environments (like Vercel, AWS Lambda, etc.), you cannot upload files directly, but you still need Node.js to trust your managed Postgres provider's CA certificate.\n\nIn this case, you can store the CA certificate as an environment variable and write it to a temporary file in your pipeline shell script before running the migration.\n\n1. **Store the PEM content as an environment variable**\n - Copy the full contents of your `.crt` file into a new environment variable, e.g. `PGSSLROOTCERT_CONTENT`.\n - Make sure your environment supports multi-line secrets.\n2. **Write the CA certificate to a file and set NODE_EXTRA_CA_CERTS in your pipeline script**\n\n```sh\n# Write the CA cert to a file\nprintf \"%s\" \"$PGSSLROOTCERT_CONTENT\" > /tmp/ca.crt\n# Set NODE_EXTRA_CA_CERTS and run the migration\nNODE_EXTRA_CA_CERTS=/tmp/ca.crt npm run migrate-dataqueue\n```"
181
+ },
182
+ {
183
+ "slug": "usage/edit-jobs",
184
+ "title": "Edit Jobs",
185
+ "description": "",
186
+ "content": "You can edit a pending job by its ID to update its properties before it is processed. Only jobs with status 'pending' can be edited. Attempting to edit a job with any other status (processing, completed, failed, cancelled) will silently fail.\n\n## Basic Usage\n\n```typescript title=\"@/app/api/edit-job/route.ts\"\nimport { NextRequest, NextResponse } from 'next/server';\nimport { getJobQueue } from '@/lib/queue';\n\nexport async function POST(request: NextRequest) {\n try {\n const { jobId, updates } = await request.json();\n const jobQueue = getJobQueue();\n await jobQueue.editJob(jobId, updates);\n return NextResponse.json({ message: 'Job updated' });\n } catch (error) {\n console.error('Error editing job:', error);\n return NextResponse.json(\n { message: 'Failed to edit job' },\n { status: 500 },\n );\n }\n}\n```\n\n## Editable Fields\n\nAll fields in `EditJobOptions` are optional; only the fields you provide will be updated. The following fields can be edited:\n\n- `payload` - The job payload data\n- `priority` - Job priority (higher runs first)\n- `maxAttempts` - Maximum number of attempts\n- `runAt` - When to run the job (Date or null)\n- `timeoutMs` - Timeout for the job in milliseconds\n- `tags` - Tags for grouping, searching, or batch operations\n\n**Note:** `jobType` cannot be changed. 
If you need to change the job type, you should cancel the job and create a new one.\n\n## Examples\n\n### Edit Payload\n\n```typescript\n// Update the payload of a pending job\nawait jobQueue.editJob(jobId, {\n payload: { to: 'newemail@example.com', subject: 'Updated Subject' },\n});\n```\n\n### Edit Priority\n\n```typescript\n// Increase the priority of a job\nawait jobQueue.editJob(jobId, {\n priority: 10,\n});\n```\n\n### Edit Scheduled Time\n\n```typescript\n// Reschedule a job to run in 1 hour\nawait jobQueue.editJob(jobId, {\n runAt: new Date(Date.now() + 60 * 60 * 1000),\n});\n\n// Schedule a job to run immediately (or as soon as possible)\nawait jobQueue.editJob(jobId, {\n runAt: null,\n});\n```\n\n### Edit Multiple Fields\n\n```typescript\n// Update multiple fields at once\nawait jobQueue.editJob(jobId, {\n payload: { to: 'updated@example.com', subject: 'New Subject' },\n priority: 5,\n maxAttempts: 10,\n timeoutMs: 30000,\n tags: ['urgent', 'priority'],\n});\n```\n\n### Partial Updates\n\n```typescript\n// Only update what you need - other fields remain unchanged\nawait jobQueue.editJob(jobId, {\n priority: 10,\n // payload, maxAttempts, runAt, timeoutMs, and tags remain unchanged\n});\n```\n\n### Clear Tags or Timeout\n\n```typescript\n// Remove tags by setting to undefined\nawait jobQueue.editJob(jobId, {\n tags: undefined,\n});\n\n// Remove timeout by setting to undefined\nawait jobQueue.editJob(jobId, {\n timeoutMs: undefined,\n});\n```\n\n## Batch Editing\n\nYou can edit multiple pending jobs at once using `editAllPendingJobs`. This is useful when you need to update many jobs that match certain criteria. 
The function returns the number of jobs that were edited.\n\n### Basic Batch Edit\n\n```typescript\n// Edit all pending jobs\nconst editedCount = await jobQueue.editAllPendingJobs(undefined, {\n priority: 10,\n});\nconsole.log(`Edited ${editedCount} jobs`);\n```\n\n### Filter by Job Type\n\n```typescript\n// Edit all pending email jobs\nconst editedCount = await jobQueue.editAllPendingJobs(\n { jobType: 'email' },\n {\n priority: 5,\n },\n);\n```\n\n### Filter by Priority\n\n```typescript\n// Edit all pending jobs with priority 1\nconst editedCount = await jobQueue.editAllPendingJobs(\n { priority: 1 },\n {\n priority: 5,\n },\n);\n```\n\n### Filter by Tags\n\n```typescript\n// Edit all pending jobs with 'urgent' tag\nconst editedCount = await jobQueue.editAllPendingJobs(\n { tags: { values: ['urgent'], mode: 'any' } },\n {\n priority: 10,\n },\n);\n```\n\n### Filter by Scheduled Time\n\n```typescript\n// Edit all pending jobs scheduled in the future\nconst editedCount = await jobQueue.editAllPendingJobs(\n { runAt: { gte: new Date() } },\n {\n priority: 10,\n },\n);\n\n// Edit all pending jobs scheduled before a specific date\nconst editedCount = await jobQueue.editAllPendingJobs(\n { runAt: { lt: new Date('2024-12-31') } },\n {\n priority: 5,\n },\n);\n```\n\n### Combined Filters\n\n```typescript\n// Edit all pending email jobs with 'urgent' tag\nconst editedCount = await jobQueue.editAllPendingJobs(\n {\n jobType: 'email',\n tags: { values: ['urgent'], mode: 'any' },\n },\n {\n priority: 10,\n maxAttempts: 5,\n },\n);\n```\n\n### Batch Edit Notes\n\n- Only pending jobs are edited. 
Jobs with other statuses (processing, completed, failed, cancelled) are not affected.\n- The function returns the number of jobs that were successfully edited.\n- Edit events are recorded for each affected job, just like single job edits.\n- If no fields are provided in the updates object, the function returns 0 and no jobs are modified.\n\n## When to Use Edit vs Cancel vs Retry\n\n- **Edit**: Use when you want to modify a pending job's properties before it runs\n- **Cancel**: Use when you want to completely remove a pending job from the queue\n- **Retry**: Use when you want to retry a failed job (sets status back to pending)\n\n## Error Handling\n\nThe `editJob` function silently fails if you try to edit a non-pending job. This means:\n\n- No error is thrown\n- The job remains unchanged\n- The operation completes successfully (but does nothing)\n\nTo check if an edit was successful, you can:\n\n```typescript\nconst job = await jobQueue.getJob(jobId);\nif (job?.status === 'pending') {\n // Job is still pending, edit might have succeeded\n // Check if the fields you wanted to update actually changed\n if (job.priority === newPriority) {\n console.log('Edit successful');\n }\n} else {\n console.log('Job is not pending, edit was ignored');\n}\n```\n\n## Event Tracking\n\nWhen a job is edited, an 'edited' event is recorded in the job's event history. The event metadata contains the fields that were updated:\n\n```typescript\nconst events = await jobQueue.getJobEvents(jobId);\nconst editEvent = events.find((e) => e.eventType === 'edited');\nif (editEvent) {\n console.log('Updated fields:', editEvent.metadata);\n // { payload: {...}, priority: 10, ... }\n}\n```\n\n## Best Practices\n\n1. 
**Check job status before editing**: If you're unsure whether a job is pending, check its status first:\n\n```typescript\nconst job = await jobQueue.getJob(jobId);\nif (job?.status === 'pending') {\n await jobQueue.editJob(jobId, updates);\n} else {\n console.log('Job is not pending, cannot edit');\n}\n```\n\n2. **Use partial updates**: Only update the fields you need to change. This is more efficient and reduces the chance of accidentally overwriting other fields.\n\n3. **Validate updates**: Ensure the updated values are valid for your job handlers. For example, if your handler expects a specific payload structure, make sure the updated payload matches.\n\n4. **Consider race conditions**: If a job might be picked up for processing while you're editing it, be aware that the edit might not take effect if the job transitions to 'processing' status between your check and the edit operation.\n\n5. **Monitor events**: Use job events to track when and what was edited for audit purposes."
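The silent-fail semantics can be modeled in memory. A hypothetical sketch (`editIfPending` is an invented name; the real gate is enforced by the backend):

```typescript
// Hypothetical in-memory model of editJob's status gate: only pending
// jobs accept updates; any other status is silently ignored.
type JobLite = { status: string; priority: number };

const editIfPending = (job: JobLite, updates: Partial<JobLite>): JobLite =>
  job.status === 'pending' ? { ...job, ...updates } : job;
```

This is exactly why the best practice above recommends checking the job's status (or verifying the fields afterwards) rather than assuming the edit took effect.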
187
+ },
188
+ {
189
+ "slug": "usage/failed-jobs",
190
+ "title": "Failed Jobs",
191
+ "description": "",
192
+ "content": "A job handler can fail for many reasons, such as a bug in the code or running out of resources.\n\nWhen a job fails, it will be marked as `failed` and retried up to `maxAttempts` times. You can set the `maxAttempts` value when adding the job to the queue.\n\nEach retry is scheduled after `2^attempts * 1 minute` from the previous attempt. You can view the error history for a job in its `errorHistory` field."
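As a sketch of the documented backoff formula, `2^attempts * 1 minute` (illustrative only, not the library's internal scheduler code):

```typescript
// Retry delay per the formula above: 2^attempts * 1 minute.
const retryDelayMs = (attempts: number): number => Math.pow(2, attempts) * 60_000;

// attempt 1 -> 2 minutes, attempt 2 -> 4 minutes, attempt 3 -> 8 minutes
```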
193
+ },
194
+ {
195
+ "slug": "usage/force-kill-timeout",
196
+ "title": "Force Kill on Timeout",
197
+ "description": "",
198
+ "content": "When you set `forceKillOnTimeout: true` on a job, the handler will be forcefully terminated (using Worker Threads) when the timeout is reached, rather than just receiving an AbortSignal.\n\n## Runtime Requirements\n\n**⚠️ IMPORTANT**: `forceKillOnTimeout` requires **Node.js** and uses the `worker_threads` module. It will **not work** in Bun or other runtimes that don't support Node.js worker threads.\n\n- ✅ **Node.js**: Fully supported (Node.js v10.5.0+)\n- ❌ **Bun**: Not supported - use `forceKillOnTimeout: false` (default) and ensure your handler checks `signal.aborted`\n\nIf you're using Bun or another runtime without worker thread support, use the default graceful shutdown approach (`forceKillOnTimeout: false`) and make sure your handlers check `signal.aborted` to exit gracefully when timed out.\n\n## Handler Serialization Requirements\n\n**IMPORTANT**: When using `forceKillOnTimeout`, your handler must be **serializable**. This means the handler function can be converted to a string and executed in a separate worker thread.\n\n### ✅ Serializable Handlers\n\nThese handlers will work with `forceKillOnTimeout`:\n\n```typescript\n// Standalone function\nconst handler = async (payload, signal) => {\n await doSomething(payload);\n};\n\n// Function that imports dependencies inside\nconst handler = async (payload, signal) => {\n const { api } = await import('./api');\n await api.call(payload);\n};\n\n// Function with local variables\nconst handler = async (payload, signal) => {\n const localVar = 'value';\n await process(payload, localVar);\n};\n```\n\n### ❌ Non-Serializable Handlers\n\nThese handlers will **NOT** work with `forceKillOnTimeout`:\n\n```typescript\n// ❌ Closure over external variable\nconst db = getDatabase();\nconst handler = async (payload, signal) => {\n await db.query(payload); // 'db' is captured from closure\n};\n\n// ❌ Uses 'this' context\nclass MyHandler {\n async handle(payload, signal) {\n await this.doSomething(payload); // 
'this' won't work\n }\n}\n\n// ❌ Closure over imported module\nimport { someService } from './services';\nconst handler = async (payload, signal) => {\n await someService.process(payload); // 'someService' is from closure\n};\n```\n\n## Validating Handler Serialization\n\nYou can validate that your handlers are serializable before using them:\n\n```typescript\nimport {\n validateHandlerSerializable,\n testHandlerSerialization,\n} from '@nicnocquee/dataqueue';\n\nconst handler = async (payload, signal) => {\n await doSomething(payload);\n};\n\n// Quick validation (synchronous)\nconst result = validateHandlerSerializable(handler, 'myJob');\nif (!result.isSerializable) {\n console.error('Handler is not serializable:', result.error);\n}\n\n// Thorough test (asynchronous, actually tries to serialize)\nconst testResult = await testHandlerSerialization(handler, 'myJob');\nif (!testResult.isSerializable) {\n console.error('Handler failed serialization test:', testResult.error);\n}\n```\n\n## Limitations\n\n- **`prolong` and `onTimeout` are not supported** with `forceKillOnTimeout: true`. Because the handler runs in a separate Worker Thread, the `JobContext` methods (`prolong` and `onTimeout`) are no-ops in force-kill mode. If you need to extend timeouts dynamically, use the default graceful shutdown (`forceKillOnTimeout: false`) instead. See [Job Timeout](/usage/job-timeout) for details on extending timeouts.\n\n## Best Practices\n\n1. **Use standalone functions**: Define handlers as standalone functions, not closures\n2. **Import dependencies inside**: If you need external dependencies, import them inside the handler function\n3. **Avoid 'this' context**: Don't use class methods as handlers unless they're bound\n4. **Test early**: Use `validateHandlerSerializable` during development to catch issues early\n5. 
**When in doubt, use graceful shutdown**: If your handler can't be serialized, use `forceKillOnTimeout: false` (default) and ensure your handler checks `signal.aborted`\n6. **Use `prolong`/`onTimeout` instead**: If your main concern is jobs that are still working but slow, consider using `prolong` or `onTimeout` (with `forceKillOnTimeout: false`) instead of forcefully terminating\n\n## Example: Converting a Non-Serializable Handler\n\n**Before** (not serializable):\n\n```typescript\nimport { db } from './db';\n\nexport const jobHandlers = {\n processData: async (payload, signal) => {\n // ❌ 'db' is captured from closure\n await db.query('SELECT * FROM data WHERE id = $1', [payload.id]);\n },\n};\n```\n\n**After** (serializable):\n\n```typescript\nexport const jobHandlers = {\n processData: async (payload, signal) => {\n // ✅ Import inside the handler\n const { db } = await import('./db');\n await db.query('SELECT * FROM data WHERE id = $1', [payload.id]);\n },\n};\n```\n\n## Runtime Validation\n\nThe library automatically validates handlers when `forceKillOnTimeout` is enabled. If a handler cannot be serialized, you'll get a clear error message:\n\n```\nHandler for job type \"myJob\" uses 'this' context which cannot be serialized.\nUse a regular function or avoid 'this' references when forceKillOnTimeout is enabled.\n```\n\nThis validation happens when the job is processed, so you'll catch serialization issues early in development."
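For the graceful path, the key habit is checking `signal.aborted` between units of work. A minimal, self-contained sketch (the chunk loop and names are illustrative, not part of the library):

```typescript
// A handler body that cooperates with graceful timeout by checking
// signal.aborted between units of work and exiting promptly.
const processChunks = async (chunks: string[], signal: AbortSignal): Promise<number> => {
  let done = 0;
  for (const _chunk of chunks) {
    if (signal.aborted) break; // timed out: stop instead of being force-killed
    done += 1; // stand-in for real work on the chunk
  }
  return done;
};
```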
199
+ },
200
+ {
201
+ "slug": "usage/get-jobs",
202
+ "title": "Get Jobs",
203
+ "description": "",
204
+ "content": "To get a job by its ID:\n\n```typescript\nconst job = await jobQueue.getJob(jobId);\n```\n\nTo get all jobs:\n\n```typescript\nconst jobs = await jobQueue.getAllJobs(limit, offset);\n```\n\nTo get jobs by status:\n\n```typescript\nconst jobs = await jobQueue.getJobsByStatus(status, limit, offset);\n```\n\n## Get Jobs by Tags\n\nYou can get jobs by their tags using the `getJobsByTags` method:\n\n```typescript\nconst jobs = await jobQueue.getJobsByTags(['welcome', 'user'], 'all', 10, 0);\n```\n\n- The first argument is an array of tags to match.\n- The second argument is the tag query mode. See [Tags](/api/tags) for more details.\n- The third and fourth arguments are optional for pagination.\n\n## Get Jobs by Filter\n\nYou can retrieve jobs using multiple filters with the `getJobs` method:\n\n```typescript\nconst jobs = await jobQueue.getJobs(\n {\n jobType: 'email',\n priority: 2,\n runAt: { gte: new Date('2024-01-01'), lt: new Date('2024-02-01') },\n tags: { values: ['welcome', 'user'], mode: 'all' },\n },\n 10,\n 0,\n);\n```\n\n- The first argument is an optional filter object. You can filter by:\n - `jobType`: The job type (string).\n - `priority`: The job priority (number).\n - `runAt`: The scheduled time. You can use a `Date` for exact match, or an object with `gt`, `gte`, `lt`, `lte`, or `eq` for range queries.\n - `tags`: An object with `values` (array of tags) and `mode` (see [Tags](/api/tags)).\n- The second and third arguments are optional for pagination (`limit`, `offset`).\n\nYou can combine any of these filters. If no filters are provided, all jobs are returned (with pagination if specified)."
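The `'all'` vs `'any'` tag modes can be sketched as a predicate. This is a hypothetical helper showing the semantics; the actual filtering happens in the database:

```typescript
// 'all': the job must carry every listed tag; 'any': at least one of them.
const matchesTags = (jobTags: string[], values: string[], mode: 'all' | 'any'): boolean =>
  mode === 'all'
    ? values.every((t) => jobTags.includes(t))
    : values.some((t) => jobTags.includes(t));
```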
205
+ },
206
+ {
207
+ "slug": "usage/init-queue",
208
+ "title": "Initialize Queue",
209
+ "description": "",
210
+ "content": "After defining your job types, payloads, and handlers, you need to initialize the job queue, which sets up the connection to your database backend.\n\n## PostgreSQL\n\n```typescript title=\"@lib/queue.ts\"\nimport { initJobQueue } from '@nicnocquee/dataqueue';\nimport { type JobPayloadMap } from './types/job-payload-map';\n\nlet jobQueue: ReturnType<typeof initJobQueue<JobPayloadMap>> | null = null;\n\nexport const getJobQueue = () => {\n if (!jobQueue) {\n jobQueue = initJobQueue<JobPayloadMap>({\n databaseConfig: {\n connectionString: process.env.PG_DATAQUEUE_DATABASE, // Set this in your environment\n },\n verbose: process.env.NODE_ENV === 'development',\n });\n }\n return jobQueue;\n};\n```\n\n> **Note:** The value of `connectionString` must be a [valid Postgres connection\n string](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING-URIS).\n For example:\n\n```dotenv\nPG_DATAQUEUE_DATABASE=postgresql://postgres:password@localhost:5432/my_database?search_path=my_schema\n```\n\n\n## Redis\n\nTo use Redis as the backend, set `backend: 'redis'` and provide `redisConfig` instead of `databaseConfig`:\n\n```typescript title=\"@lib/queue.ts\"\nimport { initJobQueue } from '@nicnocquee/dataqueue';\nimport { type JobPayloadMap } from './types/job-payload-map';\n\nlet jobQueue: ReturnType<typeof initJobQueue<JobPayloadMap>> | null = null;\n\nexport const getJobQueue = () => {\n if (!jobQueue) {\n jobQueue = initJobQueue<JobPayloadMap>({\n backend: 'redis',\n redisConfig: {\n url: process.env.REDIS_URL, // e.g. 
redis://localhost:6379\n },\n verbose: process.env.NODE_ENV === 'development',\n });\n }\n return jobQueue;\n};\n```\n\nYou can also connect using individual connection options instead of a URL:\n\n```typescript title=\"@lib/queue.ts\"\njobQueue = initJobQueue<JobPayloadMap>({\n backend: 'redis',\n redisConfig: {\n host: 'localhost',\n port: 6379,\n password: process.env.REDIS_PASSWORD,\n db: 0,\n keyPrefix: 'myapp:', // Optional, defaults to 'dq:'\n },\n verbose: process.env.NODE_ENV === 'development',\n});\n```\n\n> **Note:** The `keyPrefix` option lets you namespace all Redis keys. This is useful when\n sharing a Redis instance between multiple applications or multiple queues. The\n default prefix is `dq:`.\n\n---\n\n## Using the Queue\n\nOnce initialized, you use the queue instance identically regardless of backend. The API is the same for both PostgreSQL and Redis.\n\n```typescript title=\"@/app/actions/send-email.ts\"\nimport { getJobQueue } from '@/lib/queue';\n\nconst sendEmail = async () => {\n const jobQueue = getJobQueue();\n await jobQueue.addJob({\n jobType: 'send_email',\n payload: {\n to: 'test@example.com',\n subject: 'Hello',\n body: 'Hello, world!',\n },\n });\n};\n```\n\n---\n\n## SSL Configuration (PostgreSQL)\n\nMost managed Postgres providers (like DigitalOcean, Supabase, etc.) require SSL connections and use their own CA certificate (.crt file) to sign the server's certificate. To securely verify the server's identity, you must configure your client to trust this CA certificate.\n\nYou can configure SSL for your database connection in several ways, depending on your environment and security requirements.\n\n### Using PEM Strings from Environment Variables\n\nThis is ideal for serverless environments where you cannot mount files. 
Store your CA certificate, and optionally client certificate and key, as environment variables then pass them to the `ssl` property of the `databaseConfig` object.\n\n```typescript title=\"@lib/queue.ts\"\nimport { initJobQueue } from '@nicnocquee/dataqueue';\nimport { type JobPayloadMap } from './types/job-payload-map';\n\nlet jobQueue: ReturnType<typeof initJobQueue<JobPayloadMap>> | null = null;\n\nexport const getJobQueue = () => {\n if (!jobQueue) {\n jobQueue = initJobQueue<JobPayloadMap>({\n databaseConfig: {\n connectionString: process.env.PG_DATAQUEUE_DATABASE, // Set this in your environment\n ssl: {\n ca: process.env.PGSSLROOTCERT, // PEM string: the content of your .crt file\n cert: process.env.PGSSLCERT, // PEM string (optional, for client authentication)\n key: process.env.PGSSLKEY, // PEM string (optional, for client authentication)\n rejectUnauthorized: true, // Always true for CA-signed certs\n },\n },\n verbose: process.env.NODE_ENV === 'development',\n });\n }\n return jobQueue;\n};\n```\n\n> **Note:** When using a custom CA certificate and `connectionString`, you must remove the\n `sslmode` parameter from the connection string. Otherwise, the connection will\n fail.\n\n### Using File Paths\n\nIf you have the CA certificate, client certificate, or key on disk, provide their absolute paths using the `file://` prefix. 
Only values starting with `file://` will be loaded from the file system; all others are treated as PEM strings.\n\n```typescript title=\"@lib/queue.ts\"\nimport { initJobQueue } from '@nicnocquee/dataqueue';\nimport { type JobPayloadMap } from './types/job-payload-map';\n\nlet jobQueue: ReturnType<typeof initJobQueue<JobPayloadMap>> | null = null;\n\nexport const getJobQueue = () => {\n if (!jobQueue) {\n jobQueue = initJobQueue<JobPayloadMap>({\n databaseConfig: {\n connectionString: process.env.PG_DATAQUEUE_DATABASE,\n ssl: {\n ca: 'file:///absolute/path/to/ca.crt', // Path to your provider's CA cert\n cert: 'file:///absolute/path/to/client.crt', // optional, for client authentication\n key: 'file:///absolute/path/to/client.key', // optional, for client authentication\n rejectUnauthorized: true,\n },\n },\n verbose: process.env.NODE_ENV === 'development',\n });\n }\n return jobQueue;\n};\n```\n\n> **Note:** When using a custom CA certificate and `connectionString`, you must remove the\n `sslmode` parameter from the connection string. 
Otherwise, the connection will\n fail.\n\n### Skipping Certificate Validation\n\nFor convenience, you can skip certificate validation (not recommended for production) by setting `rejectUnauthorized` to `false` without providing a custom CA certificate.\n\n```typescript title=\"@lib/queue.ts\"\nimport { initJobQueue } from '@nicnocquee/dataqueue';\nimport { type JobPayloadMap } from './types/job-payload-map';\n\nlet jobQueue: ReturnType<typeof initJobQueue<JobPayloadMap>> | null = null;\n\nexport const getJobQueue = () => {\n if (!jobQueue) {\n jobQueue = initJobQueue<JobPayloadMap>({\n databaseConfig: {\n connectionString: process.env.PG_DATAQUEUE_DATABASE,\n ssl: {\n rejectUnauthorized: false,\n },\n },\n verbose: process.env.NODE_ENV === 'development',\n });\n }\n return jobQueue;\n};\n```\n\n> **Note:** When using `rejectUnauthorized: false` and `connectionString`, you must remove\n the `sslmode` parameter from the connection string. Otherwise, the connection\n will fail.\n\n---\n\n## TLS Configuration (Redis)\n\nIf your Redis server requires TLS (common with managed services like AWS ElastiCache, Redis Cloud, etc.), provide TLS options in the `redisConfig`:\n\n```typescript title=\"@lib/queue.ts\"\njobQueue = initJobQueue<JobPayloadMap>({\n backend: 'redis',\n redisConfig: {\n url: process.env.REDIS_URL,\n tls: {\n ca: process.env.REDIS_CA_CERT, // PEM string\n rejectUnauthorized: true,\n },\n },\n});\n```"
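The `file://` convention for SSL values can be sketched like this (an assumption based on the description above, not the library's actual loader):

```typescript
import { readFileSync } from 'node:fs';

// Values starting with 'file://' are read from disk; anything else is
// treated as an inline PEM string (hypothetical sketch of the rule).
const resolvePem = (value: string): string =>
  value.startsWith('file://')
    ? readFileSync(value.slice('file://'.length), 'utf8')
    : value;
```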
211
+ },
212
+ {
213
+ "slug": "usage/job-events",
214
+ "title": "Job Events",
215
+ "description": "",
216
+ "content": "DataQueue keeps track of every change in a job's status. You can use `getJobEvents` to see all the events for a job, such as when it was added, started processing, completed, failed, cancelled, retried, edited, or prolonged.\n\n```typescript tab=\"Code\"\nconst events = await jobQueue.getJobEvents(jobId);\nconsole.log(events);\n```\n\n```json tab=\"Output\"\n[\n {\n \"id\": 1,\n \"jobId\": 1,\n \"eventType\": \"processing\",\n \"createdAt\": \"2024-06-01T12:00:00Z\",\n \"metadata\": \"\"\n }\n]\n```"
217
+ },
218
+ {
219
+ "slug": "usage/job-handlers",
220
+ "title": "Job Handlers",
221
+ "description": "",
222
+ "content": "The first thing you need to do is define your job types and their corresponding payload types. A payload is the data passed to the job handler. A job handler is a function that runs when a job is processed.\n\n### Define Job Types and Payloads\n\nJob types and their payloads are specific to your app. You can define them in any file. The important thing is that they are an object type, where the keys are the job types and the values are the payload types. In this example, `send_email`, `generate_report`, and `generate_image` are the job types, and their values are the payload types.\n\n```typescript title=\"@lib/types/job-payload-map.ts\"\n// Define the job payload map for this app.\n// This ensures that the job payload is typed correctly when adding jobs.\n// The keys are the job types, and the values are the payload types.\nexport type JobPayloadMap = {\n send_email: {\n to: string;\n subject: string;\n body: string;\n };\n generate_report: {\n reportId: string;\n userId: string;\n };\n generate_image: {\n prompt: string;\n };\n};\n```\n\n### Define Job Handlers\n\nNext, define the job handlers by exporting a `JobHandlers` object that maps job types to handler functions. 
If you forget to add a handler for a job type, TypeScript will show an error.\n\n```typescript title=\"@lib/job-handlers.ts\"\nimport { sendEmail } from './services/email'; // Function to send the email\nimport { generateReport } from './services/generate-report'; // Function to generate the report\nimport { generateImageAi } from './services/generate-image'; // Function to generate the image\nimport { JobHandlers } from '@nicnocquee/dataqueue';\nimport { type JobPayloadMap } from './types/job-payload-map';\n\nexport const jobHandlers: JobHandlers<JobPayloadMap> = {\n send_email: async (payload) => {\n const { to, subject, body } = payload;\n await sendEmail(to, subject, body);\n },\n generate_report: async (payload) => {\n const { reportId, userId } = payload;\n await generateReport(reportId, userId);\n },\n generate_image: async (payload, signal) => {\n const { prompt } = payload;\n await generateImageAi(prompt, signal);\n },\n};\n```\n\nIn the example above, we define three job handlers: `send_email`, `generate_report`, and `generate_image`. Each handler is a function that takes a payload, an `AbortSignal`, and a `JobContext` as arguments. The `AbortSignal` is used to abort the job if it takes too long to complete. The `JobContext` provides methods to extend the job's timeout while it's running.\n\n### Job Handler Signature\n\nA job handler receives three arguments: the job payload, an `AbortSignal`, and a `JobContext`.\n\n```typescript\n(payload: Payload, signal: AbortSignal, ctx: JobContext) => Promise<void>;\n```\n\nYou can omit arguments you don't need. For example, if you only need the payload:\n\n```typescript\nconst handler = async (payload) => {\n // ...\n};\n```\n\n### JobContext\n\nThe third argument provides methods for timeout management and progress reporting:\n\n- `ctx.prolong(ms?)` — Proactively reset the timeout. If `ms` is provided, sets the deadline to `ms` milliseconds from now. If omitted, resets to the original `timeoutMs`.\n- `ctx.onTimeout(callback)` — Register a callback that fires when the timeout is about to hit, before the `AbortSignal` is triggered. 
Return a number (ms) to extend, or return nothing to let the timeout proceed.\n- `ctx.setProgress(percent)` — Report progress as a percentage (0–100). The value is persisted to the database and can be read by clients via `getJob()` or the React SDK's `useJob()` hook.\n\nSee [Job Timeout](/usage/job-timeout) for timeout examples and [Progress Tracking](/usage/progress-tracking) for progress reporting."
223
+ },
224
+ {
225
+ "slug": "usage/job-timeout",
226
+ "title": "Job Timeout",
227
+ "description": "",
228
+ "content": "When you add a job to the queue, you can set a timeout for it. If the job doesn't finish before the timeout, it will be marked as `failed` and may be retried. See [Failed Jobs](/usage/failed-jobs) for more information.\n\nWhen the timeout is reached, DataQueue does not actually stop the handler from running. You need to handle this in your handler by checking the `AbortSignal` at one or more points in your code. For example:\n\n```typescript title=\"@lib/job-handlers.ts\"\nconst handler = async (payload, signal) => {\n // Simulate work\n // Do something that may take a long time\n\n // Check if the job is aborted\n if (signal.aborted) {\n return;\n }\n\n // Do something else\n // Check again if the job is aborted\n if (signal.aborted) {\n return;\n }\n\n // ...rest of your logic\n};\n```\n\nIf the job times out, the signal will be aborted and your handler should exit early. If your handler does not check for `signal.aborted`, it will keep running in the background even after the job is marked as failed due to timeout. For best results, always make your handlers abortable if they might run for a long time.\n\n## Extending the Timeout\n\nSometimes a job takes longer than expected but is still making progress. 
Instead of letting it time out and fail, you can extend the timeout from inside the handler using two mechanisms: **prolong** (proactive) and **onTimeout** (reactive).\n\n### Prolong (proactive)\n\nCall `ctx.prolong()` at any point in your handler to reset the timeout deadline:\n\n```typescript title=\"@lib/job-handlers.ts\"\nconst handler = async (payload, signal, { prolong }) => {\n await doStep1(payload);\n\n // \"I know the next step is heavy, give me 60 more seconds\"\n prolong(60_000);\n await doHeavyStep2(payload);\n\n // Reset to the original timeout duration (heartbeat-style)\n prolong();\n await doStep3(payload);\n};\n```\n\n- `prolong(ms)` — sets the timeout deadline to `ms` milliseconds from now.\n- `prolong()` — resets the timeout deadline to the original `timeoutMs` from now.\n\n### onTimeout (reactive)\n\nRegister a callback that fires when the timeout is about to hit, **before** the `AbortSignal` is triggered. The callback can decide whether to extend or let the timeout proceed:\n\n```typescript title=\"@lib/job-handlers.ts\"\nconst handler = async (payload, signal, { onTimeout }) => {\n let progress = 0;\n\n onTimeout(() => {\n if (progress < 100) {\n return 30_000; // still working, give me 30 more seconds\n }\n // return nothing to let the timeout proceed\n });\n\n for (const chunk of payload.chunks) {\n await processChunk(chunk);\n progress += 10;\n }\n};\n```\n\n- If the callback returns a number > 0, the timeout is reset to that many milliseconds from now.\n- If the callback returns `undefined`, `null`, `0`, or a negative number, the timeout proceeds normally (signal is aborted, job fails).\n- The callback fires again each time a new deadline is reached, so the job can keep extending or finally let go.\n\n### Using Both Together\n\n`prolong` and `onTimeout` work together. Use `prolong` when you know upfront that a step will be heavy. 
Use `onTimeout` for a last-second decision when the deadline arrives.\n\n```typescript title=\"@lib/job-handlers.ts\"\nconst handler = async (payload, signal, { prolong, onTimeout }) => {\n // Reactive fallback: extend if still making progress\n let step = 0;\n onTimeout(() => {\n if (step < 3) return 30_000;\n });\n\n step = 1;\n await doStep1(payload);\n\n // Proactive: we know step 2 is heavy\n step = 2;\n prolong(120_000);\n await doHeavyStep2(payload);\n\n step = 3;\n await doStep3(payload);\n};\n```\n\n### Side Effects\n\nWhen either mechanism extends the timeout, DataQueue also updates `locked_at` in the database. This prevents [`reclaimStuckJobs`](/usage/reclaim-jobs) from accidentally reclaiming the job while it's still actively working. A `prolonged` event is also recorded in the job's event history.\n\nNote that `reclaimStuckJobs` is already aware of each job's `timeoutMs` — a job will not be reclaimed until the greater of `maxProcessingTimeMinutes` and the job's own `timeoutMs` has elapsed. `prolong` is still useful when you want to extend _beyond_ the original timeout, or as a heartbeat for jobs without a `timeoutMs`.\n\n### Limitations\n\n- Both `prolong` and `onTimeout` are no-ops if the job has no `timeoutMs` set.\n- Neither is supported with `forceKillOnTimeout: true` (worker thread mode). See [Force Kill on Timeout](/usage/force-kill-timeout) for details.\n\n## Force Kill on Timeout\n\nIf you need to forcefully terminate jobs that don't respond to the abort signal, you can use `forceKillOnTimeout: true`. This will run the handler in a Worker Thread and forcefully terminate it when the timeout is reached.\n\n**Warning**: `forceKillOnTimeout` requires **Node.js** and will **not work** in Bun or other runtimes without worker thread support. See [Force Kill on Timeout](/usage/force-kill-timeout) for details.\n\n**Important**: When using `forceKillOnTimeout`, your handler must be serializable. 
See [Force Kill on Timeout](/usage/force-kill-timeout) for details.\n\n```typescript\nawait queue.addJob({\n jobType: 'longRunningTask',\n payload: { data: '...' },\n timeoutMs: 5000,\n forceKillOnTimeout: true, // Forcefully terminate if timeout is reached\n});\n```"
229
+ },
230
+ {
231
+ "slug": "usage/long-running-server",
232
+ "title": "Long-Running Server",
233
+ "description": "",
234
+ "content": "The [Process Jobs](/usage/process-jobs) page covers processing jobs in a serverless environment using cron-triggered API routes. If you're running a long-lived server (Express, Fastify, plain Node.js, etc.), you can instead run the processor continuously in the background and handle lifecycle management yourself.\n\n## Starting the Processor in the Background\n\nUse `startInBackground()` to run the processor as a continuous polling loop. It will check for new jobs every `pollInterval` milliseconds (default: 5 seconds) and process them automatically.\n\n```typescript\nimport { initJobQueue } from '@nicnocquee/dataqueue';\nimport { jobHandlers } from './job-handlers';\n\nconst jobQueue = initJobQueue({\n databaseConfig: {\n connectionString: process.env.PG_DATAQUEUE_DATABASE,\n },\n});\n\nconst processor = jobQueue.createProcessor(jobHandlers, {\n workerId: `server-${process.pid}`,\n batchSize: 10,\n concurrency: 3,\n pollInterval: 5000, // check for new jobs every 5 seconds\n onError: (error) => {\n // Called when an unexpected error occurs during batch processing.\n // Use this to send errors to your monitoring service.\n console.error('Processor error:', error);\n },\n});\n\nprocessor.startInBackground();\n```\n\nWhen a full batch is returned (i.e., the number of processed jobs equals `batchSize`), the processor fetches the next batch as soon as the current one finishes, so it can drain a large backlog quickly. Once a batch returns fewer jobs than `batchSize`, it waits `pollInterval` before fetching the next batch.\n\n### Configuration Tips\n\n- **`pollInterval`** -- Lower values (e.g., `1000`) reduce latency for new jobs but increase database load. Higher values (e.g., `10000`) are gentler on the database but introduce more delay. 5 seconds is a good default.\n- **`concurrency`** -- Keep this proportional to your server's resources. 
If jobs call external APIs with rate limits, keep it low.\n- **`batchSize`** -- Larger batches reduce polling overhead but hold a database lock longer during claim. 10-20 is typical.\n- **`onError`** -- Always set this in production. Without it, errors default to `console.error` which is easy to miss.\n\n## Graceful Shutdown\n\nWhen your server receives a termination signal (e.g., `SIGTERM` from a container orchestrator), you should stop the processor and wait for in-flight jobs to finish before exiting. Use `stopAndDrain()` for this.\n\n```typescript\nasync function shutdown() {\n console.log('Shutting down...');\n\n // Stop polling and wait for the current batch to finish (up to 30 seconds)\n await processor.stopAndDrain(30000);\n\n // Close the database connection pool\n // PostgreSQL:\n jobQueue.getPool().end();\n // Redis:\n // jobQueue.getRedisClient().quit();\n\n console.log('Shutdown complete');\n process.exit(0);\n}\n\nprocess.on('SIGTERM', shutdown);\nprocess.on('SIGINT', shutdown);\n```\n\n`stopAndDrain()` accepts an optional timeout in milliseconds (default: 30000). If the current batch does not finish within that time, the promise resolves anyway so your process is not stuck indefinitely.\n\n> **Note:** Use `stopAndDrain()` instead of `stop()` for graceful shutdown. `stop()` halts\n the polling loop immediately without waiting for in-flight jobs, which can\n leave jobs stuck in the `processing` state until they are reclaimed.\n\n## Scheduling Maintenance Tasks\n\nIn a serverless setup, you use cron-triggered API routes for [cleanup](/usage/cleanup-jobs) and [reclaim](/usage/reclaim-jobs). 
In a long-running server, you can use `setInterval` instead.\n\n```typescript\n// Reclaim stuck jobs every 10 minutes\nconst reclaimInterval = setInterval(\n async () => {\n try {\n const reclaimed = await jobQueue.reclaimStuckJobs(10);\n if (reclaimed > 0) console.log(`Reclaimed ${reclaimed} stuck jobs`);\n } catch (error) {\n console.error('Reclaim error:', error);\n }\n },\n 10 * 60 * 1000,\n);\n\n// Clean up completed jobs older than 30 days, once per day\nconst cleanupInterval = setInterval(\n async () => {\n try {\n const deleted = await jobQueue.cleanupOldJobs(30);\n if (deleted > 0) console.log(`Cleaned up ${deleted} old jobs`);\n\n const deletedEvents = await jobQueue.cleanupOldJobEvents(30);\n if (deletedEvents > 0)\n console.log(`Cleaned up ${deletedEvents} old job events`);\n } catch (error) {\n console.error('Cleanup error:', error);\n }\n },\n 24 * 60 * 60 * 1000,\n);\n```\n\nMake sure to clear these intervals during shutdown:\n\n```typescript\nasync function shutdown() {\n clearInterval(reclaimInterval);\n clearInterval(cleanupInterval);\n\n await processor.stopAndDrain(30000);\n\n jobQueue.getPool().end();\n process.exit(0);\n}\n```\n\n> **Note:** If you use the [wait/token](/usage/wait) feature (PostgreSQL only), also call\n `expireTimedOutTokens()` on an interval to expire tokens that have passed\n their timeout.\n\n## Full Example\n\nHere is a complete Express server that ties everything together:\n\n```typescript title=\"server.ts\"\nimport express from 'express';\nimport { initJobQueue } from '@nicnocquee/dataqueue';\nimport { jobHandlers } from './job-handlers';\n\n// --- Initialize the queue ---\nconst jobQueue = initJobQueue({\n databaseConfig: {\n connectionString: process.env.PG_DATAQUEUE_DATABASE!,\n },\n});\n\n// --- Create and start the processor ---\nconst processor = jobQueue.createProcessor(jobHandlers, {\n workerId: `server-${process.pid}`,\n batchSize: 10,\n concurrency: 3,\n pollInterval: 5000,\n onError: (error) => {\n 
console.error('Processor error:', error);\n },\n});\n\nprocessor.startInBackground();\n\n// --- Schedule maintenance ---\nconst reclaimInterval = setInterval(\n async () => {\n try {\n await jobQueue.reclaimStuckJobs(10);\n } catch (e) {\n console.error('Reclaim error:', e);\n }\n },\n 10 * 60 * 1000,\n);\n\nconst cleanupInterval = setInterval(\n async () => {\n try {\n await jobQueue.cleanupOldJobs(30);\n await jobQueue.cleanupOldJobEvents(30);\n } catch (e) {\n console.error('Cleanup error:', e);\n }\n },\n 24 * 60 * 60 * 1000,\n);\n\n// --- Express app ---\nconst app = express();\napp.use(express.json());\n\napp.post('/jobs', async (req, res) => {\n const { jobType, payload } = req.body;\n const jobId = await jobQueue.addJob({ jobType, payload });\n res.json({ jobId });\n});\n\napp.get('/jobs/:id', async (req, res) => {\n const job = await jobQueue.getJob(Number(req.params.id));\n if (!job) return res.status(404).json({ error: 'Not found' });\n res.json(job);\n});\n\nconst server = app.listen(3000, () => {\n console.log('Server running on port 3000');\n});\n\n// --- Graceful shutdown ---\nasync function shutdown() {\n console.log('Shutting down gracefully...');\n\n // Stop accepting new HTTP connections\n server.close();\n\n // Clear maintenance intervals\n clearInterval(reclaimInterval);\n clearInterval(cleanupInterval);\n\n // Wait for in-flight jobs to finish\n await processor.stopAndDrain(30000);\n\n // Close the database pool\n jobQueue.getPool().end();\n\n console.log('Shutdown complete');\n process.exit(0);\n}\n\nprocess.on('SIGTERM', shutdown);\nprocess.on('SIGINT', shutdown);\n```"
235
+ },
236
+ {
237
+ "slug": "usage/process-jobs",
238
+ "title": "Process Jobs",
239
+ "description": "",
240
+ "content": "So far, we haven't actually performed any jobs—we've only added them to the queue. Now, let's process those jobs.\n\n> **Note:** This page covers processing jobs in a **serverless environment** using\n cron-triggered API routes. If you're running a long-lived server (Express,\n Fastify, etc.), see [Long-Running Server](/usage/long-running-server).\n\nIn a serverless environment, we can't have a long-running process that constantly monitors and processes the queue.\n\nInstead, we create an API endpoint that checks the queue and processes jobs in batches. This endpoint is then triggered by a cron job. For example, you can create an API endpoint at `app/api/cron/process` to process jobs in batches:\n\n```typescript title=\"@/app/api/cron/process.ts\"\nimport { jobHandlers } from '@/lib/job-handler';\nimport { getJobQueue } from '@/lib/queue';\nimport { NextResponse } from 'next/server';\n\nexport async function GET(request: Request) {\n // Secure the cron route: https://vercel.com/docs/cron-jobs/manage-cron-jobs#securing-cron-jobs\n const authHeader = request.headers.get('authorization');\n if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {\n return NextResponse.json({ message: 'Unauthorized' }, { status: 401 });\n }\n\n try {\n const jobQueue = getJobQueue();\n\n // Control how many jobs are processed in parallel per batch using the `concurrency` option.\n // For example, to process up to 3 jobs in parallel per batch:\n const processor = jobQueue.createProcessor(jobHandlers, {\n workerId: `cron-${Date.now()}`,\n batchSize: 10, // up to 10 jobs per batch\n concurrency: 3, // up to 3 jobs processed in parallel\n verbose: true,\n });\n\n const processed = await processor.start();\n\n return NextResponse.json({\n message: 'Job processing completed',\n processed,\n });\n } catch (error) {\n console.error('Error processing jobs:', error);\n return NextResponse.json(\n { message: 'Failed to process jobs' },\n { status: 500 },\n );\n }\n}\n```\n\nIn the 
example above, we use the `createProcessor` method to create a processor. When you call the processor's `start` function, it processes jobs in the queue up to the `batchSize` limit.\n\n### Batch Size\n\nServerless platforms like Vercel limit how long a function can run. If you set `batchSize` too high, the function might run too long and get killed. Choose a `batchSize` that fits your use case.\n\nYou can also process only certain job types by setting the `jobType` option. If a job type is more resource-intensive, use a lower `batchSize` for that type.\n\nFor example, you can define two endpoints: one for low-resource jobs and another for high-resource jobs, each with different `batchSize` and `concurrency` values.\n\n### Concurrency\n\nSome jobs are resource-intensive, like image processing, LLM calls, or calling a rate-limited external service. In these cases, set the `concurrency` option to control how many jobs run in parallel per batch.\n\nThe default is `3`. Set it to `1` to process jobs one at a time. Use a lower value to avoid exhausting resources in constrained environments.\n\n### Triggering the Processor via Cron\n\nDefining an endpoint isn't enough—you need to trigger it regularly. For example, use Vercel cron to trigger the endpoint every minute by adding this to your `vercel.json`:\n\n```json title=\"vercel.json\"\n{\n \"$schema\": \"https://openapi.vercel.sh/vercel.json\",\n \"crons\": [\n {\n \"path\": \"/api/cron/process\",\n \"schedule\": \"* * * * *\"\n }\n ]\n}\n```\n\nFor Vercel cron, set the `CRON_SECRET` environment variable, as it's sent in the `authorization` header. If you use a different cron service, set the `authorization` header to the value of `CRON_SECRET`:\n\n```\nAuthorization: Bearer <VALUE_OF_CRON_SECRET>\n```\n\nDuring development, you can create a small script to run the cron job continuously in the background. 
For example, you can create a `cron.sh` file like [this one](https://github.com/nicnocquee/dataqueue/blob/main/apps/demo/cron.sh), then add it to your `package.json` scripts:\n\n```json title=\"package.json\"\n{\n \"scripts\": {\n \"cron\": \"bash cron.sh\"\n }\n}\n```\n\nThen you can run the cron job with `pnpm cron`."
241
+ },
242
+ {
243
+ "slug": "usage/progress-tracking",
244
+ "title": "Progress Tracking",
245
+ "description": "Report and track job progress from handlers",
246
+ "content": "Jobs can report their progress as a percentage (0–100) while they run. This is useful for long-running tasks like file processing, data imports, or image generation where you want to show a progress bar or percentage to the user.\n\n## Reporting Progress from a Handler\n\nUse `ctx.setProgress(percent)` inside your job handler to report progress:\n\n```typescript title=\"@lib/job-handlers.ts\"\nimport { JobHandlers } from '@nicnocquee/dataqueue';\n\nexport const jobHandlers: JobHandlers<JobPayloadMap> = {\n generate_report: async (payload, signal, ctx) => {\n const chunks = await loadData(payload.reportId);\n\n for (let i = 0; i < chunks.length; i++) {\n if (signal.aborted) return;\n\n await processChunk(chunks[i]);\n\n // Report progress (0-100)\n await ctx.setProgress(Math.round(((i + 1) / chunks.length) * 100));\n }\n },\n};\n```\n\n### setProgress Rules\n\n- **Range**: The value must be between 0 and 100 (inclusive). Values outside this range throw an error.\n- **Rounding**: Fractional values are rounded to the nearest integer (`33.7` becomes `34`).\n- **Best-effort persistence**: Progress is written to the database but errors during the write do not kill the handler — processing continues.\n\n## Reading Progress\n\nProgress is stored in the `progress` field of the [JobRecord](/api/job-record):\n\n```typescript\nconst job = await jobQueue.getJob(jobId);\nconsole.log(job?.progress); // null | 0–100\n```\n\n- Before the handler calls `setProgress`, the value is `null`.\n- After the job completes, the last progress value is preserved (typically `100`).\n\n## Tracking Progress in React\n\nIf you're using the [React SDK](/usage/react-sdk), the `useJob` hook exposes `progress` directly:\n\n```tsx\nimport { useJob } from '@nicnocquee/dataqueue-react';\n\nfunction JobProgress({ jobId }: { jobId: number }) {\n const { status, progress } = useJob(jobId, {\n fetcher: (id) =>\n fetch(`/api/jobs/${id}`)\n .then((r) => r.json())\n .then((d) => d.job),\n });\n\n 
return (\n <div>\n <p>Status: {status}</p>\n <progress value={progress ?? 0} max={100} />\n <span>{progress ?? 0}%</span>\n </div>\n );\n}\n```\n\n## Database Migration\n\n> **Note:** If you're using the **PostgreSQL** backend, make sure to run the latest\n migrations to add the `progress` column. See [Database\n Migration](/usage/database-migration).\n\nThe Redis backend requires no migration — the `progress` field is stored automatically as part of the job hash."
247
+ },
248
+ {
249
+ "slug": "usage/quick-start",
250
+ "title": "Quick Start",
251
+ "description": "Get started with DataQueue",
252
+ "content": "In these docs, we'll use a Next.js App Router project deployed to Vercel as an example.\n\n## Next.js Shortcut\n\nIf you're using Next.js, you can scaffold everything — API routes, a job queue singleton, a cron script, and all dependencies — with a single command:\n\n```bash\nnpx dataqueue-cli init\n```\n\nThe command auto-detects your project structure (App Router vs Pages Router, `src/` directory vs root) and creates all the files you need. See the [`init` CLI reference](/cli/init) for full details.\n\nIf you prefer to set things up manually, follow the steps below.\n\n## PostgreSQL Backend\n\n1. [Run migrations before deploying your app](/usage/database-migration)\n2. [Define job handlers](/usage/job-handlers)\n3. [Initialize the job queue](/usage/init-queue)\n4. [Add a job](/usage/add-job)\n5. Create three API routes to [process jobs](/usage/process-jobs), [reclaim stuck jobs](/usage/reclaim-jobs), and [cleanup old jobs](/usage/cleanup-jobs)\n6. [Call those API routes periodically](/usage/process-jobs#triggering-the-processor-via-cron) via a cron service (like Vercel cron) or a small script like [this one](https://github.com/nicnocquee/dataqueue/blob/main/apps/demo/cron.sh) during development.\n\n## Redis Backend\n\n1. [Install `ioredis`](/intro/install)\n2. [Define job handlers](/usage/job-handlers)\n3. [Initialize the job queue with Redis config](/usage/init-queue#redis)\n4. [Add a job](/usage/add-job)\n5. Create three API routes to [process jobs](/usage/process-jobs), [reclaim stuck jobs](/usage/reclaim-jobs), and [cleanup old jobs](/usage/cleanup-jobs)\n6. [Call those API routes periodically](/usage/process-jobs#triggering-the-processor-via-cron) via a cron service (like Vercel cron) or a small script like [this one](https://github.com/nicnocquee/dataqueue/blob/main/apps/demo/cron.sh) during development.\n\n> **Note:** The Redis backend requires **no database migrations**. 
Just install `ioredis`,\n configure the connection, and you're ready to go.\n\n## Long-Running Server\n\nIf you're running a persistent server (Express, Fastify, plain Node.js, etc.) instead of a serverless environment, the setup is slightly different:\n\n1. [Run migrations](/usage/database-migration) (PostgreSQL) or [install `ioredis`](/intro/install) (Redis)\n2. [Define job handlers](/usage/job-handlers)\n3. [Initialize the job queue](/usage/init-queue)\n4. Start the processor in the background with `startInBackground()`\n5. Schedule maintenance tasks (`reclaimStuckJobs`, `cleanupOldJobs`) on intervals\n6. Handle `SIGTERM`/`SIGINT` for graceful shutdown with `stopAndDrain()`\n\nSee [Long-Running Server](/usage/long-running-server) for a complete walkthrough and full example."
253
+ },
254
+ {
255
+ "slug": "usage/react-sdk",
256
+ "title": "React SDK",
257
+ "description": "Subscribe to job status and progress from React",
258
+ "content": "The `@nicnocquee/dataqueue-react` package provides React hooks for subscribing to job updates. It uses polling to track a job's status and progress in real-time.\n\n## Installation\n\n```bash\nnpm install @nicnocquee/dataqueue-react\n```\n\n> **Note:** The React SDK requires React 18 or later.\n\n## Quick Start\n\nThe simplest way to use the SDK is with the `useJob` hook:\n\n```tsx title=\"components/JobTracker.tsx\"\n'use client';\n\nimport { useJob } from '@nicnocquee/dataqueue-react';\n\nfunction JobTracker({ jobId }: { jobId: number }) {\n const { status, progress, data, isLoading, error } = useJob(jobId, {\n fetcher: (id) =>\n fetch(`/api/jobs/${id}`)\n .then((r) => r.json())\n .then((d) => d.job),\n pollingInterval: 1000,\n });\n\n if (isLoading) return <p>Loading...</p>;\n if (error) return <p>Error: {error.message}</p>;\n\n return (\n <div>\n <p>Status: {status}</p>\n <progress value={progress ?? 0} max={100} />\n <span>{progress ?? 0}%</span>\n </div>\n );\n}\n```\n\n### API Route\n\nThe `fetcher` function should call an API route that returns the job data. 
Here's an example Next.js API route:\n\n```typescript title=\"app/api/jobs/[id]/route.ts\"\nimport { getJobQueue } from '@/lib/queue';\nimport { NextResponse } from 'next/server';\n\nexport async function GET(\n _request: Request,\n { params }: { params: Promise<{ id: string }> },\n) {\n const { id } = await params;\n const jobQueue = getJobQueue();\n const job = await jobQueue.getJob(Number(id));\n if (!job) {\n return NextResponse.json({ error: 'Job not found' }, { status: 404 });\n }\n return NextResponse.json({ job });\n}\n```\n\n## DataqueueProvider\n\nTo avoid passing the `fetcher` and `pollingInterval` to every `useJob` call, wrap your app (or a subtree) in a `DataqueueProvider`:\n\n```tsx title=\"app/providers.tsx\"\n'use client';\n\nimport { DataqueueProvider } from '@nicnocquee/dataqueue-react';\n\nconst fetcher = (id: number) =>\n fetch(`/api/jobs/${id}`)\n .then((r) => r.json())\n .then((d) => d.job);\n\nexport function Providers({ children }: { children: React.ReactNode }) {\n return (\n <DataqueueProvider fetcher={fetcher} pollingInterval={2000}>\n {children}\n </DataqueueProvider>\n );\n}\n```\n\nThen use `useJob` without repeating the config:\n\n```tsx\nconst { status, progress } = useJob(jobId);\n```\n\nOptions passed directly to `useJob` override the provider values.\n\n## useJob API\n\n```typescript\nconst result = useJob(jobId, options?);\n```\n\n### Parameters\n\n- `jobId`: _number | null | undefined_ — The job ID to subscribe to. 
Pass `null` or `undefined` to skip polling.\n- `options` _(optional)_:\n\n| Option | Type | Default | Description |\n| ----------------- | ---------------------------------- | ------------- | --------------------------------- |\n| `fetcher` | `(id: number) => Promise<JobData>` | from provider | Function that fetches a job by ID |\n| `pollingInterval` | `number` | `1000` | Milliseconds between polls |\n| `enabled` | `boolean` | `true` | Set to `false` to pause polling |\n| `onStatusChange` | `(newStatus, oldStatus) => void` | — | Called when status changes |\n| `onComplete` | `(job) => void` | — | Called when job completes |\n| `onFailed` | `(job) => void` | — | Called when job fails |\n\n### Return Value\n\n| Field | Type | Description |\n| ----------- | ------------------- | --------------------------------------------- |\n| `data` | `JobData \\| null` | Latest job data, or `null` before first fetch |\n| `status` | `JobStatus \\| null` | Current job status |\n| `progress` | `number \\| null` | Progress percentage (0–100) |\n| `isLoading` | `boolean` | `true` until the first fetch resolves |\n| `error` | `Error \\| null` | Last fetch error, if any |\n\n### Smart Polling\n\nThe hook automatically stops polling when the job reaches a **terminal status**: `completed`, `failed`, or `cancelled`. 
This avoids unnecessary network requests once the job is done.\n\n## Callbacks\n\nUse callbacks to react to job lifecycle events:\n\n```tsx\nuseJob(jobId, {\n fetcher,\n onStatusChange: (newStatus, oldStatus) => {\n console.log(`Job went from ${oldStatus} to ${newStatus}`);\n },\n onComplete: (job) => {\n toast.success('Job completed!');\n },\n onFailed: (job) => {\n toast.error('Job failed.');\n },\n});\n```\n\n## JobData Type\n\nThe `fetcher` should return an object matching the `JobData` interface:\n\n```typescript\ninterface JobData {\n id: number;\n status:\n | 'pending'\n | 'processing'\n | 'completed'\n | 'failed'\n | 'cancelled'\n | 'waiting';\n progress?: number | null;\n [key: string]: unknown;\n}\n```\n\nThe `id`, `status`, and optionally `progress` fields are required. Any additional fields from your API response are preserved in `data`."
259
+ },
260
+ {
261
+ "slug": "usage/reclaim-jobs",
262
+ "title": "Reclaim Jobs",
263
+ "description": "",
264
+ "content": "Sometimes, a job can get stuck in the `processing` state. This usually happens if the process is killed or an unhandled error occurs after the job status is updated, but before it is marked as `completed` or `failed`.\n\nTo recover stuck jobs, use the `reclaimStuckJobs` method. The example below shows how to create an API route (`/api/cron/reclaim`) that can be triggered by a cron job:\n\n```typescript title=\"@/app/api/cron/reclaim.ts\"\nimport { getJobQueue } from '@/lib/queue';\nimport { NextResponse } from 'next/server';\n\nexport async function GET(request: Request) {\n // Secure the cron route: https://vercel.com/docs/cron-jobs/manage-cron-jobs#securing-cron-jobs\n const authHeader = request.headers.get('authorization');\n if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {\n return NextResponse.json({ message: 'Unauthorized' }, { status: 401 });\n }\n\n try {\n const jobQueue = getJobQueue();\n\n // Reclaim jobs stuck for more than 10 minutes\n const reclaimed = await jobQueue.reclaimStuckJobs(10);\n console.log(`Reclaimed ${reclaimed} stuck jobs`);\n\n return NextResponse.json({\n message: 'Stuck jobs reclaimed',\n reclaimed,\n });\n } catch (error) {\n console.error('Error reclaiming jobs:', error);\n return NextResponse.json(\n { message: 'Failed to reclaim jobs' },\n { status: 500 },\n );\n }\n}\n```\n\n#### Per-Job Timeout Awareness\n\n`reclaimStuckJobs` respects each job's individual `timeoutMs`. If a job has a `timeoutMs` that is longer than `maxProcessingTimeMinutes`, it will not be reclaimed until its own timeout has elapsed. 
For example, if you call `reclaimStuckJobs(10)` and a job has `timeoutMs: 1800000` (30 minutes), that job will only be reclaimed after 30 minutes — not 10.\n\nJobs without a `timeoutMs` continue to use the global `maxProcessingTimeMinutes` threshold as before.\n\n#### Scheduling the Reclaim Job with Cron\n\nAdd the following to your `vercel.json` to call the cron route every 10 minutes:\n\n```json title=\"vercel.json\"\n{\n \"crons\": [\n {\n \"path\": \"/api/cron/reclaim\",\n \"schedule\": \"*/10 * * * *\"\n }\n ]\n}\n```"
265
+ },
266
+ {
267
+ "slug": "usage/scaling",
268
+ "title": "Scaling to Thousands of Jobs",
269
+ "description": "",
270
+ "content": "DataQueue is designed to handle high-volume workloads out of the box. This page covers how to tune your setup for thousands (or more) of concurrent jobs, whether you're using PostgreSQL or Redis.\n\n## How Throughput Works\n\nWhen a processor runs, it follows this cycle:\n\n1. **Claim** a batch of ready jobs from the database (atomically, using `FOR UPDATE SKIP LOCKED` in PostgreSQL or Lua scripts in Redis).\n2. **Process** up to `concurrency` jobs in parallel from that batch.\n3. **Repeat** immediately if the batch was full, or wait `pollInterval` before checking again.\n\nThe theoretical maximum throughput of a single processor instance is:\n\n```\nthroughput = batchSize / (avgJobDuration + pollInterval)\n```\n\nFor example, with `batchSize: 20`, `concurrency: 10`, `pollInterval: 2000ms`, and jobs averaging 500ms each, a single processor can handle roughly **8 jobs/second** (480/minute).\n\n> **Note:** When a full batch is returned, the processor immediately polls again without\n waiting. 
This means it can drain backlogs much faster than the formula\n suggests during peak load.\n\n## Tuning Processor Settings\n\n### Batch Size\n\n`batchSize` controls how many jobs are claimed per polling cycle.\n\n| Environment | Recommended | Why |\n| --------------------------- | ----------- | -------------------------------------- |\n| Serverless (Vercel, Lambda) | 1-10 | Functions have execution time limits |\n| Long-running server | 10-50 | Larger batches reduce polling overhead |\n| High-throughput worker | 50-100 | Maximizes throughput per poll cycle |\n\n### Concurrency\n\n`concurrency` controls how many jobs from the batch run in parallel.\n\n- **CPU-bound jobs** (image processing, compression): keep concurrency low (1-4) to avoid CPU saturation.\n- **IO-bound jobs** (API calls, email sending): higher concurrency (5-20) works well since jobs spend most time waiting.\n- **Rate-limited APIs**: match concurrency to the API's rate limit to avoid throttling.\n\n### Poll Interval\n\n`pollInterval` controls how often the processor checks for new jobs when idle.\n\n- **1000ms**: Low latency, higher database load. Good for real-time workloads.\n- **5000ms** (default): Balanced. Good for most use cases.\n- **10000-30000ms**: Gentle on the database. Use when latency tolerance is high.\n\n```typescript\nconst processor = jobQueue.createProcessor(jobHandlers, {\n batchSize: 30,\n concurrency: 10,\n pollInterval: 2000,\n});\n```\n\n## Horizontal Scaling with Multiple Workers\n\nDataQueue supports running **multiple processor instances** simultaneously -- on the same server, across multiple servers, or in separate containers. 
No coordination is needed between workers.\n\n### How It Works\n\n- Each processor gets a unique `workerId` (auto-generated, or set manually).\n- PostgreSQL uses `FOR UPDATE SKIP LOCKED` to ensure no two workers claim the same job.\n- Redis uses atomic Lua scripts for the same guarantee.\n- Workers can safely run on different machines pointing at the same database.\n\n### Example: Multiple Workers\n\n```typescript\n// Worker 1 (e.g., on server A)\nconst processor1 = jobQueue.createProcessor(jobHandlers, {\n workerId: 'worker-a',\n batchSize: 20,\n concurrency: 5,\n});\nprocessor1.startInBackground();\n\n// Worker 2 (e.g., on server B)\nconst processor2 = jobQueue.createProcessor(jobHandlers, {\n workerId: 'worker-b',\n batchSize: 20,\n concurrency: 5,\n});\nprocessor2.startInBackground();\n```\n\n### Specialized Workers\n\nUse the `jobType` filter to create workers dedicated to specific job types. This lets you scale different workloads independently:\n\n```typescript\n// Fast worker for lightweight jobs\nconst emailWorker = jobQueue.createProcessor(jobHandlers, {\n jobType: 'email',\n batchSize: 50,\n concurrency: 20,\n pollInterval: 1000,\n});\n\n// Slow worker for heavy jobs\nconst reportWorker = jobQueue.createProcessor(jobHandlers, {\n jobType: 'report',\n batchSize: 5,\n concurrency: 1,\n pollInterval: 5000,\n});\n```\n\n## PostgreSQL Scaling\n\n### Connection Pool\n\nEach processor instance uses database connections from the pool. The default `max` is 10, which works for a single processor. If you run multiple processors in the same process, increase the pool size:\n\n```typescript\nconst jobQueue = initJobQueue({\n databaseConfig: {\n connectionString: process.env.DATABASE_URL,\n max: 20, // increase for multiple processors\n },\n});\n```\n\n> **Note:** If you run processors on separate servers, each has its own pool. 
The total\n connections across all servers should stay within your database's\n `max_connections` setting (typically 100 for managed databases like Neon or\n Supabase).\n\n### Table Maintenance\n\nAs completed jobs accumulate, the `job_queue` table grows. This doesn't affect processing speed (claim queries use partial indexes that only cover active jobs), but it does increase storage and slow down full-table queries like `getJobs()`.\n\n**Run cleanup regularly:**\n\n```typescript\n// Delete completed jobs older than 30 days\nawait jobQueue.cleanupOldJobs(30);\n\n// Delete old job events\nawait jobQueue.cleanupOldJobEvents(30);\n```\n\nCleanup operations are batched internally (1000 rows at a time) so they won't lock the table or time out even with hundreds of thousands of old jobs.\n\n**Run reclaim regularly:**\n\n```typescript\n// Reclaim jobs stuck in 'processing' for more than 10 minutes\nawait jobQueue.reclaimStuckJobs(10);\n```\n\nThis recovers jobs from workers that crashed or timed out.\n\n### Monitoring Table Size\n\nYou can monitor the `job_queue` table size with this query:\n\n```sql\nSELECT\n pg_size_pretty(pg_total_relation_size('job_queue')) AS total_size,\n (SELECT count(*) FROM job_queue WHERE status = 'pending') AS pending,\n (SELECT count(*) FROM job_queue WHERE status = 'processing') AS processing,\n (SELECT count(*) FROM job_queue WHERE status = 'completed') AS completed,\n (SELECT count(*) FROM job_queue WHERE status = 'failed') AS failed;\n```\n\n### Performance Indexes\n\nDataQueue includes optimized partial indexes out of the box:\n\n- `idx_job_queue_claimable` -- speeds up job claiming (pending jobs by priority).\n- `idx_job_queue_failed_retry` -- speeds up retry scheduling.\n- `idx_job_queue_stuck` -- speeds up stuck job reclamation.\n- `idx_job_queue_cleanup` -- speeds up cleanup of old completed jobs.\n\nThese are created automatically when you run migrations.\n\n## Redis Scaling\n\n### Memory Usage\n\nRedis stores everything in memory. 
Estimate memory usage as:\n\n| Jobs | Approximate Memory |\n| ------- | ------------------ |\n| 1,000 | 1-2 MB |\n| 10,000 | 10-20 MB |\n| 100,000 | 100-200 MB |\n\nThis assumes payloads under 1 KB each. Larger payloads increase memory proportionally.\n\n### Key Best Practices\n\n- **Enable persistence**: Use AOF (Append Only File) for durability. Without it, a Redis restart loses all jobs.\n- **Use `keyPrefix`** to isolate multiple queues in the same Redis instance:\n\n```typescript\nconst jobQueue = initJobQueue({\n backend: 'redis',\n redisConfig: {\n url: process.env.REDIS_URL,\n keyPrefix: 'myapp:jobs:',\n },\n});\n```\n\n- **Run cleanup regularly** to free memory from completed jobs. Like PostgreSQL, Redis cleanup is batched internally using cursor-based scanning, so it's safe to run even with a large number of completed jobs.\n\n## Payload Best Practices\n\nKeep payloads small. Store references (IDs, URLs) rather than full data:\n\n```typescript\n// Good: small payload with reference\nawait jobQueue.addJob({\n jobType: 'processImage',\n payload: { imageId: 'img_abc123', bucket: 's3://uploads' },\n});\n\n// Avoid: large payload with embedded data\nawait jobQueue.addJob({\n jobType: 'processImage',\n payload: { imageData: '<base64 string of 5MB image>' }, // too large\n});\n```\n\nA good target is **under 10 KB per payload**. 
This keeps database queries fast and Redis memory predictable.\n\n## Example: Processing 10,000 Jobs per Hour\n\nHere's a configuration that can comfortably process ~10,000 jobs per hour, assuming each job takes about 1 second:\n\n```typescript title=\"worker.ts\"\nimport { initJobQueue } from '@nicnocquee/dataqueue';\nimport { jobHandlers } from './job-handlers';\n\nconst jobQueue = initJobQueue({\n databaseConfig: {\n connectionString: process.env.DATABASE_URL,\n max: 15, // room for processor + maintenance queries\n },\n});\n\nconst processor = jobQueue.createProcessor(jobHandlers, {\n batchSize: 30,\n concurrency: 10, // 10 jobs in parallel\n pollInterval: 2000,\n onError: (error) => {\n console.error('Processor error:', error);\n },\n});\n\nprocessor.startInBackground();\n\n// Maintenance\nsetInterval(\n async () => {\n try {\n await jobQueue.reclaimStuckJobs(10);\n } catch (e) {\n console.error(e);\n }\n },\n 10 * 60 * 1000,\n);\n\nsetInterval(\n async () => {\n try {\n await jobQueue.cleanupOldJobs(30);\n await jobQueue.cleanupOldJobEvents(30);\n } catch (e) {\n console.error(e);\n }\n },\n 24 * 60 * 60 * 1000,\n);\n\n// Graceful shutdown\nprocess.on('SIGTERM', async () => {\n await processor.stopAndDrain(30000);\n await jobQueue.getPool().end();\n process.exit(0);\n});\n```\n\nWith 10 concurrent jobs and ~1s each, a single worker handles roughly **10 jobs/second** = **36,000 jobs/hour**. 
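\n\nThe arithmetic behind that estimate, as a quick sketch:\n\n```typescript\n// Back-of-envelope throughput for the worker configuration above.\nconst concurrency = 10; // jobs processed in parallel\nconst avgJobSeconds = 1; // each job takes about 1 second\nconst jobsPerSecond = concurrency / avgJobSeconds; // 10\nconst jobsPerHour = jobsPerSecond * 3600; // 36,000\n```\n\n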
For higher throughput, add more worker instances.\n\n## Quick Reference\n\n| Scale | Workers | batchSize | concurrency | pollInterval | Notes |\n| ------------------- | ------- | --------- | ----------- | ------------ | -------------------------------- |\n| < 100 jobs/hour | 1 | 10 | 3 | 5000ms | Default settings work fine |\n| 100-1,000/hour | 1 | 20 | 5 | 3000ms | Single worker is sufficient |\n| 1,000-10,000/hour | 1-2 | 30 | 10 | 2000ms | Add a second worker if needed |\n| 10,000-100,000/hour | 2-5 | 50 | 15 | 1000ms | Multiple workers recommended |\n| 100,000+/hour | 5+ | 50-100 | 20 | 1000ms | Specialized workers per job type |"
271
+ },
272
+ {
273
+ "slug": "usage/wait",
274
+ "title": "Wait",
275
+ "description": "",
276
+ "content": "Inside a job handler, you can wait for a period of time, until a specific date, or for an external signal before continuing execution. This is useful for building multi-step workflows like onboarding sequences, approval flows, or delayed notifications—all as a single handler.\n\n## How It Works\n\nUnlike traditional job queues where you would schedule separate jobs for each step, DataQueue's wait feature lets you write linear, async code. Under the hood:\n\n1. When a wait is triggered, the handler throws a `WaitSignal` internally.\n2. The job moves to `'waiting'` status, the handler stops, and **the worker lock is released**.\n3. After the wait condition is met, the job is re-picked by the processor.\n4. The handler re-runs from the top, but **completed steps are replayed from cache**.\n\n> **Note:** Waiting jobs are completely idle — they don't hold a worker lock, don't occupy\n a concurrency slot, and don't consume any processing resources. You can safely\n have thousands of jobs waiting in parallel with zero impact on queue\n throughput.\n\nThis means your handlers need to use `ctx.run()` to wrap side-effectful work (like sending emails) so it doesn't re-execute on re-invocation.\n\n## Step Tracking with `ctx.run()`\n\n`ctx.run()` wraps a step with memoization. Each step is identified by a unique name. 
If the step was already completed in a previous invocation, the cached result is returned without re-executing the function.\n\n```typescript title=\"@/lib/job-handlers.ts\"\nconst handler = async (payload, signal, ctx) => {\n // This will only execute once, even if the handler is re-invoked\n const data = await ctx.run('fetch-data', async () => {\n return await fetchExternalData(payload.url);\n });\n\n // This will also only execute once\n await ctx.run('send-email', async () => {\n await sendEmail(payload.email, data.subject, data.body);\n });\n};\n```\n\nStep results are persisted to the database after each `ctx.run()` call, ensuring durability even if the handler crashes.\n\n**Important**: Step names must be unique within a handler and stable across re-invocations.\n\n## Time-Based Waits\n\n### `ctx.waitFor(duration)`\n\nWait for a specific duration before continuing.\n\n```typescript title=\"@/lib/job-handlers.ts\"\nconst onboardingHandler = async (payload, signal, ctx) => {\n // Step 1: Send welcome email\n await ctx.run('send-welcome', async () => {\n await sendEmail(payload.email, 'Welcome!');\n });\n\n // Wait 24 hours\n await ctx.waitFor({ hours: 24 });\n\n // Step 2: Send follow-up (runs after the wait)\n await ctx.run('send-followup', async () => {\n await sendEmail(payload.email, 'How are you finding things?');\n });\n\n // Wait 7 days\n await ctx.waitFor({ days: 7 });\n\n // Step 3: Send survey\n await ctx.run('send-survey', async () => {\n await sendEmail(payload.email, 'We would love your feedback!');\n });\n};\n```\n\nSupported duration fields (additive):\n\n| Field | Description |\n| :-------- | :------------ |\n| `seconds` | Seconds |\n| `minutes` | Minutes |\n| `hours` | Hours |\n| `days` | Days |\n| `weeks` | Weeks |\n| `months` | Months (~30d) |\n| `years` | Years (~365d) |\n\n### `ctx.waitUntil(date)`\n\nWait until a specific date/time.\n\n```typescript title=\"@/lib/job-handlers.ts\"\nconst handler = async (payload, signal, ctx) => {\n await 
ctx.run('prepare', async () => {\n await prepareReport();\n });\n\n // Wait until next Monday at 9am\n const nextMonday = getNextMonday9AM();\n await ctx.waitUntil(nextMonday);\n\n await ctx.run('deliver', async () => {\n await deliverReport();\n });\n};\n```\n\n## Token-Based Waits\n\nToken waits allow you to pause a job until an external signal—like a human approval, a webhook callback, or another service's response.\n\n### Creating and Waiting for Tokens\n\n```typescript title=\"@/lib/job-handlers.ts\"\nconst approvalHandler = async (payload, signal, ctx) => {\n // Step 1: Submit for review\n const token = await ctx.run('create-token', async () => {\n return await ctx.createToken({ timeout: '48h' });\n });\n\n // Notify the reviewer (use ctx.run to avoid re-sending on resume)\n await ctx.run('notify-reviewer', async () => {\n await sendSlackMessage(\n `Please review request ${payload.id}. Token: ${token.id}`,\n );\n });\n\n // Wait for the token to be completed\n const result = await ctx.waitForToken<{ action: 'approve' | 'reject' }>(\n token.id,\n );\n\n if (result.ok) {\n if (result.output.action === 'approve') {\n await ctx.run('approve', async () => {\n await approveRequest(payload.id);\n });\n } else {\n await ctx.run('reject', async () => {\n await rejectRequest(payload.id);\n });\n }\n } else {\n // Token timed out\n await ctx.run('timeout', async () => {\n await escalateRequest(payload.id);\n });\n }\n};\n```\n\n### Completing Tokens Externally\n\nTokens can be completed from anywhere—API routes, webhooks, or external services:\n\n```typescript title=\"@/app/api/approve/route.ts\"\nimport { getJobQueue } from '@/lib/queue';\n\nexport async function POST(request: Request) {\n const { tokenId, action } = await request.json();\n const jobQueue = getJobQueue();\n\n await jobQueue.completeToken(tokenId, { action });\n\n return Response.json({ success: true });\n}\n```\n\n### Token Options\n\n```typescript\nconst token = await ctx.createToken({\n timeout: 
'10m', // Optional: '10s', '5m', '1h', '24h', '7d'\n tags: ['approval', 'user:123'], // Optional: tags for filtering\n});\n```\n\nIf a timeout is set and the token isn't completed in time, call `jobQueue.expireTimedOutTokens()` periodically (e.g., alongside `reclaimStuckJobs`) to expire tokens and resume waiting jobs:\n\n```typescript title=\"@/app/api/cron/maintenance/route.ts\"\nexport async function GET() {\n const jobQueue = getJobQueue();\n await jobQueue.reclaimStuckJobs();\n await jobQueue.expireTimedOutTokens();\n return Response.json({ ok: true });\n}\n```\n\n### Retrieving Tokens\n\n```typescript\nconst token = await jobQueue.getToken(tokenId);\n// { id, jobId, status, output, timeoutAt, createdAt, completedAt, tags }\n```\n\n## Backward Compatibility\n\nThe wait feature is fully backward compatible. Existing handlers that don't use `ctx.run()` or any wait methods will continue to work exactly as before. The new methods are purely additive to the existing `JobContext`.\n\n## Important Notes\n\n- **Step names must be stable**: Don't change step names between deployments while jobs are waiting. The handler uses step names to replay cached results.\n- **Wait counter is position-based**: If you add or remove `waitFor`/`waitUntil`/`waitForToken` calls between deployments while jobs are mid-wait, the counter may mismatch. Either deploy changes when no jobs are in `'waiting'` status, or create a new job type with the updated handler instead of editing the existing one. Existing waiting jobs will continue with the old logic safely.\n- **Waiting is free**: A waiting job releases its worker lock and concurrency slot. It sits passively in the database until resumed, consuming no processing resources.\n- **Waiting does not consume attempts**: When a job resumes from a wait, the attempt counter is not incremented. 
Only real failures count.\n- **Cancel waiting jobs**: You can cancel a job in `'waiting'` status just like a pending job using `jobQueue.cancelJob(jobId)`.\n- **`forceKillOnTimeout` limitation**: Wait features (`ctx.run`, `ctx.waitFor`, etc.) are not available when `forceKillOnTimeout` is enabled, since that mode runs handlers in isolated worker threads without database access."
277
+ }
278
+ ]