@nicnocquee/dataqueue 1.33.0 → 1.34.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/ai/build-docs-content.ts +96 -0
- package/ai/build-llms-full.ts +42 -0
- package/ai/docs-content.json +278 -0
- package/ai/rules/advanced.md +94 -0
- package/ai/rules/basic.md +90 -0
- package/ai/rules/react-dashboard.md +83 -0
- package/ai/skills/dataqueue-advanced/SKILL.md +211 -0
- package/ai/skills/dataqueue-core/SKILL.md +131 -0
- package/ai/skills/dataqueue-react/SKILL.md +189 -0
- package/dist/cli.cjs +577 -32
- package/dist/cli.cjs.map +1 -1
- package/dist/cli.d.cts +52 -2
- package/dist/cli.d.ts +52 -2
- package/dist/cli.js +575 -32
- package/dist/cli.js.map +1 -1
- package/dist/mcp-server.cjs +186 -0
- package/dist/mcp-server.cjs.map +1 -0
- package/dist/mcp-server.d.cts +32 -0
- package/dist/mcp-server.d.ts +32 -0
- package/dist/mcp-server.js +175 -0
- package/dist/mcp-server.js.map +1 -0
- package/package.json +10 -4
- package/src/cli.test.ts +65 -0
- package/src/cli.ts +56 -19
- package/src/install-mcp-command.test.ts +216 -0
- package/src/install-mcp-command.ts +185 -0
- package/src/install-rules-command.test.ts +218 -0
- package/src/install-rules-command.ts +233 -0
- package/src/install-skills-command.test.ts +176 -0
- package/src/install-skills-command.ts +124 -0
- package/src/mcp-server.test.ts +162 -0
- package/src/mcp-server.ts +231 -0
@@ -0,0 +1,94 @@
# DataQueue — Advanced Rules

## Step Memoization (ctx.run)

Wrap side-effectful work in `ctx.run(stepName, fn)` for durability. Cached results replay on re-invocation after a wait.

```typescript
const data = await ctx.run('fetch', async () => fetchFromAPI(url));
await ctx.waitFor({ hours: 1 });
await ctx.run('notify', async () => sendNotification(data));
```

Step names must be unique within a handler and stable across deployments.

## Waits

- `ctx.waitFor({ hours: 24 })` — pause for a duration (seconds, minutes, hours, days, weeks, months, years).
- `ctx.waitUntil(date)` — pause until a specific date.
- `ctx.waitForToken(tokenId)` — pause until an external actor completes the token.

Waiting jobs release their worker lock and concurrency slot. They consume no resources.

Wait calls use a positional counter internally. Do not add/remove waits conditionally between re-invocations.

## Token System

```typescript
const token = await ctx.createToken({ timeout: '48h', tags: ['approval'] });
const result = await ctx.waitForToken<{ approved: boolean }>(token.id);
if (result.ok) {
  /* result.output.approved */
}
```

Complete externally: `await queue.completeToken(tokenId, { approved: true })`.
Expire timed-out tokens: `await queue.expireTimedOutTokens()`.

## Cron Scheduling

```typescript
await queue.addCronJob({
  scheduleName: 'daily-cleanup',
  cronExpression: '0 2 * * *',
  jobType: 'cleanup',
  payload: { days: 30 },
  timezone: 'UTC',
  allowOverlap: false,
});
```

The processor auto-enqueues due cron jobs before each batch. Manage with `pauseCronJob`, `resumeCronJob`, `editCronJob`, `removeCronJob`, `listCronJobs`.

## Timeout Management

- `ctx.prolong(ms)` — proactively reset deadline. `ctx.prolong()` resets to original `timeoutMs`.
- `ctx.onTimeout(() => ms)` — reactive; return ms to extend, or nothing to let timeout proceed.
- `forceKillOnTimeout: true` — terminates handler via Worker Thread. Requires Node.js, serializable handler, and disables `ctx.run`/waits/`prolong`/`onTimeout`.

## Tags and Filtering

```typescript
await queue.addJob({ jobType: 'email', payload, tags: ['welcome', 'user'] });
const jobs = await queue.getJobsByTags(['welcome'], 'any');
await queue.cancelAllUpcomingJobs({ tags: { values: ['user'], mode: 'all' } });
```

Modes: `exact` (exact set), `all` (superset), `any` (intersection), `none` (exclusion).
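
The four modes map to simple set predicates. A minimal sketch of the matching semantics (illustrative only — the real filtering happens in the PostgreSQL/Redis backend, and `matchesTags` is not a package export):

```typescript
// Hypothetical sketch of tag-mode semantics; DataQueue evaluates these in the backend.
type TagMode = 'exact' | 'all' | 'any' | 'none';

export const matchesTags = (jobTags: string[], query: string[], mode: TagMode): boolean => {
  const job = new Set(jobTags);
  switch (mode) {
    case 'exact': // job's tag set equals the query set
      return job.size === new Set(query).size && query.every((t) => job.has(t));
    case 'all': // job has every queried tag (superset)
      return query.every((t) => job.has(t));
    case 'any': // job has at least one queried tag (non-empty intersection)
      return query.some((t) => job.has(t));
    case 'none': // job has none of the queried tags (exclusion)
      return !query.some((t) => job.has(t));
  }
};
```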

## Idempotency

```typescript
await queue.addJob({
  jobType: 'email',
  payload,
  idempotencyKey: `welcome-${userId}`,
});
```

Returns existing job ID if key already exists. Key persists until `cleanupOldJobs` removes the job.
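
The dedupe behavior can be modeled in a few lines (an in-memory sketch for intuition only — DataQueue stores the key in the backend, and `addJobSketch` is a hypothetical helper):

```typescript
// Hypothetical in-memory model of idempotencyKey dedupe; the real store is the DB/Redis backend.
const byKey = new Map<string, number>();
let nextId = 1;

export const addJobSketch = (idempotencyKey?: string): number => {
  if (idempotencyKey !== undefined) {
    const existing = byKey.get(idempotencyKey);
    if (existing !== undefined) return existing; // same key → existing job ID, no new job
  }
  const id = nextId++;
  if (idempotencyKey !== undefined) byKey.set(idempotencyKey, id);
  return id;
};
```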

## Scaling

- Increase `batchSize` and `concurrency` for higher throughput.
- Run multiple processor instances with unique `workerId` values — `FOR UPDATE SKIP LOCKED` (PostgreSQL) or Lua scripts (Redis) prevent double-claiming.
- Use `jobType` filter for specialized workers.
- Call `cleanupOldJobs` and `reclaimStuckJobs` on intervals.
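
On a long-running worker, those maintenance calls can run on one interval. A sketch using only the maintenance methods shown in these rules (`MaintenanceQueue` is a hypothetical structural type, and the thresholds and interval are illustrative):

```typescript
// Hypothetical structural type covering only the maintenance methods used here.
type MaintenanceQueue = {
  reclaimStuckJobs(minutes: number): Promise<unknown>;
  cleanupOldJobs(days: number): Promise<unknown>;
  expireTimedOutTokens(): Promise<unknown>;
};

// One maintenance pass: reclaim, clean up, expire.
export const runMaintenance = async (queue: MaintenanceQueue) => {
  await queue.reclaimStuckJobs(10); // re-queue jobs locked longer than 10 minutes
  await queue.cleanupOldJobs(30); // delete finished jobs older than 30 days
  await queue.expireTimedOutTokens(); // fail waits on overdue tokens
};

// Schedule it alongside the processor (every 5 minutes here).
export const startMaintenance = (queue: MaintenanceQueue) =>
  setInterval(() => void runMaintenance(queue), 5 * 60_000);
```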

## Progress Tracking

```typescript
await ctx.setProgress(50); // 0–100, persisted to DB
```

Read via `queue.getJob(id)` (`progress` field) or React SDK's `useJob` hook.
@@ -0,0 +1,90 @@
# DataQueue — Basic Rules

## Imports

Always import from `@nicnocquee/dataqueue`. There is no subpath like `/v2` or `/v3`.

```typescript
import { initJobQueue, JobHandlers } from '@nicnocquee/dataqueue';
```

## PayloadMap Pattern

Define an object type where keys are job type strings and values are payload shapes. This powers type-safe `addJob`, `createProcessor`, and handler completeness checking.

```typescript
type JobPayloadMap = {
  send_email: { to: string; subject: string; body: string };
  generate_report: { reportId: string; userId: string };
};
```

## Initialization (Singleton)

Never call `initJobQueue` per request — each call creates a new database connection pool. Use a module-level singleton:

```typescript
import { initJobQueue } from '@nicnocquee/dataqueue';

let jobQueue: ReturnType<typeof initJobQueue<JobPayloadMap>> | null = null;

export const getJobQueue = () => {
  if (!jobQueue) {
    jobQueue = initJobQueue<JobPayloadMap>({
      databaseConfig: { connectionString: process.env.PG_DATAQUEUE_DATABASE },
    });
  }
  return jobQueue;
};
```

For Redis, set `backend: 'redis'` and use `redisConfig` with `url` or `host`/`port`/`password`. Install `ioredis` as a peer dependency.

## Handlers

Type handlers as `JobHandlers<PayloadMap>` so TypeScript enforces a handler for every job type.

```typescript
export const jobHandlers: JobHandlers<JobPayloadMap> = {
  send_email: async (payload, signal, ctx) => {
    await sendEmail(payload.to, payload.subject, payload.body);
  },
  generate_report: async (payload) => {
    await generateReport(payload.reportId, payload.userId);
  },
};
```

Handler signature: `(payload: T, signal: AbortSignal, ctx: JobContext) => Promise<void>`. You can omit arguments you don't need.

## Processing

**Serverless** — call `processor.start()`, which processes one batch and stops:

```typescript
const processor = queue.createProcessor(handlers, {
  batchSize: 10,
  concurrency: 3,
});
await processor.start();
```

**Long-running** — call `processor.startInBackground()`, which polls continuously:

```typescript
processor.startInBackground();
process.on('SIGTERM', async () => {
  await processor.stopAndDrain(30000);
  queue.getPool().end(); // or queue.getRedisClient().quit() for Redis
  process.exit(0);
});
```

## Common Mistakes

1. Creating `initJobQueue` per request — use a singleton.
2. Missing handler for a job type — fails with `NoHandler`. Type as `JobHandlers<PayloadMap>`.
3. Not checking `signal.aborted` in long handlers — timed-out jobs keep running.
4. Forgetting `reclaimStuckJobs()` — crashed workers leave jobs stuck.
5. Skipping migrations (PostgreSQL) — run `dataqueue-cli migrate` first. Redis needs none.
6. Using `stop()` instead of `stopAndDrain()` — leaves in-flight jobs stuck.
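
Mistake 3 is worth a concrete shape: check the signal between units of work so a timed-out job stops promptly (a sketch — the chunk type and the per-chunk work are placeholders):

```typescript
// Abort-aware processing loop: bail out between chunks once the signal fires.
export const processChunks = async (chunks: string[], signal: AbortSignal) => {
  const done: string[] = [];
  for (const chunk of chunks) {
    if (signal.aborted) break; // stop instead of running past the timeout
    done.push(chunk.toUpperCase()); // stand-in for real per-chunk work
  }
  return done;
};
```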
@@ -0,0 +1,83 @@
# DataQueue — React & Dashboard Rules

## React SDK (@nicnocquee/dataqueue-react)

Install: `npm install @nicnocquee/dataqueue-react` (requires React 18+).

### useJob Hook

```tsx
'use client';
import { useJob } from '@nicnocquee/dataqueue-react';

const { status, progress, data, isLoading, error } = useJob(jobId, {
  fetcher: (id) =>
    fetch(`/api/jobs/${id}`)
      .then((r) => r.json())
      .then((d) => d.job),
  pollingInterval: 1000,
  onComplete: (job) => {
    /* job completed */
  },
  onFailed: (job) => {
    /* job failed */
  },
});
```

Polling auto-stops on terminal statuses (`completed`, `failed`, `cancelled`).
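
That auto-stop rule boils down to a terminal-status check, roughly (a sketch of the idea — `shouldKeepPolling` is not an export of the SDK):

```typescript
// Statuses after which polling stops (per the rule above).
const TERMINAL_STATUSES = new Set(['completed', 'failed', 'cancelled']);

export const shouldKeepPolling = (status: string): boolean =>
  !TERMINAL_STATUSES.has(status);
```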

### DataqueueProvider

Wrap app in `DataqueueProvider` to share `fetcher` and `pollingInterval`:

```tsx
<DataqueueProvider fetcher={fetcher} pollingInterval={2000}>
  {children}
</DataqueueProvider>
```

### API Route (Next.js)

```typescript
// app/api/jobs/[id]/route.ts
export async function GET(
  _req: Request,
  { params }: { params: Promise<{ id: string }> },
) {
  const { id } = await params;
  const job = await getJobQueue().getJob(Number(id));
  if (!job) return NextResponse.json({ error: 'Not found' }, { status: 404 });
  return NextResponse.json({ job });
}
```

## Dashboard (@nicnocquee/dataqueue-dashboard)

Install: `npm install @nicnocquee/dataqueue-dashboard`.

### Setup (Next.js App Router)

```typescript
// app/admin/dataqueue/[[...path]]/route.ts
import { createDataqueueDashboard } from '@nicnocquee/dataqueue-dashboard/next';
import { getJobQueue, jobHandlers } from '@/lib/queue';

const { GET, POST } = createDataqueueDashboard({
  jobQueue: getJobQueue(),
  jobHandlers,
  basePath: '/admin/dataqueue',
});

export { GET, POST };
```

`basePath` must match the route directory path.

### Protection

Wrap handlers with your auth middleware before exporting GET/POST.
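
One minimal shape for that, assuming a shared bearer token in an env var (`withAuth` and `DASHBOARD_TOKEN` are illustrative names, not part of the package):

```typescript
// Illustrative bearer-token guard for route handlers.
type RouteHandler = (req: Request, ctx?: unknown) => Response | Promise<Response>;

export const isAuthorized = (req: Request): boolean =>
  req.headers.get('authorization') === `Bearer ${process.env.DASHBOARD_TOKEN}`;

export const withAuth =
  (handler: RouteHandler): RouteHandler =>
  (req, ctx) =>
    isAuthorized(req) ? handler(req, ctx) : new Response('Unauthorized', { status: 401 });
```

Then export the wrapped handlers instead of the raw ones, e.g. `export const GET = withAuth(rawGET)`.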

## Progress Tracking

Use `ctx.setProgress(percent)` in handlers (0–100). The value appears in `useJob`'s `progress` field and the dashboard detail view.
@@ -0,0 +1,211 @@
---
name: dataqueue-advanced
description: Advanced DataQueue patterns — step memoization, waits, tokens, cron, timeouts, tags, idempotency.
---

# DataQueue Advanced Patterns

## Step Memoization with ctx.run()

Wrap side-effectful work in `ctx.run(stepName, fn)`. Results are cached in the database — when the handler re-runs after a wait, completed steps replay from cache without re-executing.

```typescript
const handler = async (payload, signal, ctx) => {
  const data = await ctx.run('fetch-data', async () => {
    return await fetchFromAPI(payload.url);
  });

  await ctx.run('send-notification', async () => {
    await notify(data.userId, data.message);
  });
};
```

**Rules:**

- Step names must be unique within a handler.
- Step names must be stable across deployments while jobs are waiting.
- Step order must not change conditionally between re-invocations.
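
The replay behavior can be modeled with a step cache. A self-contained sketch of the idea (not the library's implementation — real step results persist in the database, keyed by job and step name):

```typescript
// Minimal model of ctx.run replay: a completed step is cached and never re-executed.
export const makeReplayCtx = (cache: Map<string, unknown>) => ({
  run: async <T>(stepName: string, fn: () => Promise<T>): Promise<T> => {
    if (cache.has(stepName)) return cache.get(stepName) as T; // replay from cache
    const result = await fn();
    cache.set(stepName, result); // record before moving on
    return result;
  },
});
```

Running the same handler twice against the same cache (as happens after a wait) executes each step's side effect only once.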

## Time-Based Waits

### waitFor (duration)

```typescript
const handler = async (payload, signal, ctx) => {
  await ctx.run('step-1', async () => {
    /* ... */
  });
  await ctx.waitFor({ hours: 24 });
  await ctx.run('step-2', async () => {
    /* ... */
  });
};
```

Duration fields: `seconds`, `minutes`, `hours`, `days`, `weeks`, `months`, `years` (additive).
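
For the fixed-length units, the additive combination is just a weighted sum (a sketch — `toMs` is a hypothetical helper, and `months`/`years` are calendar-dependent so they are omitted here):

```typescript
// Additive duration → milliseconds for the fixed-length units only.
type FixedDuration = Partial<Record<'seconds' | 'minutes' | 'hours' | 'days' | 'weeks', number>>;

const MS: Record<keyof FixedDuration, number> = {
  seconds: 1_000,
  minutes: 60_000,
  hours: 3_600_000,
  days: 86_400_000,
  weeks: 604_800_000,
};

export const toMs = (d: FixedDuration): number =>
  (Object.keys(MS) as (keyof FixedDuration)[]).reduce(
    (sum, unit) => sum + (d[unit] ?? 0) * MS[unit],
    0,
  );
```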

### waitUntil (date)

```typescript
await ctx.waitUntil(new Date('2025-03-01T09:00:00Z'));
```

### How waits work internally

1. Handler throws a `WaitSignal` internally.
2. Job moves to `'waiting'` status — worker lock is released.
3. After the wait expires, job becomes `'pending'` again.
4. Handler re-runs from top; `ctx.run()` replays cached steps.

Waiting jobs are idle — they hold no lock, no concurrency slot, no resources.

## Token-Based Waits (Human-in-the-Loop)

Create a token, send it to an external actor, and wait for them to complete it.

```typescript
const handler = async (payload, signal, ctx) => {
  const token = await ctx.run('create-token', async () => {
    return await ctx.createToken({ timeout: '48h', tags: ['approval'] });
  });

  await ctx.run('notify', async () => {
    await sendSlack(`Approve: ${token.id}`);
  });

  const result = await ctx.waitForToken<{ approved: boolean }>(token.id);
  if (result.ok) {
    await ctx.run('process', async () => {
      if (result.output.approved) await approve(payload.id);
    });
  }
};
```

Complete tokens externally:

```typescript
await queue.completeToken(tokenId, { approved: true });
```

Expire timed-out tokens periodically:

```typescript
await queue.expireTimedOutTokens();
```

## Cron Scheduling

```typescript
const cronId = await queue.addCronJob({
  scheduleName: 'daily-report',
  cronExpression: '0 9 * * *',
  jobType: 'generate_report',
  payload: { reportId: 'daily', userId: 'system' },
  timezone: 'America/New_York',
  allowOverlap: false,
});
```

The processor automatically enqueues due cron jobs before each batch — no manual triggering needed.

Manage schedules:

```typescript
await queue.pauseCronJob(cronId);
await queue.resumeCronJob(cronId);
await queue.editCronJob(cronId, { cronExpression: '0 */2 * * *' });
await queue.removeCronJob(cronId);
const schedules = await queue.listCronJobs('active');
```

## Timeout Management

### Proactive — ctx.prolong()

```typescript
const handler = async (payload, signal, ctx) => {
  ctx.prolong(60_000); // set deadline to 60s from now
  await doHeavyWork();
  ctx.prolong(); // reset to original timeoutMs
};
```

### Reactive — ctx.onTimeout()

```typescript
const handler = async (payload, signal, ctx) => {
  let step = 0;
  ctx.onTimeout(() => {
    if (step < 3) return 30_000; // extend 30s
  });
  step = 1;
  await doStep1();
  step = 2;
  await doStep2();
  step = 3;
  await doStep3();
};
```

Both update `locked_at` in the DB, preventing premature reclamation.

### Force Kill on Timeout

```typescript
await queue.addJob({
  jobType: 'task',
  payload: {
    /* ... */
  },
  timeoutMs: 5000,
  forceKillOnTimeout: true,
});
```

**Limitations of forceKillOnTimeout:**

- Requires Node.js (not Bun).
- Handler must be serializable (no closures over external variables).
- `prolong`, `onTimeout`, `ctx.run`, waits are NOT available.

## Tags

```typescript
await queue.addJob({
  jobType: 'email',
  payload: {
    /* ... */
  },
  tags: ['welcome', 'onboarding'],
});

const jobs = await queue.getJobsByTags(['welcome'], 'any');
await queue.cancelAllUpcomingJobs({
  tags: { values: ['onboarding'], mode: 'all' },
});
```

Tag query modes: `'exact'`, `'all'`, `'any'`, `'none'`.

## Idempotency

```typescript
const jobId = await queue.addJob({
  jobType: 'email',
  payload: { to: 'user@example.com', subject: 'Welcome', body: '...' },
  idempotencyKey: `welcome-${userId}`,
});
```

If a job with the same key exists, returns the existing job ID. Key is unique across all statuses until `cleanupOldJobs` removes it.

## Maintenance

```typescript
await queue.reclaimStuckJobs(10); // reclaim jobs stuck > 10 min
await queue.cleanupOldJobs(30); // delete completed jobs > 30 days
await queue.cleanupOldJobEvents(30); // delete old events > 30 days
await queue.expireTimedOutTokens(); // expire overdue tokens
```
@@ -0,0 +1,131 @@
---
name: dataqueue-core
description: Core patterns for using @nicnocquee/dataqueue — typed PayloadMap, init, handlers, adding and processing jobs.
---

# DataQueue Core Patterns

## Imports

Always import from `@nicnocquee/dataqueue`. There is no v2/v3 subpath.

```typescript
import { initJobQueue, JobHandlers } from '@nicnocquee/dataqueue';
```

## Step 1: Define a PayloadMap

Define an object type mapping job type strings to their payload shapes. This is the foundation of type safety — every API method is generic over this map.

```typescript
export type JobPayloadMap = {
  send_email: { to: string; subject: string; body: string };
  generate_report: { reportId: string; userId: string };
};
```

## Step 2: Define Handlers

Create a `JobHandlers<PayloadMap>` object. TypeScript enforces that every key in the PayloadMap has a handler. Each handler receives `(payload, signal, ctx)`.

```typescript
import { JobHandlers } from '@nicnocquee/dataqueue';
import type { JobPayloadMap } from './types';

export const jobHandlers: JobHandlers<JobPayloadMap> = {
  send_email: async (payload) => {
    await sendEmail(payload.to, payload.subject, payload.body);
  },
  generate_report: async (payload, signal) => {
    if (signal.aborted) return;
    await generateReport(payload.reportId, payload.userId);
  },
};
```
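
Conceptually, `JobHandlers` behaves like a mapped type over the PayloadMap — a sketch of the idea, not the package's actual definition (the real type also lets handlers omit trailing arguments):

```typescript
// Sketch: one required handler per PayloadMap key, with the payload typed per key.
type PayloadMapSketch = {
  send_email: { to: string; subject: string; body: string };
  generate_report: { reportId: string; userId: string };
};

type HandlersSketch<M> = {
  [K in keyof M]: (payload: M[K], signal: AbortSignal) => Promise<void>;
};

// Removing either key below is a compile-time error; so is reading a field
// that the key's payload shape does not declare.
export const handlersSketch: HandlersSketch<PayloadMapSketch> = {
  send_email: async (payload) => {
    void payload.subject;
  },
  generate_report: async (payload) => {
    void payload.reportId;
  },
};
```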

## Step 3: Initialize the Queue (Singleton)

Use a module-level singleton. Each `initJobQueue` call creates a new database pool — never call it per-request.

### PostgreSQL

```typescript
import { initJobQueue } from '@nicnocquee/dataqueue';
import type { JobPayloadMap } from './types';

let jobQueue: ReturnType<typeof initJobQueue<JobPayloadMap>> | null = null;

export const getJobQueue = () => {
  if (!jobQueue) {
    jobQueue = initJobQueue<JobPayloadMap>({
      databaseConfig: {
        connectionString: process.env.PG_DATAQUEUE_DATABASE,
      },
    });
  }
  return jobQueue;
};
```

### Redis

```typescript
jobQueue = initJobQueue<JobPayloadMap>({
  backend: 'redis',
  redisConfig: {
    url: process.env.REDIS_URL,
    keyPrefix: 'myapp:',
  },
});
```

## Step 4: Add Jobs

```typescript
const jobId = await queue.addJob({
  jobType: 'send_email',
  payload: { to: 'user@example.com', subject: 'Hi', body: 'Hello' },
  priority: 10,
  runAt: new Date(Date.now() + 5000),
  tags: ['welcome'],
  idempotencyKey: 'welcome-user-123',
});
```

## Step 5: Process Jobs

### Serverless (one-shot)

```typescript
const processor = queue.createProcessor(handlers, {
  batchSize: 10,
  concurrency: 3,
});
const processed = await processor.start();
```

### Long-running server

```typescript
const processor = queue.createProcessor(handlers, {
  batchSize: 10,
  concurrency: 3,
  pollInterval: 5000,
});
processor.startInBackground();

process.on('SIGTERM', async () => {
  await processor.stopAndDrain(30000);
  queue.getPool().end();
  process.exit(0);
});
```

## Common Mistakes

1. **Creating a new queue per request** — always use a singleton. Each `initJobQueue` creates a DB pool.
2. **Missing handler for a job type** — the job fails with `FailureReason.NoHandler`. Let TypeScript enforce completeness by typing handlers as `JobHandlers<PayloadMap>`.
3. **Not checking `signal.aborted`** — timed-out jobs keep running in the background. Always check the signal in long-running handlers.
4. **Forgetting `reclaimStuckJobs`** — crashed workers leave jobs stuck in `processing`. Call `reclaimStuckJobs()` periodically.
5. **Forgetting to run migrations** — PostgreSQL requires `dataqueue-cli migrate` before use. Redis needs no migrations.
6. **Not calling `stopAndDrain` on shutdown** — use `stopAndDrain()` (not `stop()`) for graceful shutdown to avoid stuck jobs.
|