queasy 0.1.0 → 0.2.0

@@ -0,0 +1,50 @@
+ name: Check
+
+ on:
+   pull_request:
+     branches: [master]
+
+ jobs:
+   check:
+     runs-on: ubuntu-latest
+
+     services:
+       redis:
+         image: redis:7
+         ports:
+           - 6379:6379
+         options: >-
+           --health-cmd "redis-cli ping"
+           --health-interval 10s
+           --health-timeout 5s
+           --health-retries 5
+
+     steps:
+       - uses: actions/checkout@v4
+         with:
+           fetch-depth: 0
+
+       - uses: actions/setup-node@v4
+         with:
+           node-version: 22
+
+       - run: npm ci
+
+       - name: Lint
+         run: npm run lint
+
+       - name: Typecheck
+         run: npm run typecheck
+
+       - name: Test with coverage
+         run: npm run test:coverage
+
+       - name: Check version is not already tagged
+         run: |
+           git fetch --tags
+           VERSION=$(node -e "console.log(require('./package.json').version)")
+           if [ -n "$(git tag -l "v$VERSION")" ]; then
+             echo "::error::Tag v$VERSION already exists. Bump the version in package.json."
+             exit 1
+           fi
+           echo "Version v$VERSION is not yet tagged"
@@ -0,0 +1,44 @@
+ name: Publish
+
+ on:
+   push:
+     branches: [master]
+
+ permissions:
+   contents: write
+   id-token: write
+
+ jobs:
+   publish:
+     runs-on: ubuntu-latest
+
+     steps:
+       - uses: actions/checkout@v4
+
+       - uses: actions/setup-node@v4
+         with:
+           node-version: 22
+           registry-url: https://registry.npmjs.org
+
+       - run: npm install -g npm@latest
+
+       - run: npm ci
+
+       - name: Extract version
+         id: version
+         run: echo "version=$(node -e "console.log(require('./package.json').version)")" >> "$GITHUB_OUTPUT"
+
+       - name: Publish to npm
+         run: npm publish --provenance --access public
+         env:
+           NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
+
+       - name: Create and push tag
+         run: |
+           git tag "v${{ steps.version.outputs.version }}"
+           git push origin "v${{ steps.version.outputs.version }}"
+
+       - name: Create GitHub release
+         run: gh release create "v${{ steps.version.outputs.version }}" --generate-notes
+         env:
+           GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
package/AGENTS.md CHANGED
@@ -24,7 +24,8 @@ Queasy is a Redis-backed job queue for Node.js with **at-least-once** delivery s
  The JS side is split across several modules:

  - **`src/client.js`** (`Client` class): Top-level entry point. Wraps a `node-redis` connection, loads the Lua script into Redis via `FUNCTION LOAD REPLACE` on construction, and manages named `Queue` instances. Generates a unique `clientId` for heartbeats. All Redis `fCall` invocations live here (`dispatch`, `cancel`, `dequeue`, `finish`, `fail`, `retry`, `bump`). Exported from `src/index.js`.
- - **`src/queue.js`** (`Queue` class): Represents a single named queue. Holds dequeue options and handler path. `listen()` attaches a handler and starts a `setInterval` polling loop that calls `dequeue()`. `dequeue()` checks pool capacity, fetches jobs from Redis, and processes each via the pool. Handles retry/fail logic (backoff calculation, stall-count checks) on the JS side.
+ - **`src/queue.js`** (`Queue` class): Represents a single named queue. `listen()` attaches a handler path and options, optionally sets up a fail queue (`{key}-fail`), then registers itself with the `Manager` via `addQueue()`. `dequeue(count)` fetches jobs from Redis, processes each via the pool, and handles outcomes: finishes on success, retries with exponential backoff on retriable errors, and dispatches to the fail queue on permanent errors or when the `maxRetries`/`maxStalls` limits are exceeded. Returns `{ count, promise }` so the manager can track whether the queue is saturated.
+ - **`src/manager.js`** (`Manager` class): Centralized dequeue scheduler shared across all queues on a client. When a queue calls `listen()`, it registers itself via `addQueue()`. The manager runs a single `next()` loop that round-robins through queues, calling `queue.dequeue(batchSize)` on each. Batch size is computed from pool capacity, the number of busy queues, and the handler's `size` option. After each dequeue, queues are re-sorted by a priority function (`compareQueueEntries`): busy queues first, then by `priority` (higher first), then by `lastDequeuedAt` (oldest first), then by `size` (larger first). The loop schedules the next tick immediately if the top queue is busy; otherwise it waits `DEQUEUE_INTERVAL` ms from the last dequeue time.
  - **`src/pool.js`** (`Pool` class): Manages a set of `Worker` threads. Each worker has a `capacity` (default 100 units). `process()` picks the worker with the most spare capacity, posts the job, and returns a promise. Handles job timeouts: a timed-out job marks the worker as unhealthy, replaces it with a fresh one, and terminates the old worker once only stalled jobs remain.
  - **`src/worker.js`**: Runs inside a `Worker` thread. Receives `exec` messages, dynamically imports the handler module, calls `handle(data, job)`, and posts back `done` messages (with optional error info).
  - **`src/constants.js`**: Default retry options, heartbeat/timeout intervals, worker capacity, dequeue polling interval.
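The sort order described for the manager can be sketched as a comparator. This is an illustration of the documented behavior only; the field names (`busy`, `priority`, `lastDequeuedAt`, `size`) are taken from the description above and may differ from the actual `compareQueueEntries` source.

```javascript
// Illustrative comparator for the manager's queue ordering:
// busy first, then higher priority, then least recently dequeued, then larger size.
function compareQueueEntries(a, b) {
  if (a.busy !== b.busy) return a.busy ? -1 : 1;
  if (a.priority !== b.priority) return b.priority - a.priority;
  if (a.lastDequeuedAt !== b.lastDequeuedAt) return a.lastDequeuedAt - b.lastDequeuedAt;
  return b.size - a.size;
}

const sorted = [
  { name: 'emails', busy: false, priority: 100, lastDequeuedAt: 5, size: 10 },
  { name: 'images', busy: true, priority: 50, lastDequeuedAt: 9, size: 2 },
  { name: 'reports', busy: false, priority: 100, lastDequeuedAt: 3, size: 10 },
].sort(compareQueueEntries);

// images is busy so it sorts first; reports beats emails on lastDequeuedAt
console.log(sorted.map((q) => q.name));
```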
package/Readme.md CHANGED
@@ -2,18 +2,18 @@

  A Redis-backed job queue for Node.js, featuring (in comparison with design inspiration BullMQ):

- - **Singleton jobs**: Guarantees that no more than one job with a given ID is be processed at a time, without trampolines or dropping jobs (“unsafe deduplication”).
- - **Fail handlers**: Guaranteed at-least-once handlers for failed or stalled jobs, which permits reliable periodic jobs without a external scheduling or “reviver” systems.
+ - **Singleton jobs**: Guarantees that no more than one job with a given ID is being processed at any time, without trampolines or dropping jobs (“unsafe deduplication”).
+ - **Fail handlers**: Guaranteed at-least-once handlers for failed or stalled jobs, enabling reliable periodic jobs without an external scheduling or “reviver” system.
  - **Instant config changes**: Most configuration changes take effect immediately no matter the queue length, as they apply at dequeue time.
  - **Worker threads**: Jobs are processed in worker threads, preventing the main process from stalling and failing health checks due to CPU-bound jobs
  - **Capacity model**: Worker capacity is flexibly shared between heterogeneous queues based on priority and demand, rather than queue-specific “concurrency”.
- - **Job timeout**: Enforced by draining and terminating worker threads with timed out jobs
- - **Zombie protection**: Clients that have lost locks detect this and exit at next heartbeat
+ - **Job timeout**: Timed-out jobs are killed by draining and terminating the worker thread they run on
+ - **Zombie protection**: Clients that lost their locks while stalled detect this on recovery and terminate themselves immediately
  - **Fine-grained updates**: Control over individual attributes when one job updates another with the same ID

  ### Terminology

- A _client_ is an instance of Quesy that connects to a Redis database. A _job_ is the basic unit of work that is _dispatched_ into a _queue_.
+ A _client_ is an instance of Queasy that connects to a Redis database. A _job_ is the basic unit of work that is _dispatched_ into a _queue_.

  A _handler_ is JavaScript code that performs work. There are two kinds of handlers: _task handlers_, which process jobs, and _fail handlers_, which are invoked when a job fails permanently. Handlers run on _workers_, which are Node.js worker threads. By default, a Queasy client automatically creates one worker per CPU.

@@ -22,8 +22,9 @@ A _handler_ is JavaScript code that performs work. There are two kinds of handle
  - `id`: string; generated if unspecified. See _update semantics_ below for more information.
  - `data`: a JSON-serializable value passed to handlers
  - `runAt`: number; a unix timestamp, to delay job execution until at least that time
- - `stallCount`: number; how many times has this job caused the client or worker to stall?
- - `retryCount`: number; how many times has this job caused the handler to throw an error?
+ - `retryCount`: number; how many times has this job been retried for any reason?
+ - `stallCount`: number; how many times did the client processing this job stop sending heartbeats?
+ - `timeoutCount`: number; how many times did this job fail to complete in the allocated time?

  ### Job lifecycle

@@ -42,13 +43,13 @@ Queues are dequeued based on their priority and the ratio of available capacity

  When a worker starts processing a job, a timer is started; if the job completes or throws, the timer is cleared. If the timeout occurs, the job is marked stalled and the worker is removed from the pool so it no longer receives new jobs. A new worker is also created and added to the pool to replace it.

- The unhealthy worker (with stalled jobs) continues to run until it has *only* stalled jobs remaining. When this happens, the worker is terminated, and all its stalled jobs are retried.
+ The unhealthy worker (with at least one stalled job) continues to run until it has *only* stalled jobs remaining. When this happens, the worker is terminated, and all its stalled jobs are retried.

  ### Stall handling

  The client (in the main thread) sends periodic heartbeats to Redis for each queue it’s processing. If heartbeats from a client stop, a Lua script in Redis removes this client and returns all its active jobs to the waiting state with their stall count property incremented.

- When a job is dequeued, if its stall count exceeds the configured maximum, it is immediately considered permanently failed and its handler is not invoked.
+ When a job is dequeued, if its stall count exceeds the configured maximum, it is immediately considered permanently failed; its task handler is not invoked.

  The response of the heartbeat Lua function indicates whether the client had been removed due to an earlier stall; if it receives this response, the client terminates all its worker threads immediately and re-initializes the pool and queues.

@@ -59,6 +60,7 @@ Returns a Queasy client.
  - `redisConnection`: a node-redis connection object.
  - `workerCount`: number; size of the worker pool. If 0, or if called in a queasy worker thread, no pool is created. Defaults to the number of CPUs.

+ The client object returned is an EventEmitter, which emits a 'disconnected' event when it fails permanently for any reason, such as a library version mismatch between workers connected to the same Redis instance, or a lost-lock situation. When this happens, the application should generally exit the worker process and allow the supervisor to restart it.

  ### `client.queue(name)`

@@ -74,8 +76,7 @@ Adds a job to the queue. `data` may be any JSON value, which will be passed unch
  The following options take effect if an `id` is provided, and it matches that of a job already in the queue.
  - `updateData`: boolean; whether to replace the data of any waiting job with the same ID; default: true
  - `updateRunAt`: boolean | 'ifLater' | 'ifEarlier'; default: true
- - `updateRetryStrategy`: boolean; whether to replace `maxRetries`, `maxStalls`, `minBackoff` and `maxBackoff`
- - `resetCounts`: boolean; Whether to reset the internal failure and stall counts to 0; default: same as updateData
+ - `resetCounts`: boolean; whether to reset the retry, timeout and stall counts to 0; default: same as updateData

  Returns a promise that resolves to the job ID when the job has been added to Redis.

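A minimal sketch of how these update options combine when a dispatched job's `id` matches a waiting job, based only on the descriptions above. This is a hypothetical illustration, not the library's actual merge code (which presumably runs inside the Lua dispatch function).

```javascript
// Hypothetical illustration of the documented update semantics.
function applyUpdate(existing, incoming, options = {}) {
  const {
    updateData = true,
    updateRunAt = true,
    resetCounts = updateData, // default: same as updateData
  } = options;

  const job = { ...existing };
  if (updateData) job.data = incoming.data;
  if (
    updateRunAt === true ||
    (updateRunAt === 'ifLater' && incoming.runAt > existing.runAt) ||
    (updateRunAt === 'ifEarlier' && incoming.runAt < existing.runAt)
  ) {
    job.runAt = incoming.runAt;
  }
  if (resetCounts) {
    job.retryCount = 0;
    job.stallCount = 0;
    job.timeoutCount = 0;
  }
  return job;
}

const waiting = { data: { n: 1 }, runAt: 2000, retryCount: 3, stallCount: 1, timeoutCount: 0 };
const updated = applyUpdate(waiting, { data: { n: 2 }, runAt: 1000 }, { updateRunAt: 'ifEarlier' });
// data replaced, runAt moved earlier, counts reset (resetCounts follows updateData's default)
```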
@@ -92,10 +93,12 @@ Attaches handlers to a queue to process jobs that are added to it.
  The following options control retry behavior:
  - `maxRetries`: number; default: 10
  - `maxStalls`: number; default: 3
+ - `maxTimeouts`: number; default: 3
  - `minBackoff`: number; in milliseconds; default: 2,000
  - `maxBackoff`: number; default: 300,000
  - `size`: number; default: 10
  - `timeout`: number; in milliseconds; default: 60,000
+ - `priority`: number; higher values are given preference; default: 100

  Additional options affect failure handling:
  - `failHandler`: The path to a JavaScript module that exports the handler for failure jobs
@@ -107,13 +110,13 @@ Every handler module must have a named export `handle`, a function that is calle

  ### Task handlers

- It receives two arguments:
+ They receive two arguments:
  - `data`, the JSON value passed to dispatch
- - `job`, a Job object contains the job attributes except data
+ - `job`, a Job object containing the other job attributes (excluding data)

- This function may throw (or return a Promise that rejects) to indicate job failure. If the thrown error is an
- instance of `PermanentError`, or if `maxRetries` has been reached, the job is not retried. Otherwise, the job
- is queued to be retried with `maxRetries` incremented.
+ This function may throw (or return a Promise that rejects) to indicate job failure. If the thrown error contains
+ a property `kind` with the value `permanent`, or if `maxRetries` has been reached, the job is not retried.
+ Otherwise, the job is queued to be retried with `retryCount` incremented.

  If the thrown error has a property `retryAt`, the job’s `runAt` is set to this value; otherwise, it’s set using
  the exponential backoff algorithm.
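The backoff formula itself is not spelled out here; a conventional doubling scheme between `minBackoff` and `maxBackoff` (the documented defaults) would look like the sketch below. The library's exact formula, e.g. any jitter, may differ.

```javascript
// A minimal sketch of exponential backoff between minBackoff and maxBackoff,
// assuming a simple doubling per retry. Not necessarily the library's exact formula.
function backoffDelay(retryCount, { minBackoff = 2_000, maxBackoff = 300_000 } = {}) {
  return Math.min(minBackoff * 2 ** retryCount, maxBackoff);
}

backoffDelay(0); // 2000
backoffDelay(3); // 16000
backoffDelay(10); // capped at 300000
```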
@@ -123,8 +126,10 @@ If it returns any value apart from a Promise that rejects, the job is considered
  ### Failure handlers

  This function receives two arguments:
- - `data`, the JSON value passed to dispatch
- - `job`
- - `error`, a JSON object with a copy of the enumerable properties of the error thrown by the final call to handle, or an instance of `StallError` if the final call to handle didn’t return or throw.
+ - `data`, a tuple (array) containing three items:
+   - `originalData`
+   - `originalJob`
+   - `error`, a JSON object with the `name`, `message` and `kind` properties of the error thrown by the final call to handle. `kind` may be `permanent`, `retriable` or `stall`. In the stall case, the `name` property is either `StallError` or `TimeoutError`.
+ - `job`, details of the failure-handling job

  If this function throws an error (or returns a Promise that rejects), it is retried using exponential backoff.
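Putting the documented signature together, a fail handler might look like the sketch below (in a real handler module, `handle` must be a named export).

```javascript
// Sketch of a fail handler matching the documented signature: the first
// argument is the [originalData, originalJob, error] tuple described above.
function handle([originalData, originalJob, error], job) {
  if (error.kind === 'stall') {
    // name will be 'StallError' or 'TimeoutError'
    return { alerted: `job ${originalJob.id} stalled (${error.name})` };
  }
  return { alerted: `job ${originalJob.id} failed: ${error.message}` };
}
// In a real handler module: export { handle };

const result = handle(
  [{ url: 'https://example.com' }, { id: 'abc' }, { kind: 'stall', name: 'TimeoutError' }],
  { id: 'fail-1' }
);
// result.alerted === 'job abc stalled (TimeoutError)'
```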
package/package.json CHANGED
@@ -1,12 +1,12 @@
  {
    "name": "queasy",
-   "version": "0.1.0",
+   "version": "0.2.0",
    "description": "A simple Redis-backed queue library for Node.js",
    "main": "src/index.js",
    "type": "module",
    "scripts": {
      "test": "node --test",
-     "test:coverage": "node --test --experimental-test-coverage",
+     "test:coverage": "node --test --experimental-test-coverage --test-coverage-lines=96 --test-coverage-include 'src/**/*.js' 'test/**/*.test.js'",
      "test:watch": "node --test --watch",
      "lint": "biome check .",
      "lint:fix": "biome check --write .",
@@ -16,6 +16,10 @@
      "docker:down": "docker compose down",
      "docker:logs": "docker compose logs -f"
    },
+   "repository": {
+     "type": "git",
+     "url": "git+https://github.com/aravindet/queasy.git"
+   },
    "keywords": [
      "queue",
      "redis",
package/src/client.js CHANGED
@@ -1,16 +1,42 @@
+ import EventEmitter from 'node:events';
  import { readFileSync } from 'node:fs';
  import { dirname, join } from 'node:path';
  import { fileURLToPath } from 'node:url';
  import { getEnvironmentData } from 'node:worker_threads';
- import { HEARTBEAT_INTERVAL, HEARTBEAT_TIMEOUT } from './constants.js';
+ import { HEARTBEAT_INTERVAL, HEARTBEAT_TIMEOUT, LUA_FUNCTIONS_VERSION } from './constants.js';
  import { Manager } from './manager.js';
  import { Pool } from './pool.js';
  import { Queue } from './queue.js';
- import { generateId } from './utils.js';
+ import { compareSemver, generateId, parseVersion } from './utils.js';

- // Load Lua script
  const __dirname = dirname(fileURLToPath(import.meta.url));
- const luaScript = readFileSync(join(__dirname, 'queasy.lua'), 'utf8');
+ const luaScript = readFileSync(join(__dirname, 'queasy.lua'), 'utf8').replace(
+   '__QUEASY_VERSION__',
+   LUA_FUNCTIONS_VERSION
+ );
+
+ /**
+  * Check the installed version and load our Lua functions if needed.
+  * Returns true if this client should be disconnected (newer major on server).
+  * @param {RedisClient} redis
+  * @returns {Promise<boolean>} Whether to disconnect.
+  */
+ async function installLuaFunctions(redis) {
+   const installedVersionString = /** @type {string?} */ (
+     await redis.fCall('queasy_version', { keys: [], arguments: [] }).catch(() => null)
+   );
+   const installedVersion = parseVersion(installedVersionString);
+   const availableVersion = parseVersion(LUA_FUNCTIONS_VERSION);
+
+   // No script installed, or our version is later
+   if (compareSemver(availableVersion, installedVersion) > 0) {
+     await redis.sendCommand(['FUNCTION', 'LOAD', 'REPLACE', luaScript]);
+     return false;
+   }
+
+   // Keep the installed (newer) version. Return disconnect=true if the major versions disagree
+   return installedVersion[0] > availableVersion[0];
+ }

  /** @typedef {import('redis').RedisClientType} RedisClient */
  /** @typedef {import('./types').Job} Job */
@@ -42,26 +68,33 @@ export function parseJob(jobArray) {
    };
  }

- export class Client {
+ export class Client extends EventEmitter {
    /**
     * @param {RedisClient} redis - Redis client
     * @param {number?} workerCount - Allow this client to dequeue jobs.
+    * @param {(client: Client) => any} [callback] - Callback when the client is ready
     */
-   constructor(redis, workerCount) {
+   constructor(redis, workerCount, callback) {
+     super();
      this.redis = redis;
      this.clientId = generateId();

      /** @type {Record<string, QueueEntry>} */
      this.queues = {};
+     this.disconnected = false;

      const inWorker = getEnvironmentData('queasy_worker_context');
      this.pool = !inWorker && workerCount !== 0 ? new Pool(workerCount) : undefined;
      if (this.pool) this.manager = new Manager(this.pool);

-     // We are not awaiting this; we rely on Redis’ single-threaded blocking
-     // nature to ensure that this load completes before other Redis commands
-     // are processed.
-     this.redis.sendCommand(['FUNCTION', 'LOAD', 'REPLACE', luaScript]);
+     // Not awaited: the Lua script is read synchronously at module load,
+     // so Redis’ single-threaded ordering ensures the FUNCTION LOAD completes
+     // before any subsequent fCalls from user code.
+     installLuaFunctions(this.redis).then((disconnect) => {
+       this.disconnected = disconnect;
+       if (disconnect) this.emit('disconnected', 'Redis has an incompatible queasy version.');
+       else if (callback) callback(this);
+     });
    }

    /**
@@ -70,6 +103,8 @@ export class Client {
     * @returns {Queue} Queue object with dispatch, cancel, and listen methods
     */
    queue(name, isKey = false) {
+     if (this.disconnected) throw new Error('Can’t add queue: client disconnected');
+
      const key = isKey ? name : `{${name}}`;
      if (!this.queues[key]) {
        this.queues[key] = /** @type {QueueEntry} */ ({
@@ -79,20 +114,6 @@ export class Client {
      return this.queues[key].queue;
    }

-   /**
-    * This helps tests exit cleanly.
-    */
-   close() {
-     for (const name in this.queues) {
-       this.queues[name].queue.close();
-       clearTimeout(this.queues[name].bumpTimer);
-     }
-     if (this.pool) this.pool.close();
-     if (this.manager) this.manager.close();
-     this.queues = {};
-     this.pool = undefined;
-   }
-
    /**
     * Schedule the next bump timer
     * @param {string} key
@@ -107,14 +128,33 @@ export class Client {
     * @param {string} key
     */
    async bump(key) {
+     if (this.disconnected) return;
      // Set up the next bump first, in case this
      this.scheduleBump(key);
      const now = Date.now();
      const expiry = now + HEARTBEAT_TIMEOUT;
-     await this.redis.fCall('queasy_bump', {
+     const bumped = await this.redis.fCall('queasy_bump', {
        keys: [key],
        arguments: [this.clientId, String(now), String(expiry)],
      });
+
+     if (!bumped) {
+       // This client’s lock was lost and its jobs retried.
+       // We must stop processing jobs here to avoid duplication.
+       await this.close();
+       this.emit('disconnected', 'Lost locks, possible main thread freeze');
+     }
+   }
+
+   /**
+    * Closes the pool and manager, and marks this client as disconnected.
+    */
+   async close() {
+     if (this.pool) await this.pool.close();
+     if (this.manager) await this.manager.close();
+     this.queues = {};
+     this.pool = undefined;
+     this.disconnected = true;
    }

    /**
package/src/constants.js CHANGED
@@ -9,6 +9,7 @@ export const DEFAULT_RETRY_OPTIONS = {
    maxBackoff: 300_000, // 5 minutes
    size: 10,
    timeout: 60_000, // 1 minute
+   priority: 100,
  };

  /** @type {Required<JobUpdateOptions>} */
@@ -26,8 +27,10 @@ export const FAILJOB_RETRY_OPTIONS = {
    maxBackoff: 900_000, // 15 minutes
    size: 2,
    timeout: 60_000,
+   priority: 100,
  };

+ export const LUA_FUNCTIONS_VERSION = '1.0';
  export const HEARTBEAT_INTERVAL = 5000; // 5 seconds
  export const HEARTBEAT_TIMEOUT = 10000; // 10 seconds
  export const WORKER_CAPACITY = 10;
package/src/pool.js CHANGED
@@ -1,4 +1,4 @@
- import { cpus } from 'node:os';
+ import { availableParallelism } from 'node:os';
  import { Worker } from 'node:worker_threads';
  import { WORKER_CAPACITY } from './constants.js';
  import { generateId } from './utils.js';
@@ -31,7 +31,7 @@ export class Pool {

    this.capacity = 0;

-   const count = targetCount ?? cpus().length;
+   const count = targetCount ?? availableParallelism();
    for (let i = 0; i < count; i++) this.createWorker();
  }

@@ -144,11 +144,8 @@ export class Pool {
  /**
   * Terminates all workers
   */
- close() {
-   for (const { worker } of this.workers) {
-     worker.terminate();
-   }
-
+ async close() {
+   await Promise.all([...this.workers].map(async ({ worker }) => worker.terminate()));
    for (const [jobId, { reject, timer }] of this.activeJobs.entries()) {
      clearTimeout(timer);
      reject({
package/src/queasy.lua CHANGED
@@ -113,9 +113,6 @@ local function do_retry(queue_key, id, retry_at)
    return { ok = 'OK' }
  end

- -- Forward declaration
- local sweep
-
  -- Helper: Clear active job and unblock waiting job
  local function finish(queue_key, id, client_id, now)
    local waiting_job_key = get_waiting_job_key(queue_key, id)
@@ -183,6 +180,33 @@ local function handle_stall(queue_key, id, retry_at)
    return do_retry(queue_key, id, retry_at)
  end

+ -- Sweep stalled clients
+ local function sweep(queue_key, now)
+   local expiry_key = get_expiry_key(queue_key)
+
+   -- Find first stalled client
+   local stalled = redis.call('ZRANGEBYSCORE', expiry_key, 0, now, 'LIMIT', 0, 1)
+
+   if #stalled == 0 then return 0 end
+
+   local stalled_client_id = stalled[1]
+   local checkouts_key = get_checkouts_key(queue_key, stalled_client_id)
+
+   -- Get all job IDs checked out by this client
+   -- RESP3 returns SMEMBERS as { set = { id1 = true, id2 = true, ... } }
+   local members_resp = redis.call('SMEMBERS', checkouts_key)
+
+   for id, _ in pairs(members_resp['set']) do
+     handle_stall(queue_key, id, 0)
+   end
+
+   -- Clean up the stalled client
+   redis.call('ZREM', expiry_key, stalled_client_id)
+   redis.call('DEL', checkouts_key)
+
+   return 1
+ end
+
  -- Dequeue jobs from waiting queue
  local function dequeue(queue_key, client_id, now, expiry, limit)
    local expiry_key = get_expiry_key(queue_key)
@@ -227,8 +251,9 @@ local function bump(queue_key, client_id, now, expiry)
    local expiry_key = get_expiry_key(queue_key)

    -- Check if this client exists in expiry set
-   local existing = redis.call('ZSCORE', expiry_key, client_id)
-   if not existing then
+   -- This can’t be skipped in favour of ZADD XX CH: when a client's new expiry
+   -- is the same as the old one, XX CH returns 0 but we need it to return 1
+   if not redis.call('ZSCORE', expiry_key, client_id) then
      return 0
    end

@@ -241,37 +266,6 @@ local function bump(queue_key, client_id, now, expiry)
    return 1
  end

- -- Sweep stalled clients
- sweep = function(queue_key, now)
-   local expiry_key = get_expiry_key(queue_key)
-
-   -- Find first stalled client
-   local stalled = redis.call('ZRANGEBYSCORE', expiry_key, 0, now, 'LIMIT', 0, 1)
-
-   if #stalled == 0 then
-     return {}
-   end
-
-   local stalled_client_id = stalled[1]
-   local checkouts_key = get_checkouts_key(queue_key, stalled_client_id)
-
-   -- Get all job IDs checked out by this client
-   -- RESP3 returns SMEMBERS as { set = { id1 = true, id2 = true, ... } }
-   local members_resp = redis.call('SMEMBERS', checkouts_key)
-   local processed_jobs = {}
-
-   for id, _ in pairs(members_resp['set']) do
-     handle_stall(queue_key, id, 0)
-     table.insert(processed_jobs, id)
-   end
-
-   -- Clean up the stalled client
-   redis.call('ZREM', expiry_key, stalled_client_id)
-   redis.call('DEL', checkouts_key)
-
-   return processed_jobs
- end
-
  -- Register: queasy_dispatch
  redis.register_function {
    function_name = 'queasy_dispatch',
@@ -391,7 +385,7 @@
  redis.register_function {
    function_name = 'queasy_version',
    callback = function(keys, args)
-     return 1
+     return '__QUEASY_VERSION__'
    end,
    flags = {}
  }
package/src/queue.js CHANGED
@@ -50,7 +50,8 @@ export class Queue {
   * @returns {Promise<void>}
   */
  async listen(handlerPath, { failHandler, failRetryOptions, ...retryOptions } = {}) {
-   if (!this.pool || !this.manager) throw new Error('Can’t listen on a non-processing client');
+   if (this.client.disconnected) throw new Error('Can’t listen: client disconnected');
+   if (!this.pool || !this.manager) throw new Error('Can’t listen: non-processing client');

    this.handlerPath = handlerPath;
    this.handlerOptions = { ...DEFAULT_RETRY_OPTIONS, ...retryOptions };
@@ -76,6 +77,7 @@ export class Queue {
   * @returns {Promise<string>} Job ID
   */
  async dispatch(data, options = {}) {
+   if (this.client.disconnected) throw new Error('Can’t dispatch: client disconnected');
    const {
      id = generateId(),
      runAt = 0,
@@ -94,6 +96,7 @@ export class Queue {
   * @returns {Promise<boolean>} True if job was cancelled
   */
  async cancel(id) {
+   if (this.client.disconnected) throw new Error('Can’t cancel: client disconnected');
    return await this.client.cancel(this.key, id);
  }

@@ -148,14 +151,4 @@ export class Queue {

    return { count: jobs.length, promise };
  }
-
- /**
-  * Stop the dequeue interval and bump timer for this queue
-  */
- close() {
-   // if (this.dequeueInterval) {
-   //   clearInterval(this.dequeueInterval);
-   //   this.dequeueInterval = undefined;
-   // }
- }
  }
package/src/utils.js CHANGED
@@ -11,3 +11,29 @@ export function generateId(length = 20) {
    }
    return id;
  }
+
+ /**
+  * Parse a version string like '1.0' into an array of numeric components.
+  * Used by compareSemver.
+  * @param {string?} version
+  * @returns {number[]}
+  */
+ export function parseVersion(version) {
+   const parsed = String(version).split('.').map(Number);
+   if (parsed.some((n) => Number.isNaN(n))) return [0];
+   return parsed;
+ }
+
+ /**
+  * Compare two parsed versions. Returns -1 if a < b, 0 if equal, 1 if a > b.
+  * @param {number[]} a
+  * @param {number[]} b
+  * @returns {-1 | 0 | 1}
+  */
+ export function compareSemver(a, b) {
+   for (let i = 0; i < Math.min(a.length, b.length); i++) {
+     if (a[i] !== b[i]) return a[i] < b[i] ? -1 : 1;
+   }
+   if (a.length !== b.length) return a.length < b.length ? -1 : 1;
+   return 0;
+ }
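For reference, the comparison that `installLuaFunctions` in `src/client.js` relies on behaves as follows. The two helpers below are reproduced verbatim from this diff; only the usage lines are illustrative.

```javascript
// Version helpers from src/utils.js, reproduced for illustration.
function parseVersion(version) {
  const parsed = String(version).split('.').map(Number);
  if (parsed.some((n) => Number.isNaN(n))) return [0];
  return parsed;
}

function compareSemver(a, b) {
  for (let i = 0; i < Math.min(a.length, b.length); i++) {
    if (a[i] !== b[i]) return a[i] < b[i] ? -1 : 1;
  }
  if (a.length !== b.length) return a.length < b.length ? -1 : 1;
  return 0;
}

compareSemver(parseVersion('1.0'), parseVersion(null)); // 1 — nothing installed, so load
compareSemver(parseVersion('1.0'), parseVersion('1.0')); // 0 — up to date
compareSemver(parseVersion('1.0'), parseVersion('1.0.1')); // -1 — keep the installed version
```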