rrq 0.2.5__tar.gz → 0.3.5__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
rrq-0.3.5/.coverage ADDED
Binary file
@@ -1,3 +1,4 @@
  .venv/
  __pycache__/
- .git/
+ .git/
+ .vscode/
rrq-0.3.5/FUTURE.md ADDED
@@ -0,0 +1,33 @@
+ # RRQ Multi-Worker Safety
+
+ The core coordination primitives in RRQ are already designed for safe fan-out across multiple worker processes:
+
+ - Jobs live in a Redis ZSET, with workers atomically acquiring a per-job lock (`SET NX PX`) before removing (`ZREM`) the job ID and executing it
+ - The lock ensures that even if two workers race on the same job, only one proceeds past the `SET NX`
+ - Once removed from the ZSET, the job can't reappear until a retry or DLQ requeue
+ - Heartbeats and health keys are namespaced by worker ID (PID+UUID), so many workers can register themselves independently
+
+ ## Areas to Watch in Large Worker Fleets
+
+ ### 1. Two-step Lock+Pop Isn't Fully Atomic
+ - If a worker acquires the lock but crashes before `ZREM`, the job stays in the queue until the lock TTL expires, at which point another worker can grab it
+ - In practice this is rare (the crash must land in that tiny window), but you can eliminate it by bundling "pop from ZSET + set lock" into a single Lua script
+
+ ### 2. Lock TTL vs. Job Duration
+ - The lock TTL is set to `job_timeout + default_lock_timeout_extension_seconds`. If your handlers sometimes exceed that window, the lock can expire mid-run (though the job is no longer in the queue)
+ - Consider increasing the extension, or implementing a "lock refresher" for very long tasks
+
+ ### 3. Graceful Shutdown & Task Drain
+ - Workers stop polling in burst mode or on a shutdown signal, then "drain" in-flight tasks for up to `worker_shutdown_grace_period_seconds`
+ - Beyond that, they cancel. Make sure your handlers handle `CancelledError` gracefully
+
+ ### 4. Logging & Observability
+ - If you're tailing stdout/stderr from many workers, add the worker ID (and queue list) to your log formatter to keep things straight
+
+ ### 5. Health-Check TTLs
+ - The heartbeat loop writes a Redis key with a buffer TTL
+ - If your network is flaky or your workers get paused (e.g. in a debugger), you may see transient "missing" health keys
+
+ ## Summary
+
+ With these caveats in mind, you can absolutely spin up dozens (or hundreds) of RRQ worker processes against the same Redis instance, each pulling from the same or different queue names. The locking, queueing, and retry/DLQ logic will keep them from stepping on each other.
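The single-Lua-script fix suggested under point 1 could look roughly like the sketch below. This is not RRQ's actual implementation; the key layout (`queue_key`, `lock_prefix`) and the `claim_next_job` helper are illustrative, but `EVAL` with a script of this shape is the standard Redis pattern for making "pop + lock" one atomic step.

```python
# Sketch: atomically pop the next ready job from the queue ZSET and take its
# lock in a single Lua script, so no crash window exists between SET NX and ZREM.
CLAIM_AND_POP = """
local job_id = redis.call('ZRANGEBYSCORE', KEYS[1], '-inf', ARGV[1], 'LIMIT', 0, 1)[1]
if not job_id then return nil end
if redis.call('SET', KEYS[2] .. job_id, ARGV[2], 'NX', 'PX', ARGV[3]) then
    redis.call('ZREM', KEYS[1], job_id)
    return job_id
end
return nil
"""


def claim_next_job(redis_client, queue_key, lock_prefix, now_ms, worker_id, lock_ttl_ms):
    """Pop the next job whose score is <= now_ms and acquire its lock, atomically."""
    # redis-py signature: eval(script, numkeys, *keys_and_args)
    return redis_client.eval(
        CLAIM_AND_POP, 2, queue_key, lock_prefix, now_ms, worker_id, lock_ttl_ms
    )
```

Because the whole script runs atomically inside Redis, a worker that dies mid-claim either never popped the job or fully claimed it; there is no half-claimed state.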
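The "lock refresher" mentioned under point 2 could be a small background coroutine run alongside the handler. This is a hypothetical helper, not part of RRQ; `pexpire` is the standard Redis command, and the lock key name is illustrative.

```python
import asyncio


async def refresh_lock(redis_client, lock_key: str, ttl_ms: int, done: asyncio.Event) -> None:
    """Re-extend the job lock's TTL at one-third of its lifetime until the
    handler signals completion via `done`."""
    interval_s = ttl_ms / 3 / 1000  # refresh well before expiry
    while not done.is_set():
        try:
            # Wake early if the handler finishes; otherwise refresh on timeout.
            await asyncio.wait_for(done.wait(), timeout=interval_s)
        except asyncio.TimeoutError:
            await redis_client.pexpire(lock_key, ttl_ms)
```

A handler would start this with `asyncio.create_task(...)` before its long-running work and set `done` in a `finally:` block, so the lock stays alive exactly as long as the job does.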
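For point 3, a handler that survives the drain-then-cancel sequence follows a standard asyncio shape: catch `CancelledError`, clean up, and re-raise. The handler name, `ctx` dict, and cleanup step below are illustrative, not an RRQ API.

```python
import asyncio


async def process_report(ctx: dict, report_id: int) -> str:
    """Example handler shape: treat cancellation as a cleanup signal."""
    try:
        await asyncio.sleep(3600)  # stand-in for real work
        return "done"
    except asyncio.CancelledError:
        ctx["cleaned_up"] = True  # release locks/files/connections here
        raise  # always re-raise so the worker sees the cancellation
```

Swallowing the `CancelledError` instead of re-raising would make the worker believe the job completed normally, so the re-raise is the important part.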
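One way to implement the log-formatter tip under point 4 (an assumption about your logging setup, not RRQ built-in behavior) is to inject the worker ID and queue list into every record with a `logging.LoggerAdapter`:

```python
import logging

# Formatter expects worker_id/queues attributes on each record; the adapter
# below supplies them. Logger name and IDs are illustrative.
formatter = logging.Formatter(
    "%(asctime)s [%(worker_id)s %(queues)s] %(levelname)s %(message)s"
)
handler = logging.StreamHandler()
handler.setFormatter(formatter)

base = logging.getLogger("rrq.worker")
base.addHandler(handler)
base.setLevel(logging.INFO)

log = logging.LoggerAdapter(base, {"worker_id": "worker-1234-abcd", "queues": "default,emails"})
log.info("picked up job %s", "job-42")
```

With many worker processes sharing one stream, every line then carries enough context to attribute it to a specific worker and queue set.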
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: rrq
- Version: 0.2.5
+ Version: 0.3.5
  Summary: RRQ is a Python library for creating reliable job queues using Redis and asyncio
  Project-URL: Homepage, https://github.com/getresq/rrq
  Project-URL: Bug Tracker, https://github.com/getresq/rrq/issues
@@ -21,17 +21,12 @@ Requires-Dist: redis[hiredis]<6,>=4.2.0
  Requires-Dist: watchfiles>=0.19.0
  Provides-Extra: dev
  Requires-Dist: pytest-asyncio>=0.26.0; extra == 'dev'
+ Requires-Dist: pytest-cov>=6.0.0; extra == 'dev'
  Requires-Dist: pytest>=8.3.5; extra == 'dev'
  Description-Content-Type: text/markdown

  # RRQ: Reliable Redis Queue

- ____ ____ ___
- | _ \ | _ \ / _ \
- | |_) | | |_) | | | | |
- | _ < | _ < | |_| |
- |_| \_\ |_| \_\ \__\_\
-
  RRQ is a Python library for creating reliable job queues using Redis and `asyncio`, inspired by [ARQ (Async Redis Queue)](https://github.com/samuelcolvin/arq). It focuses on providing at-least-once job processing semantics with features like automatic retries, job timeouts, dead-letter queues, and graceful worker shutdown.

  ## Core Components
@@ -58,10 +53,20 @@ RRQ is a Python library for creating reliable job queues using Redis and `asynci
  * **Worker Health Checks**: Workers periodically update a health key in Redis with a TTL, allowing monitoring systems to track active workers.
  * **Deferred Execution**: Jobs can be scheduled to run at a future time using `_defer_by` or `_defer_until`.
  *Note: Using deferral with a specific `_job_id` will effectively reschedule the job associated with that ID to the new time, overwriting its previous definition and score. It does not create multiple distinct scheduled jobs with the same ID.*
+ *To batch multiple enqueue calls into a single deferred job (and prevent duplicates within the defer window), combine `_unique_key` with `_defer_by`. For example:*
+
+ ```python
+ await client.enqueue(
+     "process_updates",
+     item_id=123,
+     _unique_key="update:123",
+     _defer_by=10,
+ )
+ ```

  ## Basic Usage

- *(See [`rrq_example.py`](examples/rrq_example.py) in the project root for a runnable example)*
+ *(See [`rrq_example.py`](https://github.com/GetResQ/rrq/tree/master/example) in the project root for a runnable example)*

  **1. Define Handlers:**

@@ -165,10 +170,9 @@ rrq <command> [options]

  - **`worker run`**: Run an RRQ worker process to process jobs from queues.
  ```bash
- rrq worker run [--burst] [--detach] --settings <settings_path>
+ rrq worker run [--burst] --settings <settings_path>
  ```
  - `--burst`: Run in burst mode (process one job/batch then exit).
- - `--detach`: Run the worker in the background.
  - `--settings`: Python settings path for application worker settings (e.g., `myapp.worker_config.rrq_settings`).

  - **`worker watch`**: Run an RRQ worker with auto-restart on file changes in a specified directory.
@@ -1,11 +1,5 @@
  # RRQ: Reliable Redis Queue

- ____ ____ ___
- | _ \ | _ \ / _ \
- | |_) | | |_) | | | | |
- | _ < | _ < | |_| |
- |_| \_\ |_| \_\ \__\_\
-
  RRQ is a Python library for creating reliable job queues using Redis and `asyncio`, inspired by [ARQ (Async Redis Queue)](https://github.com/samuelcolvin/arq). It focuses on providing at-least-once job processing semantics with features like automatic retries, job timeouts, dead-letter queues, and graceful worker shutdown.

  ## Core Components
@@ -32,10 +26,20 @@ RRQ is a Python library for creating reliable job queues using Redis and `asynci
  * **Worker Health Checks**: Workers periodically update a health key in Redis with a TTL, allowing monitoring systems to track active workers.
  * **Deferred Execution**: Jobs can be scheduled to run at a future time using `_defer_by` or `_defer_until`.
  *Note: Using deferral with a specific `_job_id` will effectively reschedule the job associated with that ID to the new time, overwriting its previous definition and score. It does not create multiple distinct scheduled jobs with the same ID.*
+ *To batch multiple enqueue calls into a single deferred job (and prevent duplicates within the defer window), combine `_unique_key` with `_defer_by`. For example:*
+
+ ```python
+ await client.enqueue(
+     "process_updates",
+     item_id=123,
+     _unique_key="update:123",
+     _defer_by=10,
+ )
+ ```

  ## Basic Usage

- *(See [`rrq_example.py`](examples/rrq_example.py) in the project root for a runnable example)*
+ *(See [`rrq_example.py`](https://github.com/GetResQ/rrq/tree/master/example) in the project root for a runnable example)*

  **1. Define Handlers:**

@@ -139,10 +143,9 @@ rrq <command> [options]

  - **`worker run`**: Run an RRQ worker process to process jobs from queues.
  ```bash
- rrq worker run [--burst] [--detach] --settings <settings_path>
+ rrq worker run [--burst] --settings <settings_path>
  ```
  - `--burst`: Run in burst mode (process one job/batch then exit).
- - `--detach`: Run the worker in the background.
  - `--settings`: Python settings path for application worker settings (e.g., `myapp.worker_config.rrq_settings`).

  - **`worker watch`**: Run an RRQ worker with auto-restart on file changes in a specified directory.
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"

  [project]
  name = "rrq"
- version = "0.2.5"
+ version = "0.3.5"
  authors = [
      { name = "Mazdak Rezvani", email = "mazdak@me.com" },
  ]
@@ -33,6 +33,7 @@ dependencies = [
  dev = [
      "pytest>=8.3.5",
      "pytest-asyncio>=0.26.0",
+     "pytest-cov>=6.0.0",
  ]

  [project.urls]
@@ -40,7 +41,7 @@ dev = [
  "Bug Tracker" = "https://github.com/getresq/rrq/issues"

  [project.scripts]
- rrq = "rrq.rrq:rrq" # Assumes rrq.py is at the root of your source directory
+ rrq = "rrq.cli:rrq"

  [tool.pytest.ini_options]
- asyncio_default_fixture_loop_scope = "function"
+ asyncio_default_fixture_loop_scope = "function"
@@ -20,10 +20,11 @@ from .worker import RRQWorker

  logger = logging.getLogger(__name__)

+
  # Helper to load settings for commands
  def _load_app_settings(settings_object_path: str | None = None) -> RRQSettings:
      """Load the settings object from the given path.
-     If not provided, the RRQ_SETTINGS environment variable will be used.
+     If not provided, the RRQ_SETTINGS environment variable will be used.
      If the environment variable is not set, will create a default settings object.
      RRQ Setting objects, automatically pick up ENVIRONMENT variables starting with RRQ_.

@@ -53,10 +54,22 @@ def _load_app_settings(settings_object_path: str | None = None) -> RRQSettings:

          return settings_object
      except ImportError:
-         click.echo(click.style(f"ERROR: Could not import settings object '{settings_object_path}'. Make sure it is in PYTHONPATH.", fg="red"), err=True)
+         click.echo(
+             click.style(
+                 f"ERROR: Could not import settings object '{settings_object_path}'. Make sure it is in PYTHONPATH.",
+                 fg="red",
+             ),
+             err=True,
+         )
          sys.exit(1)
      except Exception as e:
-         click.echo(click.style(f"ERROR: Unexpected error processing settings object '{settings_object_path}': {e}", fg="red"), err=True)
+         click.echo(
+             click.style(
+                 f"ERROR: Unexpected error processing settings object '{settings_object_path}': {e}",
+                 fg="red",
+             ),
+             err=True,
+         )
          sys.exit(1)


@@ -73,13 +86,25 @@ async def check_health_async_impl(settings_object_path: str | None = None) -> bo
          logger.debug(f"Successfully connected to Redis: {rrq_settings.redis_dsn}")

          health_key_pattern = f"{HEALTH_KEY_PREFIX}*"
-         worker_keys = [key_bytes.decode("utf-8") async for key_bytes in job_store.redis.scan_iter(match=health_key_pattern)]
+         worker_keys = [
+             key_bytes.decode("utf-8")
+             async for key_bytes in job_store.redis.scan_iter(match=health_key_pattern)
+         ]

          if not worker_keys:
-             click.echo(click.style("Worker Health Check: FAIL (No active workers found)", fg="red"))
+             click.echo(
+                 click.style(
+                     "Worker Health Check: FAIL (No active workers found)", fg="red"
+                 )
+             )
              return False

-         click.echo(click.style(f"Worker Health Check: Found {len(worker_keys)} active worker(s):", fg="green"))
+         click.echo(
+             click.style(
+                 f"Worker Health Check: Found {len(worker_keys)} active worker(s):",
+                 fg="green",
+             )
+         )
          for key in worker_keys:
              worker_id = key.split(HEALTH_KEY_PREFIX)[1]
              health_data, ttl = await job_store.get_worker_health(worker_id)
@@ -95,28 +120,49 @@ async def check_health_async_impl(settings_object_path: str | None = None) -> bo
                      f" TTL: {ttl if ttl is not None else 'N/A'} seconds"
                  )
              else:
-                 click.echo(f" - Worker ID: {click.style(worker_id, bold=True)} - Health data missing/invalid. TTL: {ttl if ttl is not None else 'N/A'}s")
+                 click.echo(
+                     f" - Worker ID: {click.style(worker_id, bold=True)} - Health data missing/invalid. TTL: {ttl if ttl is not None else 'N/A'}s"
+                 )
          return True
      except redis.exceptions.ConnectionError as e:
          logger.error(f"Redis connection failed during health check: {e}", exc_info=True)
-         click.echo(click.style(f"Worker Health Check: FAIL - Redis connection error: {e}", fg="red"))
+         click.echo(
+             click.style(
+                 f"Worker Health Check: FAIL - Redis connection error: {e}", fg="red"
+             )
+         )
          return False
      except Exception as e:
-         logger.error(f"An unexpected error occurred during health check: {e}", exc_info=True)
-         click.echo(click.style(f"Worker Health Check: FAIL - Unexpected error: {e}", fg="red"))
+         logger.error(
+             f"An unexpected error occurred during health check: {e}", exc_info=True
+         )
+         click.echo(
+             click.style(f"Worker Health Check: FAIL - Unexpected error: {e}", fg="red")
+         )
          return False
      finally:
          if job_store:
              await job_store.aclose()

+
  # --- Process Management ---
- def start_rrq_worker_subprocess(is_detached: bool = False, settings_object_path: str | None = None) -> subprocess.Popen | None:
-     """Start an RRQ worker process."""
+ def start_rrq_worker_subprocess(
+     is_detached: bool = False,
+     settings_object_path: str | None = None,
+     queues: list[str] | None = None,
+ ) -> subprocess.Popen | None:
+     """Start an RRQ worker process, optionally for specific queues."""
      command = ["rrq", "worker", "run"]
      if settings_object_path:
          command.extend(["--settings", settings_object_path])
      else:
-         raise ValueError("start_rrq_worker_subprocess called without settings_object_path!")
+         raise ValueError(
+             "start_rrq_worker_subprocess called without settings_object_path!"
+         )
+     # Add queue filters if specified
+     if queues:
+         for q in queues:
+             command.extend(["--queue", q])

      logger.info(f"Starting worker subprocess with command: {' '.join(command)}")
      if is_detached:
@@ -139,35 +185,55 @@ def start_rrq_worker_subprocess(is_detached: bool = False, settings_object_path:
      return process


- def terminate_worker_process(process: subprocess.Popen | None, logger: logging.Logger) -> None:
+ def terminate_worker_process(
+     process: subprocess.Popen | None, logger: logging.Logger
+ ) -> None:
      if not process or process.pid is None:
          logger.debug("No active worker process to terminate.")
          return

      try:
          if process.poll() is not None:
-             logger.debug(f"Worker process {process.pid} already terminated (poll returned exit code: {process.returncode}).")
+             logger.debug(
+                 f"Worker process {process.pid} already terminated (poll returned exit code: {process.returncode})."
+             )
              return

          pgid = os.getpgid(process.pid)
-         logger.info(f"Terminating worker process group for PID {process.pid} (PGID {pgid})...")
+         logger.info(
+             f"Terminating worker process group for PID {process.pid} (PGID {pgid})..."
+         )
          os.killpg(pgid, signal.SIGTERM)
          process.wait(timeout=5)
      except subprocess.TimeoutExpired:
-         logger.warning(f"Worker process {process.pid} did not terminate gracefully (SIGTERM timeout), sending SIGKILL.")
+         logger.warning(
+             f"Worker process {process.pid} did not terminate gracefully (SIGTERM timeout), sending SIGKILL."
+         )
          with suppress(ProcessLookupError):
              os.killpg(os.getpgid(process.pid), signal.SIGKILL)
      except Exception as e:
          logger.error(f"Unexpected error checking worker process {process.pid}: {e}")


- async def watch_rrq_worker_impl(watch_path: str, settings_object_path: str | None = None) -> None:
+ async def watch_rrq_worker_impl(
+     watch_path: str,
+     settings_object_path: str | None = None,
+     queues: list[str] | None = None,
+ ) -> None:
      if not settings_object_path:
-         click.echo(click.style("ERROR: 'rrq worker watch' requires --settings to be specified.", fg="red"), err=True)
+         click.echo(
+             click.style(
+                 "ERROR: 'rrq worker watch' requires --settings to be specified.",
+                 fg="red",
+             ),
+             err=True,
+         )
          sys.exit(1)

      abs_watch_path = os.path.abspath(watch_path)
-     click.echo(f"Watching for file changes in {abs_watch_path} to restart RRQ worker (app settings: {settings_object_path})...")
+     click.echo(
+         f"Watching for file changes in {abs_watch_path} to restart RRQ worker (app settings: {settings_object_path})..."
+     )
      worker_process: subprocess.Popen | None = None
      loop = asyncio.get_event_loop()
      shutdown_event = asyncio.Event()
@@ -184,20 +250,28 @@ async def watch_rrq_worker_impl(watch_path: str, settings_object_path: str | Non
      signal.signal(signal.SIGTERM, sig_handler)

      try:
-         worker_process = start_rrq_worker_subprocess(is_detached=False, settings_object_path=settings_object_path)
+         worker_process = start_rrq_worker_subprocess(
+             is_detached=False,
+             settings_object_path=settings_object_path,
+             queues=queues,
+         )
          async for changes in awatch(abs_watch_path, stop_event=shutdown_event):
-             if shutdown_event.is_set():
+             if shutdown_event.is_set():
                  break
-             if not changes:
+             if not changes:
                  continue

              logger.info(f"File changes detected: {changes}. Restarting RRQ worker...")
              if worker_process is not None:
                  terminate_worker_process(worker_process, logger)
              await asyncio.sleep(1)
-             if shutdown_event.is_set():
+             if shutdown_event.is_set():
                  break
-             worker_process = start_rrq_worker_subprocess(is_detached=False, settings_object_path=settings_object_path)
+             worker_process = start_rrq_worker_subprocess(
+                 is_detached=False,
+                 settings_object_path=settings_object_path,
+                 queues=queues,
+             )
      except Exception as e:
          logger.error(f"Error in watch_rrq_worker: {e}", exc_info=True)
      finally:
@@ -213,7 +287,8 @@ async def watch_rrq_worker_impl(watch_path: str, settings_object_path: str | Non

  # --- Click CLI Definitions ---

- CONTEXT_SETTINGS = dict(help_option_names=['-h', '--help'])
+ CONTEXT_SETTINGS = dict(help_option_names=["-h", "--help"])
+

  @click.group(context_settings=CONTEXT_SETTINGS)
  def rrq():
@@ -226,7 +301,6 @@ def rrq():
      pass


-
  @rrq.group("worker")
  def worker_cli():
      """Manage RRQ workers (run, watch)."""
@@ -234,41 +308,66 @@ def worker_cli():


  @worker_cli.command("run")
- @click.option("--burst", is_flag=True, help="Run worker in burst mode (process one job/batch then exit). Not Implemented yet.")
- @click.option("--detach", is_flag=True, help="Run the worker in the background (detached).")
  @click.option(
-     "--settings",
-     "settings_object_path",
-     type=str,
-     required=False,
-     default=None,
-     help="Python settings path for application worker settings (e.g., myapp.worker_config.rrq_settings)."
+     "--burst",
+     is_flag=True,
+     help="Run worker in burst mode (process one job/batch then exit).",
  )
- def worker_run_command(burst: bool, detach: bool, settings_object_path: str):
+ @click.option(
+     "--queue",
+     "queues",
+     type=str,
+     multiple=True,
+     help="Queue(s) to poll. Defaults to settings.default_queue_name.",
+ )
+ @click.option(
+     "--settings",
+     "settings_object_path",
+     type=str,
+     required=False,
+     default=None,
+     help=(
+         "Python settings path for application worker settings "
+         "(e.g., myapp.worker_config.rrq_settings). "
+         "The specified settings object must include a `job_registry: JobRegistry`."
+     ),
+ )
+ def worker_run_command(
+     burst: bool,
+     queues: tuple[str, ...],
+     settings_object_path: str,
+ ):
      """Run an RRQ worker process. Requires --settings."""
      rrq_settings = _load_app_settings(settings_object_path)

-     if detach:
-         logger.info("Attempting to start worker in detached (background) mode...")
-         process = start_rrq_worker_subprocess(is_detached=True, settings_object_path=settings_object_path)
-         click.echo(f"Worker initiated in background (PID: {process.pid}). Check logs for status.")
-         return
-
-     if burst:
-         raise NotImplementedError("Burst mode is not implemented yet.")
+     # Determine queues to poll
+     queues_arg = list(queues) if queues else None
+     # Run worker in foreground (burst or continuous mode)

-     logger.info(f"Starting RRQ Worker (Burst: {burst}, App Settings: {settings_object_path})")
+     logger.info(
+         f"Starting RRQ Worker (Burst: {burst}, App Settings: {settings_object_path})"
+     )

      if not rrq_settings.job_registry:
-         click.echo(click.style("ERROR: No 'job_registry_app'. You must provide a JobRegistry instance in settings.", fg="red"), err=True)
+         click.echo(
+             click.style(
+                 "ERROR: No 'job_registry_app'. You must provide a JobRegistry instance in settings.",
+                 fg="red",
+             ),
+             err=True,
+         )
          sys.exit(1)

-     logger.debug(f"Registered handlers (from effective registry): {rrq_settings.job_registry.get_registered_functions()}")
+     logger.debug(
+         f"Registered handlers (from effective registry): {rrq_settings.job_registry.get_registered_functions()}"
+     )
      logger.debug(f"Effective RRQ settings for worker: {rrq_settings}")

      worker_instance = RRQWorker(
          settings=rrq_settings,
          job_registry=rrq_settings.job_registry,
+         queues=queues_arg,
+         burst=burst,
      )

      loop = asyncio.get_event_loop()
@@ -296,33 +395,126 @@ def worker_run_command(burst: bool, detach: bool, settings_object_path: str):
      show_default=True,
  )
  @click.option(
-     "--settings",
-     "settings_object_path",
-     type=str,
-     required=False,
-     default=None,
-     help="Python settings path for application worker settings (e.g., myapp.worker_config.rrq_settings)."
+     "--settings",
+     "settings_object_path",
+     type=str,
+     required=False,
+     default=None,
+     help=(
+         "Python settings path for application worker settings "
+         "(e.g., myapp.worker_config.rrq_settings). "
+         "The specified settings object must define a `job_registry: JobRegistry`."
+     ),
+ )
+ @click.option(
+     "--queue",
+     "queues",
+     type=str,
+     multiple=True,
+     help="Queue(s) to poll when restarting worker. Defaults to settings.default_queue_name.",
  )
- def worker_watch_command(path: str, settings_object_path: str):
+ def worker_watch_command(
+     path: str,
+     settings_object_path: str,
+     queues: tuple[str, ...],
+ ):
      """Run the RRQ worker with auto-restart on file changes in PATH. Requires --settings."""
-     asyncio.run(watch_rrq_worker_impl(path, settings_object_path=settings_object_path))
+     # Run watch with optional queue filters
+     asyncio.run(
+         watch_rrq_worker_impl(
+             path,
+             settings_object_path=settings_object_path,
+             queues=list(queues) if queues else None,
+         )
+     )
+
+
+ # --- DLQ Requeue CLI Command (delegates to JobStore) ---


  @rrq.command("check")
  @click.option(
-     "--settings",
-     "settings_object_path",
-     type=str,
-     required=False,
-     default=None,
-     help="Python settings path for application worker settings (e.g., myapp.worker_config.rrq_settings)."
+     "--settings",
+     "settings_object_path",
+     type=str,
+     required=False,
+     default=None,
+     help=(
+         "Python settings path for application worker settings "
+         "(e.g., myapp.worker_config.rrq_settings). "
+         "Must include `job_registry: JobRegistry` to identify workers."
+     ),
  )
  def check_command(settings_object_path: str):
      """Perform a health check on active RRQ worker(s). Requires --settings."""
      click.echo("Performing RRQ health check...")
-     healthy = asyncio.run(check_health_async_impl(settings_object_path=settings_object_path))
+     healthy = asyncio.run(
+         check_health_async_impl(settings_object_path=settings_object_path)
+     )
      if healthy:
          click.echo(click.style("Health check PASSED.", fg="green"))
      else:
          click.echo(click.style("Health check FAILED.", fg="red"))
          sys.exit(1)
+
+
+ @rrq.group("dlq")
+ def dlq_cli():
+     """Manage the Dead Letter Queue (DLQ)."""
+     pass
+
+
+ @dlq_cli.command("requeue")
+ @click.option(
+     "--settings",
+     "settings_object_path",
+     type=str,
+     required=False,
+     default=None,
+     help=(
+         "Python settings path for application worker settings "
+         "(e.g., myapp.worker_config.rrq_settings). "
+         "Must include `job_registry: JobRegistry` if requeueing requires handler resolution."
+     ),
+ )
+ @click.option(
+     "--dlq-name",
+     "dlq_name",
+     type=str,
+     required=False,
+     default=None,
+     help="Name of the DLQ (without prefix). Defaults to settings.default_dlq_name.",
+ )
+ @click.option(
+     "--queue",
+     "target_queue",
+     type=str,
+     required=False,
+     default=None,
+     help="Name of the target queue (without prefix). Defaults to settings.default_queue_name.",
+ )
+ @click.option(
+     "--limit",
+     type=int,
+     required=False,
+     default=None,
+     help="Maximum number of DLQ jobs to requeue; all if not set.",
+ )
+ def dlq_requeue_command(
+     settings_object_path: str,
+     dlq_name: str,
+     target_queue: str,
+     limit: int,
+ ):
+     """Requeue jobs from the dead letter queue back into a live queue."""
+     rrq_settings = _load_app_settings(settings_object_path)
+     dlq_to_use = dlq_name or rrq_settings.default_dlq_name
+     queue_to_use = target_queue or rrq_settings.default_queue_name
+     job_store = JobStore(settings=rrq_settings)
+     click.echo(
+         f"Requeuing jobs from DLQ '{dlq_to_use}' to queue '{queue_to_use}' (limit: {limit or 'all'})..."
+     )
+     count = asyncio.run(job_store.requeue_dlq(dlq_to_use, queue_to_use, limit))
+     click.echo(
+         f"Requeued {count} job(s) from DLQ '{dlq_to_use}' to queue '{queue_to_use}'."
+     )