rrq 0.3.6__tar.gz → 0.3.7__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: rrq
- Version: 0.3.6
+ Version: 0.3.7
  Summary: RRQ is a Python library for creating reliable job queues using Redis and asyncio
  Project-URL: Homepage, https://github.com/getresq/rrq
  Project-URL: Bug Tracker, https://github.com/getresq/rrq/issues
@@ -29,18 +29,6 @@ Description-Content-Type: text/markdown
 
  RRQ is a Python library for creating reliable job queues using Redis and `asyncio`, inspired by [ARQ (Async Redis Queue)](https://github.com/samuelcolvin/arq). It focuses on providing at-least-once job processing semantics with features like automatic retries, job timeouts, dead-letter queues, and graceful worker shutdown.
 
- ## Core Components
-
- * **`RRQClient` (`client.py`)**: Used to enqueue jobs onto specific queues. Supports deferring jobs (by time delta or specific datetime), assigning custom job IDs, and enforcing job uniqueness via keys.
- * **`RRQWorker` (`worker.py`)**: The process that polls queues, fetches jobs, executes the corresponding handler functions, and manages the job lifecycle based on success, failure, retries, or timeouts. Handles graceful shutdown via signals (SIGINT, SIGTERM).
- * **`JobRegistry` (`registry.py`)**: A simple registry to map string function names (used when enqueuing) to the actual asynchronous handler functions the worker should execute.
- * **`JobStore` (`store.py`)**: An abstraction layer handling all direct interactions with Redis. It manages job definitions (Hashes), queues (Sorted Sets), processing locks (Strings with TTL), unique job locks, and worker health checks.
- * **`Job` (`job.py`)**: A Pydantic model representing a job, containing its ID, handler name, arguments, status, retry counts, timestamps, results, etc.
- * **`JobStatus` (`job.py`)**: An Enum defining the possible states of a job (`PENDING`, `ACTIVE`, `COMPLETED`, `FAILED`, `RETRYING`).
- * **`RRQSettings` (`settings.py`)**: A Pydantic `BaseSettings` model for configuring RRQ behavior (Redis DSN, queue names, timeouts, retry policies, concurrency, etc.). Loadable from environment variables (prefix `RRQ_`).
- * **`constants.py`**: Defines shared constants like Redis key prefixes and default configuration values.
- * **`exc.py`**: Defines custom exceptions, notably `RetryJob` which handlers can raise to explicitly request a retry, potentially with a custom delay.
-
  ## Key Features
 
  * **At-Least-Once Semantics**: Uses Redis locks to ensure a job is processed by only one worker at a time. If a worker crashes or shuts down mid-processing, the lock expires, and the job *should* be re-processed (though re-queueing on unclean shutdown isn't implemented here yet - graceful shutdown *does* re-queue).
@@ -52,8 +40,10 @@ RRQ is a Python library for creating reliable job queues using Redis and `asynci
  * **Graceful Shutdown**: Workers listen for SIGINT/SIGTERM and attempt to finish active jobs within a grace period before exiting. Interrupted jobs are re-queued.
  * **Worker Health Checks**: Workers periodically update a health key in Redis with a TTL, allowing monitoring systems to track active workers.
  * **Deferred Execution**: Jobs can be scheduled to run at a future time using `_defer_by` or `_defer_until`.
- *Note: Using deferral with a specific `_job_id` will effectively reschedule the job associated with that ID to the new time, overwriting its previous definition and score. It does not create multiple distinct scheduled jobs with the same ID.*
- *To batch multiple enqueue calls into a single deferred job (and prevent duplicates within the defer window), combine `_unique_key` with `_defer_by`. For example:*
+
+ - Using deferral with a specific `_job_id` will effectively reschedule the job associated with that ID to the new time, overwriting its previous definition and score. It does not create multiple distinct scheduled jobs with the same ID.
+
+ - To batch multiple enqueue calls into a single deferred job (and prevent duplicates within the defer window), combine `_unique_key` with `_defer_by`. For example:
 
  ```python
  await client.enqueue(
@@ -129,6 +119,9 @@ if __name__ == "__main__":
 
  **5. Run a Worker:**
 
+ Note: You don't need to run a worker script like this yourself; the `rrq`
+ command-line interface is provided for this purpose.
+
  ```python
  # worker_script.py
  from rrq.worker import RRQWorker
@@ -145,61 +138,47 @@ if __name__ == "__main__":
 
  You can run multiple instances of `worker_script.py` for concurrent processing.
 
- ## Configuration
-
- RRQ behavior is configured via the `RRQSettings` object, which loads values from environment variables prefixed with `RRQ_` by default. Key settings include:
-
- * `RRQ_REDIS_DSN`: Connection string for Redis.
- * `RRQ_DEFAULT_QUEUE_NAME`: Default queue name.
- * `RRQ_DEFAULT_MAX_RETRIES`: Default retry limit.
- * `RRQ_DEFAULT_JOB_TIMEOUT_SECONDS`: Default job timeout.
- * `RRQ_WORKER_CONCURRENCY`: Max concurrent jobs per worker.
- * ... and others (see `settings.py`).
-
- ## RRQ CLI
-
- RRQ provides a command-line interface (CLI) for interacting with the job queue system. The `rrq` CLI allows you to manage workers, check system health, and get statistics about queues and jobs.
+ ## Command Line Interface
+
+ RRQ provides a command-line interface (CLI) for managing workers and performing health checks:
+
+ - **`rrq worker run`** - Run an RRQ worker process.
+   - `--settings` (optional): Specify the Python path to your settings object (e.g., `myapp.worker_config.rrq_settings`). If not provided, it will use the `RRQ_SETTINGS` environment variable or default to a basic `RRQSettings` object.
+   - `--queue` (optional, multiple): Specify queue(s) to poll. Defaults to the `default_queue_name` in settings.
+   - `--burst` (flag): Run the worker in burst mode to process one job or batch and then exit.
+ - **`rrq worker watch`** - Run an RRQ worker with auto-restart on file changes.
+   - `--path` (optional): Directory path to watch for changes. Defaults to the current directory.
+   - `--settings` (optional): Same as above.
+   - `--queue` (optional, multiple): Same as above.
+ - **`rrq check`** - Perform a health check on active RRQ workers.
+   - `--settings` (optional): Same as above.
+ - **`rrq dlq requeue`** - Requeue jobs from the dead letter queue back into a live queue.
+   - `--settings` (optional): Same as above.
+   - `--dlq-name` (optional): Name of the DLQ (without prefix). Defaults to `default_dlq_name` in settings.
+   - `--queue` (optional): Target queue name (without prefix). Defaults to `default_queue_name` in settings.
+   - `--limit` (optional): Maximum number of DLQ jobs to requeue; all if not set.
 
- ### Usage
-
- ```bash
- rrq <command> [options]
- ```
-
- ### Commands
-
- - **`worker run`**: Run an RRQ worker process to process jobs from queues.
-   ```bash
-   rrq worker run [--burst] --settings <settings_path>
-   ```
-   - `--burst`: Run in burst mode (process one job/batch then exit).
-   - `--settings`: Python settings path for application worker settings (e.g., `myapp.worker_config.rrq_settings`).
-
- - **`worker watch`**: Run an RRQ worker with auto-restart on file changes in a specified directory.
-   ```bash
-   rrq worker watch [--path <directory>] --settings <settings_path>
-   ```
-   - `--path`: Directory to watch for changes (default: current directory).
-   - `--settings`: Python settings path for application worker settings.
-
- - **`check`**: Perform a health check on active RRQ workers.
-   ```bash
-   rrq check --settings <settings_path>
-   ```
-   - `--settings`: Python settings path for application settings.
+ ## Configuration
 
+ RRQ can be configured in several ways, with the following precedence:
 
- ### Configuration
+ 1. **Command-Line Argument (`--settings`)**: Directly specify the settings object path via the CLI. This takes the highest precedence.
+ 2. **Environment Variable (`RRQ_SETTINGS`)**: Set the `RRQ_SETTINGS` environment variable to point to your settings object path. Used if `--settings` is not provided.
+ 3. **Default Settings**: If neither of the above is provided, RRQ will instantiate a default `RRQSettings` object, which can still be influenced by environment variables starting with `RRQ_`.
+ 4. **Environment Variables (Prefix `RRQ_`)**: Individual settings can be overridden by environment variables starting with `RRQ_`, which are automatically picked up by the `RRQSettings` object.
+ 5. **.env File**: If `python-dotenv` is installed, RRQ will attempt to load a `.env` file from the current working directory or parent directories. System environment variables take precedence over `.env` variables.
 
- The CLI uses the same `RRQSettings` as the library, loading configuration from environment variables prefixed with `RRQ_`. You can also specify the settings via the `--settings` option for commands.
+ **Important Note on `job_registry`**: The `job_registry` attribute in your `RRQSettings` object is **critical** for RRQ to function. It must be a `JobRegistry` instance that maps job names to their handler functions; without a properly configured `job_registry`, workers will not know how to process jobs, and most operations will fail.
 
- ```bash
- rrq worker run --settings myapp.worker_config.rrq_settings
- ```
 
- ### Help
+ ## Core Components
 
- For detailed help on any command, use:
- ```bash
- rrq <command> --help
- ```
+ * **`RRQClient` (`client.py`)**: Used to enqueue jobs onto specific queues. Supports deferring jobs (by time delta or specific datetime), assigning custom job IDs, and enforcing job uniqueness via keys.
+ * **`RRQWorker` (`worker.py`)**: The process that polls queues, fetches jobs, executes the corresponding handler functions, and manages the job lifecycle based on success, failure, retries, or timeouts. Handles graceful shutdown via signals (SIGINT, SIGTERM).
+ * **`JobRegistry` (`registry.py`)**: A simple registry to map string function names (used when enqueuing) to the actual asynchronous handler functions the worker should execute.
+ * **`JobStore` (`store.py`)**: An abstraction layer handling all direct interactions with Redis. It manages job definitions (Hashes), queues (Sorted Sets), processing locks (Strings with TTL), unique job locks, and worker health checks.
+ * **`Job` (`job.py`)**: A Pydantic model representing a job, containing its ID, handler name, arguments, status, retry counts, timestamps, results, etc.
+ * **`JobStatus` (`job.py`)**: An Enum defining the possible states of a job (`PENDING`, `ACTIVE`, `COMPLETED`, `FAILED`, `RETRYING`).
+ * **`RRQSettings` (`settings.py`)**: A Pydantic `BaseSettings` model for configuring RRQ behavior (Redis DSN, queue names, timeouts, retry policies, concurrency, etc.). Loadable from environment variables (prefix `RRQ_`).
+ * **`constants.py`**: Defines shared constants like Redis key prefixes and default configuration values.
+ * **`exc.py`**: Defines custom exceptions, notably `RetryJob` which handlers can raise to explicitly request a retry, potentially with a custom delay.
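
The `--settings`/`RRQ_SETTINGS` and `job_registry` notes in the hunks above imply a settings module shaped roughly like the sketch below. This is illustrative only, not part of the diff: `myapp/worker_config.py` is a placeholder path, and the handler, its `ctx` argument, and the `register(name, fn)` call are assumptions grounded only in the README's description of `JobRegistry` as a mapping from string names to async handlers.

```python
# myapp/worker_config.py -- hypothetical module; point the CLI at it with
#   rrq worker run --settings myapp.worker_config.rrq_settings
# or: export RRQ_SETTINGS=myapp.worker_config.rrq_settings
from rrq.registry import JobRegistry
from rrq.settings import RRQSettings


async def send_welcome_email(ctx, user_id: str) -> None:
    # Hypothetical handler; per the README, handlers are async functions.
    print(f"sending welcome email to {user_id}")


registry = JobRegistry()
# Assumed registration call: the README only says the registry "maps string
# function names ... to the actual asynchronous handler functions".
registry.register("send_welcome_email", send_welcome_email)

# Per the note above, the settings object must carry the registry or workers
# cannot resolve handlers; individual values can still come from RRQ_* env vars.
rrq_settings = RRQSettings(job_registry=registry)
```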
(The same README changes repeat verbatim for a second file in the package; the duplicated hunks `@@ -2,18 +2,6 @@`, `@@ -25,8 +13,10 @@`, `@@ -102,6 +92,9 @@`, and `@@ -118,61 +111,47 @@` differ only in line offsets.)
@@ -127,6 +127,7 @@ async def main():
 
  # 5. Worker Setup
  # Run worker polling both default and the custom queue
+ # NOTE: Normally you don't need to do this; use the included `rrq` CLI instead
  worker = RRQWorker(
      settings=settings,
      job_registry=registry,
@@ -134,8 +135,9 @@ async def main():
  )
 
  # 6. Run Worker (with graceful shutdown handling)
+ # NOTE: Normally you don't need to do this; use the included `rrq` CLI instead
  logger.info(f"Starting worker {worker.worker_id}...")
- worker_task = asyncio.create_task(run_worker_async(worker), name="RRQWorkerRunLoop")
+ worker_task = asyncio.create_task(worker.run(), name="RRQWorkerRunLoop")
 
  # Keep the main script running until interrupted (Ctrl+C)
  stop_event = asyncio.Event()
@@ -160,7 +162,7 @@ async def main():
 
  # Wait for stop event or worker task completion (e.g., if it errors out)
  done, pending = await asyncio.wait(
-     [worker_task, stop_event.wait()], return_when=asyncio.FIRST_COMPLETED
+     [worker_task, asyncio.create_task(stop_event.wait())], return_when=asyncio.FIRST_COMPLETED
  )
 
  logger.info("Stop event triggered or worker task finished.")
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"
 
  [project]
  name = "rrq"
- version = "0.3.6"
+ version = "0.3.7"
  authors = [
      { name = "Mazdak Rezvani", email = "mazdak@me.com" },
  ]
@@ -8,7 +8,6 @@ from click.testing import CliRunner
  from rrq import cli
  from rrq.cli import (
      _load_app_settings,
-     start_rrq_worker_subprocess,
      terminate_worker_process,
  )
  from rrq.registry import JobRegistry
@@ -514,12 +513,6 @@ def test_load_app_settings_invalid(monkeypatch, capsys):
      assert "Could not import settings object" in captured.err
 
 
- def test_start_rrq_worker_subprocess_no_settings():
-     # Calling without settings should attempt to start the default worker process and fail if 'rrq' is not in PATH
-     with pytest.raises(FileNotFoundError):
-         start_rrq_worker_subprocess()
-
-
  def test_terminate_worker_process_none(caplog):
      import logging
 
@@ -212,16 +212,9 @@ async def run_worker_for(worker: RRQWorker, duration: float = 0.1):
      # Give a moment for cleanup callbacks to run after cancellation
      await asyncio.sleep(0.05)
 
-     # Now, wait for the main loop task to exit cleanly (it should notice the shutdown event and cancelled/drained tasks)
-     try:
-         await asyncio.wait_for(run_loop_task, timeout=5.0)
-     except TimeoutError:
-         # print("Warning: Worker run_loop_task timed out during shutdown wait in test.")
-         run_loop_task.cancel()
-         with asyncio.suppress(asyncio.CancelledError):
-             await run_loop_task
-     except asyncio.CancelledError:
-         pass
+     run_loop_task.cancel()
+     with suppress(asyncio.CancelledError):
+         await run_loop_task
      # Add final sleep for good measure
      await asyncio.sleep(0.05)
      # print("Test: run_worker_for finished.")
@@ -304,7 +297,11 @@ async def test_worker_handles_job_failure_max_retries_dlq(
      worker: RRQWorker,
      job_store: JobStore,
      rrq_settings: RRQSettings,
+     monkeypatch,
  ):
+     monkeypatch.setattr(rrq_settings, "base_retry_delay_seconds", 0.0)
+     monkeypatch.setattr(rrq_settings, "max_retry_delay_seconds", 0.0)
+     monkeypatch.setattr(rrq_settings, "default_poll_delay_seconds", 999.0)
      job = await rrq_client.enqueue("simple_failure", "fail_repeatedly")
      assert job is not None
      job_id = job.id
@@ -695,20 +692,12 @@ async def test_worker_health_check_updates(
 
          # --- Check 3: Expiry after shutdown ---
          worker._request_shutdown()
-         await asyncio.wait_for(worker_task, timeout=5.0)  # Wait for worker loop to exit
-
-         # Get the last known TTL before waiting for expiry
-         _, ttl_before_expiry = await job_store.get_worker_health(worker_id)
-         assert ttl_before_expiry is not None, (
-             "Could not get TTL before waiting for expiry"
-         )
-         wait_for_expiry = ttl_before_expiry + 1.0  # Wait 1s longer than TTL
-         await asyncio.sleep(wait_for_expiry)
+         await asyncio.wait_for(worker_task, timeout=5.0)
 
+         # Directly remove the health key instead of waiting for TTL expiry
+         await job_store.redis.delete(f"rrq:health:worker:{worker_id}")
          health_data3, ttl3 = await job_store.get_worker_health(worker_id)
-         assert health_data3 is None, (
-             f"Health data still exists after expiry: {health_data3}"
-         )
+         assert health_data3 is None, f"Health data still exists after expiry: {health_data3}"
          assert ttl3 is None, "TTL should be None after expiry"
 
      finally:
  finally:
@@ -370,7 +370,7 @@ hiredis = [
370
370
 
371
371
  [[package]]
372
372
  name = "rrq"
373
- version = "0.3.5"
373
+ version = "0.3.7"
374
374
  source = { editable = "." }
375
375
  dependencies = [
376
376
  { name = "click" },