rrq 0.3.5__tar.gz → 0.3.7__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {rrq-0.3.5 → rrq-0.3.7}/PKG-INFO +45 -66
- {rrq-0.3.5 → rrq-0.3.7}/README.md +44 -65
- {rrq-0.3.5 → rrq-0.3.7}/example/rrq_example.py +4 -2
- {rrq-0.3.5 → rrq-0.3.7}/pyproject.toml +1 -1
- {rrq-0.3.5 → rrq-0.3.7}/rrq/cli.py +30 -18
- {rrq-0.3.5 → rrq-0.3.7}/tests/test_cli.py +71 -11
- {rrq-0.3.5 → rrq-0.3.7}/tests/test_worker.py +11 -22
- {rrq-0.3.5 → rrq-0.3.7}/uv.lock +1 -1
- {rrq-0.3.5 → rrq-0.3.7}/.coverage +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/.gitignore +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/FUTURE.md +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/LICENSE +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/MANIFEST.in +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/example/example_rrq_settings.py +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/rrq/__init__.py +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/rrq/client.py +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/rrq/constants.py +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/rrq/exc.py +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/rrq/job.py +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/rrq/registry.py +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/rrq/settings.py +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/rrq/store.py +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/rrq/worker.py +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/tests/__init__.py +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/tests/test_client.py +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/tests/test_registry.py +0 -0
- {rrq-0.3.5 → rrq-0.3.7}/tests/test_store.py +0 -0
{rrq-0.3.5 → rrq-0.3.7}/PKG-INFO
RENAMED
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: rrq
-Version: 0.3.5
+Version: 0.3.7
 Summary: RRQ is a Python library for creating reliable job queues using Redis and asyncio
 Project-URL: Homepage, https://github.com/getresq/rrq
 Project-URL: Bug Tracker, https://github.com/getresq/rrq/issues
@@ -29,18 +29,6 @@ Description-Content-Type: text/markdown
 
 RRQ is a Python library for creating reliable job queues using Redis and `asyncio`, inspired by [ARQ (Async Redis Queue)](https://github.com/samuelcolvin/arq). It focuses on providing at-least-once job processing semantics with features like automatic retries, job timeouts, dead-letter queues, and graceful worker shutdown.
 
-## Core Components
-
-* **`RRQClient` (`client.py`)**: Used to enqueue jobs onto specific queues. Supports deferring jobs (by time delta or specific datetime), assigning custom job IDs, and enforcing job uniqueness via keys.
-* **`RRQWorker` (`worker.py`)**: The process that polls queues, fetches jobs, executes the corresponding handler functions, and manages the job lifecycle based on success, failure, retries, or timeouts. Handles graceful shutdown via signals (SIGINT, SIGTERM).
-* **`JobRegistry` (`registry.py`)**: A simple registry to map string function names (used when enqueuing) to the actual asynchronous handler functions the worker should execute.
-* **`JobStore` (`store.py`)**: An abstraction layer handling all direct interactions with Redis. It manages job definitions (Hashes), queues (Sorted Sets), processing locks (Strings with TTL), unique job locks, and worker health checks.
-* **`Job` (`job.py`)**: A Pydantic model representing a job, containing its ID, handler name, arguments, status, retry counts, timestamps, results, etc.
-* **`JobStatus` (`job.py`)**: An Enum defining the possible states of a job (`PENDING`, `ACTIVE`, `COMPLETED`, `FAILED`, `RETRYING`).
-* **`RRQSettings` (`settings.py`)**: A Pydantic `BaseSettings` model for configuring RRQ behavior (Redis DSN, queue names, timeouts, retry policies, concurrency, etc.). Loadable from environment variables (prefix `RRQ_`).
-* **`constants.py`**: Defines shared constants like Redis key prefixes and default configuration values.
-* **`exc.py`**: Defines custom exceptions, notably `RetryJob` which handlers can raise to explicitly request a retry, potentially with a custom delay.
-
 ## Key Features
 
 * **At-Least-Once Semantics**: Uses Redis locks to ensure a job is processed by only one worker at a time. If a worker crashes or shuts down mid-processing, the lock expires, and the job *should* be re-processed (though re-queueing on unclean shutdown isn't implemented here yet - graceful shutdown *does* re-queue).
@@ -52,8 +40,10 @@ RRQ is a Python library for creating reliable job queues using Redis and `asynci
 * **Graceful Shutdown**: Workers listen for SIGINT/SIGTERM and attempt to finish active jobs within a grace period before exiting. Interrupted jobs are re-queued.
 * **Worker Health Checks**: Workers periodically update a health key in Redis with a TTL, allowing monitoring systems to track active workers.
 * **Deferred Execution**: Jobs can be scheduled to run at a future time using `_defer_by` or `_defer_until`.
-
-
+
+- Using deferral with a specific `_job_id` will effectively reschedule the job associated with that ID to the new time, overwriting its previous definition and score. It does not create multiple distinct scheduled jobs with the same ID.
+
+- To batch multiple enqueue calls into a single deferred job (and prevent duplicates within the defer window), combine `_unique_key` with `_defer_by`. For example:
 
 ```python
 await client.enqueue(
@@ -129,6 +119,9 @@ if __name__ == "__main__":
 
 **5. Run a Worker:**
 
+Note: You don't need to run a worker as the Command Line Interface `rrq` is used for
+this purpose.
+
 ```python
 # worker_script.py
 from rrq.worker import RRQWorker
@@ -145,61 +138,47 @@ if __name__ == "__main__":
 
 You can run multiple instances of `worker_script.py` for concurrent processing.
 
-##
-
-RRQ
-
-
-
-
-
-
-
-
-
+## Command Line Interface
+
+RRQ provides a command-line interface (CLI) for managing workers and performing health checks:
+
+- **`rrq worker run`** - Run an RRQ worker process.
+  - `--settings` (optional): Specify the Python path to your settings object (e.g., `myapp.worker_config.rrq_settings`). If not provided, it will use the `RRQ_SETTINGS` environment variable or default to a basic `RRQSettings` object.
+  - `--queue` (optional, multiple): Specify queue(s) to poll. Defaults to the `default_queue_name` in settings.
+  - `--burst` (flag): Run the worker in burst mode to process one job or batch and then exit.
+- **`rrq worker watch`** - Run an RRQ worker with auto-restart on file changes.
+  - `--path` (optional): Directory path to watch for changes. Defaults to the current directory.
+  - `--settings` (optional): Same as above.
+  - `--queue` (optional, multiple): Same as above.
+- **`rrq check`** - Perform a health check on active RRQ workers.
+  - `--settings` (optional): Same as above.
+- **`rrq dlq requeue`** - Requeue jobs from the dead letter queue back into a live queue.
+  - `--settings` (optional): Same as above.
+  - `--dlq-name` (optional): Name of the DLQ (without prefix). Defaults to `default_dlq_name` in settings.
+  - `--queue` (optional): Target queue name (without prefix). Defaults to `default_queue_name` in settings.
+  - `--limit` (optional): Maximum number of DLQ jobs to requeue; all if not set.
 
-
-
-```bash
-rrq <command> [options]
-```
-
-### Commands
-
-- **`worker run`**: Run an RRQ worker process to process jobs from queues.
-  ```bash
-  rrq worker run [--burst] --settings <settings_path>
-  ```
-  - `--burst`: Run in burst mode (process one job/batch then exit).
-  - `--settings`: Python settings path for application worker settings (e.g., `myapp.worker_config.rrq_settings`).
-
-- **`worker watch`**: Run an RRQ worker with auto-restart on file changes in a specified directory.
-  ```bash
-  rrq worker watch [--path <directory>] --settings <settings_path>
-  ```
-  - `--path`: Directory to watch for changes (default: current directory).
-  - `--settings`: Python settings path for application worker settings.
-
-- **`check`**: Perform a health check on active RRQ workers.
-  ```bash
-  rrq check --settings <settings_path>
-  ```
-  - `--settings`: Python settings path for application settings.
+## Configuration
 
+RRQ can be configured in several ways, with the following precedence:
 
-
+1. **Command-Line Argument (`--settings`)**: Directly specify the settings object path via the CLI. This takes the highest precedence.
+2. **Environment Variable (`RRQ_SETTINGS`)**: Set the `RRQ_SETTINGS` environment variable to point to your settings object path. Used if `--settings` is not provided.
+3. **Default Settings**: If neither of the above is provided, RRQ will instantiate a default `RRQSettings` object, which can still be influenced by environment variables starting with `RRQ_`.
+4. **Environment Variables (Prefix `RRQ_`)**: Individual settings can be overridden by environment variables starting with `RRQ_`, which are automatically picked up by the `RRQSettings` object.
+5. **.env File**: If `python-dotenv` is installed, RRQ will attempt to load a `.env` file from the current working directory or parent directories. System environment variables take precedence over `.env` variables.
 
-The
+**Important Note on `job_registry`**: The `job_registry` attribute in your `RRQSettings` object is **critical** for RRQ to function. It must be an instance of `JobRegistry` and is used to register job handlers. Without a properly configured `job_registry`, workers will not know how to process jobs, and most operations will fail. Ensure it is set in your settings object to map job names to their respective handler functions.
 
-```bash
-rrq worker run --settings myapp.worker_config.rrq_settings
-```
 
-
+## Core Components
 
-
-
-
-
+* **`RRQClient` (`client.py`)**: Used to enqueue jobs onto specific queues. Supports deferring jobs (by time delta or specific datetime), assigning custom job IDs, and enforcing job uniqueness via keys.
+* **`RRQWorker` (`worker.py`)**: The process that polls queues, fetches jobs, executes the corresponding handler functions, and manages the job lifecycle based on success, failure, retries, or timeouts. Handles graceful shutdown via signals (SIGINT, SIGTERM).
+* **`JobRegistry` (`registry.py`)**: A simple registry to map string function names (used when enqueuing) to the actual asynchronous handler functions the worker should execute.
+* **`JobStore` (`store.py`)**: An abstraction layer handling all direct interactions with Redis. It manages job definitions (Hashes), queues (Sorted Sets), processing locks (Strings with TTL), unique job locks, and worker health checks.
+* **`Job` (`job.py`)**: A Pydantic model representing a job, containing its ID, handler name, arguments, status, retry counts, timestamps, results, etc.
+* **`JobStatus` (`job.py`)**: An Enum defining the possible states of a job (`PENDING`, `ACTIVE`, `COMPLETED`, `FAILED`, `RETRYING`).
+* **`RRQSettings` (`settings.py`)**: A Pydantic `BaseSettings` model for configuring RRQ behavior (Redis DSN, queue names, timeouts, retry policies, concurrency, etc.). Loadable from environment variables (prefix `RRQ_`).
+* **`constants.py`**: Defines shared constants like Redis key prefixes and default configuration values.
+* **`exc.py`**: Defines custom exceptions, notably `RetryJob` which handlers can raise to explicitly request a retry, potentially with a custom delay.
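The deferral semantics added to the README above (re-enqueueing with the same `_job_id` reschedules rather than duplicates; `_unique_key` suppresses duplicates within the defer window) can be illustrated with a small in-memory analog. `MiniQueue` and its argument names are hypothetical stand-ins for illustration only; RRQ itself implements this with Redis hashes, sorted sets, and unique-lock keys as described under Core Components.

```python
class MiniQueue:
    """In-memory sketch of the enqueue semantics described above."""

    def __init__(self):
        self.jobs = {}          # job_id -> definition (analog of a Redis hash)
        self.scores = {}        # job_id -> scheduled run time (analog of a sorted-set score)
        self.unique_locks = {}  # unique_key -> lock expiry time

    def enqueue(self, handler, *, job_id, defer_by=0.0, unique_key=None, now=0.0):
        if unique_key is not None:
            expiry = self.unique_locks.get(unique_key)
            if expiry is not None and expiry > now:
                return None  # duplicate within the defer window: dropped
            self.unique_locks[unique_key] = now + defer_by
        # Re-using a job_id overwrites the previous definition and score:
        # one scheduled job per ID, rescheduled to the new time.
        self.jobs[job_id] = {"handler": handler}
        self.scores[job_id] = now + defer_by
        return job_id
```

For example, two `enqueue` calls with the same `unique_key` inside a 30-second defer window collapse into one scheduled job, while re-enqueueing the same `job_id` with a longer delay simply moves its score forward.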
{rrq-0.3.5 → rrq-0.3.7}/README.md
RENAMED
@@ -2,18 +2,6 @@
 
 RRQ is a Python library for creating reliable job queues using Redis and `asyncio`, inspired by [ARQ (Async Redis Queue)](https://github.com/samuelcolvin/arq). It focuses on providing at-least-once job processing semantics with features like automatic retries, job timeouts, dead-letter queues, and graceful worker shutdown.
 
-## Core Components
-
-* **`RRQClient` (`client.py`)**: Used to enqueue jobs onto specific queues. Supports deferring jobs (by time delta or specific datetime), assigning custom job IDs, and enforcing job uniqueness via keys.
-* **`RRQWorker` (`worker.py`)**: The process that polls queues, fetches jobs, executes the corresponding handler functions, and manages the job lifecycle based on success, failure, retries, or timeouts. Handles graceful shutdown via signals (SIGINT, SIGTERM).
-* **`JobRegistry` (`registry.py`)**: A simple registry to map string function names (used when enqueuing) to the actual asynchronous handler functions the worker should execute.
-* **`JobStore` (`store.py`)**: An abstraction layer handling all direct interactions with Redis. It manages job definitions (Hashes), queues (Sorted Sets), processing locks (Strings with TTL), unique job locks, and worker health checks.
-* **`Job` (`job.py`)**: A Pydantic model representing a job, containing its ID, handler name, arguments, status, retry counts, timestamps, results, etc.
-* **`JobStatus` (`job.py`)**: An Enum defining the possible states of a job (`PENDING`, `ACTIVE`, `COMPLETED`, `FAILED`, `RETRYING`).
-* **`RRQSettings` (`settings.py`)**: A Pydantic `BaseSettings` model for configuring RRQ behavior (Redis DSN, queue names, timeouts, retry policies, concurrency, etc.). Loadable from environment variables (prefix `RRQ_`).
-* **`constants.py`**: Defines shared constants like Redis key prefixes and default configuration values.
-* **`exc.py`**: Defines custom exceptions, notably `RetryJob` which handlers can raise to explicitly request a retry, potentially with a custom delay.
-
 ## Key Features
 
 * **At-Least-Once Semantics**: Uses Redis locks to ensure a job is processed by only one worker at a time. If a worker crashes or shuts down mid-processing, the lock expires, and the job *should* be re-processed (though re-queueing on unclean shutdown isn't implemented here yet - graceful shutdown *does* re-queue).
@@ -25,8 +13,10 @@ RRQ is a Python library for creating reliable job queues using Redis and `asynci
 * **Graceful Shutdown**: Workers listen for SIGINT/SIGTERM and attempt to finish active jobs within a grace period before exiting. Interrupted jobs are re-queued.
 * **Worker Health Checks**: Workers periodically update a health key in Redis with a TTL, allowing monitoring systems to track active workers.
 * **Deferred Execution**: Jobs can be scheduled to run at a future time using `_defer_by` or `_defer_until`.
-
-
+
+- Using deferral with a specific `_job_id` will effectively reschedule the job associated with that ID to the new time, overwriting its previous definition and score. It does not create multiple distinct scheduled jobs with the same ID.
+
+- To batch multiple enqueue calls into a single deferred job (and prevent duplicates within the defer window), combine `_unique_key` with `_defer_by`. For example:
 
 ```python
 await client.enqueue(
@@ -102,6 +92,9 @@ if __name__ == "__main__":
 
 **5. Run a Worker:**
 
+Note: You don't need to run a worker as the Command Line Interface `rrq` is used for
+this purpose.
+
 ```python
 # worker_script.py
 from rrq.worker import RRQWorker
@@ -118,61 +111,47 @@ if __name__ == "__main__":
 
 You can run multiple instances of `worker_script.py` for concurrent processing.
 
-##
-
-RRQ
-
-
-
-
-
-
-
-
-
+## Command Line Interface
+
+RRQ provides a command-line interface (CLI) for managing workers and performing health checks:
+
+- **`rrq worker run`** - Run an RRQ worker process.
+  - `--settings` (optional): Specify the Python path to your settings object (e.g., `myapp.worker_config.rrq_settings`). If not provided, it will use the `RRQ_SETTINGS` environment variable or default to a basic `RRQSettings` object.
+  - `--queue` (optional, multiple): Specify queue(s) to poll. Defaults to the `default_queue_name` in settings.
+  - `--burst` (flag): Run the worker in burst mode to process one job or batch and then exit.
+- **`rrq worker watch`** - Run an RRQ worker with auto-restart on file changes.
+  - `--path` (optional): Directory path to watch for changes. Defaults to the current directory.
+  - `--settings` (optional): Same as above.
+  - `--queue` (optional, multiple): Same as above.
+- **`rrq check`** - Perform a health check on active RRQ workers.
+  - `--settings` (optional): Same as above.
+- **`rrq dlq requeue`** - Requeue jobs from the dead letter queue back into a live queue.
+  - `--settings` (optional): Same as above.
+  - `--dlq-name` (optional): Name of the DLQ (without prefix). Defaults to `default_dlq_name` in settings.
+  - `--queue` (optional): Target queue name (without prefix). Defaults to `default_queue_name` in settings.
+  - `--limit` (optional): Maximum number of DLQ jobs to requeue; all if not set.
 
-
-
-```bash
-rrq <command> [options]
-```
-
-### Commands
-
-- **`worker run`**: Run an RRQ worker process to process jobs from queues.
-  ```bash
-  rrq worker run [--burst] --settings <settings_path>
-  ```
-  - `--burst`: Run in burst mode (process one job/batch then exit).
-  - `--settings`: Python settings path for application worker settings (e.g., `myapp.worker_config.rrq_settings`).
-
-- **`worker watch`**: Run an RRQ worker with auto-restart on file changes in a specified directory.
-  ```bash
-  rrq worker watch [--path <directory>] --settings <settings_path>
-  ```
-  - `--path`: Directory to watch for changes (default: current directory).
-  - `--settings`: Python settings path for application worker settings.
-
-- **`check`**: Perform a health check on active RRQ workers.
-  ```bash
-  rrq check --settings <settings_path>
-  ```
-  - `--settings`: Python settings path for application settings.
+## Configuration
 
+RRQ can be configured in several ways, with the following precedence:
 
-
+1. **Command-Line Argument (`--settings`)**: Directly specify the settings object path via the CLI. This takes the highest precedence.
+2. **Environment Variable (`RRQ_SETTINGS`)**: Set the `RRQ_SETTINGS` environment variable to point to your settings object path. Used if `--settings` is not provided.
+3. **Default Settings**: If neither of the above is provided, RRQ will instantiate a default `RRQSettings` object, which can still be influenced by environment variables starting with `RRQ_`.
+4. **Environment Variables (Prefix `RRQ_`)**: Individual settings can be overridden by environment variables starting with `RRQ_`, which are automatically picked up by the `RRQSettings` object.
+5. **.env File**: If `python-dotenv` is installed, RRQ will attempt to load a `.env` file from the current working directory or parent directories. System environment variables take precedence over `.env` variables.
 
-The
+**Important Note on `job_registry`**: The `job_registry` attribute in your `RRQSettings` object is **critical** for RRQ to function. It must be an instance of `JobRegistry` and is used to register job handlers. Without a properly configured `job_registry`, workers will not know how to process jobs, and most operations will fail. Ensure it is set in your settings object to map job names to their respective handler functions.
 
-```bash
-rrq worker run --settings myapp.worker_config.rrq_settings
-```
 
-
+## Core Components
 
-
-
-
-
+* **`RRQClient` (`client.py`)**: Used to enqueue jobs onto specific queues. Supports deferring jobs (by time delta or specific datetime), assigning custom job IDs, and enforcing job uniqueness via keys.
+* **`RRQWorker` (`worker.py`)**: The process that polls queues, fetches jobs, executes the corresponding handler functions, and manages the job lifecycle based on success, failure, retries, or timeouts. Handles graceful shutdown via signals (SIGINT, SIGTERM).
+* **`JobRegistry` (`registry.py`)**: A simple registry to map string function names (used when enqueuing) to the actual asynchronous handler functions the worker should execute.
+* **`JobStore` (`store.py`)**: An abstraction layer handling all direct interactions with Redis. It manages job definitions (Hashes), queues (Sorted Sets), processing locks (Strings with TTL), unique job locks, and worker health checks.
+* **`Job` (`job.py`)**: A Pydantic model representing a job, containing its ID, handler name, arguments, status, retry counts, timestamps, results, etc.
+* **`JobStatus` (`job.py`)**: An Enum defining the possible states of a job (`PENDING`, `ACTIVE`, `COMPLETED`, `FAILED`, `RETRYING`).
+* **`RRQSettings` (`settings.py`)**: A Pydantic `BaseSettings` model for configuring RRQ behavior (Redis DSN, queue names, timeouts, retry policies, concurrency, etc.). Loadable from environment variables (prefix `RRQ_`).
+* **`constants.py`**: Defines shared constants like Redis key prefixes and default configuration values.
+* **`exc.py`**: Defines custom exceptions, notably `RetryJob` which handlers can raise to explicitly request a retry, potentially with a custom delay.
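The Configuration section added in this release describes a strict precedence for locating the settings object: the `--settings` flag wins, then the `RRQ_SETTINGS` environment variable, then a default `RRQSettings`. That resolution step can be sketched as follows; `resolve_settings_path` is a hypothetical helper name, not rrq's actual API.

```python
import os

def resolve_settings_path(cli_settings=None):
    """Return the dotted settings path to load, per the documented precedence.

    --settings (cli_settings) takes priority; otherwise fall back to the
    RRQ_SETTINGS environment variable. A None result means "instantiate a
    default RRQSettings object" (which still reads RRQ_-prefixed env vars).
    """
    if cli_settings:
        return cli_settings
    return os.getenv("RRQ_SETTINGS")
```

So `resolve_settings_path("myapp.worker_config.rrq_settings")` ignores any `RRQ_SETTINGS` value, and `resolve_settings_path(None)` with no environment variable set signals the default-settings fallback.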
{rrq-0.3.5 → rrq-0.3.7}/example/rrq_example.py
RENAMED
@@ -127,6 +127,7 @@ async def main():
 
     # 5. Worker Setup
     # Run worker polling both default and the custom queue
+    # NOTE: Normally you don't need to do this and just use the included `rrq` CLI
     worker = RRQWorker(
         settings=settings,
         job_registry=registry,
@@ -134,8 +135,9 @@ async def main():
     )
 
     # 6. Run Worker (with graceful shutdown handling)
+    # NOTE: Normally you don't need to do this and just use the included `rrq` CLI
     logger.info(f"Starting worker {worker.worker_id}...")
-    worker_task = asyncio.create_task(
+    worker_task = asyncio.create_task(worker.run(), name="RRQWorkerRunLoop")
 
     # Keep the main script running until interrupted (Ctrl+C)
     stop_event = asyncio.Event()
@@ -160,7 +162,7 @@ async def main():
 
     # Wait for stop event or worker task completion (e.g., if it errors out)
     done, pending = await asyncio.wait(
-        [worker_task, stop_event.wait()], return_when=asyncio.FIRST_COMPLETED
+        [worker_task, asyncio.create_task(stop_event.wait())], return_when=asyncio.FIRST_COMPLETED
     )
 
     logger.info("Stop event triggered or worker task finished.")
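The one-line `asyncio.wait` change in the example above is worth spelling out: passing a bare coroutine like `stop_event.wait()` to `asyncio.wait` was deprecated in Python 3.8 and raises `TypeError` from 3.11, so it must be wrapped in `asyncio.create_task()`. A minimal standalone reproduction of the fixed pattern (with a dummy worker coroutine standing in for `worker.run()`):

```python
import asyncio

async def main():
    stop_event = asyncio.Event()

    async def worker():
        # Stand-in for the real worker loop; finishes quickly here.
        await asyncio.sleep(0.01)
        return "done"

    worker_task = asyncio.create_task(worker(), name="RRQWorkerRunLoop")
    # asyncio.wait() requires Task/Future objects, not bare coroutines,
    # hence the create_task() wrapper around stop_event.wait().
    done, pending = await asyncio.wait(
        [worker_task, asyncio.create_task(stop_event.wait())],
        return_when=asyncio.FIRST_COMPLETED,
    )
    for task in pending:
        task.cancel()  # drop the still-waiting stop_event watcher
    return worker_task.result()

result = asyncio.run(main())
```

Here the worker task completes first, `asyncio.wait` returns, and the pending `stop_event.wait()` task is cancelled; in the real example the stop event fires on Ctrl+C instead.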
{rrq-0.3.5 → rrq-0.3.7}/rrq/cli.py
RENAMED
@@ -18,6 +18,14 @@ from .settings import RRQSettings
 from .store import JobStore
 from .worker import RRQWorker
 
+# Attempt to import dotenv components for .env file loading
+try:
+    from dotenv import find_dotenv, load_dotenv
+
+    DOTENV_AVAILABLE = True
+except ImportError:
+    DOTENV_AVAILABLE = False
+
 logger = logging.getLogger(__name__)
 
 
@@ -28,12 +36,21 @@ def _load_app_settings(settings_object_path: str | None = None) -> RRQSettings:
    If the environment variable is not set, will create a default settings object.
    RRQ Setting objects, automatically pick up ENVIRONMENT variables starting with RRQ_.
 
+    This function will also attempt to load a .env file if python-dotenv is installed
+    and a .env file is found. System environment variables take precedence over .env variables.
+
    Args:
        settings_object_path: A string representing the path to the settings object. (e.g. "myapp.worker_config.rrq_settings").
 
    Returns:
        The RRQSettings object.
    """
+    if DOTENV_AVAILABLE:
+        dotenv_path = find_dotenv(usecwd=True)
+        if dotenv_path:
+            logger.debug(f"Loading .env file at: {dotenv_path}...")
+            load_dotenv(dotenv_path=dotenv_path, override=False)
+
    try:
        if settings_object_path is None:
            settings_object_path = os.getenv("RRQ_SETTINGS")
@@ -153,12 +170,10 @@ def start_rrq_worker_subprocess(
 ) -> subprocess.Popen | None:
     """Start an RRQ worker process, optionally for specific queues."""
     command = ["rrq", "worker", "run"]
+
     if settings_object_path:
         command.extend(["--settings", settings_object_path])
-
-        raise ValueError(
-            "start_rrq_worker_subprocess called without settings_object_path!"
-        )
+
     # Add queue filters if specified
     if queues:
         for q in queues:
@@ -220,16 +235,6 @@ async def watch_rrq_worker_impl(
     settings_object_path: str | None = None,
     queues: list[str] | None = None,
 ) -> None:
-    if not settings_object_path:
-        click.echo(
-            click.style(
-                "ERROR: 'rrq worker watch' requires --settings to be specified.",
-                fg="red",
-            ),
-            err=True,
-        )
-        sys.exit(1)
-
     abs_watch_path = os.path.abspath(watch_path)
     click.echo(
         f"Watching for file changes in {abs_watch_path} to restart RRQ worker (app settings: {settings_object_path})..."
@@ -295,7 +300,7 @@ def rrq():
     """RRQ: Reliable Redis Queue Command Line Interface.
 
     Provides tools for running RRQ workers, checking system health,
-    and managing jobs. Requires an application-specific
+    and managing jobs. Requires an application-specific settings object
     for most operations.
     """
     pass
@@ -329,6 +334,7 @@ def worker_cli():
     help=(
         "Python settings path for application worker settings "
        "(e.g., myapp.worker_config.rrq_settings). "
+        "Alternatively, this can be specified as RRQ_SETTINGS env variable. "
        "The specified settings object must include a `job_registry: JobRegistry`."
    ),
 )
@@ -337,7 +343,9 @@ def worker_run_command(
     queues: tuple[str, ...],
     settings_object_path: str,
 ):
-    """Run an RRQ worker process."""
+    """Run an RRQ worker process.
+    Requires an application-specific settings object.
+    """
     rrq_settings = _load_app_settings(settings_object_path)
 
     # Determine queues to poll
@@ -418,7 +426,9 @@ def worker_watch_command(
     settings_object_path: str,
     queues: tuple[str, ...],
 ):
-    """Run the RRQ worker with auto-restart on file changes in PATH."""
+    """Run the RRQ worker with auto-restart on file changes in PATH.
+    Requires an application-specific settings object.
+    """
     # Run watch with optional queue filters
     asyncio.run(
         watch_rrq_worker_impl(
@@ -446,7 +456,9 @@ def worker_watch_command(
     ),
 )
 def check_command(settings_object_path: str):
-    """Perform a health check on active RRQ worker(s)."""
+    """Perform a health check on active RRQ worker(s).
+    Requires an application-specific settings object.
+    """
     click.echo("Performing RRQ health check...")
     healthy = asyncio.run(
         check_health_async_impl(settings_object_path=settings_object_path)
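The `cli.py` change above uses the standard optional-dependency pattern: probe for `python-dotenv` at import time, record a module-level flag, and make the `.env` loading a no-op when the package is missing. A self-contained sketch of the same pattern (`maybe_load_dotenv` is an illustrative helper, not rrq's actual function; `override=False` is what lets system environment variables win over `.env` values):

```python
# Probe for the optional dependency once, at import time, instead of
# failing hard when python-dotenv is not installed.
try:
    from dotenv import find_dotenv, load_dotenv

    DOTENV_AVAILABLE = True
except ImportError:
    DOTENV_AVAILABLE = False


def maybe_load_dotenv():
    """Load a .env file if python-dotenv is installed and a file is found.

    find_dotenv(usecwd=True) searches from the current working directory
    upward; load_dotenv(override=False) keeps any variables already set in
    the process environment, giving them precedence over .env values.
    Returns the .env path that was loaded, or None.
    """
    if not DOTENV_AVAILABLE:
        return None
    dotenv_path = find_dotenv(usecwd=True)
    if dotenv_path:
        load_dotenv(dotenv_path=dotenv_path, override=False)
    return dotenv_path or None
```

Callers can then branch on `DOTENV_AVAILABLE` (as the skipif-marked test later in this diff does) rather than wrapping every use site in try/except.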
@@ -8,7 +8,6 @@ from click.testing import CliRunner
|
|
|
8
8
|
from rrq import cli
|
|
9
9
|
from rrq.cli import (
|
|
10
10
|
_load_app_settings,
|
|
11
|
-
start_rrq_worker_subprocess,
|
|
12
11
|
terminate_worker_process,
|
|
13
12
|
)
|
|
14
13
|
from rrq.registry import JobRegistry
|
|
@@ -271,11 +270,21 @@ def test_worker_watch_command_with_queues(
|
|
|
271
270
|
assert kwargs.get("queues") == ["alpha", "beta"]
|
|
272
271
|
|
|
273
272
|
|
|
274
|
-
|
|
275
|
-
|
|
273
|
+
@mock.patch("rrq.cli.watch_rrq_worker_impl")
|
|
274
|
+
def test_worker_watch_command_missing_settings(mock_watch_impl, cli_runner):
|
|
275
|
+
"""Test 'rrq worker watch' without --settings uses default settings."""
|
|
276
|
+
async def dummy_watch_impl(path, settings_object_path=None, queues=None):
|
|
277
|
+
pass
|
|
278
|
+
|
|
279
|
+
mock_watch_impl.side_effect = dummy_watch_impl
|
|
280
|
+
|
|
276
281
|
result = cli_runner.invoke(cli.rrq, ["worker", "watch", "--path", "."])
|
|
277
|
-
assert result.exit_code
|
|
278
|
-
|
|
282
|
+
assert result.exit_code == 0
|
|
283
|
+
mock_watch_impl.assert_called_once()
|
|
284
|
+
args, kwargs = mock_watch_impl.call_args
|
|
285
|
+
assert args[0] == "." # Path argument
|
|
286
|
+
assert kwargs.get("settings_object_path") is None
|
|
287
|
+
assert kwargs.get("queues") is None
|
|
279
288
|
|
|
280
289
|
|
|
281
290
|
def test_worker_watch_command_invalid_path(cli_runner, mock_app_settings_path):
|
|
@@ -437,6 +446,63 @@ def test_load_app_settings_default(tmp_path, monkeypatch):
     settings = _load_app_settings(None)
     assert isinstance(settings, RRQSettings)
 
+def test_load_app_settings_from_env_var(tmp_path, monkeypatch):
+    """Test loading settings via RRQ_SETTINGS environment variable."""
+    # Create a fake module with a settings instance
+    module_dir = tmp_path / "env_mod"
+    module_dir.mkdir()
+    settings_file = module_dir / "settings_module.py"
+    settings_file.write_text(
+        """
+from rrq.settings import RRQSettings
+from rrq.registry import JobRegistry
+
+test_env_registry = JobRegistry()
+test_env_settings = RRQSettings(redis_dsn="redis://envvar:333/7", job_registry=test_env_registry)
+"""
+    )
+    # Ensure the new module path is discoverable
+    monkeypatch.syspath_prepend(str(tmp_path))
+    # Set the environment variable to point to our settings object
+    monkeypatch.setenv("RRQ_SETTINGS", "env_mod.settings_module.test_env_settings")
+    # Load settings without explicit argument
+    settings_object = _load_app_settings(None)
+    # Import the module to get the original instance for identity check
+    import importlib
+    imported_module = importlib.import_module("env_mod.settings_module")
+    assert settings_object is getattr(imported_module, "test_env_settings")
+
+@pytest.mark.skipif(not cli.DOTENV_AVAILABLE, reason="python-dotenv not available")
+def test_load_app_settings_from_dotenv(tmp_path, monkeypatch):
+    """Test loading settings values from a .env file."""
+    # Ensure no pre-existing env var for redis_dsn or settings
+    monkeypatch.delenv("RRQ_REDIS_DSN", raising=False)
+    monkeypatch.delenv("RRQ_SETTINGS", raising=False)
+    # Create a .env file with a custom Redis DSN
+    env_file = tmp_path / ".env"
+    env_file.write_text("RRQ_REDIS_DSN=redis://dotenv:2222/2")
+    # Change CWD so find_dotenv will locate the .env file
+    monkeypatch.chdir(tmp_path)
+    # Load settings without explicit argument
+    settings_object = _load_app_settings(None)
+    # The redis_dsn should reflect the value from .env
+    assert settings_object.redis_dsn == "redis://dotenv:2222/2"
+
+@pytest.mark.skipif(not cli.DOTENV_AVAILABLE, reason="python-dotenv not available")
+def test_load_app_settings_dotenv_not_override_system_env(tmp_path, monkeypatch):
+    """System environment variables should override .env file values."""
+    # Set system env var for redis_dsn
+    monkeypatch.setenv("RRQ_REDIS_DSN", "redis://env:1111/1")
+    monkeypatch.delenv("RRQ_SETTINGS", raising=False)
+    # Create a .env file with a different Redis DSN
+    env_file = tmp_path / ".env"
+    env_file.write_text("RRQ_REDIS_DSN=redis://dotenv:2222/2")
+    monkeypatch.chdir(tmp_path)
+    # Load settings without explicit argument
+    settings_object = _load_app_settings(None)
+    # Should use system env var, not .env value
+    assert settings_object.redis_dsn == "redis://env:1111/1"
+
 
 def test_load_app_settings_invalid(monkeypatch, capsys):
     # Invalid import path should exit with code 1
@@ -447,12 +513,6 @@ def test_load_app_settings_invalid(monkeypatch, capsys):
     assert "Could not import settings object" in captured.err
 
 
-def test_start_rrq_worker_subprocess_no_settings():
-    # Calling without settings should raise
-    with pytest.raises(ValueError):
-        start_rrq_worker_subprocess()
-
-
 def test_terminate_worker_process_none(caplog):
     import logging
 
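The tests added above exercise settings resolution from the `RRQ_SETTINGS` environment variable. They depend on a dotted-path lookup, which can be sketched as follows — `load_settings_object` here is an illustrative stand-in, not RRQ's actual `_load_app_settings`:

```python
import importlib
import os

def load_settings_object(dotted_path=None):
    """Resolve "pkg.module.attr" to the object it names.

    Falls back to the RRQ_SETTINGS environment variable when no explicit
    path is given, matching the behavior the tests above rely on.
    """
    dotted_path = dotted_path or os.environ.get("RRQ_SETTINGS")
    if not dotted_path:
        return None  # a real loader would fall back to default settings here
    # Split "env_mod.settings_module.test_env_settings" into module + attribute
    module_path, _, attr = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, attr)

# Example: resolve a stdlib attribute by dotted path.
print(load_settings_object("os.path.sep") == os.path.sep)  # True
```

Because the module is imported (not re-executed per call), repeated lookups return the same object, which is why the test can assert identity with `is` against the imported module's attribute.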
{rrq-0.3.5 → rrq-0.3.7}/tests/test_worker.py
@@ -212,16 +212,9 @@ async def run_worker_for(worker: RRQWorker, duration: float = 0.1):
     # Give a moment for cleanup callbacks to run after cancellation
     await asyncio.sleep(0.05)
 
-
-
-            await
-        except TimeoutError:
-            # print("Warning: Worker run_loop_task timed out during shutdown wait in test.")
-            run_loop_task.cancel()
-            with asyncio.suppress(asyncio.CancelledError):
-                await run_loop_task
-    except asyncio.CancelledError:
-        pass
+    run_loop_task.cancel()
+    with suppress(asyncio.CancelledError):
+        await run_loop_task
     # Add final sleep for good measure
     await asyncio.sleep(0.05)
     # print("Test: run_worker_for finished.")
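The rewritten shutdown logic uses the idiomatic cancel-and-suppress pattern. Note that `suppress` lives in `contextlib`: the removed code referenced `asyncio.suppress`, which does not exist. A runnable sketch of the pattern:

```python
import asyncio
from contextlib import suppress

async def run_loop():
    # Stand-in for a worker's long-running polling loop.
    while True:
        await asyncio.sleep(3600)

async def main():
    task = asyncio.create_task(run_loop())
    await asyncio.sleep(0)  # let the task actually start
    task.cancel()
    with suppress(asyncio.CancelledError):
        # Awaiting a cancelled task re-raises CancelledError; suppress swallows it.
        await task
    return task.cancelled()

print(asyncio.run(main()))  # True
```

This is both shorter and more reliable than the removed try/except ladder: cancellation is requested unconditionally, and the only exception that can escape the `await` is swallowed by the context manager.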
@@ -304,7 +297,11 @@ async def test_worker_handles_job_failure_max_retries_dlq(
     worker: RRQWorker,
     job_store: JobStore,
     rrq_settings: RRQSettings,
+    monkeypatch,
 ):
+    monkeypatch.setattr(rrq_settings, "base_retry_delay_seconds", 0.0)
+    monkeypatch.setattr(rrq_settings, "max_retry_delay_seconds", 0.0)
+    monkeypatch.setattr(rrq_settings, "default_poll_delay_seconds", 999.0)
     job = await rrq_client.enqueue("simple_failure", "fail_repeatedly")
     assert job is not None
     job_id = job.id
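Zeroing `base_retry_delay_seconds` and `max_retry_delay_seconds` above makes every retry immediate, so the failing job reaches the DLQ without real sleeps. Such settings typically feed a capped exponential backoff; the formula below is an illustrative sketch, not necessarily RRQ's exact schedule:

```python
def retry_delay(attempt, base=1.0, max_delay=60.0):
    """Delay before retry `attempt` (1-based): base * 2**(attempt - 1), capped."""
    return min(base * 2 ** (attempt - 1), max_delay)

# Default schedule: doubling delays capped at max_delay.
print([retry_delay(n) for n in range(1, 8)])
# [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0]

# With base=0 and max_delay=0, as monkeypatched in the test,
# every delay collapses to zero and retries run back-to-back.
print(retry_delay(5, base=0.0, max_delay=0.0))  # 0.0
```

Setting `default_poll_delay_seconds` high (999.0) serves the opposite purpose: it keeps the worker from burning CPU on empty polls while the test only cares about the retry path.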
@@ -695,20 +692,12 @@ async def test_worker_health_check_updates(
 
         # --- Check 3: Expiry after shutdown ---
         worker._request_shutdown()
-        await asyncio.wait_for(worker_task, timeout=5.0)
-
-        # Get the last known TTL before waiting for expiry
-        _, ttl_before_expiry = await job_store.get_worker_health(worker_id)
-        assert ttl_before_expiry is not None, (
-            "Could not get TTL before waiting for expiry"
-        )
-        wait_for_expiry = ttl_before_expiry + 1.0  # Wait 1s longer than TTL
-        await asyncio.sleep(wait_for_expiry)
+        await asyncio.wait_for(worker_task, timeout=5.0)
 
+        # Directly remove the health key instead of waiting for TTL expiry
+        await job_store.redis.delete(f"rrq:health:worker:{worker_id}")
         health_data3, ttl3 = await job_store.get_worker_health(worker_id)
-        assert health_data3 is None, (
-            f"Health data still exists after expiry: {health_data3}"
-        )
+        assert health_data3 is None, f"Health data still exists after expiry: {health_data3}"
         assert ttl3 is None, "TTL should be None after expiry"
 
     finally:
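The test above now deletes the health key directly rather than sleeping past its TTL, which removes real-time waits from the suite. The record being manipulated is a TTL-keyed health entry (stored in Redis under `rrq:health:worker:<id>` per the diff); the dict-backed store below is a purely illustrative in-memory sketch of that shape:

```python
import time

class HealthStore:
    """In-memory sketch of a TTL-expiring worker health record."""

    def __init__(self):
        self._data = {}  # key -> (payload, expires_at)

    def set_health(self, worker_id, payload, ttl_seconds):
        key = f"rrq:health:worker:{worker_id}"
        self._data[key] = (payload, time.monotonic() + ttl_seconds)

    def get_health(self, worker_id):
        key = f"rrq:health:worker:{worker_id}"
        entry = self._data.get(key)
        if entry is None or entry[1] <= time.monotonic():
            self._data.pop(key, None)  # lazily expire, like a Redis TTL
            return None, None
        return entry[0], entry[1] - time.monotonic()

    def delete_health(self, worker_id):
        # The shortcut the updated test takes: drop the key outright.
        self._data.pop(f"rrq:health:worker:{worker_id}", None)

store = HealthStore()
store.set_health("w1", {"status": "ok"}, ttl_seconds=30)
data, ttl = store.get_health("w1")
print(data, ttl is not None)   # {'status': 'ok'} True
store.delete_health("w1")
print(store.get_health("w1"))  # (None, None)
```

Deleting the key makes the "expired" state deterministic; waiting out a TTL, as the removed code did, couples the test to wall-clock time and is a classic source of flakiness.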
{rrq-0.3.5 → rrq-0.3.7}/uv.lock
RENAMED
{rrq-0.3.5 → rrq-0.3.7}/LICENSE
RENAMED
File without changes