dj-queue 0.1.1__tar.gz → 0.2.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (53)
  1. {dj_queue-0.1.1 → dj_queue-0.2.0}/PKG-INFO +153 -147
  2. {dj_queue-0.1.1 → dj_queue-0.2.0}/README.md +151 -145
  3. dj_queue-0.2.0/dj_queue/admin.py +536 -0
  4. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/backend.py +3 -2
  5. dj_queue-0.2.0/dj_queue/dashboard.py +1394 -0
  6. dj_queue-0.2.0/dj_queue/migrations/0004_dashboard.py +27 -0
  7. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/models/__init__.py +2 -1
  8. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/models/runtime.py +8 -0
  9. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/runtime/base.py +30 -18
  10. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/runtime/supervisor.py +67 -10
  11. dj_queue-0.2.0/dj_queue/templates/admin/dj_queue/dashboard.html +819 -0
  12. dj_queue-0.2.0/dj_queue/templates/admin/dj_queue/queue_jobs.html +269 -0
  13. {dj_queue-0.1.1 → dj_queue-0.2.0}/pyproject.toml +2 -2
  14. dj_queue-0.1.1/dj_queue/admin.py +0 -90
  15. {dj_queue-0.1.1 → dj_queue-0.2.0}/LICENSE +0 -0
  16. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/__init__.py +0 -0
  17. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/api.py +0 -0
  18. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/apps.py +0 -0
  19. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/config.py +0 -0
  20. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/contrib/__init__.py +0 -0
  21. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/contrib/asgi.py +0 -0
  22. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/contrib/gunicorn.py +0 -0
  23. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/db.py +0 -0
  24. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/exceptions.py +0 -0
  25. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/hooks.py +0 -0
  26. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/log.py +0 -0
  27. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/management/__init__.py +0 -0
  28. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/management/commands/__init__.py +0 -0
  29. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/management/commands/dj_queue.py +0 -0
  30. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/management/commands/dj_queue_health.py +0 -0
  31. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/management/commands/dj_queue_prune.py +0 -0
  32. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/migrations/0001_initial.py +0 -0
  33. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/migrations/0002_pause_semaphore.py +0 -0
  34. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/migrations/0003_recurringtask_recurringexecution.py +0 -0
  35. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/migrations/__init__.py +0 -0
  36. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/models/jobs.py +0 -0
  37. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/models/recurring.py +0 -0
  38. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/operations/__init__.py +0 -0
  39. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/operations/cleanup.py +0 -0
  40. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/operations/concurrency.py +0 -0
  41. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/operations/jobs.py +0 -0
  42. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/operations/recurring.py +0 -0
  43. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/routers.py +0 -0
  44. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/runtime/__init__.py +0 -0
  45. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/runtime/dispatcher.py +0 -0
  46. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/runtime/errors.py +0 -0
  47. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/runtime/interruptible.py +0 -0
  48. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/runtime/notify.py +0 -0
  49. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/runtime/pidfile.py +0 -0
  50. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/runtime/pool.py +0 -0
  51. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/runtime/procline.py +0 -0
  52. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/runtime/scheduler.py +0 -0
  53. {dj_queue-0.1.1 → dj_queue-0.2.0}/dj_queue/runtime/worker.py +0 -0
@@ -1,10 +1,10 @@
  Metadata-Version: 2.4
  Name: dj-queue
- Version: 0.1.1
+ Version: 0.2.0
  Summary: Database-backed task queue backend for Django's django.tasks framework
  License-Expression: MIT
  License-File: LICENSE
- Classifier: Development Status :: 3 - Alpha
+ Classifier: Development Status :: 4 - Beta
  Classifier: Framework :: Django
  Classifier: Framework :: Django :: 6.0
  Classifier: Intended Audience :: Developers
@@ -26,27 +26,30 @@ Description-Content-Type: text/markdown

  # dj_queue

- `dj_queue` is a database-backed task queue backend for Django's `django.tasks`
- framework.
+ [![CI](https://github.com/coriocactus/dj_queue/actions/workflows/ci.yml/badge.svg)](https://github.com/coriocactus/dj_queue/actions/workflows/ci.yml)
+ ![PyPI](https://img.shields.io/pypi/v/dj-queue.svg)
+ ![PyPI - Python Version](https://img.shields.io/pypi/pyversions/dj-queue.svg)
+ ![PyPI - Status](https://img.shields.io/pypi/status/dj-queue.svg)
+ ![PyPI - License](https://img.shields.io/pypi/l/dj-queue.svg)

- It keeps the queue, live execution state, runtime metadata, and task results in
- your database.
+ `dj_queue` is a database-backed task queue backend for the `django.tasks` framework.
+
+ It keeps the queue, live execution state, runtime metadata, and task results in your database.

  - no Redis, RabbitMQ, or separate result store
  - PostgreSQL is the first-class production backend
  - MySQL 8+, MariaDB 10.6+, and SQLite are supported
  - immediate, scheduled, recurring, and concurrency-limited work
- - fork and async runtime modes
- - multi-database aware from day one

- `dj_queue` is inspired by Rails solid_queue, but shaped to fit Django's task
- backend API and long-running process model.
+ `dj_queue` is inspired by Rails' [Solid Queue](https://github.com/rails/solid_queue),
+ but shaped to fit Django's [task backend API](https://docs.djangoproject.com/en/6.0/topics/tasks/).

  ## Why dj_queue

- The database is the queue.
+ Django applications already depend on the database as the durable system of
+ record. `dj_queue` lets background work follow the same model.

- That gives `dj_queue` a narrow, explicit shape:
+ It has a narrow, explicit shape:

  - application code uses Django's `@task` API
  - `DjQueueBackend` stores jobs and results in Django-managed tables
@@ -54,8 +57,8 @@ That gives `dj_queue` a narrow, explicit shape:
  - PostgreSQL can use `LISTEN/NOTIFY` and `SKIP LOCKED` as optimizations
  - polling remains the correctness path on every supported database

- If your application already depends on the database being the durable system of
- record, `dj_queue` lets background work follow the same model.
+ For detailed comparisons with Celery, RQ, Procrastinate, and other alternatives,
+ see [COMPARISONS.md](COMPARISONS.md).

  ## Installation

@@ -103,6 +106,9 @@ TASKS = {
  }
  ```

+ The router is optional when using the default database, but harmless to include
+ and required for [multi-database setups](#multi-database-setup).
+
  Run migrations:

  ```bash
@@ -115,10 +121,8 @@ Define a task with Django's `@task` decorator:

  ```python
  # myapp/tasks.py
-
  from django.tasks import task

-
  @task
  def add(a, b):
      return a + b
@@ -151,83 +155,6 @@ print(fresh_result.return_value)

  When the worker has executed the job, `fresh_result.return_value` will be `10`.

- ## Data Contract
-
- Job payloads and persisted return values are stored in JSON columns, so they
- must be JSON round-trippable.
-
- - enqueueing args or kwargs that cannot round-trip through JSON fails immediately
- - returning a non-JSON-serializable value marks the job failed instead of
- leaving it claimed forever
-
- If you need to pass model instances, files, or custom objects, store them
- elsewhere and pass identifiers or serialized data instead.
-
- ## How dj_queue runs
-
- `python manage.py dj_queue` starts a supervisor for one backend alias.
-
- Job lifecycle:
-
- `enqueue -> ready | scheduled | blocked -> claimed -> successful | failed`
-
- The runtime has four moving parts:
-
- - `supervisor`: boots and stops the runtime
- - `workers`: claim ready jobs and execute them
- - `dispatchers`: promote due scheduled jobs and run concurrency maintenance
- - `scheduler`: enqueue recurring tasks and finished-job cleanup when configured
-
- Useful command variants:
-
- ```bash
- python manage.py dj_queue
- python manage.py dj_queue --mode async
- python manage.py dj_queue --only-work
- python manage.py dj_queue --only-dispatch
- python manage.py dj_queue --skip-recurring
- ```
-
- Mode and topology notes:
-
- - `fork` is the default standalone mode
- - `async` runs supervised actors in threads inside one process
- - `--only-work` starts workers without dispatchers or scheduler
- - `--only-dispatch` starts dispatchers without workers or scheduler
- - `--skip-recurring` starts without the scheduler
-
- If you're familiar with Solid Queue, the same high-level tradeoff is described
- in its [fork vs async mode](https://github.com/rails/solid_queue?tab=readme-ov-file#fork-vs-async-mode)
- section.
-
- ## Choose a setup
-
- Once migrations are in place, start processing jobs with `python manage.py dj_queue`
- on the machine that should do the work. With the default configuration, this
- starts the supervisor, workers, and dispatcher for the default backend alias and
- processes all queues.
-
- For most deployments, start with a standalone `dj_queue` process. Reach for a
- dedicated queue database before you reach for embedded mode.
-
- - single database, standalone process: easiest way to start. Use the app
- database and run `python manage.py dj_queue`
- - dedicated queue database: recommended production default. Keep queue tables
- and runtime traffic on `database_alias`. See [Multi-Database Setup](#multi-database-setup)
- - embedded server mode: run `dj_queue` inside ASGI or Gunicorn when you want
- queue execution colocated with the server process. See [Embedded Server Mode](#embedded-server-mode)
-
- For small deployments, running `dj_queue` on the same machine as the web server
- is often enough. When you need more capacity, multiple machines can point at
- the same queue database. Full `python manage.py dj_queue` instances coordinate
- through database locking, so workers and dispatchers share load safely and
- recurring firing stays deduplicated across schedulers.
-
- In practice, keep recurring settings identical on every full node and prefer one
- full instance plus additional `python manage.py dj_queue --only-work` nodes.
- Add `--only-dispatch` nodes only when you need more scheduled-job promotion or
- concurrency-maintenance throughput.
-
  ## Common Patterns

  ### Scheduled jobs
@@ -236,9 +163,7 @@ Use `run_after` to keep work out of the ready queue until a future time:

  ```python
  from datetime import timedelta
-
  from django.utils import timezone
-
  from myapp.tasks import send_digest

  future = timezone.now() + timedelta(hours=1)
@@ -268,9 +193,75 @@ results = process_item.get_backend().enqueue_all(
  )
  ```

- ## Ordering and transactions
+ ### Enqueue after commit
+
+ `enqueue()` writes immediately. If a task depends on rows that are still inside
+ the current transaction, use `enqueue_on_commit()`:
+
+ ```python
+ from django.db import transaction
+ from dj_queue.api import enqueue_on_commit
+ from myapp.tasks import send_receipt
+
+ with transaction.atomic():
+     order = create_order()
+     enqueue_on_commit(send_receipt, order.id)
+ ```
+
+ ### Examples
+
+ The repository ships real runnable examples in `examples/`.
+
+ Recommended entry points:
+
+ - [examples/ex01_basic_enqueue.py](examples/ex01_basic_enqueue.py)
+ - [examples/ex07_basic_enqueue_on_commit.py](examples/ex07_basic_enqueue_on_commit.py)
+ - [examples/ex08_basic_recurring.py](examples/ex08_basic_recurring.py)
+ - [examples/ex20_advanced_concurrency.py](examples/ex20_advanced_concurrency.py)
+ - [examples/ex21_advanced_queue_control.py](examples/ex21_advanced_queue_control.py)
+ - [examples/ex24_advanced_multi_db.py](examples/ex24_advanced_multi_db.py)
+ - [examples/ex25_advanced_asgi.py](examples/ex25_advanced_asgi.py)
+
+ The [examples index](examples/README.md) lists the full progression.
+
+ ## How it Works
+
+ `python manage.py dj_queue` starts a supervisor for one backend alias.
+
+ Job lifecycle:
+
+ `enqueue -> ready | scheduled | blocked -> claimed -> successful | failed`
+
+ The runtime has four moving parts:
+
+ - `supervisor`: boots and stops the runtime
+ - `workers`: claim ready jobs and execute them
+ - `dispatchers`: promote due scheduled jobs and run concurrency maintenance
+ - `scheduler`: enqueue recurring tasks and finished-job cleanup when configured
+
+ Useful command variants:
+
+ ```bash
+ python manage.py dj_queue
+ python manage.py dj_queue --mode async
+ python manage.py dj_queue --only-work
+ python manage.py dj_queue --only-dispatch
+ python manage.py dj_queue --skip-recurring
+ ```
+
+ Mode and topology notes:
+
+ - `fork` is the default standalone mode
+ - `async` runs supervised actors in threads inside one process
+ - `--only-work` starts workers without dispatchers or scheduler
+ - `--only-dispatch` starts dispatchers without workers or scheduler
+ - `--skip-recurring` starts without the scheduler
+
+ `fork` runs each worker, dispatcher, and scheduler as a separate OS process.
+ `async` runs them as threads in one process, i.e., lower memory, less isolation.
+ Default is `fork`. Use `async` for embedded mode or memory-constrained environments.

- Queue ordering rules:
+ ### Claiming order

  - within one selected queue, higher numeric `priority` is claimed first
  - across multiple queue selectors, selector order wins
@@ -281,19 +272,29 @@ For example, a worker configured with `queues: ["email", "default"]` will
  prefer ready work from `email` before `default`, even if `default` contains
  higher-priority rows.

- `enqueue()` writes immediately. If a task depends on rows that are still inside
- the current transaction, use `enqueue_on_commit()`:
+ ## Database Support

- ```python
- from django.db import transaction
+ | Backend | Support level | Notes |
+ |---|---|---|
+ | PostgreSQL | first-class | polling, `SKIP LOCKED`, and optional `LISTEN/NOTIFY` |
+ | MySQL 8+ | supported | polling plus `SKIP LOCKED` |
+ | MariaDB 10.6+ | supported | polling plus `SKIP LOCKED` |
+ | SQLite | supported with limits | polling only, serialized writes, no `SKIP LOCKED`, no `LISTEN/NOTIFY`; practical for development, CI, and smaller deployments |

- from dj_queue.api import enqueue_on_commit
- from myapp.tasks import send_receipt
+ Polling is the portability path everywhere. Backend-specific features improve
+ latency and throughput but are not correctness requirements.

- with transaction.atomic():
-     order = create_order()
-     enqueue_on_commit(send_receipt, order.id)
- ```
+ ## Data Contract
+
+ Job payloads and persisted return values are stored in JSON columns, so they
+ must be JSON round-trippable.
+
+ - enqueueing args or kwargs that cannot round-trip through JSON fails immediately
+ - returning a non-JSON-serializable value marks the job failed instead of
+ leaving it claimed forever
+
+ If you need to pass model instances, files, or custom objects, store them
+ elsewhere and pass identifiers or serialized data instead.

  ## Recurring Tasks

@@ -352,18 +353,18 @@ separate recurring service.

  ## Concurrency Controls

- Tasks can opt into database-backed concurrency limits by defining concurrency
- metadata on the wrapped function:
+ Tasks can opt into database-backed concurrency limits.
+
+ `django.tasks` has no standard way to pass backend-specific options through the
+ `@task` decorator, so `dj_queue` reads them as attributes on the wrapped function:

  ```python
  from django.tasks import task

-
  @task
  def sync_account(account_id, action):
      return f"{account_id}:{action}"

-
  sync_account.func.concurrency_key = "account:{account_id}"
  sync_account.func.concurrency_limit = 1
  sync_account.func.concurrency_duration = 60
@@ -408,13 +409,13 @@ If Django admin is installed, `dj_queue` also registers the main operational
  models there, including jobs, failed executions, processes, recurring tasks,
  pauses, and semaphores.

- ## Failed jobs
+ ## Failed Jobs

  When a task raises, `dj_queue` keeps the job and its failed execution row in the
  queue database, including the exception class, message, and traceback.

- You can retry or discard failed jobs through Django admin or the operations
- layer:
+ You can retry failed jobs through Django admin, or retry and discard them
+ through the operations layer:

  ```python
  from dj_queue.operations.jobs import discard_failed_job, retry_failed_job
@@ -478,7 +479,6 @@ Wrap your ASGI application with `DjQueueLifespan`:

  ```python
  from django.core.asgi import get_asgi_application
-
  from dj_queue.contrib.asgi import DjQueueLifespan

  django_application = get_asgi_application()
@@ -491,7 +491,6 @@ Import the provided hooks in your Gunicorn config:

  ```python
  # gunicorn.conf.py
-
  from dj_queue.contrib.gunicorn import post_fork, worker_exit
  ```

@@ -500,6 +499,36 @@ signal handling to the host server.

  ## Configuration

+ ### Deployment topology
+
+ Once migrations are in place, start processing jobs with `python manage.py dj_queue`
+ on the machine that should do the work. With the default configuration, this
+ starts the supervisor, workers, dispatcher, and scheduler for the default
+ backend alias and processes all queues.
+
+ For most deployments, start with a standalone `dj_queue` process. Reach for a
+ dedicated queue database before you reach for embedded mode.
+
+ - single database, standalone process: easiest way to start. Use the app
+ database and run `python manage.py dj_queue`
+ - dedicated queue database: recommended production default. Keep queue tables
+ and runtime traffic on `database_alias`. See [Multi-Database Setup](#multi-database-setup)
+ - embedded server mode: run `dj_queue` inside ASGI or Gunicorn when you want
+ queue execution colocated with the server process. See [Embedded Server Mode](#embedded-server-mode)
+
+ For small deployments, running `dj_queue` on the same machine as the web server
+ is often enough. When you need more capacity, multiple machines can point at
+ the same queue database. Full `python manage.py dj_queue` instances coordinate
+ through database locking, so workers and dispatchers share load safely and
+ recurring firing stays deduplicated across schedulers.
+
+ In practice, keep recurring settings identical on every full node and prefer one
+ full instance plus additional `python manage.py dj_queue --only-work` nodes.
+ Add `--only-dispatch` nodes only when you need more scheduled-job promotion or
+ concurrency-maintenance throughput.
+
+ ### Options
+
  The main configuration lives in `TASKS[backend_alias]["OPTIONS"]`.

  Start with these options:
@@ -509,8 +538,7 @@ Start with these options:
  - `dispatchers`: scheduled promotion and concurrency maintenance settings
  - `scheduler`: dynamic recurring polling settings
  - `database_alias`: database alias for queue tables and runtime activity
- - `preserve_finished_jobs` and `clear_finished_jobs_after`: result retention and
- cleanup
+ - `preserve_finished_jobs` and `clear_finished_jobs_after`: result retention and cleanup

  Additional operational tuning is available when needed, including
  `use_skip_locked`, `listen_notify`, `silence_polling`,
@@ -521,6 +549,8 @@ On PostgreSQL, `listen_notify` uses the same Django PostgreSQL driver
  configuration as the main database connection. Install a compatible driver in
  your project, or use `dj-queue[postgres]` to pull in `psycopg`.

+ ### Precedence
+
  Configuration precedence is explicit:

  - CLI overrides
@@ -530,11 +560,11 @@ Configuration precedence is explicit:

  ### YAML file config

- You can point `dj_queue` at a YAML file with either `--config` or
- `DJ_QUEUE_CONFIG`:
-
  ```bash
+ # via cli
  python manage.py dj_queue --config /etc/dj_queue.yml
+
+ # or via environment variable
  DJ_QUEUE_CONFIG=/etc/dj_queue.yml python manage.py dj_queue
  ```

@@ -584,30 +614,6 @@ Environment overrides currently supported by `dj_queue` itself:
  - `DJ_QUEUE_MODE`
  - `DJ_QUEUE_SKIP_RECURRING`

- ## Database Support
-
- | Backend | Support level | Notes |
- |---|---|---|
- | PostgreSQL | first-class | polling, `SKIP LOCKED`, and optional `LISTEN/NOTIFY` |
- | MySQL 8+ | supported | polling plus `SKIP LOCKED` |
- | MariaDB 10.6+ | supported | polling plus `SKIP LOCKED` |
- | SQLite | supported with limits | polling only, serialized writes, no `SKIP LOCKED`, no `LISTEN/NOTIFY`; practical for development, CI, and smaller deployments |
-
- Polling is the portability path everywhere. Backend-specific features improve
- latency and throughput but are not correctness requirements.
-
- ## Examples
-
- The repository ships real runnable examples in `examples/`.
-
- Recommended entry points:
-
- - [examples/ex01_basic_enqueue.py](examples/ex01_basic_enqueue.py)
- - [examples/ex07_basic_enqueue_on_commit.py](examples/ex07_basic_enqueue_on_commit.py)
- - [examples/ex08_basic_recurring.py](examples/ex08_basic_recurring.py)
- - [examples/ex20_advanced_concurrency.py](examples/ex20_advanced_concurrency.py)
- - [examples/ex21_advanced_queue_control.py](examples/ex21_advanced_queue_control.py)
- - [examples/ex24_advanced_multi_db.py](examples/ex24_advanced_multi_db.py)
- - [examples/ex25_advanced_asgi.py](examples/ex25_advanced_asgi.py)
+ ## License

- The [examples index](examples/README.md) lists the full progression.
+ MIT
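
The claiming-order rules in the README's new `### Claiming order` section (selector order wins across queues; higher numeric `priority` wins within one queue) can be modeled in a few lines of plain Python. This is an illustrative sketch of the stated rules only, not dj_queue's actual claim query; `claim_next` and the job dicts are hypothetical names for this example.

```python
# Illustrative model of dj_queue's documented claiming order.
# Assumption: jobs are plain dicts with "queue" and "priority" keys.

def claim_next(jobs, selectors):
    """Pick the next ready job for a worker configured with `selectors`."""
    for queue in selectors:  # across queue selectors, selector order wins
        ready = [j for j in jobs if j["queue"] == queue]
        if ready:
            # within one selected queue, higher numeric priority is claimed first
            return max(ready, key=lambda j: j["priority"])
    return None

jobs = [
    {"id": 1, "queue": "default", "priority": 9},
    {"id": 2, "queue": "email", "priority": 1},
]

# `email` is listed first, so its job is claimed even though a
# higher-priority row exists in `default`.
print(claim_next(jobs, ["email", "default"])["id"])  # 2
```

This mirrors the README's worked example: a worker configured with `queues: ["email", "default"]` prefers ready work from `email` before `default`.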