solid_queue 1.1.2 → 1.2.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/README.md +124 -30
- data/app/models/solid_queue/blocked_execution.rb +1 -1
- data/app/models/solid_queue/claimed_execution.rb +14 -5
- data/app/models/solid_queue/failed_execution.rb +5 -2
- data/app/models/solid_queue/job/concurrency_controls.rb +12 -0
- data/app/models/solid_queue/job/executable.rb +1 -1
- data/app/models/solid_queue/job.rb +4 -2
- data/app/models/solid_queue/record.rb +13 -5
- data/app/models/solid_queue/recurring_execution.rb +1 -1
- data/app/models/solid_queue/recurring_task.rb +2 -3
- data/app/models/solid_queue/scheduled_execution.rb +1 -1
- data/app/models/solid_queue/semaphore.rb +1 -1
- data/lib/active_job/concurrency_controls.rb +4 -1
- data/lib/active_job/queue_adapters/solid_queue_adapter.rb +4 -1
- data/lib/generators/solid_queue/install/templates/config/recurring.yml +7 -2
- data/lib/puma/plugin/solid_queue.rb +19 -7
- data/lib/solid_queue/cli.rb +3 -2
- data/lib/solid_queue/configuration.rb +6 -3
- data/lib/solid_queue/dispatcher.rb +9 -11
- data/lib/solid_queue/lifecycle_hooks.rb +11 -2
- data/lib/solid_queue/pool.rb +3 -7
- data/lib/solid_queue/processes/base.rb +2 -1
- data/lib/solid_queue/processes/interruptible.rb +21 -14
- data/lib/solid_queue/processes/process_pruned_error.rb +1 -1
- data/lib/solid_queue/processes/registrable.rb +11 -10
- data/lib/solid_queue/scheduler/recurring_schedule.rb +1 -1
- data/lib/solid_queue/scheduler.rb +5 -1
- data/lib/solid_queue/supervisor.rb +10 -3
- data/lib/solid_queue/version.rb +1 -1
- data/lib/solid_queue/worker.rb +5 -3
- data/lib/solid_queue.rb +12 -6
- metadata +41 -21
- data/Rakefile +0 -21
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 5450063206508cf94195d16dfe5be8db7609b4b959a9e2e36ca7cc102b90e04b
+  data.tar.gz: 6b10b7ddc67fdff01b31a78e486a4ea4f351277e7f851071406237300177d614
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 71a4c19d255e551e3a44aa1cdc508c5cb8dee3d66ae27c5b67a7fe2aaa31f958226e2a947bba773738274f717e5f635e412da1c471a03608657fc3deead4a961
+  data.tar.gz: 602c99609a6b65d3edbd29c8b8c26e9eed040830a115f207c0cebe139da80c6e43ccd2c1d583f4920fc0550c546747055f7da2fa73d858b9d20b4c7e37e674a9
data/README.md
CHANGED

@@ -1,27 +1,30 @@
 # Solid Queue
 
-Solid Queue is a
+Solid Queue is a database-based queuing backend for [Active Job](https://edgeguides.rubyonrails.org/active_job_basics.html), designed with simplicity and performance in mind.
 
-
+In addition to regular job enqueuing and processing, Solid Queue supports delayed jobs, concurrency controls, recurring jobs, pausing queues, numeric priorities per job, priorities by queue order, and bulk enqueuing (`enqueue_all` for Active Job's `perform_all_later`).
 
-Solid Queue can be used with SQL databases such as MySQL, PostgreSQL or SQLite, and it leverages the `FOR UPDATE SKIP LOCKED` clause, if available, to avoid blocking and waiting on locks when polling jobs. It relies on Active Job for retries, discarding, error handling, serialization,
+Solid Queue can be used with SQL databases such as MySQL, PostgreSQL, or SQLite, and it leverages the `FOR UPDATE SKIP LOCKED` clause, if available, to avoid blocking and waiting on locks when polling jobs. It relies on Active Job for retries, discarding, error handling, serialization, and delays, and it's compatible with Ruby on Rails's multi-threading.
 
-## Table of
+## Table of Contents
 
 - [Installation](#installation)
+- [Usage in development and other non-production environments](#usage-in-development-and-other-non-production-environments)
 - [Single database configuration](#single-database-configuration)
+- [Dashboard UI Setup](#dashboard-ui-setup)
 - [Incremental adoption](#incremental-adoption)
 - [High performance requirements](#high-performance-requirements)
 - [Configuration](#configuration)
-- [Workers, dispatchers and scheduler](#workers-dispatchers-and-scheduler)
+- [Workers, dispatchers, and scheduler](#workers-dispatchers-and-scheduler)
 - [Queue order and priorities](#queue-order-and-priorities)
 - [Queues specification and performance](#queues-specification-and-performance)
-- [Threads, processes and signals](#threads-processes-and-signals)
+- [Threads, processes, and signals](#threads-processes-and-signals)
 - [Database configuration](#database-configuration)
 - [Other configuration settings](#other-configuration-settings)
 - [Lifecycle hooks](#lifecycle-hooks)
 - [Errors when enqueuing](#errors-when-enqueuing)
 - [Concurrency controls](#concurrency-controls)
+- [Performance considerations](#performance-considerations)
 - [Failed jobs and retries](#failed-jobs-and-retries)
 - [Error reporting on jobs](#error-reporting-on-jobs)
 - [Puma plugin](#puma-plugin)
@@ -33,14 +36,16 @@ Solid Queue can be used with SQL databases such as MySQL, PostgreSQL or SQLite,
 
 ## Installation
 
-Solid Queue is configured by default in new Rails 8 applications.
+Solid Queue is configured by default in new Rails 8 applications. If you're running an earlier version, you can add it manually following these steps:
 
 1. `bundle add solid_queue`
 2. `bin/rails solid_queue:install`
 
+(Note: The minimum supported version of Rails is 7.1 and Ruby is 3.1.6.)
+
 This will configure Solid Queue as the production Active Job backend, create the configuration files `config/queue.yml` and `config/recurring.yml`, and create the `db/queue_schema.rb`. It'll also create a `bin/jobs` executable wrapper that you can use to start Solid Queue.
 
-Once you've done that, you will
+Once you've done that, you will have to add the configuration for the queue database in `config/database.yml`. If you're using SQLite, it'll look like this:
 
 ```yaml
 production:
@@ -74,7 +79,7 @@ Now you're ready to start processing jobs by running `bin/jobs` on the server th
 
 For small projects, you can run Solid Queue on the same machine as your webserver. When you're ready to scale, Solid Queue supports horizontal scaling out-of-the-box. You can run Solid Queue on a separate server from your webserver, or even run `bin/jobs` on multiple machines at the same time. Depending on the configuration, you can designate some machines to run only dispatchers or only workers. See the [configuration](#configuration) section for more details on this.
 
-**Note**:
+**Note**: Future changes to the schema will come in the form of regular migrations.
 
 ### Usage in development and other non-production environments
 
@@ -84,7 +89,7 @@ For example, if you're using SQLite in development, update `database.yml` as fol
 
 ```diff
 development:
-
++ primary:
   <<: *default
   database: storage/development.sqlite3
 + queue:
@@ -145,7 +150,7 @@ development:
 
 ### Single database configuration
 
-Running Solid Queue in a separate database is recommended, but it's also possible to use one single database for both the app and the queue.
+Running Solid Queue in a separate database is recommended, but it's also possible to use one single database for both the app and the queue. Follow these steps:
 
 1. Copy the contents of `db/queue_schema.rb` into a normal migration and delete `db/queue_schema.rb`
 2. Remove `config.solid_queue.connects_to` from `production.rb`
@@ -153,6 +158,10 @@ Running Solid Queue in a separate database is recommended, but it's also possibl
 
 You won't have multiple databases, so `database.yml` doesn't need to have primary and queue database.
 
+### Dashboard UI Setup
+
+For viewing information about your jobs via a UI, we recommend taking a look at [mission_control-jobs](https://github.com/rails/mission_control-jobs), a dashboard where, among other things, you can examine and retry/discard failed jobs.
+
 ### Incremental adoption
 
 If you're planning to adopt Solid Queue incrementally by switching one job at the time, you can do so by leaving the `config.active_job.queue_adapter` set to your old backend, and then set the `queue_adapter` directly in the jobs you're moving:
@@ -172,7 +181,7 @@ Solid Queue was designed for the highest throughput when used with MySQL 8+ or P
 
 ## Configuration
 
-### Workers, dispatchers and scheduler
+### Workers, dispatchers, and scheduler
 
 We have several types of actors in Solid Queue:
 
@@ -189,6 +198,8 @@ By default, Solid Queue will try to find your configuration under `config/queue.
 bin/jobs -c config/calendar.yml
 ```
 
+You can also skip all recurring tasks by setting the environment variable `SOLID_QUEUE_SKIP_RECURRING=true`. This is useful for environments like staging, review apps, or development where you don't want any recurring jobs to run. This is equivalent to using the `--skip-recurring` option with `bin/jobs`.
+
 This is what this configuration looks like:
 
 ```yml
@@ -310,7 +321,7 @@ and then remove the paused ones. Pausing in general should be something rare, us
 Do this:
 
 ```yml
-queues: background, backend
+queues: [ background, backend ]
 ```
 
 instead of this:
@@ -319,7 +330,7 @@ queues: back*
 ```
 
 
-### Threads, processes and signals
+### Threads, processes, and signals
 
 Workers in Solid Queue use a thread pool to run work in multiple threads, configurable via the `threads` parameter above. Besides this, parallelism can be achieved via multiple processes on one machine (configurable via different workers or the `processes` parameter above) or by horizontal scaling.
 
@@ -331,7 +342,7 @@ When receiving a `QUIT` signal, if workers still have jobs in-flight, these will
 
 If processes have no chance of cleaning up before exiting (e.g. if someone pulls a cable somewhere), in-flight jobs might remain claimed by the processes executing them. Processes send heartbeats, and the supervisor checks and prunes processes with expired heartbeats. Jobs that were claimed by processes with an expired heartbeat will be marked as failed with a `SolidQueue::Processes::ProcessPrunedError`. You can configure both the frequency of heartbeats and the threshold to consider a process dead. See the section below for this.
 
-In a similar way, if a worker is terminated in any other way not initiated by the above signals (e.g. a worker is sent a `KILL` signal), jobs in progress will be marked as failed so that they can be inspected, with a `SolidQueue::Processes::
+In a similar way, if a worker is terminated in any other way not initiated by the above signals (e.g. a worker is sent a `KILL` signal), jobs in progress will be marked as failed so that they can be inspected, with a `SolidQueue::Processes::ProcessExitError`. Sometimes a job in particular is responsible for this, for example, if it has a memory leak and you have a mechanism to kill processes over a certain memory threshold, so this will help identifying this kind of situation.
 
 
 ### Database configuration
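The heartbeat pruning described in the README hunk above boils down to comparing each process's last heartbeat against a liveness threshold. A minimal plain-Ruby sketch of that check (a toy model, not Solid Queue's implementation; the 300-second constant mirrors the documented `process_alive_threshold` default):

```ruby
# Toy liveness check: a process is prunable once its last heartbeat is
# older than the alive threshold (Solid Queue's default is 5 minutes).
ALIVE_THRESHOLD = 5 * 60 # seconds

def prunable?(last_heartbeat_at, now: Time.now)
  (now - last_heartbeat_at) > ALIVE_THRESHOLD
end

now = Time.now
prunable?(now - 30, now: now)  # => false: heartbeat 30 seconds ago
prunable?(now - 600, now: now) # => true: heartbeat 10 minutes ago
```

Tuning `process_heartbeat_interval` and `process_alive_threshold` (both listed in the settings hunk below in the original README) shifts how quickly dead processes are pruned versus how tolerant the supervisor is of slow heartbeats.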
@@ -355,14 +366,14 @@ There are several settings that control how Solid Queue works that you can set a
 
 **This is not used for errors raised within a job execution**. Errors happening in jobs are handled by Active Job's `retry_on` or `discard_on`, and ultimately will result in [failed jobs](#failed-jobs-and-retries). This is for errors happening within Solid Queue itself.
 
-- `use_skip_locked`: whether to use `FOR UPDATE SKIP LOCKED` when performing locking reads. This will be automatically detected in the future, and for now, you
+- `use_skip_locked`: whether to use `FOR UPDATE SKIP LOCKED` when performing locking reads. This will be automatically detected in the future, and for now, you only need to set this to `false` if your database doesn't support it. For MySQL, that'd be versions < 8, and for PostgreSQL, versions < 9.5. If you use SQLite, this has no effect, as writes are sequential.
 - `process_heartbeat_interval`: the heartbeat interval that all processes will follow—defaults to 60 seconds.
 - `process_alive_threshold`: how long to wait until a process is considered dead after its last heartbeat—defaults to 5 minutes.
 - `shutdown_timeout`: time the supervisor will wait since it sent the `TERM` signal to its supervised processes before sending a `QUIT` version to them requesting immediate termination—defaults to 5 seconds.
 - `silence_polling`: whether to silence Active Record logs emitted when polling for both workers and dispatchers—defaults to `true`.
 - `supervisor_pidfile`: path to a pidfile that the supervisor will create when booting to prevent running more than one supervisor in the same host, or in case you want to use it for a health check. It's `nil` by default.
 - `preserve_finished_jobs`: whether to keep finished jobs in the `solid_queue_jobs` table—defaults to `true`.
-- `clear_finished_jobs_after`: period to keep finished jobs around, in case `preserve_finished_jobs` is true—defaults to 1 day.
+- `clear_finished_jobs_after`: period to keep finished jobs around, in case `preserve_finished_jobs` is true — defaults to 1 day. When installing Solid Queue, [a recurring job](#recurring-tasks) is automatically configured to clear finished jobs every hour on the 12th minute in batches. You can edit the `recurring.yml` configuration to change this as you see fit.
 - `default_concurrency_control_period`: the value to be used as the default for the `duration` parameter in [concurrency controls](#concurrency-controls). It defaults to 3 minutes.
 
 
@@ -372,9 +383,11 @@ In Solid queue, you can hook into two different points in the supervisor's life:
 - `start`: after the supervisor has finished booting and right before it forks workers and dispatchers.
 - `stop`: after receiving a signal (`TERM`, `INT` or `QUIT`) and right before starting graceful or immediate shutdown.
 
-And into two different points in
-- `
-- `
+And into two different points in the worker's, dispatcher's and scheduler's life:
+- `(worker|dispatcher|scheduler)_start`: after the worker/dispatcher/scheduler has finished booting and right before it starts the polling loop or loading the recurring schedule.
+- `(worker|dispatcher|scheduler)_stop`: after receiving a signal (`TERM`, `INT` or `QUIT`) and right before starting graceful or immediate shutdown (which is just `exit!`).
+
+Each of these hooks has an instance of the supervisor/worker/dispatcher/scheduler yielded to the block so that you may read its configuration for logging or metrics reporting purposes.
 
 You can use the following methods with a block to do this:
 ```ruby
@@ -383,12 +396,30 @@ SolidQueue.on_stop
 
 SolidQueue.on_worker_start
 SolidQueue.on_worker_stop
+
+SolidQueue.on_dispatcher_start
+SolidQueue.on_dispatcher_stop
+
+SolidQueue.on_scheduler_start
+SolidQueue.on_scheduler_stop
 ```
 
 For example:
 ```ruby
-SolidQueue.on_start
-
+SolidQueue.on_start do |supervisor|
+  MyMetricsReporter.process_name = supervisor.name
+
+  start_metrics_server
+end
+
+SolidQueue.on_stop do |_supervisor|
+  stop_metrics_server
+end
+
+SolidQueue.on_worker_start do |worker|
+  MyMetricsReporter.process_name = worker.name
+  MyMetricsReporter.queues = worker.queues.join(',')
+end
 ```
 
 These can be called several times to add multiple hooks, but it needs to happen before Solid Queue is started. An initializer would be a good place to do this.
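The hook methods added in the hunk above behave like a registry of callbacks fired at lifecycle points, and as the README notes, they can be called several times to stack hooks. A toy model of that behaviour in plain Ruby (an illustration only, not Solid Queue's actual implementation):

```ruby
# Toy model of the lifecycle-hook registry: hooks can be registered
# several times per event, and each one runs in registration order,
# with the process instance yielded to the block.
class ToyHooks
  def initialize
    @hooks = Hash.new { |h, k| h[k] = [] }
  end

  def on(event, &block)
    @hooks[event] << block
  end

  def fire(event, process)
    @hooks[event].each { |hook| hook.call(process) }
  end
end

hooks = ToyHooks.new
names = []
hooks.on(:worker_start) { |worker| names << worker }
hooks.on(:worker_start) { |worker| names << "#{worker}-again" }
hooks.fire(:worker_start, "worker-1")
names # => ["worker-1", "worker-1-again"]
```

This is why the README stresses registering hooks before Solid Queue starts: callbacks added after the event has fired never run.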
@@ -402,11 +433,13 @@ In the case of recurring tasks, if such error is raised when enqueuing the job c
 
 ## Concurrency controls
 
-Solid Queue extends Active Job with concurrency controls, that allows you to limit how many jobs of a certain type or with certain arguments can run at the same time. When limited in this way, jobs will be blocked from running
+Solid Queue extends Active Job with concurrency controls, that allows you to limit how many jobs of a certain type or with certain arguments can run at the same time. When limited in this way, **by default, jobs will be blocked from running**, and they'll stay blocked until another job finishes and unblocks them, or after the set expiry time (concurrency limit's _duration_) elapses.
+
+**Alternatively, jobs can be configured to be discarded instead of blocked**. This means that if a job with certain arguments has already been enqueued, other jobs with the same characteristics (in the same concurrency _class_) won't be enqueued.
 
 ```ruby
 class MyJob < ApplicationJob
-  limits_concurrency to: max_concurrent_executions, key: ->(arg1, arg2, **) { ... }, duration: max_interval_to_guarantee_concurrency_limit, group: concurrency_group
+  limits_concurrency to: max_concurrent_executions, key: ->(arg1, arg2, **) { ... }, duration: max_interval_to_guarantee_concurrency_limit, group: concurrency_group, on_conflict: on_conflict_behaviour
 
   # ...
 ```
@@ -414,10 +447,19 @@ class MyJob < ApplicationJob
 - `to` is `1` by default.
 - `duration` is set to `SolidQueue.default_concurrency_control_period` by default, which itself defaults to `3 minutes`, but that you can configure as well.
 - `group` is used to control the concurrency of different job classes together. It defaults to the job class name.
+- `on_conflict` controls behaviour when enqueuing a job that conflicts with the concurrency limits configured. It can be set to one of the following:
+  - (default) `:block`: the job is blocked and is dispatched when another job completes and unblocks it, or when the duration expires.
+  - `:discard`: the job is discarded. When you choose this option, bear in mind that if a job runs and fails to remove the concurrency lock (or _semaphore_, read below to know more about this), all jobs conflicting with it will be discarded up to the interval defined by `duration` has elapsed.
 
 When a job includes these controls, we'll ensure that, at most, the number of jobs (indicated as `to`) that yield the same `key` will be performed concurrently, and this guarantee will last for `duration` for each job enqueued. Note that there's no guarantee about _the order of execution_, only about jobs being performed at the same time (overlapping).
 
-The concurrency limits use the concept of semaphores when enqueuing, and work as follows: when a job is enqueued, we check if it specifies concurrency controls. If it does, we check the semaphore for the computed concurrency key. If the semaphore is open, we claim it and we set the job as _ready_. Ready means it can be picked up by workers for execution. When the job finishes executing (be it successfully or unsuccessfully, resulting in a failed execution), we signal the semaphore and try to unblock the next job with the same key, if any. Unblocking the next job doesn't mean running that job right away, but moving it from _blocked_ to _ready_.
+The concurrency limits use the concept of semaphores when enqueuing, and work as follows: when a job is enqueued, we check if it specifies concurrency controls. If it does, we check the semaphore for the computed concurrency key. If the semaphore is open, we claim it and we set the job as _ready_. Ready means it can be picked up by workers for execution. When the job finishes executing (be it successfully or unsuccessfully, resulting in a failed execution), we signal the semaphore and try to unblock the next job with the same key, if any. Unblocking the next job doesn't mean running that job right away, but moving it from _blocked_ to _ready_. If you're using the `discard` behaviour for `on_conflict`, jobs enqueued while the semaphore is closed will be discarded.
+
+Since something can happen that prevents the first job from releasing the semaphore and unblocking the next job (for example, someone pulling a plug in the machine where the worker is running), we have the `duration` as a failsafe. Jobs that have been blocked for more than `duration` are candidates to be released, but only as many of them as the concurrency rules allow, as each one would need to go through the semaphore dance check. This means that the `duration` is not really about the job that's enqueued or being run, it's about the jobs that are blocked waiting, or about the jobs that would get discarded while the semaphore is closed.
+
+It's important to note that after one or more candidate jobs are unblocked (either because a job finishes or because `duration` expires and a semaphore is released), the `duration` timer for the still blocked jobs is reset. This happens indirectly via the expiration time of the semaphore, which is updated.
+
+When using `discard` as the behaviour to handle conflicts, you might have jobs discarded for up to the `duration` interval if something happens and a running job fails to release the semaphore.
 
 
 For example:
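The semaphore flow described in the hunk above (claim on enqueue, signal on completion, block or discard on conflict) can be sketched as a toy in plain Ruby; this is an illustration of the bookkeeping only, not Solid Queue's actual `Semaphore` model:

```ruby
# Toy model of the enqueue-time semaphore check: one counter per
# concurrency key, claimable up to the configured limit (`to`).
class ToySemaphore
  def initialize(limit)
    @limit = limit
    @claimed = Hash.new(0)
  end

  # Returns :ready when the semaphore is open; otherwise the configured
  # conflict behaviour applies (:block keeps the job waiting, :discard drops it).
  def enqueue(key, on_conflict: :block)
    if @claimed[key] < @limit
      @claimed[key] += 1
      :ready
    else
      on_conflict == :discard ? :discarded : :blocked
    end
  end

  # Signalling when a job finishes opens a slot for the next job with
  # the same key (in the real thing, it moves a blocked job to ready).
  def signal(key)
    @claimed[key] -= 1 if @claimed[key] > 0
  end
end

sem = ToySemaphore.new(1)
sem.enqueue("contact/123")                        # => :ready
sem.enqueue("contact/123")                        # => :blocked
sem.enqueue("contact/123", on_conflict: :discard) # => :discarded
sem.signal("contact/123")
sem.enqueue("contact/123")                        # => :ready
```

The `duration` failsafe described above is what rescues the `:blocked` case when `signal` never arrives; the toy omits it for brevity.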
@@ -450,12 +492,63 @@ class Bundle::RebundlePostingsJob < ApplicationJob
 
 In this case, if we have a `Box::MovePostingsByContactToDesignatedBoxJob` job enqueued for a contact record with id `123` and another `Bundle::RebundlePostingsJob` job enqueued simultaneously for a bundle record that references contact `123`, only one of them will be allowed to proceed. The other one will stay blocked until the first one finishes (or 15 minutes pass, whatever happens first).
 
-Note that the `duration` setting depends indirectly on the value for `concurrency_maintenance_interval` that you set for your dispatcher(s), as that'd be the frequency with which blocked jobs are checked and unblocked. In general, you should set `duration` in a way that all your jobs would finish well under that duration and think of the concurrency maintenance task as a failsafe in case something goes wrong.
+Note that the `duration` setting depends indirectly on the value for `concurrency_maintenance_interval` that you set for your dispatcher(s), as that'd be the frequency with which blocked jobs are checked and unblocked (at which point, only one job per concurrency key, at most, is unblocked). In general, you should set `duration` in a way that all your jobs would finish well under that duration and think of the concurrency maintenance task as a failsafe in case something goes wrong.
 
-Jobs are unblocked in order of priority but queue order is not taken into account for unblocking jobs
+Jobs are unblocked in order of priority but **queue order is not taken into account for unblocking jobs**. That means that if you have a group of jobs that share a concurrency group but are in different queues, or jobs of the same class that you enqueue in different queues, the queue order you set for a worker is not taken into account when unblocking blocked ones. The reason is that a job that runs unblocks the next one, and the job itself doesn't know about a particular worker's queue order (you could even have different workers with different queue orders), it can only know about priority. Once blocked jobs are unblocked and available for polling, they'll be picked up by a worker following its queue order.
 
 Finally, failed jobs that are automatically or manually retried work in the same way as new jobs that get enqueued: they get in the queue for getting an open semaphore, and whenever they get it, they'll be run. It doesn't matter if they had already gotten an open semaphore in the past.
 
+### Scheduled jobs
+
+Jobs set to run in the future (via Active Job's `wait` or `wait_until` options) have concurrency limits enforced when they're due, not when they're scheduled. For example, consider this job:
+```ruby
+class DeliverAnnouncementToContactJob < ApplicationJob
+  limits_concurrency to: 1, key: ->(contact) { contact.account }, duration: 5.minutes
+
+  def perform(contact)
+    # ...
+```
+
+If several jobs are enqueued like this:
+
+```ruby
+DeliverAnnouncementToContactJob.set(wait: 10.minutes).perform_later(contact)
+DeliverAnnouncementToContactJob.set(wait: 10.minutes).perform_later(contact)
+DeliverAnnouncementToContactJob.set(wait: 30.minutes).perform_later(contact)
+```
+
+The 3 jobs will go into the scheduled queue and will wait there until they're due. Then, 10 minutes after, the first two jobs will be enqueued and the second one most likely will be blocked because the first one will be running first. Then, assuming the jobs are fast and finish in a few seconds, when the third job is due, it'll be enqueued normally.
+
+Normally scheduled jobs are enqueued in batches, but with concurrency controls, jobs need to be enqueued one by one. This has an impact on performance, similarly to the impact of concurrency controls in bulk enqueuing. Read below for more details. I'd generally advise against mixing concurrency controls with waiting/scheduling in the future.
+
+### Performance considerations
+
+Concurrency controls introduce significant overhead (blocked executions need to be created and promoted to ready, semaphores need to be created and updated) so you should consider carefully whether you need them. For throttling purposes, where you plan to have `limit` significantly larger than 1, I'd encourage relying on a limited number of workers per queue instead. For example:
+
+```ruby
+class ThrottledJob < ApplicationJob
+  queue_as :throttled
+```
+
+```yml
+production:
+  workers:
+    - queues: throttled
+      threads: 1
+      polling_interval: 1
+    - queues: default
+      threads: 5
+      polling_interval: 0.1
+      processes: 3
+```
+
+Or something similar to that depending on your setup. You can also assign a different queue to a job on the moment of enqueuing so you can decide whether to enqueue a job in the throttled queue or another queue depending on the arguments, or pass a block to `queue_as` as explained [here](https://guides.rubyonrails.org/active_job_basics.html#queues).
+
+
+In addition, mixing concurrency controls with **bulk enqueuing** (Active Job's `perform_all_later`) is not a good idea because concurrency controlled job needs to be enqueued one by one to ensure concurrency limits are respected, so you lose all the benefits of bulk enqueuing.
+
+When jobs that have concurrency controls and `on_conflict: :discard` are enqueued in bulk, the ones that fail to be enqueued and are discarded would have `successfully_enqueued` set to `false`. The total count of jobs enqueued returned by `perform_all_later` will exclude these jobs as expected.
+
 ## Failed jobs and retries
 
 Solid Queue doesn't include any automatic retry mechanism, it [relies on Active Job for this](https://edgeguides.rubyonrails.org/active_job_basics.html#retrying-or-discarding-failed-jobs). Jobs that fail will be kept in the system, and a _failed execution_ (a record in the `solid_queue_failed_executions` table) will be created for these. The job will stay there until manually discarded or re-enqueued. You can do this in a console as:
@@ -467,8 +560,6 @@ failed_execution.retry # This will re-enqueue the job as if it was enqueued for
 failed_execution.discard # This will delete the job from the system
 ```
 
-However, we recommend taking a look at [mission_control-jobs](https://github.com/rails/mission_control-jobs), a dashboard where, among other things, you can examine and retry/discard failed jobs.
-
 ### Error reporting on jobs
 
 Some error tracking services that integrate with Rails, such as Sentry or Rollbar, hook into [Active Job](https://guides.rubyonrails.org/active_job_basics.html#exceptions) and automatically report not handled errors that happen during job execution. However, if your error tracking system doesn't, or if you need some custom reporting, you can hook into Active Job yourself. A possible way of doing this would be:
@@ -510,6 +601,7 @@ plugin :solid_queue if ENV["SOLID_QUEUE_IN_PUMA"]
 ```
 that you set in production only. This is what Rails 8's default Puma config looks like. Otherwise, if you're using Puma in development but not Solid Queue, starting Puma would start also Solid Queue supervisor and it'll most likely fail because it won't be properly configured.
 
+**Note**: phased restarts are not supported currently because the plugin requires [app preloading](https://github.com/puma/puma?tab=readme-ov-file#cluster-mode) to work.
 
 ## Jobs and transactional integrity
 :warning: Having your jobs in the same ACID-compliant database as your application data enables a powerful yet sharp tool: taking advantage of transactional integrity to ensure some action in your app is not committed unless your job is also committed and vice versa, and ensuring that your job won't be enqueued until the transaction within which you're enqueuing it is committed. This can be very powerful and useful, but it can also backfire if you base some of your logic on this behaviour, and in the future, you move to another active job backend, or if you simply move Solid Queue to its own database, and suddenly the behaviour changes under you. Because this can be quite tricky and many people shouldn't need to worry about it, by default Solid Queue is configured in a different database as the main app.
@@ -524,7 +616,7 @@ end

 Using this option, you can also use Solid Queue in the same database as your app but not rely on transactional integrity.

-If you don't set this option but still want to make sure you're not inadvertently on transactional integrity, you can make sure that:
+If you don't set this option but still want to make sure you're not inadvertently relying on transactional integrity, you can make sure that:
 - Your jobs relying on specific data are always enqueued on [`after_commit` callbacks](https://guides.rubyonrails.org/active_record_callbacks.html#after-commit-and-after-rollback) or otherwise from a place where you're certain that whatever data the job will use has been committed to the database before the job is enqueued.
 - Or, you configure a different database for Solid Queue, even if it's the same as your app, ensuring that a different connection on the thread handling requests or running jobs for your app will be used to enqueue jobs. For example:
@@ -548,6 +640,8 @@ Solid Queue supports defining recurring tasks that run at specific times in the
 bin/jobs --recurring_schedule_file=config/schedule.yml
 ```

+You can completely disable recurring tasks by setting the environment variable `SOLID_QUEUE_SKIP_RECURRING=true` or by using the `--skip-recurring` option with `bin/jobs`.
+
 The configuration itself looks like this:

 ```yml
@@ -12,7 +12,7 @@ module SolidQueue
     class << self
       def unblock(limit)
         SolidQueue.instrument(:release_many_blocked, limit: limit) do |payload|
-          expired.distinct.limit(limit).pluck(:concurrency_key).then do |concurrency_keys|
+          expired.order(:concurrency_key).distinct.limit(limit).pluck(:concurrency_key).then do |concurrency_keys|
            payload[:size] = release_many releasable(concurrency_keys)
          end
        end
@@ -37,9 +37,14 @@ class SolidQueue::ClaimedExecution < SolidQueue::Execution
   end

   def fail_all_with(error)
-
-
-
+    includes(:job).tap do |executions|
+      return if executions.empty?
+
+      SolidQueue.instrument(:fail_many_claimed) do |payload|
+        executions.each do |execution|
+          execution.failed_with(error)
+          execution.unblock_next_job
+        end

       payload[:process_ids] = executions.map(&:process_id).uniq
       payload[:job_ids] = executions.map(&:job_id).uniq
@@ -67,7 +72,7 @@ class SolidQueue::ClaimedExecution < SolidQueue::Execution
       raise result.error
     end
   ensure
-
+    unblock_next_job
   end

   def release
@@ -90,9 +95,13 @@ class SolidQueue::ClaimedExecution < SolidQueue::Execution
     end
   end

+  def unblock_next_job
+    job.unblock_next_blocked_job
+  end
+
   private
     def execute
-      ActiveJob::Base.execute(job.arguments)
+      ActiveJob::Base.execute(job.arguments.merge("provider_job_id" => job.id))
       Result.new(true, nil)
     rescue Exception => e
       Result.new(false, e)
@@ -58,8 +58,11 @@ module SolidQueue
     end

     def determine_backtrace_size_limit
-      column = self.class.connection
-
+      column = self.class.connection_pool.with_connection do |connection|
+        connection.schema_cache.columns_hash(self.class.table_name)["error"]
+      end
+
+      if column && column.limit.present?
         column.limit - exception_class_name.bytesize - exception_message.bytesize - JSON_OVERHEAD
       end
     end
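The limit computed above is straight byte arithmetic: whatever fits in the `error` column after the exception class name, the message, and a fixed JSON overhead are subtracted. A standalone sketch of that arithmetic — the 65535 column limit (a TEXT-sized column) and the `JSON_OVERHEAD` value of 256 are assumptions for illustration, not the gem's actual constants:

```ruby
# Sketch of the backtrace size-limit arithmetic from the diff above.
# JSON_OVERHEAD (256) and the 65535-byte column limit are assumed values.
JSON_OVERHEAD = 256

def backtrace_size_limit(column_limit, exception_class_name, exception_message)
  # Bytes left over for the serialized backtrace itself
  column_limit - exception_class_name.bytesize - exception_message.bytesize - JSON_OVERHEAD
end

limit = backtrace_size_limit(65535, "ArgumentError", "wrong number of arguments")
# 65535 - 13 - 25 - 256 => 65241
```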
@@ -34,6 +34,10 @@ module SolidQueue
     end

     private
+      def concurrency_on_conflict
+        job_class.concurrency_on_conflict.to_s.inquiry
+      end
+
       def acquire_concurrency_lock
         return true unless concurrency_limited?
@@ -46,6 +50,14 @@ module SolidQueue
       Semaphore.signal(self)
     end

+    def handle_concurrency_conflict
+      if concurrency_on_conflict.discard?
+        destroy
+      else
+        block
+      end
+    end
+
     def block
       BlockedExecution.create_or_find_by!(job_id: id)
     end
@@ -10,6 +10,7 @@ module SolidQueue

     class << self
       def enqueue_all(active_jobs)
+        active_jobs.each { |job| job.scheduled_at ||= Time.current }
         active_jobs_by_job_id = active_jobs.index_by(&:job_id)

         transaction do
@@ -29,7 +30,8 @@ module SolidQueue
       active_job.scheduled_at = scheduled_at

       create_from_active_job(active_job).tap do |enqueued_job|
-        active_job.provider_job_id = enqueued_job.id
+        active_job.provider_job_id = enqueued_job.id if enqueued_job.persisted?
+        active_job.successfully_enqueued = enqueued_job.persisted?
       end
     end
@@ -49,7 +51,7 @@ module SolidQueue
       def create_all_from_active_jobs(active_jobs)
         job_rows = active_jobs.map { |job| attributes_from_active_job(job) }
         insert_all(job_rows)
-        where(active_job_id: active_jobs.map(&:job_id))
+        where(active_job_id: active_jobs.map(&:job_id)).order(id: :asc)
       end

       def attributes_from_active_job(active_job)
@@ -6,11 +6,19 @@ module SolidQueue

     connects_to(**SolidQueue.connects_to) if SolidQueue.connects_to

-
-
-
-
-
+    class << self
+      def non_blocking_lock
+        if SolidQueue.use_skip_locked
+          lock(Arel.sql("FOR UPDATE SKIP LOCKED"))
+        else
+          lock
+        end
+      end
+
+      def supports_insert_conflict_target?
+        connection_pool.with_connection do |connection|
+          connection.supports_insert_conflict_target?
+        end
       end
     end
   end
@@ -8,7 +8,7 @@ module SolidQueue

     class << self
       def create_or_insert!(**attributes)
-        if
+        if supports_insert_conflict_target?
           # PostgreSQL fails and aborts the current transaction when it hits a duplicate key conflict
           # during two concurrent INSERTs for the same value of an unique index. We need to explicitly
           # indicate unique_by to ignore duplicate rows by this value when inserting
@@ -36,7 +36,7 @@ module SolidQueue
     end

     def create_or_update_all(tasks)
-      if
+      if supports_insert_conflict_target?
         # PostgreSQL fails and aborts the current transaction when it hits a duplicate key conflict
         # during two concurrent INSERTs for the same value of an unique index. We need to explicitly
         # indicate unique_by to ignore duplicate rows by this value when inserting
@@ -48,7 +48,7 @@ module SolidQueue
     end

     def delay_from_now
-      [ (next_time - Time.current).to_f, 0 ].max
+      [ (next_time - Time.current).to_f, 0.1 ].max
    end

    def next_time
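The change above raises the lower bound of `delay_from_now` from 0 to 0.1 seconds, so a recurring task whose next run is already due still sleeps briefly instead of busy-looping. A dependency-free sketch of the clamp (using plain `Time.now` in place of Rails' `Time.current`):

```ruby
# Clamp the delay until the next scheduled run to at least 0.1 seconds,
# mirroring the change above.
def delay_from_now(next_time, now = Time.now)
  [ (next_time - now).to_f, 0.1 ].max
end

now = Time.now
past_delay   = delay_from_now(now - 5, now)   # already due -> minimum delay, not 0
future_delay = delay_from_now(now + 5, now)   # due in 5 seconds -> full wait
```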
@@ -130,7 +130,6 @@ module SolidQueue
       active_job.run_callbacks(:enqueue) do
         Job.enqueue(active_job)
       end
-      active_job.successfully_enqueued = true
     end
   end
 end
@@ -14,7 +14,7 @@ module SolidQueue
     def dispatch_next_batch(batch_size)
       transaction do
         job_ids = next_batch(batch_size).non_blocking_lock.pluck(:job_id)
-        if job_ids.empty? then
+        if job_ids.empty? then 0
         else
           SolidQueue.instrument(:dispatch_scheduled, batch_size: batch_size) do |payload|
             payload[:size] = dispatch_jobs(job_ids)
@@ -20,7 +20,7 @@ module SolidQueue

       # Requires a unique index on key
       def create_unique_by(attributes)
-        if
+        if supports_insert_conflict_target?
           insert({ **attributes }, unique_by: :key).any?
         else
           create!(**attributes)
@@ -5,6 +5,7 @@ module ActiveJob
   extend ActiveSupport::Concern

   DEFAULT_CONCURRENCY_GROUP = ->(*) { self.class.name }
+  CONCURRENCY_ON_CONFLICT_BEHAVIOUR = %i[ block discard ]

   included do
     class_attribute :concurrency_key, instance_accessor: false
@@ -12,14 +13,16 @@ module ActiveJob

     class_attribute :concurrency_limit
     class_attribute :concurrency_duration, default: SolidQueue.default_concurrency_control_period
+    class_attribute :concurrency_on_conflict, default: :block
   end

   class_methods do
-    def limits_concurrency(key:, to: 1, group: DEFAULT_CONCURRENCY_GROUP, duration: SolidQueue.default_concurrency_control_period)
+    def limits_concurrency(key:, to: 1, group: DEFAULT_CONCURRENCY_GROUP, duration: SolidQueue.default_concurrency_control_period, on_conflict: :block)
       self.concurrency_key = key
       self.concurrency_limit = to
       self.concurrency_group = group
       self.concurrency_duration = duration
+      self.concurrency_on_conflict = on_conflict.presence_in(CONCURRENCY_ON_CONFLICT_BEHAVIOUR) || :block
     end
   end
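`presence_in` (from Active Support) returns the receiver only when it is included in the given collection, so any unrecognised `on_conflict` value silently falls back to `:block`. A dependency-free sketch of the same validation, with a hypothetical `resolve_on_conflict` helper standing in for the class-attribute assignment:

```ruby
# Accept only the known conflict behaviours; anything else falls back to
# :block. This mimics `on_conflict.presence_in(...) || :block` without
# requiring Active Support.
CONCURRENCY_ON_CONFLICT_BEHAVIOUR = %i[ block discard ]

def resolve_on_conflict(value)
  CONCURRENCY_ON_CONFLICT_BEHAVIOUR.include?(value) ? value : :block
end

resolve_on_conflict(:discard)  # kept as-is
resolve_on_conflict(:drop)     # unknown value, falls back to :block
```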
@@ -7,7 +7,10 @@ module ActiveJob
   # To use it set the queue_adapter config to +:solid_queue+.
   #
   #   Rails.application.config.active_job.queue_adapter = :solid_queue
-  class SolidQueueAdapter
+  class SolidQueueAdapter < (Rails::VERSION::MAJOR == 7 && Rails::VERSION::MINOR == 1 ? Object : AbstractAdapter)
+    class_attribute :stopping, default: false, instance_writer: false
+    SolidQueue.on_worker_stop { self.stopping = true }
+
     def enqueue_after_transaction_commit?
       true
     end
@@ -1,10 +1,15 @@
-#
+# examples:
 # periodic_cleanup:
 #   class: CleanSoftDeletedRecordsJob
 #   queue: background
 #   args: [ 1000, { batch_size: 500 } ]
 #   schedule: every hour
-#
+# periodic_cleanup_with_command:
 #   command: "SoftDeletedRecord.due.delete_all"
 #   priority: 2
 #   schedule: at 5am every day
+
+production:
+  clear_solid_queue_finished_jobs:
+    command: "SolidQueue::Job.clear_finished_in_batches(sleep_between_batches: 0.3)"
+    schedule: every hour at minute 12
@@ -11,15 +11,27 @@ Puma::Plugin.create do
     monitor_solid_queue
   end

-
-
-
-
+  if Gem::Version.new(Puma::Const::VERSION) < Gem::Version.new("7")
+    launcher.events.on_booted do
+      @solid_queue_pid = fork do
+        Thread.new { monitor_puma }
+        SolidQueue::Supervisor.start
+      end
+    end
+
+    launcher.events.on_stopped { stop_solid_queue }
+    launcher.events.on_restart { stop_solid_queue }
+  else
+    launcher.events.after_booted do
+      @solid_queue_pid = fork do
+        Thread.new { monitor_puma }
+        SolidQueue::Supervisor.start
+      end
     end
-  end

-
-
+    launcher.events.after_stopped { stop_solid_queue }
+    launcher.events.before_restart { stop_solid_queue }
+  end
 end

 private
data/lib/solid_queue/cli.rb
CHANGED
@@ -12,8 +12,9 @@ module SolidQueue
       desc: "Path to recurring schedule definition (default: #{Configuration::DEFAULT_RECURRING_SCHEDULE_FILE_PATH}).",
       banner: "SOLID_QUEUE_RECURRING_SCHEDULE"

-    class_option :skip_recurring, type: :boolean,
-      desc: "Whether to skip recurring tasks scheduling"
+    class_option :skip_recurring, type: :boolean,
+      desc: "Whether to skip recurring tasks scheduling",
+      banner: "SOLID_QUEUE_SKIP_RECURRING"

     def self.exit_on_failure?
       true
@@ -88,7 +88,7 @@ module SolidQueue
         recurring_schedule_file: Rails.root.join(ENV["SOLID_QUEUE_RECURRING_SCHEDULE"] || DEFAULTT_RECURRING_SCHEDULE_FILE_PATH),
         only_work: false,
         only_dispatch: false,
-        skip_recurring:
+        skip_recurring: ActiveModel::Type::Boolean.new.cast(ENV["SOLID_QUEUE_SKIP_RECURRING"])
       }
     end
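`ActiveModel::Type::Boolean#cast` is what lets `SOLID_QUEUE_SKIP_RECURRING=true` (or `1`, `t`, etc.) work from the environment: known "false" strings become `false`, `nil`/blank stays `nil`, and everything else is truthy. A rough standalone approximation of that cast — the `FALSE_VALUES` list here is an abbreviated stand-in, not Active Model's full set:

```ruby
# Approximation of ActiveModel::Type::Boolean.cast for env vars.
# FALSE_VALUES is an abbreviated, assumed subset of Active Model's list.
FALSE_VALUES = [false, 0, "0", "f", "F", "false", "FALSE", "off", "OFF"]

def cast_boolean(value)
  return nil if value.nil? || value == ""   # unset env var stays nil (falsy)
  !FALSE_VALUES.include?(value)
end

cast_boolean("true")   # any unrecognised string is treated as true
cast_boolean("false")  # recognised false value
cast_boolean(nil)      # env var not set
```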
@@ -141,7 +141,7 @@ module SolidQueue

     def recurring_tasks
       @recurring_tasks ||= recurring_tasks_config.map do |id, options|
-        RecurringTask.from_configuration(id, **options) if options
+        RecurringTask.from_configuration(id, **options) if options&.has_key?(:schedule)
       end.compact
     end
@@ -153,7 +153,9 @@ module SolidQueue
     end

     def recurring_tasks_config
-      @recurring_tasks_config ||=
+      @recurring_tasks_config ||= begin
+        config_from options[:recurring_schedule_file]
+      end
     end
@@ -186,6 +188,7 @@ module SolidQueue
       if file.exist?
         ActiveSupport::ConfigurationFile.parse(file).deep_symbolize_keys
       else
+        puts "[solid_queue] WARNING: Provided configuration file '#{file}' does not exist. Falling back to default configuration."
         {}
       end
     end
@@ -2,10 +2,14 @@

 module SolidQueue
   class Dispatcher < Processes::Poller
-
+    include LifecycleHooks
+    attr_reader :batch_size

+    after_boot :run_start_hooks
     after_boot :start_concurrency_maintenance
     before_shutdown :stop_concurrency_maintenance
+    before_shutdown :run_stop_hooks
+    after_shutdown :run_exit_hooks

     def initialize(**options)
       options = options.dup.with_defaults(SolidQueue::Configuration::DISPATCHER_DEFAULTS)
@@ -22,10 +26,12 @@ module SolidQueue
     end

     private
+      attr_reader :concurrency_maintenance
+
       def poll
         batch = dispatch_next_batch

-        batch.
+        batch.zero? ? polling_interval : 0.seconds
       end

       def dispatch_next_batch
@@ -38,20 +44,12 @@ module SolidQueue
         concurrency_maintenance&.start
       end

-      def schedule_recurring_tasks
-        recurring_schedule.schedule_tasks
-      end
-
       def stop_concurrency_maintenance
         concurrency_maintenance&.stop
       end

-      def unschedule_recurring_tasks
-        recurring_schedule.unschedule_tasks
-      end
-
       def all_work_completed?
-        SolidQueue::ScheduledExecution.none?
+        SolidQueue::ScheduledExecution.none?
       end

       def set_procline
@@ -5,7 +5,7 @@ module SolidQueue
     extend ActiveSupport::Concern

     included do
-      mattr_reader :lifecycle_hooks, default: { start: [], stop: [] }
+      mattr_reader :lifecycle_hooks, default: { start: [], stop: [], exit: [] }
     end

     class_methods do
@@ -17,7 +17,12 @@ module SolidQueue
         self.lifecycle_hooks[:stop] << block
       end

+      def on_exit(&block)
+        self.lifecycle_hooks[:exit] << block
+      end
+
       def clear_hooks
+        self.lifecycle_hooks[:exit] = []
         self.lifecycle_hooks[:start] = []
         self.lifecycle_hooks[:stop] = []
       end
@@ -32,9 +37,13 @@ module SolidQueue
       run_hooks_for :stop
     end

+    def run_exit_hooks
+      run_hooks_for :exit
+    end
+
     def run_hooks_for(event)
       self.class.lifecycle_hooks.fetch(event, []).each do |block|
-
+        block.call(self)
       rescue Exception => exception
         handle_thread_error(exception)
       end
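The concern above keeps a per-event registry (`start`, `stop`, and now `exit`) and runs every registered block, routing failures into an error handler so one bad hook can't stop the rest. A minimal standalone sketch of that registry, without the Active Support `mattr_reader` machinery (`Hooks` and the `warn` error handler are stand-ins):

```ruby
# Minimal lifecycle-hook registry sketch: blocks are stored per event and
# invoked in order; a failing hook is reported but doesn't abort the run.
class Hooks
  def initialize
    @hooks = { start: [], stop: [], exit: [] }
  end

  def on_exit(&block)
    @hooks[:exit] << block
  end

  def run_hooks_for(event)
    @hooks.fetch(event, []).each do |block|
      block.call(self)
    rescue Exception => e
      warn "hook failed: #{e.message}"   # stand-in for handle_thread_error
    end
  end
end

hooks = Hooks.new
ran = []
hooks.on_exit { ran << :exit }
hooks.on_exit { raise "boom" }   # reported, but the next hook still runs
hooks.on_exit { ran << :again }
hooks.run_hooks_for(:exit)
```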
data/lib/solid_queue/pool.rb
CHANGED
@@ -18,20 +18,16 @@ module SolidQueue
     def post(execution)
       available_threads.decrement

-
+      Concurrent::Promises.future_on(executor, execution) do |thread_execution|
         wrap_in_app_executor do
           thread_execution.perform
         ensure
           available_threads.increment
           mutex.synchronize { on_idle.try(:call) if idle? }
         end
+      end.on_rejection! do |e|
+        handle_thread_error(e)
       end
-
-      future.add_observer do |_, _, error|
-        handle_thread_error(error) if error
-      end
-
-      future.execute
     end

     def idle_threads
@@ -4,7 +4,8 @@ module SolidQueue
   module Processes
     class Base
       include Callbacks # Defines callbacks needed by other concerns
-      include AppExecutor, Registrable,
+      include AppExecutor, Registrable, Procline
+      prepend Interruptible

       attr_reader :name
@@ -2,32 +2,39 @@

 module SolidQueue::Processes
   module Interruptible
+    def initialize(...)
+      super
+      @self_pipe = create_self_pipe
+    end
+
     def wake_up
       interrupt
     end

     private
+      SELF_PIPE_BLOCK_SIZE = 11
+
+      attr_reader :self_pipe

       def interrupt
-
+        self_pipe[:writer].write_nonblock(".")
+      rescue Errno::EAGAIN, Errno::EINTR
+        # Ignore writes that would block and retry
+        # if another signal arrived while writing
+        retry
       end

-      # Sleeps for 'time'. Can be interrupted asynchronously and return early via wake_up.
-      # @param time [Numeric] the time to sleep. 0 returns immediately.
-      # @return [true, nil]
-      # * returns `true` if an interrupt was requested via #wake_up between the
-      #   last call to `interruptible_sleep` and now, resulting in an early return.
-      # * returns `nil` if it slept the full `time` and was not interrupted.
       def interruptible_sleep(time)
-
-
-
-
-        end.value
+        if time > 0 && self_pipe[:reader].wait_readable(time)
+          loop { self_pipe[:reader].read_nonblock(SELF_PIPE_BLOCK_SIZE) }
+        end
+      rescue Errno::EAGAIN, Errno::EINTR, IO::EWOULDBLOCKWaitReadable
       end

-
-
+      # Self-pipe for signal-handling (http://cr.yp.to/docs/selfpipe.html)
+      def create_self_pipe
+        reader, writer = IO.pipe
+        { reader: reader, writer: writer }
       end
   end
 end
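The rewrite above swaps a future-based sleep for the classic self-pipe trick: a signal handler writes one byte to a pipe, and the sleeping side blocks on `wait_readable` until either the timeout elapses or a byte arrives, then drains the pipe. The pattern can be demonstrated with nothing but the Ruby standard library (the variable names here are illustrative, not the gem's):

```ruby
# Standalone self-pipe sketch, stdlib only.
require "io/wait"

reader, writer = IO.pipe

# "wake_up": write a single byte; a full pipe or interrupted write is retried,
# as in the diff above.
begin
  writer.write_nonblock(".")
rescue Errno::EAGAIN, Errno::EINTR
  retry
end

# "interruptible_sleep": returns immediately because a byte is already pending,
# instead of sleeping the full 5 seconds.
woken = !reader.wait_readable(5).nil?

# Drain the pipe so the next wait actually blocks again.
begin
  loop { reader.read_nonblock(11) }
rescue IO::EAGAINWaitReadable
end

# With the pipe drained and nobody writing, a short wait times out (returns nil).
timed_out = reader.wait_readable(0.05).nil?
```

The drain loop matters: without it, one old wake-up byte would make every later sleep return instantly.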
@@ -4,7 +4,7 @@ module SolidQueue
   module Processes
     class ProcessPrunedError < RuntimeError
       def initialize(last_heartbeat_at)
-        super("Process was found dead and pruned (last heartbeat at: #{last_heartbeat_at}")
+        super("Process was found dead and pruned (last heartbeat at: #{last_heartbeat_at})")
       end
     end
   end
@@ -7,8 +7,7 @@ module SolidQueue::Processes
   included do
     after_boot :register, :launch_heartbeat

-
-    after_shutdown :deregister
+    after_shutdown :stop_heartbeat, :deregister
   end

   def process_id
@@ -19,17 +18,19 @@ module SolidQueue::Processes
     attr_accessor :process

     def register
-
-
-
-
-
-
-
+      wrap_in_app_executor do
+        @process = SolidQueue::Process.register \
+          kind: kind,
+          name: name,
+          pid: pid,
+          hostname: hostname,
+          supervisor: try(:supervisor),
+          metadata: metadata.compact
+      end
     end

     def deregister
-      process&.deregister
+      wrap_in_app_executor { process&.deregister }
     end

     def registered?
@@ -3,11 +3,15 @@
 module SolidQueue
   class Scheduler < Processes::Base
     include Processes::Runnable
+    include LifecycleHooks

-
+    attr_reader :recurring_schedule

+    after_boot :run_start_hooks
     after_boot :schedule_recurring_tasks
     before_shutdown :unschedule_recurring_tasks
+    before_shutdown :run_stop_hooks
+    after_shutdown :run_exit_hooks

     def initialize(recurring_tasks:, **options)
       @recurring_schedule = RecurringSchedule.new(recurring_tasks)
@@ -5,6 +5,8 @@ module SolidQueue
     include LifecycleHooks
     include Maintenance, Signals, Pidfiled

+    after_shutdown :run_exit_hooks
+
     class << self
       def start(**options)
         SolidQueue.supervisor = true
@@ -170,10 +172,15 @@ module SolidQueue
         end
       end

+      # When a supervised fork crashes or exits we need to mark all the
+      # executions it had claimed as failed so that they can be retried
+      # by some other worker.
       def handle_claimed_jobs_by(terminated_fork, status)
-
-
-
+        wrap_in_app_executor do
+          if registered_process = SolidQueue::Process.find_by(name: terminated_fork.name)
+            error = Processes::ProcessExitError.new(status)
+            registered_process.fail_all_claimed_executions_with(error)
+          end
+        end
       end
     end
data/lib/solid_queue/version.rb
CHANGED
data/lib/solid_queue/worker.rb
CHANGED
@@ -6,14 +6,16 @@ module SolidQueue

     after_boot :run_start_hooks
     before_shutdown :run_stop_hooks
+    after_shutdown :run_exit_hooks

-
-    attr_accessor :queues, :pool
+    attr_reader :queues, :pool

     def initialize(**options)
       options = options.dup.with_defaults(SolidQueue::Configuration::WORKER_DEFAULTS)

-
+      # Ensure that the queues array is deep frozen to prevent accidental modification
+      @queues = Array(options[:queues]).map(&:freeze).freeze
+
       @pool = Pool.new(options[:threads], on_idle: -> { wake_up })

       super(**options)
data/lib/solid_queue.rb
CHANGED
@@ -41,14 +41,20 @@ module SolidQueue
   mattr_accessor :clear_finished_jobs_after, default: 1.day
   mattr_accessor :default_concurrency_control_period, default: 3.minutes

-  delegate :on_start, :on_stop, to: Supervisor
+  delegate :on_start, :on_stop, :on_exit, to: Supervisor

-
-
-
+  [ Dispatcher, Scheduler, Worker ].each do |process|
+    define_singleton_method(:"on_#{process.name.demodulize.downcase}_start") do |&block|
+      process.on_start(&block)
+    end
+
+    define_singleton_method(:"on_#{process.name.demodulize.downcase}_stop") do |&block|
+      process.on_stop(&block)
+    end

-
-
+    define_singleton_method(:"on_#{process.name.demodulize.downcase}_exit") do |&block|
+      process.on_exit(&block)
+    end
   end

   def supervisor?
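The loop above metaprograms methods like `SolidQueue.on_worker_exit` by combining `define_singleton_method` with each process class's short name (`demodulize` plus `downcase`). The same pattern can be shown in plain Ruby, with a toy `Hub::Worker` standing in for the gem's process classes and `split("::")` standing in for Active Support's `demodulize`:

```ruby
# Plain-Ruby sketch of the define_singleton_method hook-forwarding pattern.
# Hub and Hub::Worker are illustrative stand-ins, not SolidQueue's classes.
module Hub
  class Worker
    def self.hooks
      @hooks ||= []
    end

    def self.on_exit(&block)
      hooks << block
    end
  end

  [ Worker ].each do |process|
    short = process.name.split("::").last.downcase   # "Hub::Worker" -> "worker"
    define_singleton_method(:"on_#{short}_exit") do |&block|
      process.on_exit(&block)   # forward the block to the process class
    end
  end
end

Hub.on_worker_exit { :bye }   # registers a hook on Hub::Worker
```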
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: solid_queue
 version: !ruby/object:Gem::Version
-  version: 1.
+  version: 1.2.4
 platform: ruby
 authors:
 - Rosa Gutierrez
 autorequire:
 bindir: bin
 cert_chain: []
-date:
+date: 2025-10-30 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activerecord
@@ -72,28 +72,42 @@ dependencies:
     requirements:
     - - "~>"
     - !ruby/object:Gem::Version
-      version: 1.11
+      version: '1.11'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
     - !ruby/object:Gem::Version
-      version: 1.11
+      version: '1.11'
 - !ruby/object:Gem::Dependency
   name: thor
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - "
+    - - ">="
     - !ruby/object:Gem::Version
       version: 1.3.1
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - "
+    - - ">="
     - !ruby/object:Gem::Version
       version: 1.3.1
+- !ruby/object:Gem::Dependency
+  name: appraisal
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+    - !ruby/object:Gem::Version
+      version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+    - !ruby/object:Gem::Version
+      version: '0'
 - !ruby/object:Gem::Dependency
   name: debug
   requirement: !ruby/object:Gem::Requirement
@@ -126,16 +140,16 @@ dependencies:
   name: puma
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - "
+    - - "~>"
     - !ruby/object:Gem::Version
-      version: '0'
+      version: '7.0'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - "
+    - - "~>"
     - !ruby/object:Gem::Version
-      version: '0'
+      version: '7.0'
 - !ruby/object:Gem::Dependency
   name: mysql2
   requirement: !ruby/object:Gem::Requirement
@@ -220,6 +234,20 @@ dependencies:
     - - ">="
     - !ruby/object:Gem::Version
       version: '0'
+- !ruby/object:Gem::Dependency
+  name: zeitwerk
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - '='
+    - !ruby/object:Gem::Version
+      version: 2.6.0
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - '='
+    - !ruby/object:Gem::Version
+      version: 2.6.0
 description: Database-backed Active Job backend.
 email:
 - rosa@37signals.com
@@ -229,7 +257,6 @@ extra_rdoc_files: []
 files:
 - MIT-LICENSE
 - README.md
-- Rakefile
 - UPGRADING.md
 - app/jobs/solid_queue/recurring_job.rb
 - app/models/solid_queue/blocked_execution.rb
@@ -307,15 +334,8 @@ metadata:
   homepage_uri: https://github.com/rails/solid_queue
   source_code_uri: https://github.com/rails/solid_queue
 post_install_message: |
-  Upgrading
-
-  Upgrading to Solid Queue 0.8.0 from < 0.6.0? You need to upgrade to 0.6.0 first.
-
-  Upgrading to Solid Queue 0.4.x, 0.5.x, 0.6.x or 0.7.x? There are some breaking changes about how Solid Queue is started,
-  configuration and new migrations.
-
-  --> Check https://github.com/rails/solid_queue/blob/main/UPGRADING.md
-  for upgrade instructions.
+  Upgrading from Solid Queue < 1.0? Check details on breaking changes and upgrade instructions
+  --> https://github.com/rails/solid_queue/blob/main/UPGRADING.md
 rdoc_options: []
 require_paths:
 - lib
@@ -323,7 +343,7 @@ required_ruby_version: !ruby/object:Gem::Requirement
   requirements:
   - - ">="
   - !ruby/object:Gem::Version
-    version: '
+    version: '3.1'
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
   - - ">="
data/Rakefile
DELETED
@@ -1,21 +0,0 @@
-# frozen_string_literal: true
-
-require "bundler/setup"
-
-APP_RAKEFILE = File.expand_path("test/dummy/Rakefile", __dir__)
-load "rails/tasks/engine.rake"
-
-load "rails/tasks/statistics.rake"
-
-require "bundler/gem_tasks"
-
-def databases
-  %w[ mysql postgres sqlite ]
-end
-
-task :test do
-  databases.each do |database|
-    sh("TARGET_DB=#{database} bin/setup")
-    sh("TARGET_DB=#{database} bin/rails test")
-  end
-end