solid_queue 0.2.1 → 0.3.0
- checksums.yaml +4 -4
- data/README.md +56 -7
- data/app/models/solid_queue/blocked_execution.rb +3 -3
- data/app/models/solid_queue/claimed_execution.rb +0 -1
- data/app/models/solid_queue/execution/dispatching.rb +1 -1
- data/app/models/solid_queue/job/clearable.rb +3 -3
- data/app/models/solid_queue/job/executable.rb +2 -6
- data/app/models/solid_queue/job/recurrable.rb +13 -0
- data/app/models/solid_queue/job.rb +1 -1
- data/app/models/solid_queue/recurring_execution.rb +26 -0
- data/app/models/solid_queue/scheduled_execution.rb +1 -1
- data/app/models/solid_queue/semaphore.rb +5 -22
- data/db/migrate/20240218110712_create_recurring_executions.rb +14 -0
- data/lib/solid_queue/configuration.rb +14 -5
- data/lib/solid_queue/dispatcher/concurrency_maintenance.rb +44 -0
- data/lib/solid_queue/dispatcher/recurring_schedule.rb +56 -0
- data/lib/solid_queue/dispatcher/recurring_task.rb +85 -0
- data/lib/solid_queue/dispatcher.rb +21 -36
- data/lib/solid_queue/processes/base.rb +1 -18
- data/lib/solid_queue/processes/callbacks.rb +19 -0
- data/lib/solid_queue/processes/poller.rb +28 -0
- data/lib/solid_queue/processes/registrable.rb +5 -6
- data/lib/solid_queue/processes/runnable.rb +31 -46
- data/lib/solid_queue/processes/supervised.rb +4 -0
- data/lib/solid_queue/recurring_tasks/manager.rb +31 -0
- data/lib/solid_queue/recurring_tasks/schedule.rb +58 -0
- data/lib/solid_queue/recurring_tasks/task.rb +87 -0
- data/lib/solid_queue/supervisor.rb +1 -1
- data/lib/solid_queue/version.rb +1 -1
- data/lib/solid_queue/worker.rb +11 -12
- data/lib/solid_queue.rb +22 -24
- metadata +116 -10
- data/lib/active_job/uniqueness.rb +0 -41
- data/lib/solid_queue/dispatcher/scheduled_executions_dispatcher.rb +0 -6
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 2c5bed020eb4391f1885c95f2f4373e30d1eacd544e2cfcadf131c8c09079fd0
+  data.tar.gz: 749ac036b072622cd2cc299f1fc5b1a7353a44347beccaa8407157eed8af02ef
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: f39c8aadb36bc8ae5556f3cbd190e6d123ca441b14374b56c207530dd2d37d8dc4880f6e977f6bc2e047b47c886768a3c86d6fefd040a447f6d8f21e28e8a273
+  data.tar.gz: d95c4be96c4388c6558a32f44690e9d92913e47bd01a379f09ef8b3e65a9098a2e578dd68885c36374b27e389c65d67e1100ad4d1c6908e6f675ede75bce8d3a
data/README.md
CHANGED
@@ -2,7 +2,7 @@
 
 Solid Queue is a DB-based queuing backend for [Active Job](https://edgeguides.rubyonrails.org/active_job_basics.html), designed with simplicity and performance in mind.
 
-Besides regular job enqueuing and processing, Solid Queue supports delayed jobs, concurrency controls, pausing queues, numeric priorities per job, priorities by queue order, and bulk enqueuing (`enqueue_all` for Active Job's `perform_all_later`). _Improvements to logging and instrumentation, a better CLI tool, a way to run within an existing process in "async" mode,
+Besides regular job enqueuing and processing, Solid Queue supports delayed jobs, concurrency controls, pausing queues, numeric priorities per job, priorities by queue order, and bulk enqueuing (`enqueue_all` for Active Job's `perform_all_later`). _Improvements to logging and instrumentation, a better CLI tool, a way to run within an existing process in "async" mode, and some way of specifying unique jobs are coming very soon._
 
 Solid Queue can be used with SQL databases such as MySQL, PostgreSQL or SQLite, and it leverages the `FOR UPDATE SKIP LOCKED` clause, if available, to avoid blocking and waiting on locks when polling jobs. It relies on Active Job for retries, discarding, error handling, serialization, or delays, and it's compatible with Ruby on Rails multi-threading.
 
@@ -66,6 +66,8 @@ $ bundle exec rake solid_queue:start
 
 This will start processing jobs in all queues using the default configuration. See [below](#configuration) to learn more about configuring Solid Queue.
 
+For small projects, you can run Solid Queue on the same machine as your webserver. When you're ready to scale, Solid Queue supports horizontal scaling out-of-the-box. You can run Solid Queue on a separate server from your webserver, or even run `bundle exec rake solid_queue:start` on multiple machines at the same time. If you'd like to designate some machines to be only dispatchers or only workers, use `bundle exec rake solid_queue:dispatch` or `bundle exec rake solid_queue:work`, respectively.
+
 ## Requirements
 Besides Rails 7.1, Solid Queue works best with MySQL 8+ or PostgreSQL 9.5+, as they support `FOR UPDATE SKIP LOCKED`. You can use it with older versions, but in that case, you might run into lock waits if you run multiple workers for the same queue.
 
@@ -75,7 +77,7 @@ Besides Rails 7.1, Solid Queue works best with MySQL 8+ or PostgreSQL 9.5+, as t
 
 We have three types of processes in Solid Queue:
 - _Workers_ are in charge of picking jobs ready to run from queues and processing them. They work off the `solid_queue_ready_executions` table.
-- _Dispatchers_ are in charge of selecting jobs scheduled to run in the future that are due and _dispatching_ them, which is simply moving them from the `solid_queue_scheduled_executions` table over to the `solid_queue_ready_executions` table so that workers can pick them up. They also do some maintenance work related to concurrency controls.
+- _Dispatchers_ are in charge of selecting jobs scheduled to run in the future that are due and _dispatching_ them, which is simply moving them from the `solid_queue_scheduled_executions` table over to the `solid_queue_ready_executions` table so that workers can pick them up. They're also in charge of managing [recurring tasks](#recurring-tasks), dispatching jobs to process them according to their schedule. On top of that, they do some maintenance work related to [concurrency controls](#concurrency-controls).
 - The _supervisor_ forks workers and dispatchers according to the configuration, controls their heartbeats, and sends them signals to stop and start them when needed.
 
 By default, Solid Queue will try to find your configuration under `config/solid_queue.yml`, but you can set a different path using the environment variable `SOLID_QUEUE_CONFIG`. This is what this configuration looks like:
@@ -115,8 +117,10 @@ Everything is optional. If no configuration is provided, Solid Queue will run wi
 This will create a worker fetching jobs from all queues starting with `staging`. The wildcard `*` is only allowed on its own or at the end of a queue name; you can't specify queue names such as `*_some_queue`. These will be ignored.
 
 Finally, you can combine prefixes with exact names, like `[ staging*, background ]`, and the behaviour with respect to order will be the same as with only exact names.
-- `threads`: this is the max size of the thread pool that each worker will have to run jobs. Each worker will fetch this number of jobs from their queue(s), at most and will post them to the thread pool to be run. By default, this is `
+- `threads`: this is the max size of the thread pool that each worker will have to run jobs. Each worker will fetch this number of jobs from their queue(s), at most and will post them to the thread pool to be run. By default, this is `3`. Only workers have this setting.
 - `processes`: this is the number of worker processes that will be forked by the supervisor with the settings given. By default, this is `1`, just a single process. This setting is useful if you want to dedicate more than one CPU core to a queue or queues with the same configuration. Only workers have this setting.
+- `concurrency_maintenance`: whether the dispatcher will perform the concurrency maintenance work. This is `true` by default, and it's useful if you don't use any [concurrency controls](#concurrency-controls) and want to disable it or if you run multiple dispatchers and want some of them to just dispatch jobs without doing anything else.
+- `recurring_tasks`: a list of recurring tasks the dispatcher will manage. Read more details about this one in the [Recurring tasks](#recurring-tasks) section.
 
 
 ### Queue order and priorities
@@ -131,7 +135,7 @@ We recommend not mixing queue order with priorities but either choosing one or t
 
 ### Threads, processes and signals
 
-Workers in Solid Queue use a thread pool to run work in multiple threads, configurable via the `threads` parameter above. Besides this, parallelism can be achieved via multiple processes
+Workers in Solid Queue use a thread pool to run work in multiple threads, configurable via the `threads` parameter above. Besides this, parallelism can be achieved via multiple processes on one machine (configurable via different workers or the `processes` parameter above) or by horizontal scaling.
 
 The supervisor is in charge of managing these processes, and it responds to the following signals:
 - `TERM`, `INT`: starts graceful termination. The supervisor will send a `TERM` signal to its supervised processes, and it'll wait up to `SolidQueue.shutdown_timeout` time until they're done. If any supervised processes are still around by then, it'll send a `QUIT` signal to them to indicate they must exit.
@@ -142,6 +146,7 @@ When receiving a `QUIT` signal, if workers still have jobs in-flight, these will
 If processes have no chance of cleaning up before exiting (e.g. if someone pulls a cable somewhere), in-flight jobs might remain claimed by the processes executing them. Processes send heartbeats, and the supervisor checks and prunes processes with expired heartbeats, which will release any claimed jobs back to their queues. You can configure both the frequency of heartbeats and the threshold to consider a process dead. See the section below for this.
 
 ### Other configuration settings
+_Note_: The settings in this section should be set in your `config/application.rb` or your environment config like this: `config.solid_queue.silence_polling = true`
 
 There are several settings that control how Solid Queue works that you can set as well:
 - `logger`: the logger you want Solid Queue to use. Defaults to the app logger.
@@ -161,7 +166,7 @@ There are several settings that control how Solid Queue works that you can set a
 - `process_heartbeat_interval`: the heartbeat interval that all processes will follow—defaults to 60 seconds.
 - `process_alive_threshold`: how long to wait until a process is considered dead after its last heartbeat—defaults to 5 minutes.
 - `shutdown_timeout`: time the supervisor will wait since it sent the `TERM` signal to its supervised processes before sending a `QUIT` version to them requesting immediate termination—defaults to 5 seconds.
-- `silence_polling`: whether to silence Active Record logs emitted when polling for both workers and dispatchers—defaults to `
+- `silence_polling`: whether to silence Active Record logs emitted when polling for both workers and dispatchers—defaults to `true`.
 - `supervisor_pidfile`: path to a pidfile that the supervisor will create when booting to prevent running more than one supervisor in the same host, or in case you want to use it for a health check. It's `nil` by default.
 - `preserve_finished_jobs`: whether to keep finished jobs in the `solid_queue_jobs` table—defaults to `true`.
 - `clear_finished_jobs_after`: period to keep finished jobs around, in case `preserve_finished_jobs` is true—defaults to 1 day. **Note:** Right now, there's no automatic cleanup of finished jobs. You'd need to do this by periodically invoking `SolidQueue::Job.clear_finished_in_batches`, but this will happen automatically in the near future.
@@ -228,8 +233,7 @@ failed_execution.retry # This will re-enqueue the job as if it was enqueued for
 failed_execution.discard # This will delete the job from the system
 ```
 
-
-
+However, we recommend taking a look at [mission_control-jobs](https://github.com/basecamp/mission_control-jobs), a dashboard where, among other things, you can examine and retry/discard failed jobs.
 
 ## Puma plugin
 We provide a Puma plugin if you want to run the Solid Queue's supervisor together with Puma and have Puma monitor and manage it. You just need to add
@@ -263,3 +267,48 @@ Solid Queue has been inspired by [resque](https://github.com/resque/resque) and
 
 ## License
 The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
+
+## Recurring tasks
+Solid Queue supports defining recurring tasks that run at specific times in the future, on a regular basis like cron jobs. These are managed by dispatcher processes and, as such, they can be defined in the dispatcher's configuration like this:
+```yml
+dispatchers:
+  - polling_interval: 1
+    batch_size: 500
+    recurring_tasks:
+      my_periodic_job:
+        class: MyJob
+        args: [ 42, { status: "custom_status" } ]
+        schedule: every second
+```
+`recurring_tasks` is a hash/dictionary, and the key will be the task key internally. Each task needs to have a class, which will be the job class to enqueue, and a schedule. The schedule is parsed using [Fugit](https://github.com/floraison/fugit), so it accepts anything [that Fugit accepts as a cron](https://github.com/floraison/fugit?tab=readme-ov-file#fugitcron). You can also provide arguments to be passed to the job, as a single argument, a hash, or an array of arguments that can also include kwargs as the last element in the array.
+
+The job in the example configuration above will be enqueued every second as:
+```ruby
+MyJob.perform_later(42, status: "custom_status")
+```
+
+Tasks are enqueued at their corresponding times by the dispatcher that owns them, and each task schedules the next one. This is pretty much [inspired by what GoodJob does](https://github.com/bensheldon/good_job/blob/994ecff5323bf0337e10464841128fda100750e6/lib/good_job/cron_manager.rb).
+
+It's possible to run multiple dispatchers with the same `recurring_tasks` configuration. To avoid enqueuing duplicate tasks at the same time, an entry in a new `solid_queue_recurring_executions` table is created in the same transaction as the job is enqueued. This table has a unique index on `task_key` and `run_at`, ensuring only one entry per task per time will be created. This only works if you have `preserve_finished_jobs` set to `true` (the default), and the guarantee applies as long as you keep the jobs around.
+
+Finally, it's possible to configure jobs that aren't handled by Solid Queue. That is, you can have a job like this in your app:
+```ruby
+class MyResqueJob < ApplicationJob
+  self.queue_adapter = :resque
+
+  def perform(arg)
+    # ..
+  end
+end
+```
+
+You can still configure this in Solid Queue:
+```yml
+dispatchers:
+  - recurring_tasks:
+      my_periodic_resque_job:
+        class: MyResqueJob
+        args: 22
+        schedule: "*/5 * * * *"
+```
+and the job will be enqueued via `perform_later` so it'll run in Resque. However, in this case we won't track any `solid_queue_recurring_execution` record for it and there won't be any guarantees that the job is enqueued only once each time.
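The `clear_finished_jobs_after` note above points out that cleanup of finished jobs isn't automatic yet and has to be triggered by calling `SolidQueue::Job.clear_finished_in_batches` periodically. A minimal sketch of one way to do that, assuming the defaults described in the README (the job class and queue name below are hypothetical, not part of the gem):

```ruby
# Hypothetical housekeeping job: deletes finished jobs older than
# SolidQueue.clear_finished_jobs_after, in batches of 500 (the defaults).
class ClearSolidQueueFinishedJobsJob < ApplicationJob
  queue_as :background

  def perform
    SolidQueue::Job.clear_finished_in_batches
  end
end
```

It could be run from cron, or scheduled with the recurring tasks support introduced in this release, e.g. with `schedule: "0 * * * *"` to run hourly.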
data/app/models/solid_queue/blocked_execution.rb
CHANGED
@@ -30,10 +30,10 @@ module SolidQueue
 
   private
     def releasable(concurrency_keys)
-      semaphores = Semaphore.where(key: concurrency_keys).
+      semaphores = Semaphore.where(key: concurrency_keys).pluck(:key, :value).to_h
 
      # Concurrency keys without semaphore + concurrency keys with open semaphore
-      (concurrency_keys - semaphores.keys) | semaphores.select { |
+      (concurrency_keys - semaphores.keys) | semaphores.select { |_key, value| value > 0 }.keys
    end
  end
 
@@ -43,7 +43,7 @@ module SolidQueue
     promote_to_ready
     destroy!
 
-    SolidQueue.logger.
+    SolidQueue.logger.debug("[SolidQueue] Unblocked job #{job.id} under #{concurrency_key}")
   end
  end
 end
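For clarity, the rewritten `releasable` expression can be illustrated with plain Ruby values; the concurrency keys below are hypothetical:

```ruby
concurrency_keys = [ "user/1", "user/2", "user/3" ]
semaphores       = { "user/1" => 0, "user/2" => 2 } # key => value, the shape returned by pluck(:key, :value).to_h

(concurrency_keys - semaphores.keys) | semaphores.select { |_key, value| value > 0 }.keys
# => ["user/3", "user/2"] (keys with no semaphore row, plus keys whose semaphore is still open)
```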
data/app/models/solid_queue/claimed_execution.rb
CHANGED
@@ -16,7 +16,6 @@ class SolidQueue::ClaimedExecution < SolidQueue::Execution
      insert_all!(job_data)
      where(job_id: job_ids, process_id: process_id).load.tap do |claimed|
        block.call(claimed)
-       SolidQueue.logger.info("[SolidQueue] Claimed #{claimed.size} jobs")
      end
    end
 
data/app/models/solid_queue/execution/dispatching.rb
CHANGED
@@ -10,7 +10,7 @@ module SolidQueue
        jobs = Job.where(id: job_ids)
 
        Job.dispatch_all(jobs).map(&:id).tap do |dispatched_job_ids|
-         where(job_id: dispatched_job_ids).delete_all
+         where(job_id: dispatched_job_ids).order(:job_id).delete_all
          SolidQueue.logger.info("[SolidQueue] Dispatched #{dispatched_job_ids.size} jobs")
        end
      end
data/app/models/solid_queue/job/clearable.rb
CHANGED
@@ -6,13 +6,13 @@ module SolidQueue
     extend ActiveSupport::Concern
 
     included do
-      scope :clearable, ->(finished_before: SolidQueue.clear_finished_jobs_after.ago) { where.not(finished_at: nil).where(finished_at: ...finished_before) }
+      scope :clearable, ->(finished_before: SolidQueue.clear_finished_jobs_after.ago, class_name: nil) { where.not(finished_at: nil).where(finished_at: ...finished_before).where(class_name.present? ? { class_name: class_name } : {}) }
     end
 
     class_methods do
-      def clear_finished_in_batches(batch_size: 500, finished_before: SolidQueue.clear_finished_jobs_after.ago)
+      def clear_finished_in_batches(batch_size: 500, finished_before: SolidQueue.clear_finished_jobs_after.ago, class_name: nil)
        loop do
-         records_deleted = clearable(finished_before: finished_before).limit(batch_size).delete_all
+         records_deleted = clearable(finished_before: finished_before, class_name: class_name).limit(batch_size).delete_all
          break if records_deleted == 0
        end
      end
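The new `class_name` argument allows pruning finished jobs of a single job class. A short usage sketch (the class name is hypothetical):

```ruby
# Delete only finished SnapshotJob records, 1,000 at a time.
SolidQueue::Job.clear_finished_in_batches(class_name: "SnapshotJob", batch_size: 1_000)
```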
data/app/models/solid_queue/job/executable.rb
CHANGED
@@ -6,7 +6,7 @@ module SolidQueue
     extend ActiveSupport::Concern
 
     included do
-      include
+      include ConcurrencyControls, Schedulable
 
       has_one :ready_execution
       has_one :claimed_execution
@@ -78,7 +78,7 @@ module SolidQueue
      end
 
      def finished!
-       if preserve_finished_jobs?
+       if SolidQueue.preserve_finished_jobs?
          touch(:finished_at)
        else
          destroy!
@@ -117,10 +117,6 @@ module SolidQueue
        def execution
          %w[ ready claimed failed ].reduce(nil) { |acc, status| acc || public_send("#{status}_execution") }
        end
-
-       def preserve_finished_jobs?
-         SolidQueue.preserve_finished_jobs
-       end
   end
  end
 end
data/app/models/solid_queue/recurring_execution.rb
ADDED
@@ -0,0 +1,26 @@
+# frozen_string_literal: true
+
+module SolidQueue
+  class RecurringExecution < Execution
+    scope :clearable, -> { where.missing(:job) }
+
+    class << self
+      def record(task_key, run_at, &block)
+        transaction do
+          if job_id = block.call
+            create!(job_id: job_id, task_key: task_key, run_at: run_at)
+          end
+        end
+      rescue ActiveRecord::RecordNotUnique
+        SolidQueue.logger.info("[SolidQueue] Skipped recurring task #{task_key} at #{run_at} — already dispatched")
+      end
+
+      def clear_in_batches(batch_size: 500)
+        loop do
+          records_deleted = clearable.limit(batch_size).delete_all
+          break if records_deleted == 0
+        end
+      end
+    end
+  end
+end
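A sketch of how `record` deduplicates when two dispatchers race to enqueue the same occurrence; the task key, job class and argument below are hypothetical:

```ruby
run_at = Time.utc(2024, 2, 20, 12, 0)

# First caller wins: the job is enqueued and a recurring execution row is created.
SolidQueue::RecurringExecution.record("my_periodic_job", run_at) do
  MyJob.perform_later(42).provider_job_id
end

# A second caller with the same key and run_at hits the unique [task_key, run_at]
# index: create! raises ActiveRecord::RecordNotUnique, the surrounding transaction
# rolls back (taking the duplicate job insert with it), and the rescue only logs the skip.
SolidQueue::RecurringExecution.record("my_periodic_job", run_at) do
  MyJob.perform_later(42).provider_job_id
end
```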
data/app/models/solid_queue/scheduled_execution.rb
CHANGED
@@ -5,7 +5,7 @@ module SolidQueue
     include Dispatching
 
     scope :due, -> { where(scheduled_at: ..Time.current) }
-    scope :ordered, -> { order(scheduled_at: :asc, priority: :asc) }
+    scope :ordered, -> { order(scheduled_at: :asc, priority: :asc, job_id: :asc) }
     scope :next_batch, ->(batch_size) { due.ordered.limit(batch_size) }
 
     assumes_attributes_from_job :scheduled_at
data/app/models/solid_queue/semaphore.rb
CHANGED
@@ -26,7 +26,6 @@ module SolidQueue
 
      def initialize(job)
        @job = job
-       @retries = 0
      end
 
      def wait
@@ -42,42 +41,26 @@ module SolidQueue
      end
 
      private
-       attr_accessor :job
+       attr_accessor :job
 
        def attempt_creation
          Semaphore.create!(key: key, value: limit - 1, expires_at: expires_at)
          true
        rescue ActiveRecord::RecordNotUnique
-
+         if limit == 1 then false
+         else
+           attempt_decrement
+         end
        end
 
        def attempt_decrement
          Semaphore.available.where(key: key).update_all([ "value = value - 1, expires_at = ?", expires_at ]) > 0
-       rescue ActiveRecord::Deadlocked
-         if retriable? then attempt_retry
-         else
-           raise
-         end
        end
 
        def attempt_increment
         Semaphore.where(key: key, value: ...limit).update_all([ "value = value + 1, expires_at = ?", expires_at ]) > 0
        end
 
-       def attempt_retry
-         self.retries += 1
-
-         if semaphore = Semaphore.find_by(key: key)
-           semaphore.value > 0 && attempt_decrement
-         end
-       end
-
-       MAX_RETRIES = 1
-
-       def retriable?
-         retries < MAX_RETRIES
-       end
-
        def key
         job.concurrency_key
        end
data/db/migrate/20240218110712_create_recurring_executions.rb
ADDED
@@ -0,0 +1,14 @@
+class CreateRecurringExecutions < ActiveRecord::Migration[7.1]
+  def change
+    create_table :solid_queue_recurring_executions do |t|
+      t.references :job, index: { unique: true }, null: false
+      t.string :task_key, null: false
+      t.datetime :run_at, null: false
+      t.datetime :created_at, null: false
+
+      t.index [ :task_key, :run_at ], unique: true
+    end
+
+    add_foreign_key :solid_queue_recurring_executions, :solid_queue_jobs, column: :job_id, on_delete: :cascade
+  end
+end
data/lib/solid_queue/configuration.rb
CHANGED
@@ -4,7 +4,7 @@ module SolidQueue
   class Configuration
     WORKER_DEFAULTS = {
       queues: "*",
-      threads:
+      threads: 3,
       processes: 1,
       polling_interval: 0.1
     }
@@ -12,7 +12,9 @@ module SolidQueue
     DISPATCHER_DEFAULTS = {
       batch_size: 500,
       polling_interval: 1,
-
+      concurrency_maintenance: true,
+      concurrency_maintenance_interval: 600,
+      recurring_tasks: []
     }
 
     def initialize(mode: :work, load_from: nil)
@@ -33,7 +35,7 @@ module SolidQueue
       if mode.in? %i[ work all]
         workers_options.flat_map do |worker_options|
           processes = worker_options.fetch(:processes, WORKER_DEFAULTS[:processes])
-          processes.times.
+          processes.times.map { Worker.new(**worker_options.with_defaults(WORKER_DEFAULTS)) }
         end
       else
         []
@@ -42,8 +44,10 @@ module SolidQueue
 
     def dispatchers
       if mode.in? %i[ dispatch all]
-        dispatchers_options.
-
+        dispatchers_options.map do |dispatcher_options|
+          recurring_tasks = parse_recurring_tasks dispatcher_options[:recurring_tasks]
+
+          Dispatcher.new **dispatcher_options.merge(recurring_tasks: recurring_tasks).with_defaults(DISPATCHER_DEFAULTS)
        end
      end
    end
@@ -73,6 +77,11 @@ module SolidQueue
         .map { |options| options.dup.symbolize_keys }
      end
 
+      def parse_recurring_tasks(tasks)
+        Array(tasks).map do |id, options|
+          Dispatcher::RecurringTask.from_configuration(id, **options)
+        end.select(&:valid?)
+      end
 
      def load_config_from(file_or_hash)
        case file_or_hash
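To make `parse_recurring_tasks` concrete: `Array()` on a hash yields `[key, options]` pairs, so each entry under `recurring_tasks` in the YAML becomes one `Dispatcher::RecurringTask`, and entries whose schedule Fugit can't parse are dropped by `select(&:valid?)`. A small sketch using the README's example values:

```ruby
tasks = {
  my_periodic_job: { class: "MyJob", args: [ 42, { status: "custom_status" } ], schedule: "every second" }
}

Array(tasks).map do |id, options|
  SolidQueue::Dispatcher::RecurringTask.from_configuration(id, **options)
end.select(&:valid?)
# => [ one RecurringTask with key :my_periodic_job ]
```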
data/lib/solid_queue/dispatcher/concurrency_maintenance.rb
ADDED
@@ -0,0 +1,44 @@
+# frozen_string_literal: true
+
+module SolidQueue
+  class Dispatcher::ConcurrencyMaintenance
+    include AppExecutor
+
+    attr_reader :interval, :batch_size
+
+    def initialize(interval, batch_size)
+      @interval = interval
+      @batch_size = batch_size
+    end
+
+    def start
+      @concurrency_maintenance_task = Concurrent::TimerTask.new(run_now: true, execution_interval: interval) do
+        expire_semaphores
+        unblock_blocked_executions
+      end
+
+      @concurrency_maintenance_task.add_observer do |_, _, error|
+        handle_thread_error(error) if error
+      end
+
+      @concurrency_maintenance_task.execute
+    end
+
+    def stop
+      @concurrency_maintenance_task.shutdown
+    end
+
+    private
+      def expire_semaphores
+        wrap_in_app_executor do
+          Semaphore.expired.in_batches(of: batch_size, &:delete_all)
+        end
+      end
+
+      def unblock_blocked_executions
+        wrap_in_app_executor do
+          BlockedExecution.unblock(batch_size)
+        end
+      end
+  end
+end
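A hedged sketch of how this object is driven; the dispatcher wires it up internally, and the numbers below are simply the `DISPATCHER_DEFAULTS` values (600-second interval, batches of 500):

```ruby
maintenance = SolidQueue::Dispatcher::ConcurrencyMaintenance.new(600, 500)
maintenance.start # runs once immediately (run_now: true), then every 600 seconds expires semaphores and unblocks blocked executions
# ...
maintenance.stop  # shuts the underlying Concurrent::TimerTask down
```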
data/lib/solid_queue/dispatcher/recurring_schedule.rb
ADDED
@@ -0,0 +1,56 @@
+# frozen_string_literal: true
+
+module SolidQueue
+  class Dispatcher::RecurringSchedule
+    include AppExecutor
+
+    attr_reader :configured_tasks, :scheduled_tasks
+
+    def initialize(tasks)
+      @configured_tasks = Array(tasks).map { |task| Dispatcher::RecurringTask.wrap(task) }
+      @scheduled_tasks = Concurrent::Hash.new
+    end
+
+    def load_tasks
+      configured_tasks.each do |task|
+        load_task(task)
+      end
+    end
+
+    def load_task(task)
+      scheduled_tasks[task.key] = schedule(task)
+    end
+
+    def unload_tasks
+      scheduled_tasks.values.each(&:cancel)
+      scheduled_tasks.clear
+    end
+
+    def tasks
+      configured_tasks.each_with_object({}) { |task, hsh| hsh[task.key] = task.to_h }
+    end
+
+    def inspect
+      configured_tasks.map(&:to_s).join(" | ")
+    end
+
+    private
+      def schedule(task)
+        scheduled_task = Concurrent::ScheduledTask.new(task.delay_from_now, args: [ self, task, task.next_time ]) do |thread_schedule, thread_task, thread_task_run_at|
+          thread_schedule.load_task(thread_task)
+
+          wrap_in_app_executor do
+            thread_task.enqueue(at: thread_task_run_at)
+          end
+        end
+
+        scheduled_task.add_observer do |_, _, error|
+          # Don't notify on task cancellation before execution, as this will happen normally
+          # as part of unloading tasks
+          handle_thread_error(error) if error && !error.is_a?(Concurrent::CancelledOperationError)
+        end
+
+        scheduled_task.tap(&:execute)
+      end
+  end
+end
data/lib/solid_queue/dispatcher/recurring_task.rb
ADDED
@@ -0,0 +1,85 @@
+require "fugit"
+
+module SolidQueue
+  class Dispatcher::RecurringTask
+    class << self
+      def wrap(args)
+        args.is_a?(self) ? args : from_configuration(args.first, **args.second)
+      end
+
+      def from_configuration(key, **options)
+        new(key, class_name: options[:class], schedule: options[:schedule], arguments: options[:args])
+      end
+    end
+
+    attr_reader :key, :schedule, :class_name, :arguments
+
+    def initialize(key, class_name:, schedule:, arguments: nil)
+      @key = key
+      @class_name = class_name
+      @schedule = schedule
+      @arguments = Array(arguments)
+    end
+
+    def delay_from_now
+      [ (next_time - Time.current).to_f, 0 ].max
+    end
+
+    def next_time
+      parsed_schedule.next_time.utc
+    end
+
+    def enqueue(at:)
+      if using_solid_queue_adapter?
+        perform_later_and_record(run_at: at)
+      else
+        perform_later
+      end
+    end
+
+    def valid?
+      parsed_schedule.instance_of?(Fugit::Cron)
+    end
+
+    def to_s
+      "#{class_name}.perform_later(#{arguments.map(&:inspect).join(",")}) [ #{parsed_schedule.original.to_s} ]"
+    end
+
+    def to_h
+      {
+        schedule: schedule,
+        class_name: class_name,
+        arguments: arguments
+      }
+    end
+
+    private
+      def using_solid_queue_adapter?
+        job_class.queue_adapter_name.inquiry.solid_queue?
+      end
+
+      def perform_later_and_record(run_at:)
+        RecurringExecution.record(key, run_at) { perform_later.provider_job_id }
+      end
+
+      def perform_later
+        job_class.perform_later(*arguments_with_kwargs)
+      end
+
+      def arguments_with_kwargs
+        if arguments.last.is_a?(Hash)
+          arguments[0...-1] + [ Hash.ruby2_keywords_hash(arguments.last) ]
+        else
+          arguments
+        end
+      end
+
+      def parsed_schedule
+        @parsed_schedule ||= Fugit.parse(schedule)
+      end
+
+      def job_class
+        @job_class ||= class_name.safe_constantize
+      end
+  end
+end