solid_queue 0.2.2 → 0.3.1

Files changed (42)
  1. checksums.yaml +4 -4
  2. data/README.md +60 -7
  3. data/app/models/solid_queue/blocked_execution.rb +16 -10
  4. data/app/models/solid_queue/claimed_execution.rb +11 -5
  5. data/app/models/solid_queue/execution/dispatching.rb +2 -3
  6. data/app/models/solid_queue/execution.rb +32 -15
  7. data/app/models/solid_queue/failed_execution.rb +10 -6
  8. data/app/models/solid_queue/job/clearable.rb +3 -3
  9. data/app/models/solid_queue/job/executable.rb +3 -7
  10. data/app/models/solid_queue/job/recurrable.rb +13 -0
  11. data/app/models/solid_queue/job/schedulable.rb +1 -1
  12. data/app/models/solid_queue/job.rb +1 -1
  13. data/app/models/solid_queue/process/prunable.rb +6 -5
  14. data/app/models/solid_queue/process.rb +13 -6
  15. data/app/models/solid_queue/recurring_execution.rb +26 -0
  16. data/app/models/solid_queue/scheduled_execution.rb +3 -1
  17. data/app/models/solid_queue/semaphore.rb +1 -1
  18. data/db/migrate/20240218110712_create_recurring_executions.rb +14 -0
  19. data/lib/active_job/queue_adapters/solid_queue_adapter.rb +4 -0
  20. data/lib/generators/solid_queue/install/templates/config.yml +1 -1
  21. data/lib/puma/plugin/solid_queue.rb +1 -0
  22. data/lib/solid_queue/app_executor.rb +1 -1
  23. data/lib/solid_queue/configuration.rb +14 -5
  24. data/lib/solid_queue/dispatcher/concurrency_maintenance.rb +44 -0
  25. data/lib/solid_queue/dispatcher/recurring_schedule.rb +56 -0
  26. data/lib/solid_queue/dispatcher/recurring_task.rb +91 -0
  27. data/lib/solid_queue/dispatcher.rb +24 -39
  28. data/lib/solid_queue/engine.rb +4 -2
  29. data/lib/solid_queue/log_subscriber.rb +164 -0
  30. data/lib/solid_queue/processes/base.rb +13 -14
  31. data/lib/solid_queue/processes/callbacks.rb +19 -0
  32. data/lib/solid_queue/processes/interruptible.rb +1 -1
  33. data/lib/solid_queue/processes/poller.rb +34 -4
  34. data/lib/solid_queue/processes/registrable.rb +9 -28
  35. data/lib/solid_queue/processes/runnable.rb +33 -47
  36. data/lib/solid_queue/processes/signals.rb +1 -1
  37. data/lib/solid_queue/processes/supervised.rb +4 -0
  38. data/lib/solid_queue/supervisor.rb +25 -24
  39. data/lib/solid_queue/version.rb +1 -1
  40. data/lib/solid_queue/worker.rb +15 -16
  41. data/lib/solid_queue.rb +27 -20
  42. metadata +129 -9
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: d907fb9133f2c72a61b586038f05b248a29e7c55f506aa541ca39f35a15d40ff
-   data.tar.gz: 887ecd7d0a15159ae10845466993699165a531367dbb21151e3ae704e56fa8e9
+   metadata.gz: 490264bacfa32881edc69e3fa349081bfa1666785db38c7df55824ab6bd538b4
+   data.tar.gz: b12570f172afad018ac4709c7203276c0ba737ff90654cae02cadc3197be0357
  SHA512:
-   metadata.gz: 665bb353c9cc8c557952ca5e2f63120b0541eef7a20008b6c288147e0350564200be2e6272f9924448867b9b2d09a7e48dc5b2267ec35b91f5dcd38bc409fff1
-   data.tar.gz: 9e911e1a270b5da57f75011e40ad6949480a8b87a2eb391934697159701273e1f05da9ac5c114e8d9dac492bb7685f6884eacac592ed44a460f513fd7349f85a
+   metadata.gz: b5909c1a726457b029d0de62bd22bf6929d84ebcddf35a56a8a0af10a0be7108ca1eb60ed48294ef77013f03d9e3c0f9b87a34d1e54ec966742abf79f5edd41f
+   data.tar.gz: daec7288a9652399f5dd75c0c6d8a5d5def60f9772a122a8b6212a675bb2d5e6ccee831156afd59b624204c694276f5011df64a122df7d44b578e848798cb5fe
data/README.md CHANGED
@@ -2,7 +2,7 @@

  Solid Queue is a DB-based queuing backend for [Active Job](https://edgeguides.rubyonrails.org/active_job_basics.html), designed with simplicity and performance in mind.

- Besides regular job enqueuing and processing, Solid Queue supports delayed jobs, concurrency controls, pausing queues, numeric priorities per job, priorities by queue order, and bulk enqueuing (`enqueue_all` for Active Job's `perform_all_later`). _Improvements to logging and instrumentation, a better CLI tool, a way to run within an existing process in "async" mode, unique jobs and recurring, cron-like tasks are coming very soon._
+ Besides regular job enqueuing and processing, Solid Queue supports delayed jobs, concurrency controls, pausing queues, numeric priorities per job, priorities by queue order, and bulk enqueuing (`enqueue_all` for Active Job's `perform_all_later`). _Improvements to logging and instrumentation, a better CLI tool, a way to run within an existing process in "async" mode, and some way of specifying unique jobs are coming very soon._

  Solid Queue can be used with SQL databases such as MySQL, PostgreSQL or SQLite, and it leverages the `FOR UPDATE SKIP LOCKED` clause, if available, to avoid blocking and waiting on locks when polling jobs. It relies on Active Job for retries, discarding, error handling, serialization, or delays, and it's compatible with Ruby on Rails multi-threading.

@@ -66,6 +66,8 @@ $ bundle exec rake solid_queue:start

  This will start processing jobs in all queues using the default configuration. See [below](#configuration) to learn more about configuring Solid Queue.

+ For small projects, you can run Solid Queue on the same machine as your webserver. When you're ready to scale, Solid Queue supports horizontal scaling out-of-the-box. You can run Solid Queue on a separate server from your webserver, or even run `bundle exec rake solid_queue:start` on multiple machines at the same time. If you'd like to designate some machines to be only dispatchers or only workers, use `bundle exec rake solid_queue:dispatch` or `bundle exec rake solid_queue:work`, respectively.
+
  ## Requirements
  Besides Rails 7.1, Solid Queue works best with MySQL 8+ or PostgreSQL 9.5+, as they support `FOR UPDATE SKIP LOCKED`. You can use it with older versions, but in that case, you might run into lock waits if you run multiple workers for the same queue.

@@ -75,7 +77,7 @@ Besides Rails 7.1, Solid Queue works best with MySQL 8+ or PostgreSQL 9.5+, as t

  We have three types of processes in Solid Queue:
  - _Workers_ are in charge of picking jobs ready to run from queues and processing them. They work off the `solid_queue_ready_executions` table.
- - _Dispatchers_ are in charge of selecting jobs scheduled to run in the future that are due and _dispatching_ them, which is simply moving them from the `solid_queue_scheduled_executions` table over to the `solid_queue_ready_executions` table so that workers can pick them up. They also do some maintenance work related to concurrency controls.
+ - _Dispatchers_ are in charge of selecting jobs scheduled to run in the future that are due and _dispatching_ them, which is simply moving them from the `solid_queue_scheduled_executions` table over to the `solid_queue_ready_executions` table so that workers can pick them up. They're also in charge of managing [recurring tasks](#recurring-tasks), dispatching jobs to process them according to their schedule. On top of that, they do some maintenance work related to [concurrency controls](#concurrency-controls).
  - The _supervisor_ forks workers and dispatchers according to the configuration, controls their heartbeats, and sends them signals to stop and start them when needed.

  By default, Solid Queue will try to find your configuration under `config/solid_queue.yml`, but you can set a different path using the environment variable `SOLID_QUEUE_CONFIG`. This is what this configuration looks like:
@@ -115,8 +117,10 @@ Everything is optional. If no configuration is provided, Solid Queue will run wi
  This will create a worker fetching jobs from all queues starting with `staging`. The wildcard `*` is only allowed on its own or at the end of a queue name; you can't specify queue names such as `*_some_queue`. These will be ignored.

  Finally, you can combine prefixes with exact names, like `[ staging*, background ]`, and the behaviour with respect to order will be the same as with only exact names.
- - `threads`: this is the max size of the thread pool that each worker will have to run jobs. Each worker will fetch this number of jobs from their queue(s), at most, and will post them to the thread pool to be run. By default, this is `5`. Only workers have this setting.
+ - `threads`: this is the max size of the thread pool that each worker will have to run jobs. Each worker will fetch this number of jobs from their queue(s), at most, and will post them to the thread pool to be run. By default, this is `3`. Only workers have this setting.
  - `processes`: this is the number of worker processes that will be forked by the supervisor with the settings given. By default, this is `1`, just a single process. This setting is useful if you want to dedicate more than one CPU core to a queue or queues with the same configuration. Only workers have this setting.
+ - `concurrency_maintenance`: whether the dispatcher will perform the concurrency maintenance work. This is `true` by default, and it's useful if you don't use any [concurrency controls](#concurrency-controls) and want to disable it, or if you run multiple dispatchers and want some of them to just dispatch jobs without doing anything else.
+ - `recurring_tasks`: a list of recurring tasks the dispatcher will manage. Read more details about this one in the [Recurring tasks](#recurring-tasks) section.


  ### Queue order and priorities
@@ -131,7 +135,7 @@ We recommend not mixing queue order with priorities but either choosing one or t

  ### Threads, processes and signals

- Workers in Solid Queue use a thread pool to run work in multiple threads, configurable via the `threads` parameter above. Besides this, parallelism can be achieved via multiple processes, configurable via different workers or the `processes` parameter above.
+ Workers in Solid Queue use a thread pool to run work in multiple threads, configurable via the `threads` parameter above. Besides this, parallelism can be achieved via multiple processes on one machine (configurable via different workers or the `processes` parameter above) or by horizontal scaling.

  The supervisor is in charge of managing these processes, and it responds to the following signals:
  - `TERM`, `INT`: starts graceful termination. The supervisor will send a `TERM` signal to its supervised processes, and it'll wait up to `SolidQueue.shutdown_timeout` time until they're done. If any supervised processes are still around by then, it'll send a `QUIT` signal to them to indicate they must exit.
@@ -167,6 +171,7 @@ There are several settings that control how Solid Queue works that you can set a
  - `preserve_finished_jobs`: whether to keep finished jobs in the `solid_queue_jobs` table—defaults to `true`.
  - `clear_finished_jobs_after`: period to keep finished jobs around, in case `preserve_finished_jobs` is true—defaults to 1 day. **Note:** Right now, there's no automatic cleanup of finished jobs. You'd need to do this by periodically invoking `SolidQueue::Job.clear_finished_in_batches`, but this will happen automatically in the near future.
  - `default_concurrency_control_period`: the value to be used as the default for the `duration` parameter in [concurrency controls](#concurrency-controls). It defaults to 3 minutes.
+ - `enqueue_after_transaction_commit`: whether the job queuing is deferred to after the current Active Record transaction is committed. The default is `false`. [Read more](https://github.com/rails/rails/pull/51426).


  ## Concurrency controls
@@ -229,7 +234,7 @@ failed_execution.retry # This will re-enqueue the job as if it was enqueued for
  failed_execution.discard # This will delete the job from the system
  ```

- However, we recommend taking a look at [mission_control-jobs](https://github.com/basecamp/mission_control-jobs), a dashboard where, among other things, you can examine and retry/discard failed jobs.
+ However, we recommend taking a look at [mission_control-jobs](https://github.com/rails/mission_control-jobs), a dashboard where, among other things, you can examine and retry/discard failed jobs.

  ## Puma plugin
  We provide a Puma plugin if you want to run the Solid Queue's supervisor together with Puma and have Puma monitor and manage it. You just need to add
@@ -242,9 +247,12 @@ to your `puma.rb` configuration.
  ## Jobs and transactional integrity
  :warning: Having your jobs in the same ACID-compliant database as your application data enables a powerful yet sharp tool: taking advantage of transactional integrity to ensure some action in your app is not committed unless your job is also committed. This can be very powerful and useful, but it can also backfire if you base some of your logic on this behaviour, and in the future, you move to another active job backend, or if you simply move Solid Queue to its own database, and suddenly the behaviour changes under you.

+ By default, Solid Queue runs in the same DB as your app, and job enqueuing is _not_ deferred until any ongoing transaction is committed, which means that by default, you'll be taking advantage of this transactional integrity.
+
  If you prefer not to rely on this, or avoid relying on it unintentionally, you should make sure that:
- - Your jobs relying on specific records are always enqueued on [`after_commit` callbacks](https://guides.rubyonrails.org/active_record_callbacks.html#after-commit-and-after-rollback) or otherwise from a place where you're certain that whatever data the job will use has been committed to the database before the job is enqueued.
- - Or, to opt out completely from this behaviour, configure a database for Solid Queue, even if it's the same as your app, ensuring that a different connection on the thread handling requests or running jobs for your app will be used to enqueue jobs. For example:
+ - You set [`config.active_job.enqueue_after_transaction_commit`](https://edgeguides.rubyonrails.org/configuring.html#config-active-job-enqueue-after-transaction-commit) to `always`, if you're using Rails 7.2+.
+ - Or, your jobs relying on specific records are always enqueued on [`after_commit` callbacks](https://guides.rubyonrails.org/active_record_callbacks.html#after-commit-and-after-rollback) or otherwise from a place where you're certain that whatever data the job will use has been committed to the database before the job is enqueued.
+ - Or, you configure a database for Solid Queue, even if it's the same as your app, ensuring that a different connection on the thread handling requests or running jobs for your app will be used to enqueue jobs. For example:

  ```ruby
  class ApplicationRecord < ActiveRecord::Base
@@ -257,6 +265,51 @@ If you prefer not to rely on this, or avoid relying on it unintentionally, you s
  config.solid_queue.connects_to = { database: { writing: :primary, reading: :replica } }
  ```

+ ## Recurring tasks
+ Solid Queue supports defining recurring tasks that run at specific times in the future, on a regular basis like cron jobs. These are managed by dispatcher processes and, as such, they can be defined in the dispatcher's configuration like this:
+ ```yml
+ dispatchers:
+   - polling_interval: 1
+     batch_size: 500
+     recurring_tasks:
+       my_periodic_job:
+         class: MyJob
+         args: [ 42, { status: "custom_status" } ]
+         schedule: every second
+ ```
+ `recurring_tasks` is a hash/dictionary, and the key will be the task key internally. Each task needs to have a class, which will be the job class to enqueue, and a schedule. The schedule is parsed using [Fugit](https://github.com/floraison/fugit), so it accepts anything [that Fugit accepts as a cron](https://github.com/floraison/fugit?tab=readme-ov-file#fugitcron). You can also provide arguments to be passed to the job, as a single argument, a hash, or an array of arguments that can also include kwargs as the last element in the array.
+
+ The job in the example configuration above will be enqueued every second as:
+ ```ruby
+ MyJob.perform_later(42, status: "custom_status")
+ ```
+
+ Tasks are enqueued at their corresponding times by the dispatcher that owns them, and each task schedules the next one. This is pretty much [inspired by what GoodJob does](https://github.com/bensheldon/good_job/blob/994ecff5323bf0337e10464841128fda100750e6/lib/good_job/cron_manager.rb).
+
+ It's possible to run multiple dispatchers with the same `recurring_tasks` configuration. To avoid enqueuing duplicate tasks at the same time, an entry in a new `solid_queue_recurring_executions` table is created in the same transaction as the job is enqueued. This table has a unique index on `task_key` and `run_at`, ensuring only one entry per task per time will be created. This only works if you have `preserve_finished_jobs` set to `true` (the default), and the guarantee applies as long as you keep the jobs around.
+
+ Finally, it's possible to configure jobs that aren't handled by Solid Queue. That is, you can have a job like this in your app:
+ ```ruby
+ class MyResqueJob < ApplicationJob
+   self.queue_adapter = :resque
+
+   def perform(arg)
+     # ..
+   end
+ end
+ ```
+
+ You can still configure this in Solid Queue:
+ ```yml
+ dispatchers:
+   - recurring_tasks:
+       my_periodic_resque_job:
+         class: MyResqueJob
+         args: 22
+         schedule: "*/5 * * * *"
+ ```
+ and the job will be enqueued via `perform_later`, so it'll run in Resque. However, in this case we won't track any `solid_queue_recurring_execution` record for it and there won't be any guarantees that the job is enqueued only once each time.
+

  ## Inspiration

  Solid Queue has been inspired by [resque](https://github.com/resque/resque) and [GoodJob](https://github.com/bensheldon/good_job). We recommend checking out these projects as they're great examples from which we've learnt a lot.
data/app/models/solid_queue/blocked_execution.rb CHANGED
@@ -10,21 +10,25 @@ module SolidQueue
      scope :expired, -> { where(expires_at: ...Time.current) }

      class << self
-       def unblock(count)
-         expired.distinct.limit(count).pluck(:concurrency_key).then do |concurrency_keys|
-           release_many releasable(concurrency_keys)
+       def unblock(limit)
+         SolidQueue.instrument(:release_many_blocked, limit: limit) do |payload|
+           expired.distinct.limit(limit).pluck(:concurrency_key).then do |concurrency_keys|
+             payload[:size] = release_many releasable(concurrency_keys)
+           end
          end
        end

        def release_many(concurrency_keys)
          # We want to release exactly one blocked execution for each concurrency key, and we need to do it
          # one by one, locking each record and acquiring the semaphore individually for each of them:
-         Array(concurrency_keys).each { |concurrency_key| release_one(concurrency_key) }
+         Array(concurrency_keys).count { |concurrency_key| release_one(concurrency_key) }
        end

        def release_one(concurrency_key)
          transaction do
-           ordered.where(concurrency_key: concurrency_key).limit(1).non_blocking_lock.each(&:release)
+           if execution = ordered.where(concurrency_key: concurrency_key).limit(1).non_blocking_lock.first
+             execution.release
+           end
          end
        end

@@ -38,12 +42,14 @@ module SolidQueue
      end

      def release
-       transaction do
-         if acquire_concurrency_lock
-           promote_to_ready
-           destroy!
+       SolidQueue.instrument(:release_blocked, job_id: job.id, concurrency_key: concurrency_key, released: false) do |payload|
+         transaction do
+           if acquire_concurrency_lock
+             promote_to_ready
+             destroy!

-             SolidQueue.logger.info("[SolidQueue] Unblocked job #{job.id} under #{concurrency_key}")
+             payload[:released] = true
+           end
          end
        end
      end
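The `each` → `count` switch in `release_many` above is what lets the new instrumentation payload report a size: `Enumerable#count` with a block returns how many elements yielded a truthy value, while `each` just returns the receiver. A minimal plain-Ruby sketch of that difference; `release_one` below is a hypothetical stand-in, not Solid Queue's real method:

```ruby
# Hypothetical stand-in for BlockedExecution.release_one, which is truthy
# only when a blocked execution was actually released.
def release_one(concurrency_key)
  concurrency_key.start_with?("releasable") # assumed success condition
end

keys = ["releasable/1", "blocked/2", "releasable/3"]

keys.each { |key| release_one(key) }             # returns keys itself: no useful size
released = keys.count { |key| release_one(key) } # => 2, the number released

puts released
```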
data/app/models/solid_queue/claimed_execution.rb CHANGED
@@ -16,12 +16,16 @@ class SolidQueue::ClaimedExecution < SolidQueue::Execution
        insert_all!(job_data)
        where(job_id: job_ids, process_id: process_id).load.tap do |claimed|
          block.call(claimed)
-         SolidQueue.logger.info("[SolidQueue] Claimed #{claimed.size} jobs")
        end
      end

      def release_all
-       includes(:job).each(&:release)
+       SolidQueue.instrument(:release_many_claimed) do |payload|
+         includes(:job).tap do |executions|
+           payload[:size] = executions.size
+           executions.each(&:release)
+         end
+       end
      end

      def discard_all_in_batches(*)
@@ -46,9 +50,11 @@ class SolidQueue::ClaimedExecution < SolidQueue::Execution
    end

    def release
-     transaction do
-       job.dispatch_bypassing_concurrency_limits
-       destroy!
+     SolidQueue.instrument(:release_claimed, job_id: job.id, process_id: process_id) do
+       transaction do
+         job.dispatch_bypassing_concurrency_limits
+         destroy!
+       end
      end
    end

data/app/models/solid_queue/execution/dispatching.rb CHANGED
@@ -9,9 +9,8 @@ module SolidQueue
      def dispatch_jobs(job_ids)
        jobs = Job.where(id: job_ids)

-       Job.dispatch_all(jobs).map(&:id).tap do |dispatched_job_ids|
-         where(job_id: dispatched_job_ids).order(:job_id).delete_all
-         SolidQueue.logger.info("[SolidQueue] Dispatched #{dispatched_job_ids.size} jobs")
+       Job.dispatch_all(jobs).map(&:id).then do |dispatched_job_ids|
+         where(id: where(job_id: dispatched_job_ids).pluck(:id)).delete_all
        end
      end
    end
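The `tap` → `then` switch in `dispatch_jobs` above matters because the two return different things: `tap` yields the receiver and returns the receiver unchanged, while `then` returns the block's result, so `dispatch_jobs` can now hand back the row count computed inside the block (used elsewhere in this release as the instrumentation payload's size). A plain-Ruby sketch:

```ruby
# tap vs then: same receiver, same block, different return values.
dispatched_job_ids = [42, 43, 44]

with_tap  = dispatched_job_ids.tap  { |ids| ids.size } # => [42, 43, 44]
with_then = dispatched_job_ids.then { |ids| ids.size } # => 3

puts with_tap.inspect
puts with_then
```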
data/app/models/solid_queue/execution.rb CHANGED
@@ -13,6 +13,10 @@ module SolidQueue
      belongs_to :job

      class << self
+       def type
+         model_name.element.sub("_execution", "").to_sym
+       end
+
        def create_all_from_jobs(jobs)
          insert_all execution_data_from_jobs(jobs)
        end
@@ -27,25 +31,32 @@ module SolidQueue
        pending = count
        discarded = 0

-       loop do
-         transaction do
-           job_ids = limit(batch_size).order(:job_id).lock.pluck(:job_id)
+       SolidQueue.instrument(:discard_all, batch_size: batch_size, status: type, batches: 0, size: 0) do |payload|
+         loop do
+           transaction do
+             job_ids = limit(batch_size).order(:job_id).lock.pluck(:job_id)
+             discarded = discard_jobs job_ids

-           discard_jobs job_ids
-           discarded = where(job_id: job_ids).delete_all
-           pending -= discarded
-         end
+             where(job_id: job_ids).delete_all
+             pending -= discarded
+
+             payload[:size] += discarded
+             payload[:batches] += 1
+           end

-         break if pending <= 0 || discarded == 0
+           break if pending <= 0 || discarded == 0
+         end
        end
      end

      def discard_all_from_jobs(jobs)
-       transaction do
-         job_ids = lock_all_from_jobs(jobs)
+       SolidQueue.instrument(:discard_all, jobs_size: jobs.size, status: type) do |payload|
+         transaction do
+           job_ids = lock_all_from_jobs(jobs)

-         discard_jobs job_ids
-         where(job_id: job_ids).delete_all
+           payload[:size] = discard_jobs job_ids
+           where(job_id: job_ids).delete_all
+         end
        end
      end

@@ -59,10 +70,16 @@ module SolidQueue
        end
      end

+     def type
+       self.class.type
+     end
+
      def discard
-       with_lock do
-         job.destroy
-         destroy
+       SolidQueue.instrument(:discard, job_id: job_id, status: type) do
+         with_lock do
+           job.destroy
+           destroy
+         end
        end
      end
    end
data/app/models/solid_queue/failed_execution.rb CHANGED
@@ -11,15 +11,19 @@ module SolidQueue
      attr_accessor :exception

      def self.retry_all(jobs)
-       transaction do
-         dispatch_jobs lock_all_from_jobs(jobs)
+       SolidQueue.instrument(:retry_all, jobs_size: jobs.size) do |payload|
+         transaction do
+           payload[:size] = dispatch_jobs lock_all_from_jobs(jobs)
+         end
        end
      end

      def retry
-       with_lock do
-         job.prepare_for_execution
-         destroy!
+       SolidQueue.instrument(:retry, job_id: job.id) do
+         with_lock do
+           job.prepare_for_execution
+           destroy!
+         end
        end
      end

@@ -33,5 +37,5 @@ module SolidQueue
        self.error = { exception_class: exception.class.name, message: exception.message, backtrace: exception.backtrace }
      end
    end
-    end
+  end
  end
data/app/models/solid_queue/job/clearable.rb CHANGED
@@ -6,13 +6,13 @@ module SolidQueue
      extend ActiveSupport::Concern

      included do
-       scope :clearable, ->(finished_before: SolidQueue.clear_finished_jobs_after.ago) { where.not(finished_at: nil).where(finished_at: ...finished_before) }
+       scope :clearable, ->(finished_before: SolidQueue.clear_finished_jobs_after.ago, class_name: nil) { where.not(finished_at: nil).where(finished_at: ...finished_before).where(class_name.present? ? { class_name: class_name } : {}) }
      end

      class_methods do
-       def clear_finished_in_batches(batch_size: 500, finished_before: SolidQueue.clear_finished_jobs_after.ago)
+       def clear_finished_in_batches(batch_size: 500, finished_before: SolidQueue.clear_finished_jobs_after.ago, class_name: nil)
          loop do
-           records_deleted = clearable(finished_before: finished_before).limit(batch_size).delete_all
+           records_deleted = clearable(finished_before: finished_before, class_name: class_name).limit(batch_size).delete_all
            break if records_deleted == 0
          end
        end
data/app/models/solid_queue/job/executable.rb CHANGED
@@ -6,7 +6,7 @@ module SolidQueue
      extend ActiveSupport::Concern

      included do
-       include Clearable, ConcurrencyControls, Schedulable
+       include ConcurrencyControls, Schedulable

        has_one :ready_execution
        has_one :claimed_execution
@@ -78,7 +78,7 @@ module SolidQueue
      end

      def finished!
-       if preserve_finished_jobs?
+       if SolidQueue.preserve_finished_jobs?
          touch(:finished_at)
        else
          destroy!
@@ -93,7 +93,7 @@ module SolidQueue
        if finished?
          :finished
        elsif execution.present?
-         execution.model_name.element.sub("_execution", "").to_sym
+         execution.type
        end
      end

@@ -117,10 +117,6 @@ module SolidQueue
      def execution
        %w[ ready claimed failed ].reduce(nil) { |acc, status| acc || public_send("#{status}_execution") }
      end
-
-     def preserve_finished_jobs?
-       SolidQueue.preserve_finished_jobs
-     end
    end
  end
end
data/app/models/solid_queue/job/recurrable.rb ADDED
@@ -0,0 +1,13 @@
+ # frozen_string_literal: true
+
+ module SolidQueue
+   class Job
+     module Recurrable
+       extend ActiveSupport::Concern
+
+       included do
+         has_one :recurring_execution, dependent: :destroy
+       end
+     end
+   end
+ end
data/app/models/solid_queue/job/schedulable.rb CHANGED
@@ -8,7 +8,7 @@ module SolidQueue
      included do
        has_one :scheduled_execution

-       scope :scheduled, -> { where.not(finished_at: nil) }
+       scope :scheduled, -> { where(finished_at: nil) }
      end

      class_methods do
data/app/models/solid_queue/job.rb CHANGED
@@ -2,7 +2,7 @@

  module SolidQueue
    class Job < Record
-     include Executable
+     include Executable, Clearable, Recurrable

      serialize :arguments, coder: JSON

data/app/models/solid_queue/process/prunable.rb CHANGED
@@ -4,15 +4,16 @@ module SolidQueue::Process::Prunable
    extend ActiveSupport::Concern

    included do
-     scope :prunable, -> { where("last_heartbeat_at <= ?", SolidQueue.process_alive_threshold.ago) }
+     scope :prunable, -> { where(last_heartbeat_at: ..SolidQueue.process_alive_threshold.ago) }
    end

    class_methods do
      def prune
-       prunable.non_blocking_lock.find_in_batches(batch_size: 50) do |batch|
-         batch.each do |process|
-           SolidQueue.logger.info("[SolidQueue] Pruning dead process #{process.id} - #{process.metadata}")
-           process.deregister
+       SolidQueue.instrument :prune_processes, size: 0 do |payload|
+         prunable.non_blocking_lock.find_in_batches(batch_size: 50) do |batch|
+           payload[:size] += batch.size
+
+           batch.each { |process| process.deregister(pruned: true) }
          end
        end
      end
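The new `prunable` scope above relies on a beginless, end-inclusive range: Active Record turns `where(last_heartbeat_at: ..threshold)` into `last_heartbeat_at <= ?` in SQL, matching the raw-SQL version it replaces. A plain-Ruby sketch of the same range semantics with `Range#cover?`:

```ruby
# A beginless `..` range covers everything up to and including its endpoint,
# which is exactly the `<=` comparison the pruning scope needs.
cutoff = Time.now
prunable_range = (..cutoff)

puts prunable_range.cover?(cutoff - 60) # => true  (stale heartbeat: prunable)
puts prunable_range.cover?(cutoff)      # => true  (`..` includes the endpoint)
puts prunable_range.cover?(cutoff + 60) # => false (recent heartbeat: kept)
```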
data/app/models/solid_queue/process.rb CHANGED
@@ -12,17 +12,24 @@ class SolidQueue::Process < SolidQueue::Record
    after_destroy -> { claimed_executions.release_all }

    def self.register(**attributes)
-     create!(attributes.merge(last_heartbeat_at: Time.current))
+     SolidQueue.instrument :register_process, **attributes do
+       create!(attributes.merge(last_heartbeat_at: Time.current))
+     end
+   rescue Exception => error
+     SolidQueue.instrument :register_process, **attributes.merge(error: error)
+     raise
    end

    def heartbeat
      touch(:last_heartbeat_at)
    end

-   def deregister
-     destroy!
-   rescue Exception
-     SolidQueue.logger.error("[SolidQueue] Error deregistering process #{id} - #{metadata}")
-     raise
+   def deregister(pruned: false)
+     SolidQueue.instrument :deregister_process, process: self, pruned: pruned, claimed_size: claimed_executions.size do |payload|
+       destroy!
+     rescue Exception => error
+       payload[:error] = error
+       raise
+     end
    end
  end
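The rewritten `deregister` above uses a block-level `rescue`: since Ruby 2.6, a `do`/`end` block can carry its own `rescue` clause without an explicit `begin`, which lets the error be recorded on the instrumentation payload before being re-raised. A plain-Ruby sketch of the pattern; `instrument` below is a hypothetical stand-in for `SolidQueue.instrument`, not the real implementation:

```ruby
# Hypothetical stand-in: yields a mutable payload hash, like an
# instrumentation helper would.
def instrument(event)
  payload = {}
  yield payload
ensure
  puts "#{event} finished, payload: #{payload.inspect}"
end

begin
  instrument(:deregister_process) do |payload|
    raise "destroy! failed" # stands in for destroy! raising
  rescue => error
    payload[:error] = error.message # record the error on the payload...
    raise                           # ...but still let it propagate
  end
rescue RuntimeError => error
  puts "re-raised: #{error.message}"
end
```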
data/app/models/solid_queue/recurring_execution.rb ADDED
@@ -0,0 +1,26 @@
+ # frozen_string_literal: true
+
+ module SolidQueue
+   class RecurringExecution < Execution
+     scope :clearable, -> { where.missing(:job) }
+
+     class << self
+       def record(task_key, run_at, &block)
+         transaction do
+           block.call.tap do |active_job|
+             create!(job_id: active_job.provider_job_id, task_key: task_key, run_at: run_at)
+           end
+         end
+       rescue ActiveRecord::RecordNotUnique
+         # Task already dispatched
+       end
+
+       def clear_in_batches(batch_size: 500)
+         loop do
+           records_deleted = clearable.limit(batch_size).delete_all
+           break if records_deleted == 0
+         end
+       end
+     end
+   end
+ end
data/app/models/solid_queue/scheduled_execution.rb CHANGED
@@ -16,7 +16,9 @@ module SolidQueue
          job_ids = next_batch(batch_size).non_blocking_lock.pluck(:job_id)
          if job_ids.empty? then []
          else
-           dispatch_jobs(job_ids)
+           SolidQueue.instrument(:dispatch_scheduled, batch_size: batch_size) do |payload|
+             payload[:size] = dispatch_jobs(job_ids)
+           end
          end
        end
      end
data/app/models/solid_queue/semaphore.rb CHANGED
@@ -70,7 +70,7 @@ module SolidQueue
      end

      def limit
-       job.concurrency_limit
+       job.concurrency_limit || 1
      end
    end
  end
data/db/migrate/20240218110712_create_recurring_executions.rb ADDED
@@ -0,0 +1,14 @@
+ class CreateRecurringExecutions < ActiveRecord::Migration[7.1]
+   def change
+     create_table :solid_queue_recurring_executions do |t|
+       t.references :job, index: { unique: true }, null: false
+       t.string :task_key, null: false
+       t.datetime :run_at, null: false
+       t.datetime :created_at, null: false
+
+       t.index [ :task_key, :run_at ], unique: true
+     end
+
+     add_foreign_key :solid_queue_recurring_executions, :solid_queue_jobs, column: :job_id, on_delete: :cascade
+   end
+ end
data/lib/active_job/queue_adapters/solid_queue_adapter.rb CHANGED
@@ -8,6 +8,10 @@ module ActiveJob
    #
    #   Rails.application.config.active_job.queue_adapter = :solid_queue
    class SolidQueueAdapter
+     def enqueue_after_transaction_commit?
+       SolidQueue.enqueue_after_transaction_commit
+     end
+
      def enqueue(active_job) # :nodoc:
        SolidQueue::Job.enqueue(active_job)
      end
data/lib/generators/solid_queue/install/templates/config.yml CHANGED
@@ -4,7 +4,7 @@
  #   batch_size: 500
  # workers:
  #   - queues: "*"
- #     threads: 5
+ #     threads: 3
  #     processes: 1
  #     polling_interval: 0.1
  #
data/lib/puma/plugin/solid_queue.rb CHANGED
@@ -19,6 +19,7 @@ Puma::Plugin.create do
    end

    launcher.events.on_stopped { stop_solid_queue }
+   launcher.events.on_restart { stop_solid_queue }
  end

  private
data/lib/solid_queue/app_executor.rb CHANGED
@@ -11,7 +11,7 @@ module SolidQueue
    end

    def handle_thread_error(error)
-     SolidQueue.logger.error("[SolidQueue] #{error}")
+     SolidQueue.instrument(:thread_error, error: error)

      if SolidQueue.on_thread_error
        SolidQueue.on_thread_error.call(error)