solid_queue 0.3.3 → 0.4.0

Files changed (35)
  1. checksums.yaml +4 -4
  2. data/README.md +88 -14
  3. data/app/models/solid_queue/claimed_execution.rb +10 -3
  4. data/app/models/solid_queue/failed_execution.rb +37 -1
  5. data/app/models/solid_queue/job.rb +7 -0
  6. data/app/models/solid_queue/process/executor.rb +20 -0
  7. data/app/models/solid_queue/process/prunable.rb +15 -11
  8. data/app/models/solid_queue/process.rb +10 -9
  9. data/app/models/solid_queue/recurring_execution.rb +7 -3
  10. data/lib/puma/plugin/solid_queue.rb +39 -11
  11. data/lib/solid_queue/configuration.rb +18 -22
  12. data/lib/solid_queue/dispatcher/concurrency_maintenance.rb +1 -1
  13. data/lib/solid_queue/dispatcher/recurring_schedule.rb +4 -0
  14. data/lib/solid_queue/dispatcher/recurring_task.rb +14 -6
  15. data/lib/solid_queue/dispatcher.rb +7 -4
  16. data/lib/solid_queue/log_subscriber.rb +22 -13
  17. data/lib/solid_queue/processes/callbacks.rb +0 -7
  18. data/lib/solid_queue/processes/poller.rb +10 -11
  19. data/lib/solid_queue/processes/procline.rb +1 -1
  20. data/lib/solid_queue/processes/registrable.rb +9 -1
  21. data/lib/solid_queue/processes/runnable.rb +38 -7
  22. data/lib/solid_queue/processes/supervised.rb +2 -3
  23. data/lib/solid_queue/supervisor/async_supervisor.rb +44 -0
  24. data/lib/solid_queue/supervisor/fork_supervisor.rb +108 -0
  25. data/lib/solid_queue/supervisor/maintenance.rb +34 -0
  26. data/lib/solid_queue/{processes → supervisor}/pidfile.rb +2 -2
  27. data/lib/solid_queue/supervisor/pidfiled.rb +25 -0
  28. data/lib/solid_queue/supervisor/signals.rb +67 -0
  29. data/lib/solid_queue/supervisor.rb +32 -142
  30. data/lib/solid_queue/tasks.rb +1 -11
  31. data/lib/solid_queue/timer.rb +28 -0
  32. data/lib/solid_queue/version.rb +1 -1
  33. data/lib/solid_queue/worker.rb +4 -5
  34. metadata +13 -5
  35. data/lib/solid_queue/processes/signals.rb +0 -69
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 50d118a41d39dff4da741a1e7fb105b47571dd719b883fb53843e7b57f7aff66
- data.tar.gz: a1dc7ff9b71a07f41fab851ea0dd2f16170240e2a465098f98e75a96c33edb4b
+ metadata.gz: e12cccf2a1485f92c30925675e6a91bb36b063057390e57250a582440a046c8a
+ data.tar.gz: a1c857b509ff15124eed754cb8ed8613d95764f97e2f06758baf4031ea10d387
  SHA512:
- metadata.gz: f344e6ba1a395f014fc08437c1322eac9b3f5218874f4b824cacab042d20053ee9a5504601e37864e04ccc956fc1f08fe35979c9d4a7b19dc42cc53ae5502c56
- data.tar.gz: 27e1d8c00e2c68330d52e938bb00ecadaf246678e16b12b0e1269f8a4e8e3328a972b5a007a04d2839baf2edb480e9f0cae3b6099cd6009a75b11e85858ee6e2
+ metadata.gz: bf861243ec274d583b29222275303a3cb8383803291e911f8d2872ba110d92705978d31ceada27e246368410668fc7f2ba8a9bb368c19cdfff666ed74636ea3c
+ data.tar.gz: a6ccfc868515c105f67bbcbdcb9cda4b4ea790e77a61d16e8dd310e6e68244791f1fb2a0e693cec5c2af0c2ba93056b078269d97673884c4e80ade876d48fcc5
data/README.md CHANGED
@@ -66,7 +66,7 @@ $ bundle exec rake solid_queue:start
 
  This will start processing jobs in all queues using the default configuration. See [below](#configuration) to learn more about configuring Solid Queue.
 
- For small projects, you can run Solid Queue on the same machine as your webserver. When you're ready to scale, Solid Queue supports horizontal scaling out-of-the-box. You can run Solid Queue on a separate server from your webserver, or even run `bundle exec rake solid_queue:start` on multiple machines at the same time. If you'd like to designate some machines to be only dispatchers or only workers, use `bundle exec rake solid_queue:dispatch` or `bundle exec rake solid_queue:work`, respectively.
+ For small projects, you can run Solid Queue on the same machine as your webserver. When you're ready to scale, Solid Queue supports horizontal scaling out-of-the-box. You can run Solid Queue on a separate server from your webserver, or even run `bundle exec rake solid_queue:start` on multiple machines at the same time. Depending on the configuration, you can designate some machines to run only dispatchers or only workers. See the [configuration](#configuration) section for more details on this.
 
  ## Requirements
  Besides Rails 7.1, Solid Queue works best with MySQL 8+ or PostgreSQL 9.5+, as they support `FOR UPDATE SKIP LOCKED`. You can use it with older versions, but in that case, you might run into lock waits if you run multiple workers for the same queue.
@@ -75,10 +75,12 @@ Besides Rails 7.1, Solid Queue works best with MySQL 8+ or PostgreSQL 9.5+, as t
 
  ### Workers and dispatchers
 
- We have three types of processes in Solid Queue:
+ We have three types of actors in Solid Queue:
  - _Workers_ are in charge of picking jobs ready to run from queues and processing them. They work off the `solid_queue_ready_executions` table.
  - _Dispatchers_ are in charge of selecting jobs scheduled to run in the future that are due and _dispatching_ them, which is simply moving them from the `solid_queue_scheduled_executions` table over to the `solid_queue_ready_executions` table so that workers can pick them up. They're also in charge of managing [recurring tasks](#recurring-tasks), dispatching jobs to process them according to their schedule. On top of that, they do some maintenance work related to [concurrency controls](#concurrency-controls).
- - The _supervisor_ forks workers and dispatchers according to the configuration, controls their heartbeats, and sends them signals to stop and start them when needed.
+ - The _supervisor_ runs workers and dispatchers according to the configuration, controls their heartbeats, and stops and starts them when needed.
+
+ By default, Solid Queue runs in `fork` mode. This means the supervisor will fork a separate process for each supervised worker/dispatcher. There's also an `async` mode, where each worker and dispatcher runs as a thread of the supervisor process. This can be used with [the provided Puma plugin](#puma-plugin).
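
To make the two modes concrete, the sketch below shows the supervisor entry point that the Puma plugin (diffed further down in this release) calls for each mode; normally the rake tasks or the plugin invoke it for you, so treat this as an illustration only:

```ruby
# Sketch only: the same entry point drives both modes.
# Either:
SolidQueue::Supervisor.start(mode: :fork)   # default: forks one process per configured worker/dispatcher
# or:
SolidQueue::Supervisor.start(mode: :async)  # runs workers and dispatchers as threads of the calling process
```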
 
  By default, Solid Queue will try to find your configuration under `config/solid_queue.yml`, but you can set a different path using the environment variable `SOLID_QUEUE_CONFIG`. This is what this configuration looks like:
 
@@ -98,7 +100,18 @@ production:
      processes: 3
  ```
 
- Everything is optional. If no configuration is provided, Solid Queue will run with one dispatcher and one worker with default settings.
+ Everything is optional. If no configuration at all is provided, Solid Queue will run with one dispatcher and one worker with default settings. If you want to run only dispatchers or workers, you just need to include that section alone in the configuration. For example, with the following configuration:
+
+ ```yml
+ production:
+   dispatchers:
+     - polling_interval: 1
+       batch_size: 500
+       concurrency_maintenance_interval: 300
+ ```
+ the supervisor will run 1 dispatcher and no workers.
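
Conversely, a `workers`-only section runs only workers. The snippet below is an illustrative sketch rather than text from the gem's README; with it, the supervisor would run two worker processes polling all queues, and no dispatchers:

```yml
production:
  workers:
    - queues: "*"
      threads: 3
      processes: 2
      polling_interval: 0.1
```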
+
+ Here's an overview of the different options:
 
  - `polling_interval`: the time interval in seconds that workers and dispatchers will wait before checking for more jobs. This time defaults to `1` second for dispatchers and `0.1` seconds for workers.
  - `batch_size`: the dispatcher will dispatch jobs in batches of this size. The default is 500.
@@ -118,7 +131,7 @@ Everything is optional. If no configuration is provided, Solid Queue will run wi
 
  Finally, you can combine prefixes with exact names, like `[ staging*, background ]`, and the behaviour with respect to order will be the same as with only exact names.
  - `threads`: this is the max size of the thread pool that each worker will have to run jobs. Each worker will fetch this number of jobs from their queue(s), at most and will post them to the thread pool to be run. By default, this is `3`. Only workers have this setting.
- - `processes`: this is the number of worker processes that will be forked by the supervisor with the settings given. By default, this is `1`, just a single process. This setting is useful if you want to dedicate more than one CPU core to a queue or queues with the same configuration. Only workers have this setting.
+ - `processes`: this is the number of worker processes that will be forked by the supervisor with the settings given. By default, this is `1`, just a single process. This setting is useful if you want to dedicate more than one CPU core to a queue or queues with the same configuration. Only workers have this setting. **Note**: this option will be ignored if [running in `async` mode](#running-as-a-fork-or-asynchronously).
  - `concurrency_maintenance`: whether the dispatcher will perform the concurrency maintenance work. This is `true` by default, and it's useful if you don't use any [concurrency controls](#concurrency-controls) and want to disable it or if you run multiple dispatchers and want some of them to just dispatch jobs without doing anything else.
  - `recurring_tasks`: a list of recurring tasks the dispatcher will manage. Read more details about this one in the [Recurring tasks](#recurring-tasks) section.
 
@@ -145,6 +158,56 @@ When receiving a `QUIT` signal, if workers still have jobs in-flight, these will
 
  If processes have no chance of cleaning up before exiting (e.g. if someone pulls a cable somewhere), in-flight jobs might remain claimed by the processes executing them. Processes send heartbeats, and the supervisor checks and prunes processes with expired heartbeats, which will release any claimed jobs back to their queues. You can configure both the frequency of heartbeats and the threshold to consider a process dead. See the section below for this.
 
+
+ ### Dedicated database configuration
+
+ Solid Queue can be configured to run on a different database than the main application.
+
+ Configure the `connects_to` option in `config/application.rb` or your environment config, with the custom database configuration that will be used in the abstract `SolidQueue::Record` Active Record model.
+
+ ```ruby
+ # Use a separate DB for Solid Queue
+ config.solid_queue.connects_to = { database: { writing: :solid_queue_primary, reading: :solid_queue_replica } }
+ ```
+
+ Add the dedicated database configuration to `config/database.yml`, differentiating between the main app's database and the dedicated `solid_queue` database. Make sure to include the `migrations_paths` for the solid queue database. This is where migration files for Solid Queue tables will reside.
+
+ ```yml
+ default: &default
+   adapter: sqlite3
+   pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
+   timeout: 5000
+
+ solid_queue: &solid_queue
+   <<: *default
+   migrations_paths: db/solid_queue_migrate
+
+ development:
+   primary:
+     <<: *default
+     # ...
+   solid_queue_primary:
+     <<: *solid_queue
+     # ...
+   solid_queue_replica:
+     <<: *solid_queue
+     # ...
+ ```
+
+ Install migrations and specify the dedicated database name with the `DATABASE` option. This will create the Solid Queue migration files in a separate directory, matching the value provided in `migrations_paths` in `config/database.yml`.
+
+ ```bash
+ $ bin/rails solid_queue:install:migrations DATABASE=solid_queue
+ ```
+
+ Note: If you've already run the solid queue install command (`bin/rails generate solid_queue:install`), the migration files will have already been generated under the primary database's `db/migrate/` directory. You can remove these files and keep the ones generated by the database-specific migration installation above.
+
+ Finally, run the migrations:
+
+ ```bash
+ $ bin/rails db:migrate
+ ```
+
 
  ### Other configuration settings
  _Note_: The settings in this section should be set in your `config/application.rb` or your environment config like this: `config.solid_queue.silence_polling = true`
 
@@ -156,12 +219,6 @@ There are several settings that control how Solid Queue works that you can set a
  ```ruby
  -> (exception) { Rails.error.report(exception, handled: false) }
  ```
- - `connects_to`: a custom database configuration that will be used in the abstract `SolidQueue::Record` Active Record model. This is required to use a different database than the main app. For example:
-
- ```ruby
- # Use a separate DB for Solid Queue
- config.solid_queue.connects_to = { database: { writing: :solid_queue_primary, reading: :solid_queue_replica } }
- ```
  - `use_skip_locked`: whether to use `FOR UPDATE SKIP LOCKED` when performing locking reads. This will be automatically detected in the future, and for now, you'd only need to set this to `false` if your database doesn't support it. For MySQL, that'd be versions < 8, and for PostgreSQL, versions < 9.5. If you use SQLite, this has no effect, as writes are sequential.
  - `process_heartbeat_interval`: the heartbeat interval that all processes will follow—defaults to 60 seconds.
  - `process_alive_threshold`: how long to wait until a process is considered dead after its last heartbeat—defaults to 5 minutes.
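
As a quick illustration of how these settings are applied (a sketch only; the values shown simply echo the documented defaults and are not recommendations):

```ruby
# config/environments/production.rb: illustrative values, matching the stated defaults.
config.solid_queue.use_skip_locked = true                  # set to false only if your database lacks FOR UPDATE SKIP LOCKED
config.solid_queue.process_heartbeat_interval = 60.seconds
config.solid_queue.process_alive_threshold = 5.minutes
```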
@@ -173,6 +230,10 @@ There are several settings that control how Solid Queue works that you can set a
  - `default_concurrency_control_period`: the value to be used as the default for the `duration` parameter in [concurrency controls](#concurrency-controls). It defaults to 3 minutes.
  - `enqueue_after_transaction_commit`: whether the job queuing is deferred to after the current Active Record transaction is committed. The default is `false`. [Read more](https://github.com/rails/rails/pull/51426).
 
+ ## Errors when enqueuing
+ Solid Queue will raise a `SolidQueue::Job::EnqueueError` for any Active Record errors that happen when enqueuing a job. The reason for not raising `ActiveJob::EnqueueError` is that this one gets handled by Active Job, causing `perform_later` to return `false` and set `job.enqueue_error`, yielding the job to a block that you need to pass to `perform_later`. This works very well for your own jobs, but makes failure very hard to handle for jobs enqueued by Rails or other gems, such as `Turbo::Streams::BroadcastJob` or `ActiveStorage::AnalyzeJob`, because you don't control the call to `perform_later` in those cases.
+
+ In the case of recurring tasks, if such an error is raised when enqueuing the job corresponding to the task, it'll be handled and logged but it won't bubble up.
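
When you do control the call to `perform_later` yourself, handling this error can be as simple as the following sketch (illustrative only; `MyJob` and `some_arguments` are placeholders, and `Rails.error.report` is the same reporter used in the `on_thread_error` example above):

```ruby
# Illustrative sketch: rescuing enqueue failures at a call site you own.
begin
  MyJob.perform_later(some_arguments)
rescue SolidQueue::Job::EnqueueError => error
  Rails.error.report(error, handled: true) # or retry, enqueue later, etc.
end
```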
 
  ## Concurrency controls
  Solid Queue extends Active Job with concurrency controls, that allows you to limit how many jobs of a certain type or with certain arguments can run at the same time. When limited in this way, jobs will be blocked from running, and they'll stay blocked until another job finishes and unblocks them, or after the set expiry time (concurrency limit's _duration_) elapses. Jobs are never discarded or lost, only blocked.
@@ -184,7 +245,8 @@ class MyJob < ApplicationJob
  # ...
  ```
  - `key` is the only required parameter, and it can be a symbol, a string or a proc that receives the job arguments as parameters and will be used to identify the jobs that need to be limited together. If the proc returns an Active Record record, the key will be built from its class name and `id`.
- - `to` is `1` by default, and `duration` is set to `SolidQueue.default_concurrency_control_period` by default, which itself defaults to `3 minutes`, but that you can configure as well.
+ - `to` is `1` by default.
+ - `duration` is set to `SolidQueue.default_concurrency_control_period` by default, which itself defaults to `3 minutes`, but that you can configure as well.
  - `group` is used to control the concurrency of different job classes together. It defaults to the job class name.
 
  When a job includes these controls, we'll ensure that, at most, the number of jobs (indicated as `to`) that yield the same `key` will be performed concurrently, and this guarantee will last for `duration` for each job enqueued. Note that there's no guarantee about _the order of execution_, only about jobs being performed at the same time (overlapping).
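
For illustration, a declaration combining these parameters could look like the sketch below. The job class and key are made up for this example, and it assumes the `limits_concurrency` declaration that Solid Queue adds to Active Job:

```ruby
class MyJob < ApplicationJob
  # At most one MyJob per record returned by `key` runs at a time, for up to 5 minutes.
  limits_concurrency to: 1, key: ->(record) { record }, duration: 5.minutes

  def perform(record)
    # ...
  end
end
```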
@@ -243,6 +305,18 @@ plugin :solid_queue
  ```
  to your `puma.rb` configuration.
 
+ ### Running as a fork or asynchronously
+
+ By default, the Puma plugin will fork additional processes for each worker and dispatcher so that they run in different processes. This provides the best isolation and performance, but can have additional memory usage.
+
+ Alternatively, workers and dispatchers can be run within the same Puma process(es). To do so, just configure the plugin as:
+
+ ```ruby
+ plugin :solid_queue
+ solid_queue_mode :async
+ ```
+
+ Note that in this case, the `processes` configuration option will be ignored.
 
  ## Jobs and transactional integrity
  :warning: Having your jobs in the same ACID-compliant database as your application data enables a powerful yet sharp tool: taking advantage of transactional integrity to ensure some action in your app is not committed unless your job is also committed. This can be very powerful and useful, but it can also backfire if you base some of your logic on this behaviour, and in the future, you move to another active job backend, or if you simply move Solid Queue to its own database, and suddenly the behaviour changes under you.
@@ -252,7 +326,7 @@ By default, Solid Queue runs in the same DB as your app, and job enqueuing is _n
  If you prefer not to rely on this, or avoid relying on it unintentionally, you should make sure that:
  - You set [`config.active_job.enqueue_after_transaction_commit`](https://edgeguides.rubyonrails.org/configuring.html#config-active-job-enqueue-after-transaction-commit) to `always`, if you're using Rails 7.2+.
  - Or, your jobs relying on specific records are always enqueued on [`after_commit` callbacks](https://guides.rubyonrails.org/active_record_callbacks.html#after-commit-and-after-rollback) or otherwise from a place where you're certain that whatever data the job will use has been committed to the database before the job is enqueued.
- - Or, you configure a database for Solid Queue, even if it's the same as your app, ensuring that a different connection on the thread handling requests or running jobs for your app will be used to enqueue jobs. For example:
+ - Or, you configure a different database for Solid Queue, even if it's the same as your app, ensuring that a different connection on the thread handling requests or running jobs for your app will be used to enqueue jobs. For example:
 
  ```ruby
  class ApplicationRecord < ActiveRecord::Base
@@ -288,7 +362,7 @@ Tasks are enqueued at their corresponding times by the dispatcher that owns them
 
  It's possible to run multiple dispatchers with the same `recurring_tasks` configuration. To avoid enqueuing duplicate tasks at the same time, an entry in a new `solid_queue_recurring_executions` table is created in the same transaction as the job is enqueued. This table has a unique index on `task_key` and `run_at`, ensuring only one entry per task per time will be created. This only works if you have `preserve_finished_jobs` set to `true` (the default), and the guarantee applies as long as you keep the jobs around.
 
- Finally, it's possible to configure jobs that aren't handled by Solid Queue. That's it, you can a have a job like this in your app:
+ Finally, it's possible to configure jobs that aren't handled by Solid Queue. That is, you can have a job like this in your app:
  ```ruby
  class MyResqueJob < ApplicationJob
    self.queue_adapter = :resque
data/app/models/solid_queue/claimed_execution.rb CHANGED
@@ -3,6 +3,8 @@
  class SolidQueue::ClaimedExecution < SolidQueue::Execution
    belongs_to :process
 
+   scope :orphaned, -> { where.missing(:process) }
+
    class Result < Struct.new(:success, :error)
      def success?
        success
@@ -13,9 +15,14 @@ class SolidQueue::ClaimedExecution < SolidQueue::Execution
    def claiming(job_ids, process_id, &block)
      job_data = Array(job_ids).collect { |job_id| { job_id: job_id, process_id: process_id } }
 
-     insert_all!(job_data)
-     where(job_id: job_ids, process_id: process_id).load.tap do |claimed|
-       block.call(claimed)
+     SolidQueue.instrument(:claim, process_id: process_id, job_ids: job_ids) do |payload|
+       insert_all!(job_data)
+       where(job_id: job_ids, process_id: process_id).load.tap do |claimed|
+         block.call(claimed)
+
+         payload[:size] = claimed.size
+         payload[:claimed_job_ids] = claimed.map(&:job_id)
+       end
      end
    end
 
data/app/models/solid_queue/failed_execution.rb CHANGED
@@ -33,9 +33,45 @@ module SolidQueue
    end
 
    private
+     JSON_OVERHEAD = 256
+
      def expand_error_details_from_exception
        if exception
-         self.error = { exception_class: exception.class.name, message: exception.message, backtrace: exception.backtrace }
+         self.error = { exception_class: exception_class_name, message: exception_message, backtrace: exception_backtrace }
+       end
+     end
+
+     def exception_class_name
+       exception.class.name
+     end
+
+     def exception_message
+       exception.message
+     end
+
+     def exception_backtrace
+       if (limit = determine_backtrace_size_limit) && exception.backtrace.to_json.bytesize > limit
+         truncate_backtrace(exception.backtrace, limit)
+       else
+         exception.backtrace
+       end
+     end
+
+     def determine_backtrace_size_limit
+       column = self.class.connection.schema_cache.columns_hash(self.class.table_name)["error"]
+       if column.limit.present?
+         column.limit - exception_class_name.bytesize - exception_message.bytesize - JSON_OVERHEAD
+       end
+     end
+
+     def truncate_backtrace(lines, limit)
+       [].tap do |truncated_backtrace|
+         lines.each do |line|
+           if (truncated_backtrace << line).to_json.bytesize > limit
+             truncated_backtrace.pop
+             break
+           end
+         end
        end
      end
  end
data/app/models/solid_queue/job.rb CHANGED
@@ -2,6 +2,8 @@
 
  module SolidQueue
    class Job < Record
+     class EnqueueError < StandardError; end
+
      include Executable, Clearable, Recurrable
 
      serialize :arguments, coder: JSON
@@ -37,6 +39,11 @@ module SolidQueue
 
      def create_from_active_job(active_job)
        create!(**attributes_from_active_job(active_job))
+     rescue ActiveRecord::ActiveRecordError => e
+       enqueue_error = EnqueueError.new("#{e.class.name}: #{e.message}").tap do |error|
+         error.set_backtrace e.backtrace
+       end
+       raise enqueue_error
      end
 
      def create_all_from_active_jobs(active_jobs)
data/app/models/solid_queue/process/executor.rb ADDED
@@ -0,0 +1,20 @@
+ # frozen_string_literal: true
+
+ module SolidQueue
+   class Process
+     module Executor
+       extend ActiveSupport::Concern
+
+       included do
+         has_many :claimed_executions
+
+         after_destroy -> { claimed_executions.release_all }, if: :claims_executions?
+       end
+
+       private
+         def claims_executions?
+           kind == "Worker"
+         end
+     end
+   end
+ end
data/app/models/solid_queue/process/prunable.rb CHANGED
@@ -1,19 +1,23 @@
  # frozen_string_literal: true
 
- module SolidQueue::Process::Prunable
-   extend ActiveSupport::Concern
+ module SolidQueue
+   class Process
+     module Prunable
+       extend ActiveSupport::Concern
 
-   included do
-     scope :prunable, -> { where(last_heartbeat_at: ..SolidQueue.process_alive_threshold.ago) }
-   end
+       included do
+         scope :prunable, -> { where(last_heartbeat_at: ..SolidQueue.process_alive_threshold.ago) }
+       end
 
-   class_methods do
-     def prune
-       SolidQueue.instrument :prune_processes, size: 0 do |payload|
-         prunable.non_blocking_lock.find_in_batches(batch_size: 50) do |batch|
-           payload[:size] += batch.size
+       class_methods do
+         def prune
+           SolidQueue.instrument :prune_processes, size: 0 do |payload|
+             prunable.non_blocking_lock.find_in_batches(batch_size: 50) do |batch|
+               payload[:size] += batch.size
 
-           batch.each { |process| process.deregister(pruned: true) }
+               batch.each { |process| process.deregister(pruned: true) }
+             end
+           end
          end
        end
      end
data/app/models/solid_queue/process.rb CHANGED
@@ -1,19 +1,18 @@
  # frozen_string_literal: true
 
  class SolidQueue::Process < SolidQueue::Record
-   include Prunable
+   include Executor, Prunable
 
-   belongs_to :supervisor, class_name: "SolidQueue::Process", optional: true, inverse_of: :forks
-   has_many :forks, class_name: "SolidQueue::Process", inverse_of: :supervisor, foreign_key: :supervisor_id, dependent: :destroy
-   has_many :claimed_executions
+   belongs_to :supervisor, class_name: "SolidQueue::Process", optional: true, inverse_of: :supervisees
+   has_many :supervisees, class_name: "SolidQueue::Process", inverse_of: :supervisor, foreign_key: :supervisor_id, dependent: :destroy
 
    store :metadata, coder: JSON
 
-   after_destroy -> { claimed_executions.release_all }
-
    def self.register(**attributes)
-     SolidQueue.instrument :register_process, **attributes do
-       create!(attributes.merge(last_heartbeat_at: Time.current))
+     SolidQueue.instrument :register_process, **attributes do |payload|
+       create!(attributes.merge(last_heartbeat_at: Time.current)).tap do |process|
+         payload[:process_id] = process.id
+       end
      end
    rescue Exception => error
      SolidQueue.instrument :register_process, **attributes.merge(error: error)
@@ -25,7 +24,9 @@ class SolidQueue::Process < SolidQueue::Record
    end
 
    def deregister(pruned: false)
-     SolidQueue.instrument :deregister_process, process: self, pruned: pruned, claimed_size: claimed_executions.size do |payload|
+     SolidQueue.instrument :deregister_process, process: self, pruned: pruned do |payload|
+       payload[:claimed_size] = claimed_executions.size if claims_executions?
+
        destroy!
      rescue Exception => error
        payload[:error] = error
data/app/models/solid_queue/recurring_execution.rb CHANGED
@@ -2,17 +2,21 @@
 
  module SolidQueue
    class RecurringExecution < Execution
+     class AlreadyRecorded < StandardError; end
+
      scope :clearable, -> { where.missing(:job) }
 
      class << self
        def record(task_key, run_at, &block)
          transaction do
            block.call.tap do |active_job|
-             create!(job_id: active_job.provider_job_id, task_key: task_key, run_at: run_at)
+             if active_job
+               create!(job_id: active_job.provider_job_id, task_key: task_key, run_at: run_at)
+             end
            end
          end
-       rescue ActiveRecord::RecordNotUnique
-         # Task already dispatched
+       rescue ActiveRecord::RecordNotUnique => e
+         raise AlreadyRecorded
        end
 
        def clear_in_batches(batch_size: 500)
data/lib/puma/plugin/solid_queue.rb CHANGED
@@ -1,28 +1,50 @@
  require "puma/plugin"
 
+ module Puma
+   class DSL
+     def solid_queue_mode(mode = :fork)
+       @options[:solid_queue_mode] = mode.to_sym
+     end
+   end
+ end
+
  Puma::Plugin.create do
-   attr_reader :puma_pid, :solid_queue_pid, :log_writer
+   attr_reader :puma_pid, :solid_queue_pid, :log_writer, :solid_queue_supervisor
 
    def start(launcher)
      @log_writer = launcher.log_writer
     @puma_pid = $$
 
-     launcher.events.on_booted do
-       @solid_queue_pid = fork do
-         Thread.new { monitor_puma }
-         SolidQueue::Supervisor.start(mode: :all)
-       end
+     if launcher.options[:solid_queue_mode] == :async
+       start_async(launcher)
+     else
+       start_forked(launcher)
+     end
+   end
 
+   private
+     def start_forked(launcher)
        in_background do
         monitor_solid_queue
       end
+
+       launcher.events.on_booted do
+         @solid_queue_pid = fork do
+           Thread.new { monitor_puma }
+           SolidQueue::Supervisor.start(mode: :fork)
+         end
+       end
+
+       launcher.events.on_stopped { stop_solid_queue }
+       launcher.events.on_restart { stop_solid_queue }
     end
 
-     launcher.events.on_stopped { stop_solid_queue }
-     launcher.events.on_restart { stop_solid_queue }
-   end
+     def start_async(launcher)
+       launcher.events.on_booted { @solid_queue_supervisor = SolidQueue::Supervisor.start(mode: :async) }
+       launcher.events.on_stopped { solid_queue_supervisor.stop }
+       launcher.events.on_restart { solid_queue_supervisor.stop; solid_queue_supervisor.start }
+     end
 
-   private
     def stop_solid_queue
       Process.waitpid(solid_queue_pid, Process::WNOHANG)
       log "Stopping Solid Queue..."
@@ -51,12 +73,18 @@ Puma::Plugin.create do
    end
 
    def solid_queue_dead?
-     Process.waitpid(solid_queue_pid, Process::WNOHANG)
+     if solid_queue_started?
+       Process.waitpid(solid_queue_pid, Process::WNOHANG)
+     end
      false
    rescue Errno::ECHILD, Errno::ESRCH
      true
    end
 
+   def solid_queue_started?
+     solid_queue_pid.present?
+   end
+
    def puma_dead?
      Process.ppid != puma_pid
    end
data/lib/solid_queue/configuration.rb CHANGED
@@ -17,38 +17,30 @@ module SolidQueue
      recurring_tasks: []
    }
 
-   def initialize(mode: :work, load_from: nil)
-     @mode = mode
+   def initialize(mode: :fork, load_from: nil)
+     @mode = mode.to_s.inquiry
      @raw_config = config_from(load_from)
    end
 
    def processes
-     case mode
-     when :dispatch then dispatchers
-     when :work then workers
-     when :all then dispatchers + workers
-     else raise "Invalid mode #{mode}"
-     end
+     dispatchers + workers
    end
 
    def workers
-     if mode.in? %i[ work all]
-       workers_options.flat_map do |worker_options|
-         processes = worker_options.fetch(:processes, WORKER_DEFAULTS[:processes])
-         processes.times.map { Worker.new(**worker_options.with_defaults(WORKER_DEFAULTS)) }
+     workers_options.flat_map do |worker_options|
+       processes = if mode.fork?
+         worker_options.fetch(:processes, WORKER_DEFAULTS[:processes])
+       else
+         WORKER_DEFAULTS[:processes]
        end
-     else
-       []
+       processes.times.map { Worker.new(**worker_options.with_defaults(WORKER_DEFAULTS)) }
      end
    end
 
    def dispatchers
-     if mode.in? %i[ dispatch all]
-       dispatchers_options.map do |dispatcher_options|
-         recurring_tasks = parse_recurring_tasks dispatcher_options[:recurring_tasks]
-
-         Dispatcher.new **dispatcher_options.merge(recurring_tasks: recurring_tasks).with_defaults(DISPATCHER_DEFAULTS)
-       end
+     dispatchers_options.map do |dispatcher_options|
+       recurring_tasks = parse_recurring_tasks dispatcher_options[:recurring_tasks]
+       Dispatcher.new **dispatcher_options.merge(recurring_tasks: recurring_tasks).with_defaults(DISPATCHER_DEFAULTS)
      end
    end
 
@@ -68,15 +60,19 @@ module SolidQueue
    end
 
    def workers_options
-     @workers_options ||= (raw_config[:workers] || [ WORKER_DEFAULTS ])
+     @workers_options ||= options_from_raw_config(:workers, WORKER_DEFAULTS)
        .map { |options| options.dup.symbolize_keys }
    end
 
    def dispatchers_options
-     @dispatchers_options ||= (raw_config[:dispatchers] || [ DISPATCHER_DEFAULTS ])
+     @dispatchers_options ||= options_from_raw_config(:dispatchers, DISPATCHER_DEFAULTS)
        .map { |options| options.dup.symbolize_keys }
    end
 
+   def options_from_raw_config(key, defaults)
+     raw_config.empty? ? [ defaults ] : Array(raw_config[key])
+   end
+
    def parse_recurring_tasks(tasks)
      Array(tasks).map do |id, options|
        Dispatcher::RecurringTask.from_configuration(id, **options)
data/lib/solid_queue/dispatcher/concurrency_maintenance.rb CHANGED
@@ -25,7 +25,7 @@ module SolidQueue
    end
 
    def stop
-     @concurrency_maintenance_task.shutdown
+     @concurrency_maintenance_task&.shutdown
    end
 
    private
data/lib/solid_queue/dispatcher/recurring_schedule.rb CHANGED
@@ -11,6 +11,10 @@ module SolidQueue
      @scheduled_tasks = Concurrent::Hash.new
    end
 
+   def empty?
+     configured_tasks.empty?
+   end
+
    def load_tasks
      configured_tasks.each do |task|
        load_task(task)
data/lib/solid_queue/dispatcher/recurring_task.rb CHANGED
@@ -31,15 +31,23 @@
 
    def enqueue(at:)
      SolidQueue.instrument(:enqueue_recurring_task, task: key, at: at) do |payload|
-       if using_solid_queue_adapter?
+       active_job = if using_solid_queue_adapter?
          perform_later_and_record(run_at: at)
        else
          payload[:other_adapter] = true
 
-         perform_later
-       end.tap do |active_job|
-         payload[:active_job_id] = active_job&.job_id
+         perform_later do |job|
+           unless job.successfully_enqueued?
+             payload[:enqueue_error] = job.enqueue_error&.message
+           end
+         end
        end
+
+       payload[:active_job_id] = active_job.job_id if active_job
+     rescue RecurringExecution::AlreadyRecorded
+       payload[:skipped] = true
+     rescue Job::EnqueueError => error
+       payload[:enqueue_error] = error.message
      end
    end
 
@@ -68,8 +76,8 @@ module SolidQueue
      RecurringExecution.record(key, run_at) { perform_later }
    end
 
-   def perform_later
-     job_class.perform_later(*arguments_with_kwargs)
+   def perform_later(&block)
+     job_class.perform_later(*arguments_with_kwargs, &block)
    end
 
    def arguments_with_kwargs