solid_queue 1.0.0.beta → 1.0.1

This diff shows the content of publicly released versions of this package as they appear in their respective public registries, and is provided for informational purposes only.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: '0185a60652945d7afadb3c4c8e646f9445eba35086ab88ddf1b93ff3b271e973'
- data.tar.gz: 363d19a3e07ced689dd4a3ac3ec36c7cdfeecfc61a5588ccb6f67c28b782a5bf
+ metadata.gz: f0d576a078aa17199edc6614015a0953027fe2660e98eab5ef31c23a7417c6e3
+ data.tar.gz: dfdac5d1ae0b53d48dea6e31abd70a39858cb48d41e2ef51f4db777797b5db1a
  SHA512:
- metadata.gz: 1fc522f72cf05273cd6d7fb98cbce44001c28532131952fb1bc139c2c80d808afb2bd7dc8acd337c3fe4834d30e1dc75a908aff7d3984a1e13dfbada8b25261f
- data.tar.gz: 96c3d62e399468a193dc2f915797308b645eaf8c60b109a4344a757b4d91c4accf80192c4ebcd26022d08f876a8b0e95788250dfe27eb58e0b723082c6147b2d
+ metadata.gz: a0583ab02da0aac3e812a9e13a3a9853aca53cd67d15cd1a8ce2d666b7f48e9c669dd52bd45a74b03b5177f2b93dd98a1d85ef9f86ab2c0b4aeb82d38f26bb3f
+ data.tar.gz: 6e6cdb1ad994664f8132774e2d337758a2fea55bbb46bb618acbed3319490acf23f0cf59f12a66ac7c798c55a92b69167073fe932b41a453dbdfd91189cb3ae4
data/README.md CHANGED
@@ -226,7 +226,6 @@ There are several settings that control how Solid Queue works that you can set a

  **This is not used for errors raised within a job execution**. Errors happening in jobs are handled by Active Job's `retry_on` or `discard_on`, and ultimately will result in [failed jobs](#failed-jobs-and-retries). This is for errors happening within Solid Queue itself.

- - `connects_to`: a custom database configuration that will be used in the abstract `SolidQueue::Record` Active Record model. This is required to use a different database than the main app. For example:
  - `use_skip_locked`: whether to use `FOR UPDATE SKIP LOCKED` when performing locking reads. This will be automatically detected in the future, and for now, you'd only need to set this to `false` if your database doesn't support it. For MySQL, that'd be versions < 8, and for PostgreSQL, versions < 9.5. If you use SQLite, this has no effect, as writes are sequential.
  - `process_heartbeat_interval`: the heartbeat interval that all processes will follow—defaults to 60 seconds.
  - `process_alive_threshold`: how long to wait until a process is considered dead after its last heartbeat—defaults to 5 minutes.
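The paragraph in that hunk defers job-level errors to Active Job; a minimal sketch of that mechanism (the job and error classes here are illustrative, not part of Solid Queue):

```ruby
class ChargeCustomerJob < ApplicationJob
  # Retry transient failures a few times, waiting between attempts.
  retry_on Timeout::Error, wait: 5.seconds, attempts: 5

  # Drop the job if its arguments can no longer be deserialized,
  # e.g. the record was deleted before the job got to run.
  discard_on ActiveJob::DeserializationError

  def perform(customer_id)
    # ...
  end
end
```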
@@ -381,13 +380,14 @@ bin/jobs --recurring_schedule_file=config/schedule.yml
  The configuration itself looks like this:

  ```yml
- a_periodic_job:
-   class: MyJob
-   args: [ 42, { status: "custom_status" } ]
-   schedule: every second
- a_cleanup_task:
-   command: "DeletedStuff.clear_all"
-   schedule: every day at 9am
+ production:
+   a_periodic_job:
+     class: MyJob
+     args: [ 42, { status: "custom_status" } ]
+     schedule: every second
+   a_cleanup_task:
+     command: "DeletedStuff.clear_all"
+     schedule: every day at 9am
  ```

  Tasks are specified as a hash/dictionary, where the key will be the task's key internally. Each task needs to either have a `class`, which will be the job class to enqueue, or a `command`, which will be eval'ed in the context of a job (`SolidQueue::RecurringJob`) that will be enqueued according to its schedule, in the `solid_queue_recurring` queue.
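As a rough illustration of the `command` option described above (this is not the gem's actual `SolidQueue::RecurringJob`, just a hypothetical equivalent): the configured string ends up evaluated inside a job enqueued on the `solid_queue_recurring` queue.

```ruby
# Hypothetical stand-in for the job that runs a `command` entry.
class EvalCommandJob < ApplicationJob
  queue_as :solid_queue_recurring

  def perform(command)
    eval(command) # e.g. "DeletedStuff.clear_all" from the schedule above
  end
end
```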
@@ -409,6 +409,8 @@ Tasks are enqueued at their corresponding times by the scheduler, and each task

  It's possible to run multiple schedulers with the same `recurring_tasks` configuration, for example, if you have multiple servers for redundancy, and you run the `scheduler` in more than one of them. To avoid enqueuing duplicate tasks at the same time, an entry in a new `solid_queue_recurring_executions` table is created in the same transaction as the job is enqueued. This table has a unique index on `task_key` and `run_at`, ensuring only one entry per task per time will be created. This only works if you have `preserve_finished_jobs` set to `true` (the default), and the guarantee applies as long as you keep the jobs around.

+ **Note**: a single recurring schedule is supported, so you can have multiple schedulers using the same schedule, but not multiple schedulers using different configurations.
+
  Finally, it's possible to configure jobs that aren't handled by Solid Queue. That is, you can have a job like this in your app:
  ```ruby
  class MyResqueJob < ApplicationJob
@@ -4,7 +4,7 @@ module SolidQueue
    class Record < ActiveRecord::Base
      self.abstract_class = true

-     connects_to **SolidQueue.connects_to if SolidQueue.connects_to
+     connects_to(**SolidQueue.connects_to) if SolidQueue.connects_to

      def self.non_blocking_lock
        if SolidQueue.use_skip_locked
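The hunk above only adds parentheses around the double-splatted arguments. The diff doesn't state the motivation, but here is a plain-Ruby sketch of the ambiguity the parentheses remove (nothing below comes from the gem itself):

```ruby
# Under `ruby -w`, the unparenthesized form can trigger
# "warning: `**' interpreted as argument prefix".
def connect(**options)
  options
end

config = { database: { writing: :queue } }

connect **config if config   # ambiguous-looking double splat
connect(**config) if config  # explicit, warning-free
```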
@@ -67,11 +67,15 @@ module SolidQueue
            end
          end

-         payload[:active_job_id] = active_job.job_id if active_job
+         active_job.tap do |enqueued_job|
+           payload[:active_job_id] = enqueued_job.job_id
+         end
        rescue RecurringExecution::AlreadyRecorded
          payload[:skipped] = true
+         false
        rescue Job::EnqueueError => error
          payload[:enqueue_error] = error.message
+         false
        end
      end

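For context on the payload keys set in this hunk, here is a hedged sketch of a notifications subscriber that reads them; the event name is an assumption for illustration, not taken from this diff.

```ruby
require "active_support/notifications"

# Event name assumed for illustration; check the gem's instrumentation docs.
ActiveSupport::Notifications.subscribe("enqueue_recurring_task.solid_queue") do |_name, _start, _finish, _id, payload|
  if payload[:skipped]
    puts "Task already enqueued for this run time by another scheduler"
  elsif payload[:enqueue_error]
    puts "Task could not be enqueued: #{payload[:enqueue_error]}"
  else
    puts "Enqueued Active Job #{payload[:active_job_id]}"
  end
end
```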
@@ -1,9 +1,10 @@
- # periodic_cleanup:
- #   class: CleanSoftDeletedRecordsJob
- #   queue: background
- #   args: [ 1000, { batch_size: 500 } ]
- #   schedule: every hour
- # periodic_command:
- #   command: "SoftDeletedRecord.due.delete_all"
- #   priority: 2
- #   schedule: at 5am every day
+ # production:
+ #   periodic_cleanup:
+ #     class: CleanSoftDeletedRecordsJob
+ #     queue: background
+ #     args: [ 1000, { batch_size: 500 } ]
+ #     schedule: every hour
+ #   periodic_command:
+ #     command: "SoftDeletedRecord.due.delete_all"
+ #     priority: 2
+ #     schedule: at 5am every day
@@ -1,4 +1,4 @@
- ActiveRecord::Schema[7.1].define(version: 2024_09_04_193154) do
+ ActiveRecord::Schema[7.1].define(version: 1) do
    create_table "solid_queue_blocked_executions", force: :cascade do |t|
      t.bigint "job_id", null: false
      t.string "queue_name", null: false
@@ -111,7 +111,7 @@ module SolidQueue
      end

      def recurring_tasks_config
-       @recurring_tasks ||= config_from options[:recurring_schedule_file]
+       @recurring_tasks_config ||= config_from options[:recurring_schedule_file]
      end


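The rename above presumably keeps the memoized config from sharing an instance variable with another reader of `@recurring_tasks`. A generic sketch (hypothetical class, not the gem's) of why that matters:

```ruby
class Settings
  def tasks
    @tasks ||= [:periodic_cleanup, :periodic_command]
  end

  def tasks_config
    # Memoizing this under @tasks as well would hand back whichever value
    # happened to be cached first; a distinct ivar keeps the caches separate.
    @tasks_config ||= { periodic_cleanup: { schedule: "every hour" } }
  end
end
```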
@@ -41,10 +41,6 @@ module SolidQueue::Processes
        end
      end

-     def run
-       raise NotImplementedError
-     end
-
      def shutting_down?
        stopped? || (running_as_fork? && supervisor_went_away?) || finished? || !registered?
      end
@@ -29,7 +29,7 @@ module SolidQueue
        else
          FileUtils.mkdir_p File.dirname(path)
        end
-     rescue Errno::ESRCH => e
+     rescue Errno::ESRCH
        # Process is dead, ignore, just delete the file
        delete
      rescue Errno::EPERM
@@ -1,3 +1,3 @@
  module SolidQueue
-   VERSION = "1.0.0.beta"
+   VERSION = "1.0.1"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: solid_queue
  version: !ruby/object:Gem::Version
-   version: 1.0.0.beta
+   version: 1.0.1
  platform: ruby
  authors:
  - Rosa Gutierrez
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2024-09-16 00:00:00.000000000 Z
+ date: 2024-11-08 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: activerecord