workhorse 0.6.5 → 1.0.0.beta0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 58ce0cb4d0d2d81b7da8524a55eac6ae7ed0195fc4d49fbe3b4dd99ef8d8c4d6
- data.tar.gz: 2007bf72e0603426967fb7b576f624fcb3520632150ccfc681caf2ff0725723a
+ metadata.gz: 5dd13b11d5f7488e58812f57ad2bb70da07b95f18b89653687bf9ec579bbcaa5
+ data.tar.gz: a0e7c53cf4de56b6109830decf16f6f70a8aead13ca703a795ce7709234c1b05
  SHA512:
- metadata.gz: aa66c279dfbcd0096437badb3561706eba17e3bc04ef422e70246e493de192c9d82ac1e083d85153bd8897c5fca8adbd7720ed9c4f25776bc3ed57d96d251de7
- data.tar.gz: a3e01f2b85dfd1d1ead2f0d043bbfadf8d066859a7360263d17f98855abc30da1ffbcc0092e1171eb89d7834058e471ae6fc05d88d10a2a8ab573220e3621405
+ metadata.gz: 2c6e9162d5b2735cd1de678b56e4c4e4960e25776a64cc45dfe5fbfee4fe0e50b1b3a5d15e81ca65ebfe496cce6ac4d81cfe1ba47a65b591a54916807bbe5c68
+ data.tar.gz: a24094b51e7751010920e39ad6ac3924dbccdfb84f1728efec73ed1cc0e9376a873cdb2ff4a562f49d78b17ae08e5659fd596684b299fc35b8bdcd228be15f0e
data/CHANGELOG.md CHANGED
@@ -1,5 +1,56 @@
  # Workhorse Changelog
 
+ ## 1.0.0.beta0 - 2020-08-19
+
+ This is a stability release that is still experimental and has to be
+ battle-tested before it can be considered stable.
+
+ * Simplify locking during polling. Instead of locking individual jobs, pollers
+   now acquire a global lock. While this can lead to many pollers waiting for
+   each other's locks, a poll is usually performed very quickly, so the
+   performance impact is considered negligible. This change should work around
+   some deadlock issues as well as an issue where a job was obtained by more
+   than one poller.
+
+ * Shut down the worker if polling encounters any kind of error (running jobs
+   will be completed whenever possible). This allows a watcher job, if
+   configured, to restore the failed process.
+
+ * Make the unit test database connection configurable via the environment
+   variables `DB_NAME`, `DB_USERNAME`, `DB_PASSWORD` and `DB_HOST`. This is
+   only relevant if you are working on workhorse and need to run the unit
+   tests.
+
+ * Fix misbehaviour where queueless jobs were not picked up by workers as long
+   as a named queue was in a locked state.
+
+ * Add built-in job `Workhorse::Jobs::DetectStaleJobsJob` which you can
+   schedule. It picks up jobs that remained `locked` or `started` (running) for
+   more than a certain amount of time. If any such jobs are found, an exception
+   is thrown (which may cause a notification if you configured `on_exception`
+   accordingly). See the job's API documentation for more information.
+
+ **If using Oracle:** Make sure to grant execute permission on the package
+ `DBMS_LOCK` to your Oracle database schema:
+
+ ```GRANT execute ON DBMS_LOCK TO <schema-name>;```
+
+ ## 0.6.9 - 2020-04-22
+
+ * Fix error where processes may have mistakenly been detected as running (adds
+   a further improvement to the fix in 0.6.7).
+
+ ## 0.6.8 - 2020-04-07
+
+ * Fix bug introduced in 0.6.7 where all processes were detected as running.
+
+ ## 0.6.7 - 2020-04-07
+
+ * Fix error where processes may have mistakenly been detected as running.
+
+ ## 0.6.6 - 2020-04-06
+
+ * Fix error `No workers are defined.` when defining exactly 1 worker.
+
  ## 0.6.5 - 2020-03-18
 
  * Call `on_exception` callback on errors during polling.
data/FAQ.md CHANGED
@@ -74,4 +74,8 @@ production mode.
 
  ## Why does workhorse not support timeouts?
 
- Generic timeout implementations are [a dangerous thing](http://www.mikeperham.com/2015/05/08/timeout-rubys-most-dangerous-api/) in Ruby. This is why we decided against providing this feature in Workhorse and recommend to implement the timeouts inside of your jobs - i.e. via network timeouts.
+ Generic timeout implementations are [a dangerous
+ thing](http://www.mikeperham.com/2015/05/08/timeout-rubys-most-dangerous-api/)
+ in Ruby. This is why we decided against providing this feature in Workhorse
+ and recommend implementing timeouts inside of your jobs, e.g. via network
+ timeouts.
data/README.md CHANGED
@@ -66,6 +66,15 @@ What it does not do:
 
  Please customize the initializer and worker script to your liking.
 
+ ### Oracle
+
+ When using Oracle databases, make sure your schema has access to the package
+ `DBMS_LOCK`:
+
+ ```
+ GRANT execute ON DBMS_LOCK TO <schema-name>;
+ ```
+
  ## Queuing jobs
 
  ### Basic jobs
@@ -320,7 +329,6 @@ DbJob.started
  DbJob.succeeded
  DbJob.failed
  ```
-
  ### Resetting jobs
 
  Jobs in a state other than `waiting` are either being processed or else already
@@ -352,7 +360,6 @@ Performing a reset will reset the job state to `waiting` and it will be
  processed again. All meta fields will be reset as well. See inline documentation
  of `Workhorse::DbJob#reset!` for more details.
 
-
  ## Using workhorse with Rails / ActiveJob
 
  While workhorse can be used through its custom interface as documented above, it
@@ -365,6 +372,45 @@ To use workhorse as your ActiveJob backend, set the `queue_adapter` to
  configuration or else using `self.queue_adapter` in a job class inheriting from
  `ActiveJob`. See ActiveJob documentation for more details.
 
+ ## Cleaning up jobs
+
+ By default, jobs remain in the database no matter what state they are in. This
+ can eventually lead to a very large jobs table. You are advised to clean up
+ your jobs table at a regular interval. Workhorse provides the job
+ `Workhorse::Jobs::CleanupSucceededJobs` for this purpose, which cleans up all
+ succeeded jobs. You can run it using your scheduler at a specific interval.
+
+ ## Caveats
+
+ ### Errors during polling / crashed workers
+
+ Each worker process includes one thread that polls the database for jobs and
+ dispatches them to individual worker threads. In case of an error in the
+ poller (usually due to a database connection drop), the poller aborts and
+ gracefully shuts down the entire worker. Jobs still being processed by this
+ worker are completed during this shutdown whenever possible (which only works
+ if the database connection is still active).
+
+ This means that you should always have an external *watcher* (usually a
+ cronjob) that calls the `workhorse watch` command regularly. This
+ automatically restarts crashed worker processes.
+
+ ### Stuck queues
+
+ Jobs in named queues (non-null queues) are always run sequentially. This means
+ that if a job in such a queue is stuck in state `locked` or `started` (e.g.
+ due to a database connection failure), no more jobs of this queue will be run,
+ as the entire queue is considered locked to ensure that no jobs of the same
+ queue run in parallel.
+
+ For this purpose, Workhorse provides the built-in job
+ `Workhorse::Jobs::DetectStaleJobsJob`, which you are advised to schedule on a
+ regular basis. It picks up jobs that remained `locked` or `started` (running)
+ for more than a certain amount of time. If any such jobs are found, an
+ exception is thrown (which may cause a notification if you configured
+ `on_exception` accordingly). See the job's API documentation for more
+ information.
+
  ## Frequently asked questions
 
  Please consult the [FAQ](FAQ.md).
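The "Cleaning up jobs" section added above describes an age-based retention policy. The following is an illustrative, database-free sketch of what such a cleanup selects; plain hashes stand in for job rows, and `max_age_days` mirrors the `max_age` parameter documented on `CleanupSucceededJobs` (the real job deletes rows from the jobs table instead):

```ruby
require 'time'

# Select succeeded jobs older than max_age_days for deletion; jobs in any
# other state are always retained.
def jobs_to_delete(jobs, max_age_days, now: Time.now)
  cutoff = now - max_age_days * 86_400
  jobs.select { |j| j[:state] == :succeeded && j[:updated_at] < cutoff }
end
```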
data/Rakefile CHANGED
@@ -19,6 +19,7 @@ task :gemspec do
  spec.add_development_dependency 'colorize'
  spec.add_development_dependency 'benchmark-ips'
  spec.add_development_dependency 'activejob'
+ spec.add_development_dependency 'pry'
  spec.add_dependency 'activesupport'
  spec.add_dependency 'activerecord'
  spec.add_dependency 'schemacop', '~> 2.0'
data/VERSION CHANGED
@@ -1 +1 @@
- 0.6.5
+ 1.0.0.beta0
data/lib/workhorse.rb CHANGED
@@ -46,6 +46,7 @@ require 'workhorse/worker'
  require 'workhorse/jobs/run_rails_op'
  require 'workhorse/jobs/run_active_job'
  require 'workhorse/jobs/cleanup_succeeded_jobs'
+ require 'workhorse/jobs/detect_stale_jobs_job'
 
  # Daemon functionality is not available on java platforms
  if RUBY_PLATFORM != 'java'
data/lib/workhorse/daemon.rb CHANGED
@@ -21,7 +21,7 @@ module Workhorse
 
    @count = @workers.count
 
-   fail 'No workers are defined.' if @count == 1
+   fail 'No workers are defined.' if @count < 1
 
    FileUtils.mkdir_p('tmp/pids')
 
@@ -162,9 +162,9 @@ module Workhorse
 
    def process?(pid)
      return begin
-       Process.getpgid(pid)
+       Process.kill(0, pid)
        true
-     rescue Errno::ESRCH
+     rescue Errno::EPERM, Errno::ESRCH
        false
      end
    end
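The switch from `Process.getpgid` to `Process.kill(0, pid)` above can be illustrated standalone: signal 0 performs the existence and permission checks without actually delivering a signal. Treating `Errno::EPERM` as "not running" is the behavior the diff chooses (the process exists but belongs to another user, so it cannot be one of our workers). A minimal sketch:

```ruby
# Liveness probe as used in the daemon above: signal 0 probes a process
# without affecting it.
def process?(pid)
  Process.kill(0, pid)
  true
rescue Errno::EPERM, Errno::ESRCH
  # ESRCH: no such process. EPERM: the process exists but is owned by
  # another user; the daemon treats it as "not ours", hence false.
  false
end

process?(Process.pid) # => true
```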
data/lib/workhorse/db_job.rb CHANGED
@@ -69,7 +69,14 @@ module Workhorse
      fail "Dirty jobs can't be locked."
    end
 
+   # TODO: Remove this debug output
+   # if Workhorse::DbJob.lock.find(id).locked_at
+   #   puts "Already locked (with FOR UPDATE)"
+   # end
+
    if locked_at
+     # TODO: Remove this debug output
+     # puts "Already locked. Job: #{self.id} Worker: #{worker_id}"
      fail "Job #{id} is already locked by #{locked_by.inspect}."
    end
 
data/lib/workhorse/jobs/cleanup_succeeded_jobs.rb CHANGED
@@ -1,6 +1,6 @@
  module Workhorse::Jobs
    class CleanupSucceededJobs
-     # Instantiates a new cleanup job
+     # Instantiates a new job.
      #
      # @param max_age [Integer] The maximal age of jobs to retain, in days. Will
      #   be evaluated at perform time.
data/lib/workhorse/jobs/detect_stale_jobs_job.rb ADDED
@@ -0,0 +1,48 @@
+ module Workhorse::Jobs
+   class DetectStaleJobsJob
+     # Instantiates a new stale detection job.
+     #
+     # @param locked_to_started_threshold [Integer] The maximum number of
+     #   seconds a job is allowed to stay 'locked' before this job throws an
+     #   exception. Set this to 0 to skip this check.
+     # @param run_time_threshold [Integer] The maximum number of seconds a job
+     #   is allowed to run before this job throws an exception. Set this to 0
+     #   to skip this check.
+     def initialize(locked_to_started_threshold: 3 * 60, run_time_threshold: 12 * 60)
+       @locked_to_started_threshold = locked_to_started_threshold
+       @run_time_threshold = run_time_threshold
+     end
+
+     def perform
+       messages = []
+
+       # Detect jobs that have been locked for too long
+       if @locked_to_started_threshold != 0
+         rel = Workhorse::DbJob.locked
+         rel = rel.where('locked_at < ?', @locked_to_started_threshold.seconds.ago)
+         ids = rel.pluck(:id)
+
+         if ids.size > 0
+           messages << "Detected #{ids.size} jobs that were locked more than "\
+                       "#{@locked_to_started_threshold}s ago and might be stale: #{ids.inspect}."
+         end
+       end
+
+       # Detect jobs that have been running for too long
+       if @run_time_threshold != 0
+         rel = Workhorse::DbJob.started
+         rel = rel.where('started_at < ?', @run_time_threshold.seconds.ago)
+         ids = rel.pluck(:id)
+
+         if ids.size > 0
+           messages << "Detected #{ids.size} jobs that have been running for longer "\
+                       "than #{@run_time_threshold}s and might be stale: #{ids.inspect}."
+         end
+       end
+
+       if messages.any?
+         fail messages.join(' ')
+       end
+     end
+   end
+ end
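The detection logic in this new job boils down to an age comparison against a threshold. A simplified, database-free sketch of the `locked` check, with plain hashes standing in for `Workhorse::DbJob` rows (names are illustrative):

```ruby
require 'time'

# Simplified re-implementation of the stale-lock check above. A threshold
# of 0 disables the check, mirroring the job's documented behavior.
def stale_locked_ids(jobs, threshold_seconds, now: Time.now)
  return [] if threshold_seconds.zero?
  cutoff = now - threshold_seconds
  jobs.select { |j| j[:state] == :locked && j[:locked_at] < cutoff }
      .map { |j| j[:id] }
end
```

The `started` check works the same way, comparing `started_at` against `run_time_threshold`.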
data/lib/workhorse/poller.rb CHANGED
@@ -1,5 +1,11 @@
  module Workhorse
    class Poller
+     MIN_LOCK_TIMEOUT = 0.1 # In seconds
+     MAX_LOCK_TIMEOUT = 1.0 # In seconds
+
+     ORACLE_LOCK_MODE = 6 # X_MODE (exclusive)
+     ORACLE_LOCK_HANDLE = 478564848 # Randomly chosen number
+
      attr_reader :worker
      attr_reader :table
@@ -20,15 +26,20 @@ module Workhorse
      @running = true
 
      @thread = Thread.new do
-       begin
-         loop do
-           break unless running?
+       loop do
+         break unless running?
+
+         begin
            poll
            sleep
+         rescue Exception => e
+           worker.log %(Poll encountered exception:\n#{e.message}\n#{e.backtrace.join("\n")})
+           worker.log 'Worker shutting down...'
+           Workhorse.on_exception.call(e)
+           @running = false
+           worker.instance_variable_get(:@pool).shutdown
+           break
          end
-       rescue Exception => e
-         worker.log %(Poller stopped with exception:\n#{e.message}\n#{e.backtrace.join("\n")})
-         Workhorse.on_exception.call(e)
        end
      end
    end
@@ -61,24 +72,55 @@ module Workhorse
        end
      end
 
+     def with_global_lock(name: :workhorse, timeout: 2, &block)
+       if @is_oracle
+         result = Workhorse::DbJob.connection.select_all(
+           "SELECT DBMS_LOCK.REQUEST(#{ORACLE_LOCK_HANDLE}, #{ORACLE_LOCK_MODE}, #{timeout}) FROM DUAL"
+         ).first.values.last
+
+         success = result == 0
+       else
+         result = Workhorse::DbJob.connection.select_all(
+           "SELECT GET_LOCK(CONCAT(DATABASE(), '_#{name}'), #{timeout})"
+         ).first.values.last
+         success = result == 1
+       end
+
+       return unless success
+
+       yield
+     ensure
+       if success
+         if @is_oracle
+           Workhorse::DbJob.connection.execute("SELECT DBMS_LOCK.RELEASE(#{ORACLE_LOCK_HANDLE}) FROM DUAL")
+         else
+           Workhorse::DbJob.connection.execute("SELECT RELEASE_LOCK(CONCAT(DATABASE(), '_#{name}'))")
+         end
+       end
+     end
+
      def poll
        @instant_repoll.make_false
 
-       Workhorse.tx_callback.call do
-         # As we are the only thread posting into the worker pool, it is safe to
-         # get the number of idle threads without mutex synchronization. The
-         # actual number of idle workers at time of posting can only be larger
-         # than or equal to the number we get here.
-         idle = worker.idle
-
-         worker.log "Polling DB for jobs (#{idle} available threads)...", :debug
-
-         unless idle.zero?
-           jobs = queued_db_jobs(idle)
-           jobs.each do |job|
-             worker.log "Marking job #{job.id} as locked", :debug
-             job.mark_locked!(worker.id)
-             worker.perform job
+       timeout = [MIN_LOCK_TIMEOUT, [MAX_LOCK_TIMEOUT, worker.polling_interval].min].max
+
+       with_global_lock timeout: timeout do
+         Workhorse.tx_callback.call do
+           # As we are the only thread posting into the worker pool, it is
+           # safe to get the number of idle threads without mutex
+           # synchronization. The actual number of idle workers at time of
+           # posting can only be larger than or equal to the number we get
+           # here.
+           idle = worker.idle
+
+           worker.log "Polling DB for jobs (#{idle} available threads)...", :debug
+
+           unless idle.zero?
+             jobs = queued_db_jobs(idle)
+             jobs.each do |job|
+               worker.log "Marking job #{job.id} as locked", :debug
+               job.mark_locked!(worker.id)
+               worker.perform job
+             end
            end
          end
        end
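The `timeout` computation at the top of the new `poll` clamps the worker's polling interval into the `[MIN_LOCK_TIMEOUT, MAX_LOCK_TIMEOUT]` band, so the global lock is never waited on for more than one second nor for less than 100 ms. Extracted as a standalone sketch:

```ruby
MIN_LOCK_TIMEOUT = 0.1 # In seconds
MAX_LOCK_TIMEOUT = 1.0 # In seconds

# Clamp the polling interval into [MIN_LOCK_TIMEOUT, MAX_LOCK_TIMEOUT],
# exactly as done at the top of the new poll method.
def lock_timeout(polling_interval)
  [MIN_LOCK_TIMEOUT, [MAX_LOCK_TIMEOUT, polling_interval].min].max
end

lock_timeout(0.05) # => 0.1 (never below the minimum)
lock_timeout(60)   # => 1.0 (never above the maximum)
lock_timeout(0.5)  # => 0.5 (short intervals pass through)
```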
@@ -86,16 +128,6 @@
 
      # Returns an Array of #{Workhorse::DbJob}s that can be started
      def queued_db_jobs(limit)
-       # ---------------------------------------------------------------
-       # Lock all queued jobs that are waiting
-       # ---------------------------------------------------------------
-       Workhorse::DbJob.connection.execute(
-         Workhorse::DbJob.select('null').where(
-           table[:queue].not_eq(nil)
-             .and(table[:state].eq(:waiting))
-         ).lock.to_sql
-       )
-
        # ---------------------------------------------------------------
        # Select jobs to execute
        # ---------------------------------------------------------------
@@ -147,20 +179,6 @@ module Workhorse
        # Limit number of records
        select = agnostic_limit(select, limit)
 
-       # Wrap the entire query in an other subselect to enable locking under
-       # Oracle SQL. As MySQL is able to lock the records without this additional
-       # complication, only do this when using the Oracle backend.
-       if @is_oracle
-         if AREL_GTE_7
-           select = Arel::SelectManager.new(Arel.sql('(' + select.to_sql + ')'))
-         else
-           select = Arel::SelectManager.new(ActiveRecord::Base, Arel.sql('(' + select.to_sql + ')'))
-         end
-         select = table.project(Arel.star).where(table[:id].in(select.project(:id)))
-       end
-
-       select = select.lock
-
        return Workhorse::DbJob.find_by_sql(select.to_sql).to_a
      end
@@ -214,7 +232,7 @@ module Workhorse
          .where(table[:state].in(bad_states))
        # .distinct is not chainable in older Arel versions
        bad_queues_select.distinct
-       select = select.where(table[:queue].not_in(bad_queues_select))
+       select = select.where(table[:queue].not_in(bad_queues_select).or(table[:queue].eq(nil)))
 
        # Restrict queues to valid ones as indicated by the options given to the
        # worker
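The one-line change above fixes the queueless-jobs bug from the changelog: a `queue NOT IN (bad_queues)` condition also filters out rows with a NULL queue, since NULL comparisons never evaluate to true in SQL. The intended logic, modeled in plain Ruby:

```ruby
# Plain-Ruby model of the fixed filter: a job with a nil queue is always
# eligible; a named queue is eligible only if it is not currently locked.
def eligible?(job_queue, bad_queues)
  job_queue.nil? || !bad_queues.include?(job_queue)
end

eligible?(nil, ['a']) # => true  (queueless jobs are no longer excluded)
eligible?('a', ['a']) # => false (a stuck named queue stays sequential)
eligible?('b', ['a']) # => true
```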
data/test/lib/test_helper.rb CHANGED
@@ -1,6 +1,8 @@
  require 'minitest/autorun'
  require 'active_record'
  require 'active_job'
+ require 'pry'
+ require 'colorize'
  require 'mysql2'
  require 'benchmark'
  require 'jobs'
@@ -40,7 +42,14 @@ class WorkhorseTest < ActiveSupport::TestCase
    end
  end
 
- ActiveRecord::Base.establish_connection adapter: 'mysql2', database: 'workhorse', username: 'travis', password: '', pool: 10, host: :localhost
+ ActiveRecord::Base.establish_connection(
+   adapter: 'mysql2',
+   database: ENV['DB_NAME'] || 'workhorse',
+   username: ENV['DB_USERNAME'] || 'root',
+   password: ENV['DB_PASSWORD'] || '',
+   host: ENV['DB_HOST'] || '127.0.0.1',
+   pool: 10
+ )
 
  require 'db_schema'
  require 'workhorse'
data/test/workhorse/poller_test.rb CHANGED
@@ -48,6 +48,24 @@ class Workhorse::PollerTest < WorkhorseTest
    assert_equal %w[q1 q2], w.poller.send(:valid_queues)
  end
 
+ def test_valid_queues
+   w = Workhorse::Worker.new(polling_interval: 60)
+
+   assert_equal [], w.poller.send(:valid_queues)
+
+   Workhorse.enqueue BasicJob.new(sleep_time: 2), queue: nil
+
+   assert_equal [nil], w.poller.send(:valid_queues)
+
+   a_job = Workhorse.enqueue BasicJob.new(sleep_time: 2), queue: :a
+
+   assert_equal [nil, 'a'], w.poller.send(:valid_queues)
+
+   a_job.update_attribute :state, :locked
+
+   assert_equal [nil], w.poller.send(:valid_queues)
+ end
+
  def test_no_queues
    w = Workhorse::Worker.new(polling_interval: 60)
    assert_equal [], w.poller.send(:valid_queues)
@@ -96,6 +114,67 @@ class Workhorse::PollerTest < WorkhorseTest
    assert_equal 1, Workhorse::DbJob.where(state: :succeeded).count
  end
 
+ def test_already_locked_issue
+   # Create 100 jobs
+   100.times do |i|
+     Workhorse.enqueue BasicJob.new(some_param: i, sleep_time: 0)
+   end
+
+   # Create 25 worker processes that work for 10s each
+   25.times do
+     Process.fork do
+       work 10, pool_size: 1, polling_interval: 0.1
+     end
+   end
+
+   # Create an additional 100 jobs that are scheduled while the workers are
+   # already polling (to make sure those are picked up as well)
+   100.times do
+     sleep 0.05
+     Workhorse.enqueue BasicJob.new(sleep_time: 0)
+   end
+
+   # Wait for all forked processes to finish (should take ~10s)
+   Process.waitall
+
+   total = Workhorse::DbJob.count
+   succeeded = Workhorse::DbJob.succeeded.count
+   used_workers = Workhorse::DbJob.lock.pluck(:locked_by).uniq.size
+
+   # Make sure there are 200 jobs, all jobs have succeeded and all of the
+   # workers have had their turn.
+   assert_equal 200, total
+   assert_equal 200, succeeded
+   assert_equal 25, used_workers
+ end
+
+ def test_connection_loss
+   $thread_conn = nil
+
+   Workhorse.enqueue BasicJob.new(sleep_time: 3)
+
+   t = Thread.new do
+     w = Workhorse::Worker.new(pool_size: 5, polling_interval: 0.1)
+     w.start
+
+     sleep 0.5
+
+     w.poller.define_singleton_method :poll do
+       fail ActiveRecord::StatementInvalid, 'Mysql2::Error: Connection was killed'
+     end
+
+     w.wait
+   end
+
+   assert_nothing_raised do
+     Timeout.timeout(6) do
+       t.join
+     end
+   end
+
+   assert_equal 1, Workhorse::DbJob.succeeded.count
+ end
+
  private
 
  def setup
data/workhorse.gemspec CHANGED
@@ -1,15 +1,15 @@
  # -*- encoding: utf-8 -*-
- # stub: workhorse 0.6.5 ruby lib
+ # stub: workhorse 1.0.0.beta0 ruby lib
 
  Gem::Specification.new do |s|
  s.name = "workhorse".freeze
- s.version = "0.6.5"
+ s.version = "1.0.0.beta0"
 
- s.required_rubygems_version = Gem::Requirement.new(">= 0".freeze) if s.respond_to? :required_rubygems_version=
+ s.required_rubygems_version = Gem::Requirement.new("> 1.3.1".freeze) if s.respond_to? :required_rubygems_version=
  s.require_paths = ["lib".freeze]
  s.authors = ["Sitrox".freeze]
- s.date = "2020-03-18"
- s.files = [".gitignore".freeze, ".releaser_config".freeze, ".rubocop.yml".freeze, ".travis.yml".freeze, "CHANGELOG.md".freeze, "FAQ.md".freeze, "Gemfile".freeze, "LICENSE".freeze, "README.md".freeze, "RUBY_VERSION".freeze, "Rakefile".freeze, "VERSION".freeze, "bin/rubocop".freeze, "lib/active_job/queue_adapters/workhorse_adapter.rb".freeze, "lib/generators/workhorse/install_generator.rb".freeze, "lib/generators/workhorse/templates/bin/workhorse.rb".freeze, "lib/generators/workhorse/templates/config/initializers/workhorse.rb".freeze, "lib/generators/workhorse/templates/create_table_jobs.rb".freeze, "lib/workhorse.rb".freeze, "lib/workhorse/daemon.rb".freeze, "lib/workhorse/daemon/shell_handler.rb".freeze, "lib/workhorse/db_job.rb".freeze, "lib/workhorse/enqueuer.rb".freeze, "lib/workhorse/jobs/cleanup_succeeded_jobs.rb".freeze, "lib/workhorse/jobs/run_active_job.rb".freeze, "lib/workhorse/jobs/run_rails_op.rb".freeze, "lib/workhorse/performer.rb".freeze, "lib/workhorse/poller.rb".freeze, "lib/workhorse/pool.rb".freeze, "lib/workhorse/scoped_env.rb".freeze, "lib/workhorse/worker.rb".freeze, "test/active_job/queue_adapters/workhorse_adapter_test.rb".freeze, "test/lib/db_schema.rb".freeze, "test/lib/jobs.rb".freeze, "test/lib/test_helper.rb".freeze, "test/workhorse/db_job_test.rb".freeze, "test/workhorse/enqueuer_test.rb".freeze, "test/workhorse/performer_test.rb".freeze, "test/workhorse/poller_test.rb".freeze, "test/workhorse/pool_test.rb".freeze, "test/workhorse/worker_test.rb".freeze, "workhorse.gemspec".freeze]
+ s.date = "2020-08-19"
+ s.files = [".gitignore".freeze, ".releaser_config".freeze, ".rubocop.yml".freeze, ".travis.yml".freeze, "CHANGELOG.md".freeze, "FAQ.md".freeze, "Gemfile".freeze, "LICENSE".freeze, "README.md".freeze, "RUBY_VERSION".freeze, "Rakefile".freeze, "VERSION".freeze, "bin/rubocop".freeze, "lib/active_job/queue_adapters/workhorse_adapter.rb".freeze, "lib/generators/workhorse/install_generator.rb".freeze, "lib/generators/workhorse/templates/bin/workhorse.rb".freeze, "lib/generators/workhorse/templates/config/initializers/workhorse.rb".freeze, "lib/generators/workhorse/templates/create_table_jobs.rb".freeze, "lib/workhorse.rb".freeze, "lib/workhorse/daemon.rb".freeze, "lib/workhorse/daemon/shell_handler.rb".freeze, "lib/workhorse/db_job.rb".freeze, "lib/workhorse/enqueuer.rb".freeze, "lib/workhorse/jobs/cleanup_succeeded_jobs.rb".freeze, "lib/workhorse/jobs/detect_stale_jobs_job.rb".freeze, "lib/workhorse/jobs/run_active_job.rb".freeze, "lib/workhorse/jobs/run_rails_op.rb".freeze, "lib/workhorse/performer.rb".freeze, "lib/workhorse/poller.rb".freeze, "lib/workhorse/pool.rb".freeze, "lib/workhorse/scoped_env.rb".freeze, "lib/workhorse/worker.rb".freeze, "test/active_job/queue_adapters/workhorse_adapter_test.rb".freeze, "test/lib/db_schema.rb".freeze, "test/lib/jobs.rb".freeze, "test/lib/test_helper.rb".freeze, "test/workhorse/db_job_test.rb".freeze, "test/workhorse/enqueuer_test.rb".freeze, "test/workhorse/performer_test.rb".freeze, "test/workhorse/poller_test.rb".freeze, "test/workhorse/pool_test.rb".freeze, "test/workhorse/worker_test.rb".freeze, "workhorse.gemspec".freeze]
  s.rubygems_version = "3.0.3".freeze
  s.summary = "Multi-threaded job backend with database queuing for ruby.".freeze
  s.test_files = ["test/active_job/queue_adapters/workhorse_adapter_test.rb".freeze, "test/lib/db_schema.rb".freeze, "test/lib/jobs.rb".freeze, "test/lib/test_helper.rb".freeze, "test/workhorse/db_job_test.rb".freeze, "test/workhorse/enqueuer_test.rb".freeze, "test/workhorse/performer_test.rb".freeze, "test/workhorse/poller_test.rb".freeze, "test/workhorse/pool_test.rb".freeze, "test/workhorse/worker_test.rb".freeze]
@@ -26,6 +26,7 @@ Gem::Specification.new do |s|
  s.add_development_dependency(%q<colorize>.freeze, [">= 0"])
  s.add_development_dependency(%q<benchmark-ips>.freeze, [">= 0"])
  s.add_development_dependency(%q<activejob>.freeze, [">= 0"])
+ s.add_development_dependency(%q<pry>.freeze, [">= 0"])
  s.add_runtime_dependency(%q<activesupport>.freeze, [">= 0"])
  s.add_runtime_dependency(%q<activerecord>.freeze, [">= 0"])
  s.add_runtime_dependency(%q<schemacop>.freeze, ["~> 2.0"])
@@ -39,6 +40,7 @@ Gem::Specification.new do |s|
  s.add_dependency(%q<colorize>.freeze, [">= 0"])
  s.add_dependency(%q<benchmark-ips>.freeze, [">= 0"])
  s.add_dependency(%q<activejob>.freeze, [">= 0"])
+ s.add_dependency(%q<pry>.freeze, [">= 0"])
  s.add_dependency(%q<activesupport>.freeze, [">= 0"])
  s.add_dependency(%q<activerecord>.freeze, [">= 0"])
  s.add_dependency(%q<schemacop>.freeze, ["~> 2.0"])
@@ -53,6 +55,7 @@ Gem::Specification.new do |s|
  s.add_dependency(%q<colorize>.freeze, [">= 0"])
  s.add_dependency(%q<benchmark-ips>.freeze, [">= 0"])
  s.add_dependency(%q<activejob>.freeze, [">= 0"])
+ s.add_dependency(%q<pry>.freeze, [">= 0"])
  s.add_dependency(%q<activesupport>.freeze, [">= 0"])
  s.add_dependency(%q<activerecord>.freeze, [">= 0"])
  s.add_dependency(%q<schemacop>.freeze, ["~> 2.0"])
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: workhorse
  version: !ruby/object:Gem::Version
- version: 0.6.5
+ version: 1.0.0.beta0
  platform: ruby
  authors:
  - Sitrox
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2020-03-18 00:00:00.000000000 Z
+ date: 2020-08-19 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: bundler
@@ -122,6 +122,20 @@ dependencies:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
+ - !ruby/object:Gem::Dependency
+ name: pry
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
+ type: :development
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
  - !ruby/object:Gem::Dependency
  name: activesupport
  requirement: !ruby/object:Gem::Requirement
@@ -208,6 +222,7 @@ files:
  - lib/workhorse/db_job.rb
  - lib/workhorse/enqueuer.rb
  - lib/workhorse/jobs/cleanup_succeeded_jobs.rb
+ - lib/workhorse/jobs/detect_stale_jobs_job.rb
  - lib/workhorse/jobs/run_active_job.rb
  - lib/workhorse/jobs/run_rails_op.rb
  - lib/workhorse/performer.rb
@@ -240,9 +255,9 @@ required_ruby_version: !ruby/object:Gem::Requirement
  version: '0'
  required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
- - - ">="
+ - - ">"
  - !ruby/object:Gem::Version
- version: '0'
+ version: 1.3.1
  requirements: []
  rubygems_version: 3.1.2
  signing_key: