queue_classic_plus 1.0.0.alpha2 → 4.0.0.alpha8

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
- SHA1:
- metadata.gz: f5b1934a03303e576a999e71f978a68da13f0298
- data.tar.gz: d1de5250aa7b2ddfe4114043d87e5a95986145f8
+ SHA256:
+ metadata.gz: 69af5e1cbbcc08c7cd0acad4c0f06e112a9fff09756a06c9ae2ffd80a71ab39a
+ data.tar.gz: cd3cc486050b9de66397099f81e0277aac83f5469bb925456527b331dec85a6a
  SHA512:
- metadata.gz: ebadc9325c0307789f6a6216047a2c2cfe240d4364b613ee9eefc9a479224a9196813146b90ed251124cd5ad7a4966556c8fca8e1b48a490cfd806bf95124ff3
- data.tar.gz: 619c736ed7b74f94841328acfa27926e2799fecc6589974d9b79de236c1f396ef126d1e80205df223e626536b69c1c0a834014e09e190b000635e571c2a8e677
+ metadata.gz: 8d3980cb5a576ff5681812a2e5d7497c35ff98dc84934778f4dd3c867643e10e9d87a38499bad6850cc72d1c4cd519a91a2509011f8ad916e78b10be3c2e076e
+ data.tar.gz: c31d0baa284b76731b268e29ab21b3ae0e679f7404345d3b449fb755a34df11100d6f6790a6f9e0dfb373ed908a8fa365575d88184f96d28a079270be5e67605
@@ -0,0 +1,67 @@
+ version: 2.1
+
+ jobs:
+   test:
+     docker:
+       - image: circleci/ruby:2.7.4-node
+         auth:
+           username: $DOCKERHUB_USERNAME
+           password: $DOCKERHUB_TOKEN
+         environment:
+           DATABASE_URL: postgres://circleci:circleci@127.0.0.1:5432/queue_classic_plus_test
+       - image: circleci/postgres:9.6.6-alpine
+         auth:
+           username: $DOCKERHUB_USERNAME
+           password: $DOCKERHUB_TOKEN
+         environment:
+           POSTGRES_USER: circleci
+           POSTGRES_PASSWORD: circleci
+           POSTGRES_DB: queue_classic_plus_test
+     steps:
+       - checkout
+       - run:
+           name: run tests
+           command: |
+             bundle check --path=vendor/bundle || bundle install --path=vendor/bundle --jobs=4 --retry=3
+             bundle exec rspec
+
+   push_to_rubygems:
+     docker:
+       - image: circleci/ruby:2.7.4
+         auth:
+           username: $DOCKERHUB_USERNAME
+           password: $DOCKERHUB_TOKEN
+     steps:
+       - checkout
+       - run:
+           name: Create .gem/credentials file
+           command: |
+             mkdir ~/.gem
+             echo "---
+             :rubygems_api_key: $RUBYGEMS_API_KEY
+             " > ~/.gem/credentials
+             chmod 600 ~/.gem/credentials
+       - run:
+           name: Release to rubygems
+           command: |
+             gem build queue_classic_plus
+             gem push queue_classic_plus-*.gem
+
+ workflows:
+   version: 2
+   gem_release:
+     jobs:
+       - test:
+           context:
+             - DockerHub
+
+       - push_to_rubygems:
+           filters:
+             branches:
+               ignore:
+                 - /.*/
+             tags:
+               only:
+                 - /^v.*/
+           context:
+             - DockerHub
@@ -0,0 +1,8 @@
+ version: 2
+ updates:
+   - package-ecosystem: bundler
+     directory: "/"
+     schedule:
+       interval: daily
+       time: "13:00"
+     open-pull-requests-limit: 10
data/.gitignore CHANGED
@@ -1,6 +1,7 @@
  *.gem
  *.rbc
  .bundle
+ .byebug_history
  .config
  .yardoc
  Gemfile.lock
@@ -9,6 +10,7 @@ _yardoc
  coverage
  doc/
  lib/bundler/man
+ log/
  pkg
  rdoc
  spec/reports
@@ -21,3 +23,4 @@ tmp
  *.a
  mkmf.log
  tags
+ .project
data/Gemfile CHANGED
@@ -12,6 +12,10 @@ group :development do
  end

  group :test do
+ gem 'byebug'
+ gem 'rake'
  gem 'rspec'
  gem 'timecop'
+ gem 'newrelic_rpm'
+ gem 'ddtrace'
  end
data/README.md CHANGED
@@ -1,12 +1,12 @@
  # QueueClassicPlus

- [![Build Status](https://travis-ci.org/rainforestapp/queue_classic_plus.svg?branch=master)](https://travis-ci.org/rainforestapp/queue_classic_plus)
+ [![rainforestapp](https://circleci.com/gh/rainforestapp/queue_classic_plus.svg?branch=master)](https://app.circleci.com/pipelines/github/rainforestapp/queue_classic_plus?branch=master)

- [QueueClassic](https://github.com/QueueClassic/queue_classic) is a simple Postgresql back DB queue. However, it's a little too simple to use it as the main queueing system of a medium to large app.
+ [queue_classic](https://github.com/QueueClassic/queue_classic) is a simple Postgresql backed DB queue. However, it's a little too simple to use it as the main queueing system of a medium to large app. This was developed at [Rainforest QA](https://www.rainforestqa.com/).

  QueueClassicPlus adds many lacking features to QueueClassic.

- - Standard job format
+ - Standardized job format
  - Retry on specific exceptions
  - Singleton jobs
  - Metrics
@@ -101,11 +101,21 @@ Jobs::UpdateMetrics.do 'type_a' # does not enqueues job since it's already queue
  Jobs::UpdateMetrics.do 'type_b' # enqueues job as the arguments are different.
  ```

+ #### Transactions
+
+ By default, all QueueClassicPlus jobs are executed in a PostgreSQL
+ transaction. This decision was made because most jobs are usually
+ pretty small and it's preferable to have all the benefits of the
+ transaction. You can optionally specify a postgres statement timeout
+ (in seconds) for all transactions with the environment variable
+ `POSTGRES_STATEMENT_TIMEOUT`.
+
+ You can disable this feature on a per job basis in the following way:
+
  ```ruby
  class Jobs::NoTransaction < QueueClassicPlus::Base
  # Don't run the perform method in a transaction
  skip_transaction!
-
  @queue = :low

  def self.perform(user_id)
@@ -114,19 +124,13 @@ class Jobs::NoTransaction < QueueClassicPlus::Base
  end
  ```

- #### Transaction
-
- By default, all QueueClassicPlus jobs are executed in a PostgreSQL transaction. This decision was made because most jobs are usually pretty small and it's preferable to have all the benefits of the transaction.
-
- You can disable this feature on a per job basis in the follwing way:
-
  ## Advanced configuration

  If you want to log exceptions in your favorite exception tracker. You can configured it like sso:

  ```ruby
  QueueClassicPlus.exception_handler = -> (exception, job) do
- Raven.capture_exception(exception, extra: {job: job, env: ENV})
+ Sentry.capture_exception(exception, extra: { job: job, env: ENV })
  end
  ```

@@ -146,6 +150,12 @@ If you are using NewRelic and want to push performance data to it, you can add t
  require "queue_classic_plus/new_relic"
  ```

+ To instrument DataDog monitoring add this to your QC initializer:
+
+ ```ruby
+ require "queue_classic_plus/datadog"
+ ```
+
  ## Contributing

  1. Fork it ( https://github.com/[my-github-username]/queue_classic_plus/fork )
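The `POSTGRES_STATEMENT_TIMEOUT` behaviour described in the new Transactions section is implemented in the `_perform` change further down in this diff. As a rough sketch of the effect (assuming an ActiveRecord connection; `Jobs::UpdateMetrics` is the example job from the README), each job body effectively runs as:

```ruby
# Sketch only: the variable is read in seconds and handed to Postgres in
# milliseconds; when it is unset, `to_i` yields 0, which Postgres treats
# as "no timeout".
timeout_ms = ENV['POSTGRES_STATEMENT_TIMEOUT'].to_i * 1000

ActiveRecord::Base.transaction do
  # SET LOCAL is scoped to the surrounding transaction, so the session's
  # statement_timeout stays untouched for other queries.
  ActiveRecord::Base.connection.execute("SET LOCAL statement_timeout = #{timeout_ms}")
  Jobs::UpdateMetrics.perform('type_a') # the job body
end
```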
@@ -10,11 +10,16 @@ module QueueClassicPlus
  inheritable_attr :skip_transaction
  inheritable_attr :retries_on
  inheritable_attr :max_retries
+ inheritable_attr :disable_retries

- self.max_retries = 0
+ self.max_retries = 5
  self.retries_on = {}
+ self.disable_retries = false

  def self.retry!(on: RuntimeError, max: 5)
+ if self.disable_retries
+ raise 'retry! should not be used in conjuction with disable_retries!'
+ end
  Array(on).each {|e| self.retries_on[e] = true}
  self.max_retries = max
  end
@@ -23,6 +28,14 @@ module QueueClassicPlus
  self.retries_on[exception.class] || self.retries_on.keys.any? {|klass| exception.is_a? klass}
  end

+ def self.disable_retries!
+ unless self.retries_on.empty?
+ raise 'disable_retries! should not be enabled in conjunction with retry!'
+ end
+
+ self.disable_retries = true
+ end
+
  def self.lock!
  self.locked = true
  end
@@ -54,7 +67,7 @@ module QueueClassicPlus
  )
  AS x"

- result = QC.default_conn_adapter.execute(q, @queue, method, args.to_json)
+ result = QC.default_conn_adapter.execute(q, @queue, method, JSON.dump(serialized(args)))
  result['count'].to_i == 0
  else
  true
@@ -63,7 +76,7 @@ module QueueClassicPlus

  def self.enqueue(method, *args)
  if can_enqueue?(method, *args)
- queue.enqueue(method, *args)
+ queue.enqueue(method, *serialized(args))
  end
  end

@@ -73,11 +86,11 @@ module QueueClassicPlus

  def self.enqueue_perform_in(time, *args)
  raise "Can't enqueue in the future for locked jobs" if locked?
- queue.enqueue_in(time, "#{self.to_s}._perform", *args)
+ queue.enqueue_in(time, "#{self.to_s}._perform", *serialized(args))
  end

  def self.restart_in(time, remaining_retries, *args)
- queue.enqueue_retry_in(time, "#{self.to_s}._perform", remaining_retries, *args)
+ queue.enqueue_retry_in(time, "#{self.to_s}._perform", remaining_retries, *serialized(args))
  end

  def self.do(*args)
@@ -89,10 +102,13 @@ module QueueClassicPlus
  def self._perform(*args)
  Metrics.timing("qu_perform_time", source: librato_key) do
  if skip_transaction
- perform *args
+ perform(*deserialized(args))
  else
  transaction do
- perform *args
+ # .to_i defaults to 0, which means no timeout in postgres
+ timeout = ENV['POSTGRES_STATEMENT_TIMEOUT'].to_i * 1000
+ execute "SET LOCAL statement_timeout = #{timeout}"
+ perform(*deserialized(args))
  end
  end
  end
@@ -103,7 +119,7 @@ module QueueClassicPlus
  end

  def self.transaction(options = {}, &block)
- if defined?(ActiveRecord)
+ if defined?(ActiveRecord) && ActiveRecord::Base.connected?
  # If ActiveRecord is loaded, we use it's own transaction mechanisn since
  # it has slightly different semanctics for rollback.
  ActiveRecord::Base.transaction(options, &block)
@@ -126,7 +142,26 @@ module QueueClassicPlus
  execute q
  end

+ protected
+
+ def self.serialized(args)
+ if defined?(Rails)
+ ActiveJob::Arguments.serialize(args)
+ else
+ args
+ end
+ end
+
+ def self.deserialized(args)
+ if defined?(Rails)
+ ActiveJob::Arguments.deserialize(args)
+ else
+ args
+ end
+ end
+
  private
+
  def self.execute(sql, *args)
  QC.default_conn_adapter.execute(sql, *args)
  end
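Taken together, the changes above mean a job class declares its retry policy with `retry!` or opts out with `disable_retries!` (mixing the two raises), and job arguments round-trip through `ActiveJob::Arguments` whenever Rails is loaded. A minimal sketch with hypothetical job classes:

```ruby
# Hypothetical examples; only retry!, disable_retries! and the
# @queue/self.perform conventions come from the library itself.
class Jobs::SyncAccount < QueueClassicPlus::Base
  @queue = :low

  # Retry on the listed exceptions, up to 3 times, with linear backoff.
  retry!(on: [Timeout::Error, PG::ConnectionBad], max: 3)

  def self.perform(account_id)
    # ...
  end
end

class Jobs::SendReceipt < QueueClassicPlus::Base
  @queue = :low

  # Never re-enqueue on failure; also calling retry! here would raise.
  disable_retries!

  def self.perform(order_id)
    # ...
  end
end
```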
@@ -0,0 +1,11 @@
+ # frozen_string_literal: true
+
+ module QueueClassicDatadog
+ def _perform(*args)
+ Datadog.tracer.trace('qc.job', service_name: 'qc.job', resource: "#{name}#perform") do |_|
+ super
+ end
+ end
+
+ QueueClassicPlus::Base.singleton_class.send(:prepend, QueueClassicDatadog)
+ end
@@ -28,7 +28,9 @@ module QueueClassicPlus
  end

  def self.uncloneable
- [Symbol, TrueClass, FalseClass, NilClass]
+ tmp = [Symbol, TrueClass, FalseClass, NilClass]
+ tmp += [Fixnum, Bignum] if RUBY_VERSION < '2.4.0'
+ tmp
  end
  end
  end
@@ -7,7 +7,7 @@ module QueueClassicPlus

  class Metrics
  def self.timing(*args, &block)
- provider.timing *args, &block
+ provider.timing(*args, &block)
  end

  def self.increment(*args)
@@ -1,30 +1,29 @@
  require 'new_relic/agent/method_tracer'

- QueueClassicPlus::Base.class_eval do
- class << self
- include NewRelic::Agent::Instrumentation::ControllerInstrumentation
+ module QueueClassicNewRelic
+ include NewRelic::Agent::Instrumentation::ControllerInstrumentation

- def new_relic_key
- "Custom/QueueClassicPlus/#{librato_key}"
- end
+ def new_relic_key
+ "Custom/QueueClassicPlus/#{librato_key}"
+ end

- def _perform_with_new_relic(*args)
- opts = {
- name: 'perform',
- class_name: self.name,
- category: 'OtherTransaction/QueueClassicPlus',
- }
+ def _perform(*args)
+ opts = {
+ name: 'perform',
+ class_name: self.name,
+ category: 'OtherTransaction/QueueClassicPlus',
+ }

- perform_action_with_newrelic_trace(opts) do
- if NewRelic::Agent.config[:'queue_classic_plus.capture_params']
- NewRelic::Agent.add_custom_parameters(job_arguments: args)
- end
- _perform_without_new_relic *args
+ perform_action_with_newrelic_trace(opts) do
+ if NewRelic::Agent.config[:'queue_classic_plus.capture_params']
+ NewRelic::Agent.add_custom_parameters(job_arguments: args)
  end
- end

- alias_method_chain :_perform, :new_relic
+ super
+ end
  end
+
+ QueueClassicPlus::Base.singleton_class.send(:prepend, QueueClassicNewRelic)
  end

  QueueClassicPlus::CustomWorker.class_eval do
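The NewRelic hook above moves from `alias_method_chain` (deprecated and later removed from ActiveSupport) to `Module#prepend` on the singleton class, the same pattern the new DataDog module uses. A minimal, hypothetical illustration of the pattern:

```ruby
# Prepending a module onto the singleton class wraps class-level methods;
# `super` calls through to the original _perform. TimingHook and SomeJob
# are placeholders, not part of the gem.
module TimingHook
  def _perform(*args)
    started_at = Time.now
    super
  ensure
    puts "#{name}._perform took #{Time.now - started_at}s"
  end
end

SomeJob.singleton_class.prepend(TimingHook)
```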
@@ -1,11 +1,54 @@
  module QC
  class Queue
+
  def enqueue_retry_in(seconds, method, remaining_retries, *args)
  QC.log_yield(:measure => 'queue.enqueue') do
- s = "INSERT INTO #{TABLE_NAME} (q_name, method, args, scheduled_at, remaining_retries)
+ s = "INSERT INTO #{QC.table_name} (q_name, method, args, scheduled_at, remaining_retries)
  VALUES ($1, $2, $3, now() + interval '#{seconds.to_i} seconds', $4)"
- res = conn_adapter.execute(s, name, method, JSON.dump(args), remaining_retries)
+
+ conn_adapter.execute(s, name, method, JSON.dump(args), remaining_retries)
  end
  end
+
+ def lock
+ QC.log_yield(:measure => 'queue.lock') do
+ s = <<~SQL
+ WITH selected_job AS (
+ SELECT id
+ FROM queue_classic_jobs
+ WHERE
+ locked_at IS NULL AND
+ q_name = $1 AND
+ scheduled_at <= now()
+ LIMIT 1
+ FOR NO KEY UPDATE SKIP LOCKED
+ )
+ UPDATE queue_classic_jobs
+ SET
+ locked_at = now(),
+ locked_by = pg_backend_pid()
+ FROM selected_job
+ WHERE queue_classic_jobs.id = selected_job.id
+ RETURNING *
+ SQL
+
+ if r = conn_adapter.execute(s, name)
+ {}.tap do |job|
+ job[:id] = r["id"]
+ job[:q_name] = r["q_name"]
+ job[:method] = r["method"]
+ job[:args] = JSON.parse(r["args"])
+ job[:remaining_retries] = r["remaining_retries"]&.to_s
+ if r["scheduled_at"]
+ # ActiveSupport may cast time strings to Time
+ job[:scheduled_at] = r["scheduled_at"].kind_of?(Time) ? r["scheduled_at"] : Time.parse(r["scheduled_at"])
+ ttl = Integer((Time.now - job[:scheduled_at]) * 1000)
+ QC.measure("time-to-lock=#{ttl}ms source=#{name}")
+ end
+ end
+ end
+ end
+ end
+
  end
- end
+ end
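The new `lock` claims one runnable job atomically: `FOR NO KEY UPDATE SKIP LOCKED` lets concurrent workers skip rows another backend already holds instead of blocking on them, and the `UPDATE ... RETURNING *` stamps `locked_at`/`locked_by` in the same statement. A sketch of how a caller might consume the result (assuming a configured queue_classic connection; `"default"` is an example queue name):

```ruby
queue = QC::Queue.new("default")

if (job = queue.lock)
  # The patch above builds a Hash with :id, :q_name, :method, :args,
  # :remaining_retries and, when present, :scheduled_at.
  puts "locked job #{job[:id]}: #{job[:method]}(#{job[:args].inspect})"
else
  puts "no runnable job in the queue"
end
```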
@@ -2,13 +2,26 @@ namespace :qc_plus do
  desc "Start a new worker for the (default or $QUEUE) queue"
  task :work => :environment do
  puts "Starting up worker for queue #{ENV['QUEUE']}"
+
+ # ActiveRecord::RecordNotFound is ignored by Sentry by default,
+ # which shouldn't happen in background jobs.
+ if defined?(Sentry)
+ Sentry.init do |config|
+ config.excluded_exceptions = []
+ config.background_worker_threads = 0 if Gem::Version.new(Sentry::VERSION) >= Gem::Version.new('4.1.0')
+ end
+ elsif defined?(Raven)
+ Raven.configure do |config|
+ config.excluded_exceptions = []
+ end
+ end
+
  @worker = QueueClassicPlus::CustomWorker.new

  trap('INT') do
  $stderr.puts("Received INT. Shutting down.")
  if !@worker.running
- $stderr.puts("Worker has stopped running. Exit.")
- exit(1)
+ $stderr.puts("Worker has already stopped running.")
  end
  @worker.stop
  end
@@ -19,5 +32,6 @@ namespace :qc_plus do
  end

  @worker.start
+ $stderr.puts 'Shut down successfully'
  end
  end
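As before, a worker is started through this task, e.g. `QUEUE=low bundle exec rake qc_plus:work` (the queue name is an example); the Sentry/Raven reconfiguration above only runs when one of those constants is already loaded by the host application.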
@@ -1,3 +1,3 @@
  module QueueClassicPlus
- VERSION = "1.0.0.alpha2"
+ VERSION = '4.0.0.alpha8'.freeze
  end
@@ -1,47 +1,89 @@
+ require 'pg'
+ require 'queue_classic'
+
  module QueueClassicPlus
  class CustomWorker < QC::Worker
+ CONNECTION_ERRORS = [PG::UnableToSend, PG::ConnectionBad].freeze
  BACKOFF_WIDTH = 10
  FailedQueue = QC::Queue.new("failed_jobs")

- def enqueue_failed(job, e)
- sql = "INSERT INTO #{QC::TABLE_NAME} (q_name, method, args, last_error) VALUES ('failed_jobs', $1, $2, $3)"
- last_error = e.backtrace ? ([e.message] + e.backtrace ).join("\n") : e.message
- QC.default_conn_adapter.execute sql, job[:method], JSON.dump(job[:args]), last_error
+ def handle_failure(job, e)
+ QueueClassicPlus.logger.info "Handling exception #{e.class} - #{e.message} for job #{job[:id]}"

- QueueClassicPlus.exception_handler.call(e, job)
- Metrics.increment("qc.errors", source: @q_name)
- end
+ force_retry = false
+ if connection_error?(e)
+ # If we've got here, unfortunately ActiveRecord's rollback mechanism may
+ # not have kicked in yet and we might be in a failed transaction. To be
+ # *absolutely* sure the retry/failure gets enqueued, we do a rollback
+ # just in case (and if we're not in a transaction it will be a no-op).
+ QueueClassicPlus.logger.info "Reset connection for job #{job[:id]}"
+ @conn_adapter.connection.reset
+ @conn_adapter.execute 'ROLLBACK'

- def handle_failure(job, e)
- QueueClassicPlus.logger.info "Handling exception #{e.message} for job #{job[:id]}"
- klass = job_klass(job)
+ # We definitely want to retry because the connection was lost mid-task.
+ force_retry = true
+ end

+ @failed_job = job
+ @failed_job_args = failed_job_class ? failed_job_class.deserialized(job[:args]) : job[:args]
+
+ if force_retry && !(failed_job_class.respond_to?(:disable_retries) && failed_job_class.disable_retries)
+ Metrics.increment("qc.force_retry", source: @q_name)
+ retry_with_remaining(e)
  # The mailers doesn't have a retries_on?
- if klass && klass.respond_to?(:retries_on?) && klass.retries_on?(e)
- remaining_retries = job[:remaining_retries] || klass.max_retries
- remaining_retries -= 1
-
- if remaining_retries > 0
- klass.restart_in((klass.max_retries - remaining_retries) * BACKOFF_WIDTH,
- remaining_retries,
- *job[:args])
- else
- enqueue_failed(job, e)
- end
+ elsif failed_job_class && failed_job_class.respond_to?(:retries_on?) && failed_job_class.retries_on?(e)
+ Metrics.increment("qc.retry", source: @q_name)
+ retry_with_remaining(e)
  else
- enqueue_failed(job, e)
+ enqueue_failed(e)
  end

- FailedQueue.delete(job[:id])
+ FailedQueue.delete(@failed_job[:id])
  end

  private
- def job_klass(job)
+
+ def retry_with_remaining(e)
+ if remaining_retries > 0
+ failed_job_class.restart_in(backoff, remaining_retries - 1, *@failed_job_args)
+ else
+ enqueue_failed(e)
+ end
+ end
+
+ def max_retries
+ failed_job_class.respond_to?(:max_retries) ? failed_job_class.max_retries : 5
+ end
+
+ def remaining_retries
+ (@failed_job[:remaining_retries] || max_retries).to_i
+ end
+
+ def failed_job_class
  begin
- Object.const_get(job[:method].split('.')[0])
+ Object.const_get(@failed_job[:method].split('.')[0])
  rescue NameError
  nil
  end
  end
+
+ def backoff
+ (max_retries - (remaining_retries - 1)) * BACKOFF_WIDTH
+ end
+
+ def connection_error?(e)
+ CONNECTION_ERRORS.any? { |klass| e.kind_of? klass } ||
+ (e.respond_to?(:original_exception) &&
+ CONNECTION_ERRORS.any? { |klass| e.original_exception.kind_of? klass })
+ end
+
+ def enqueue_failed(e)
+ sql = "INSERT INTO #{QC.table_name} (q_name, method, args, last_error) VALUES ('failed_jobs', $1, $2, $3)"
+ last_error = e.backtrace ? ([e.message] + e.backtrace ).join("\n") : e.message
+ QC.default_conn_adapter.execute sql, @failed_job[:method], JSON.dump(@failed_job_args), last_error
+
+ QueueClassicPlus.exception_handler.call(e, @failed_job)
+ Metrics.increment("qc.errors", source: @q_name)
+ end
  end
  end
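For reference, a worked example of the retry schedule produced by `retry_with_remaining` and `backoff` above, assuming the default `max_retries` of 5 and `BACKOFF_WIDTH` of 10 seconds:

```ruby
# Each failure re-enqueues the job with one fewer remaining retry and a
# linearly growing delay; at 0 remaining retries it goes to failed_jobs.
#
#   failure 1: remaining_retries = 5 -> restart_in 10s, 4 retries left
#   failure 2: remaining_retries = 4 -> restart_in 20s, 3 retries left
#   failure 3: remaining_retries = 3 -> restart_in 30s, 2 retries left
#   failure 4: remaining_retries = 2 -> restart_in 40s, 1 retry left
#   failure 5: remaining_retries = 1 -> restart_in 50s, 0 retries left
#   failure 6: remaining_retries = 0 -> enqueue_failed (failed_jobs queue)
backoff = ->(max_retries, remaining_retries) { (max_retries - (remaining_retries - 1)) * 10 }
backoff.call(5, 5) # => 10
backoff.call(5, 1) # => 50
```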
@@ -14,19 +14,19 @@ module QueueClassicPlus
  require 'queue_classic_plus/railtie' if defined?(Rails)

  def self.migrate(c = QC::default_conn_adapter.connection)
- conn = QC::ConnAdapter.new(c)
+ conn = QC::ConnAdapter.new(connection: c)
  conn.execute("ALTER TABLE queue_classic_jobs ADD COLUMN last_error TEXT")
  conn.execute("ALTER TABLE queue_classic_jobs ADD COLUMN remaining_retries INTEGER")
  end

  def self.demigrate(c = QC::default_conn_adapter.connection)
- conn = QC::ConnAdapter.new(c)
+ conn = QC::ConnAdapter.new(connection: c)
  conn.execute("ALTER TABLE queue_classic_jobs DROP COLUMN last_error")
  conn.execute("ALTER TABLE queue_classic_jobs DROP COLUMN remaining_retries")
  end

  def self.exception_handler
- @exception_handler ||= -> (exception, job) { nil }
+ @exception_handler ||= ->(exception, job) { nil }
  end

  def self.exception_handler=(handler)
@@ -18,7 +18,13 @@ Gem::Specification.new do |spec|
  spec.test_files = spec.files.grep(%r{^(test|spec|features)/})
  spec.require_paths = ["lib"]

- spec.add_dependency "queue_classic", ">= 3.1.0"
- spec.add_development_dependency "bundler", "~> 1.6"
+ spec.add_dependency "queue_classic", "4.0.0.pre.alpha1"
+ if Gem::Version.new(RUBY_VERSION) < Gem::Version.new('2.3.0')
+ spec.add_development_dependency "bundler", "~> 1.6"
+ else
+ spec.add_development_dependency "bundler", "~> 2.0"
+ end
  spec.add_development_dependency "rake"
+ spec.add_development_dependency "activerecord", "~> 6.0"
+ spec.add_development_dependency "activejob"
  end