que 0.10.0 → 0.11.0

@@ -4,60 +4,68 @@ Que does everything it can to ensure that jobs are worked exactly once, but if s
 
  The safest type of job is one that reads in data, either from the database or from external APIs, then does some number crunching and writes the results to the database. These jobs are easy to make safe - simply write the results to the database inside a transaction, and also have the job destroy itself inside that transaction, like so:
 
- class UpdateWidgetPrice < Que::Job
- def run(widget_id)
- widget = Widget[widget_id]
- price = ExternalService.get_widget_price(widget_id)
-
- ActiveRecord::Base.transaction do
- # Make changes to the database.
- widget.update :price => price
-
- # Destroy the job.
- destroy
- end
- end
+ ```ruby
+ class UpdateWidgetPrice < Que::Job
+ def run(widget_id)
+ widget = Widget[widget_id]
+ price = ExternalService.get_widget_price(widget_id)
+
+ ActiveRecord::Base.transaction do
+ # Make changes to the database.
+ widget.update :price => price
+
+ # Destroy the job.
+ destroy
  end
+ end
+ end
+ ```
 
  Here, you're taking advantage of the guarantees of an [ACID](https://en.wikipedia.org/wiki/ACID) database. The job is destroyed along with the other changes, so either the write will succeed and the job will be run only once, or it will fail and the database will be left untouched. But even if it fails, the job can simply be retried, and there are no lingering effects from the first attempt, so no big deal.
 
  The more difficult type of job is one that makes changes that can't be controlled transactionally. For example, writing to an external service:
 
- class ChargeCreditCard < Que::Job
- def run(user_id, credit_card_id)
- CreditCardService.charge(credit_card_id, :amount => "$10.00")
+ ```ruby
+ class ChargeCreditCard < Que::Job
+ def run(user_id, credit_card_id)
+ CreditCardService.charge(credit_card_id, :amount => "$10.00")
 
- ActiveRecord::Base.transaction do
- User.where(:id => user_id).update_all :charged_at => Time.now
- destroy
- end
- end
+ ActiveRecord::Base.transaction do
+ User.where(:id => user_id).update_all :charged_at => Time.now
+ destroy
  end
+ end
+ end
+ ```
 
  What if the process abruptly dies after we tell the provider to charge the credit card, but before we finish the transaction? Que will retry the job, but there's no way to tell where (or even if) it failed the first time. The credit card will be charged a second time, and then you've got an angry customer. The ideal solution in this case is to make the job [idempotent](https://en.wikipedia.org/wiki/Idempotence), meaning that it will have the same effect no matter how many times it is run:
 
- class ChargeCreditCard < Que::Job
- def run(user_id, credit_card_id)
- unless CreditCardService.check_for_previous_charge(credit_card_id)
- CreditCardService.charge(credit_card_id, :amount => "$10.00")
- end
-
- ActiveRecord::Base.transaction do
- User.where(:id => user_id).update_all :charged_at => Time.now
- destroy
- end
- end
+ ```ruby
+ class ChargeCreditCard < Que::Job
+ def run(user_id, credit_card_id)
+ unless CreditCardService.check_for_previous_charge(credit_card_id)
+ CreditCardService.charge(credit_card_id, :amount => "$10.00")
  end
 
+ ActiveRecord::Base.transaction do
+ User.where(:id => user_id).update_all :charged_at => Time.now
+ destroy
+ end
+ end
+ end
+ ```
+
  This makes the job slightly more complex, but reliable (or, at least, as reliable as your credit card service).
 
  Finally, there are some jobs where you won't want to write to the database at all:
 
- class SendVerificationEmail < Que::Job
- def run(email_address)
- Mailer.verification_email(email_address).deliver
- end
- end
+ ```ruby
+ class SendVerificationEmail < Que::Job
+ def run(email_address)
+ Mailer.verification_email(email_address).deliver
+ end
+ end
+ ```
 
  In this case, we don't have any way to prevent the occasional double-sending of an email. But, for ease of use, you can leave out the transaction and the `destroy` call entirely - Que will recognize that the job wasn't destroyed and will clean it up for you.
 
@@ -69,36 +77,40 @@ Que doesn't offer a general way to kill jobs that have been running too long, be
 
  However, if there's part of your job that is prone to hang (due to an API call or other HTTP request that never returns, for example), you can time out those individual parts of your job relatively safely. For example, consider a job that needs to make an HTTP request and then write to the database:
 
- require 'net/http'
+ ```ruby
+ require 'net/http'
 
- class ScrapeStuff < Que::Job
- def run(domain_to_scrape, path_to_scrape)
- result = Net::HTTP.get(domain_to_scrape, path_to_scrape)
+ class ScrapeStuff < Que::Job
+ def run(domain_to_scrape, path_to_scrape)
+ result = Net::HTTP.get(domain_to_scrape, path_to_scrape)
 
- ActiveRecord::Base.transaction do
- # Insert result...
+ ActiveRecord::Base.transaction do
+ # Insert result...
 
- destroy
- end
- end
+ destroy
  end
+ end
+ end
+ ```
 
  That request could take a very long time, or never return at all. Let's wrap it in a five-second timeout:
 
- require 'net/http'
- require 'timeout'
+ ```ruby
+ require 'net/http'
+ require 'timeout'
 
- class ScrapeStuff < Que::Job
- def run(domain_to_scrape, path_to_scrape)
- result = Timeout.timeout(5){Net::HTTP.get(domain_to_scrape, path_to_scrape)}
+ class ScrapeStuff < Que::Job
+ def run(domain_to_scrape, path_to_scrape)
+ result = Timeout.timeout(5){Net::HTTP.get(domain_to_scrape, path_to_scrape)}
 
- ActiveRecord::Base.transaction do
- # Insert result...
+ ActiveRecord::Base.transaction do
+ # Insert result...
 
- destroy
- end
- end
+ destroy
  end
+ end
+ end
+ ```
 
  Now, if the request takes more than five seconds, a `Timeout::Error` will be raised and Que will just retry the job later. This solution isn't perfect, since Timeout uses Thread#kill under the hood, which can lead to unpredictable behavior. But it's separate from our transaction, so there's no risk of losing data - even a catastrophic error that left Net::HTTP in a bad state would be fixable by restarting the process.
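If you'd rather not rely on `Timeout` at all, one alternative worth sketching (not part of Que or its docs) is to let `Net::HTTP` enforce its own limits via `open_timeout` and `read_timeout`. A timeout then surfaces as `Net::OpenTimeout`/`Net::ReadTimeout`, which Que treats like any other error and retries later:

```ruby
require 'net/http'

class ScrapeStuff < Que::Job
  def run(domain_to_scrape, path_to_scrape)
    http = Net::HTTP.new(domain_to_scrape)
    http.open_timeout = 5 # seconds allowed to establish the connection
    http.read_timeout = 5 # seconds allowed for each read from the socket

    result = http.get(path_to_scrape).body

    ActiveRecord::Base.transaction do
      # Insert result...

      destroy
    end
  end
end
```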
@@ -14,6 +14,20 @@ module Que
  end
  end
 
+ def cleanup!
+ # ActiveRecord will check out connections to the current thread when
+ # queries are executed and not return them to the pool until
+ # explicitly requested to. The wisdom of this API is questionable, and
+ # it doesn't pose a problem for the typical case of workers using a
+ # single PG connection (since we ensure that connection is checked in
+ # and checked out responsibly), but since ActiveRecord supports
+ # connections to multiple databases, it's easy for people using that
+ # feature to unknowingly leak connections to other databases. So, take
+ # the additional step of telling ActiveRecord to check in all of the
+ # current thread's connections between jobs.
+ ::ActiveRecord::Base.clear_active_connections!
+ end
+
  class TransactionCallback
  def has_transactional_callbacks?
  true
@@ -21,6 +21,11 @@ module Que
  raise NotImplementedError
  end
 
+ # Called after Que has returned its connection to whatever pool it's
+ # using.
+ def cleanup!
+ end
+
  # Called after a job is queued in async mode, to prompt a worker to
  # wake up after the current transaction commits. Not all adapters will
  # implement this.
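The `cleanup!` hook added here is deliberately a no-op in the base adapter; Que calls it after each job, once the connection has gone back to whatever pool the adapter manages. A rough, hypothetical sketch of an adapter overriding it (the class name and the thread-local it clears are illustrative only, and it assumes the usual `Que::Adapters::Base` layout):

```ruby
module Que
  module Adapters
    # Hypothetical adapter for an app that stashes per-thread state while a
    # job runs and wants it released between jobs.
    class ThreadLocalCachingAdapter < Base
      def cleanup!
        # Drop whatever this worker thread accumulated during the job.
        Thread.current[:my_app_request_cache] = nil
      end
    end
  end
end
```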
@@ -60,7 +65,7 @@ module Que
  checkout do |conn|
  # Prepared statement errors have the potential to foul up the entire
  # transaction, so if we're in one, err on the side of safety.
- return execute_sql(SQL[name], params) if in_transaction?
+ return execute_sql(SQL[name], params) if Que.disable_prepared_statements || in_transaction?
 
  statements = @prepared_statements[conn] ||= {}
 
@@ -86,21 +91,6 @@ module Que
  end
  end
 
- HASH_DEFAULT_PROC = proc { |hash, key| hash[key.to_s] if Symbol === key }
-
- INDIFFERENTIATOR = proc do |object|
- case object
- when Array
- object.each(&INDIFFERENTIATOR)
- when Hash
- object.default_proc = HASH_DEFAULT_PROC
- object.each { |key, value| object[key] = INDIFFERENTIATOR.call(value) }
- object
- else
- object
- end
- end
-
  CAST_PROCS = {}
 
  # Integer, bigint, smallint:
@@ -128,11 +118,7 @@ module Que
  end
  end
 
- if result.first.respond_to?(:with_indifferent_access)
- output.map(&:with_indifferent_access)
- else
- output.each(&INDIFFERENTIATOR)
- end
+ output.map!(&Que.json_converter)
  end
  end
  end
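The new `Que.disable_prepared_statements` check above gives a global opt-out for environments where server-side prepared statements don't survive between checkouts - the classic example being an external pooler such as PgBouncer in transaction-pooling mode. A minimal sketch of turning it on (the initializer path is only an example):

```ruby
# config/initializers/que.rb, or anywhere that runs before jobs are worked.
# Skip PREPARE/EXECUTE and send plain parameterized SQL instead, e.g. when
# connecting through a transaction-mode pooler like PgBouncer.
Que.disable_prepared_statements = true
```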
data/lib/que/job.rb CHANGED
@@ -79,65 +79,70 @@ module Que
  # Since we're taking session-level advisory locks, we have to hold the
  # same connection throughout the process of getting a job, working it,
  # deleting it, and removing the lock.
- Que.adapter.checkout do
- begin
- if job = Que.execute(:lock_job, [queue]).first
- # Edge case: It's possible for the lock_job query to have
- # grabbed a job that's already been worked, if it took its MVCC
- # snapshot while the job was processing, but didn't attempt the
- # advisory lock until it was finished. Since we have the lock, a
- # previous worker would have deleted it by now, so we just
- # double check that it still exists before working it.
-
- # Note that there is currently no spec for this behavior, since
- # I'm not sure how to reliably commit a transaction that deletes
- # the job in a separate thread between lock_job and check_job.
- if Que.execute(:check_job, job.values_at(:queue, :priority, :run_at, :job_id)).none?
- {:event => :job_race_condition}
+ return_value =
+ Que.adapter.checkout do
+ begin
+ if job = Que.execute(:lock_job, [queue]).first
+ # Edge case: It's possible for the lock_job query to have
+ # grabbed a job that's already been worked, if it took its MVCC
+ # snapshot while the job was processing, but didn't attempt the
+ # advisory lock until it was finished. Since we have the lock, a
+ # previous worker would have deleted it by now, so we just
+ # double check that it still exists before working it.
+
+ # Note that there is currently no spec for this behavior, since
+ # I'm not sure how to reliably commit a transaction that deletes
+ # the job in a separate thread between lock_job and check_job.
+ if Que.execute(:check_job, job.values_at(:queue, :priority, :run_at, :job_id)).none?
+ {:event => :job_race_condition}
+ else
+ klass = class_for(job[:job_class])
+ klass.new(job)._run
+ {:event => :job_worked, :job => job}
+ end
  else
- klass = class_for(job[:job_class])
- klass.new(job)._run
- {:event => :job_worked, :job => job}
+ {:event => :job_unavailable}
  end
- else
- {:event => :job_unavailable}
- end
- rescue => error
- begin
- if job
- count = job[:error_count].to_i + 1
- interval = klass && klass.respond_to?(:retry_interval) && klass.retry_interval || retry_interval
- delay = interval.respond_to?(:call) ? interval.call(count) : interval
- message = "#{error.message}\n#{error.backtrace.join("\n")}"
- Que.execute :set_error, [count, delay, message] + job.values_at(:queue, :priority, :run_at, :job_id)
+ rescue => error
+ begin
+ if job
+ count = job[:error_count].to_i + 1
+ interval = klass && klass.respond_to?(:retry_interval) && klass.retry_interval || retry_interval
+ delay = interval.respond_to?(:call) ? interval.call(count) : interval
+ message = "#{error.message}\n#{error.backtrace.join("\n")}"
+ Que.execute :set_error, [count, delay, message] + job.values_at(:queue, :priority, :run_at, :job_id)
+ end
+ rescue
+ # If we can't reach the database for some reason, too bad, but
+ # don't let it crash the work loop.
  end
- rescue
- # If we can't reach the database for some reason, too bad, but
- # don't let it crash the work loop.
- end
 
- if Que.error_handler
- # Similarly, protect the work loop from a failure of the error handler.
- Que.error_handler.call(error, job) rescue nil
- end
+ if Que.error_handler
+ # Similarly, protect the work loop from a failure of the error handler.
+ Que.error_handler.call(error, job) rescue nil
+ end
 
- return {:event => :job_errored, :error => error, :job => job}
- ensure
- # Clear the advisory lock we took when locking the job. Important
- # to do this so that they don't pile up in the database. Again, if
- # we can't reach the database, don't crash the work loop.
- begin
- Que.execute "SELECT pg_advisory_unlock($1)", [job[:job_id]] if job
- rescue
+ return {:event => :job_errored, :error => error, :job => job}
+ ensure
+ # Clear the advisory lock we took when locking the job. Important
+ # to do this so that they don't pile up in the database. Again, if
+ # we can't reach the database, don't crash the work loop.
+ begin
+ Que.execute "SELECT pg_advisory_unlock($1)", [job[:job_id]] if job
+ rescue
+ end
  end
  end
- end
+
+ Que.adapter.cleanup!
+
+ return_value
  end
 
  private
 
  def class_for(string)
- string.split('::').inject(Object, &:const_get)
+ Que.constantize(string)
  end
  end
  end
data/lib/que/railtie.rb CHANGED
@@ -2,28 +2,13 @@ module Que
  class Railtie < Rails::Railtie
  config.que = Que
 
- Que.logger = proc { Rails.logger }
- Que.mode = :sync if Rails.env.test?
- Que.connection = ::ActiveRecord if defined? ::ActiveRecord
+ Que.logger = proc { Rails.logger }
+ Que.mode = :sync if Rails.env.test?
+ Que.connection = ::ActiveRecord if defined? ::ActiveRecord
+ Que.json_converter = :with_indifferent_access.to_proc
 
  rake_tasks do
  load 'que/rake_tasks.rb'
  end
-
- initializer 'que.setup' do
- ActiveSupport.on_load :after_initialize do
- # Only start up the worker pool if running as a server.
- Que.mode ||= :async if defined? Rails::Server
-
- at_exit do
- if Que.mode == :async
- $stdout.puts "Finishing Que's current jobs before exiting..."
- Que.worker_count = 0
- Que.mode = :off
- $stdout.puts "Que's jobs finished, exiting..."
- end
- end
- end
- end
  end
  end
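The removed `que.setup` initializer is what used to start the in-process worker pool automatically (and drain it via `at_exit`). If you still want workers inside the Rails process, a minimal sketch is to opt back in from an initializer of your own - the file name below is illustrative, and without something like it you'd run workers separately (for example through the gem's rake task):

```ruby
# config/initializers/que.rb -- hypothetical location
Que.mode = :async unless Rails.env.test?
```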
@@ -12,6 +12,7 @@ namespace :que do
  Que.logger.level = Logger.const_get((ENV['QUE_LOG_LEVEL'] || 'INFO').upcase)
  Que.worker_count = (ENV['QUE_WORKER_COUNT'] || 4).to_i
  Que.wake_interval = (ENV['QUE_WAKE_INTERVAL'] || 0.1).to_f
+ Que.queue_name = ENV['QUE_QUEUE'] if ENV['QUE_QUEUE']
  Que.mode = :async
 
  # When changing how signals are caught, be sure to test the behavior with
data/lib/que/sql.rb CHANGED
@@ -1,8 +1,47 @@
  module Que
  SQL = {
- # Thanks to RhodiumToad in #postgresql for help with the job lock CTE.
+ # Locks a job using a Postgres recursive CTE [1].
+ #
+ # As noted by the Postgres documentation, it may be slightly easier to
+ # think about this expression as iteration rather than recursion, despite
+ # the `RECURSION` nomenclature defined by the SQL standards committee.
+ # Recursion is used here so that jobs in the table can be iterated one-by-
+ # one until a lock can be acquired, where a non-recursive `SELECT` would
+ # have the undesirable side-effect of locking multiple jobs at once. i.e.
+ # Consider that the following would have the worker lock *all* unlocked
+ # jobs:
+ #
+ # SELECT (j).*, pg_try_advisory_lock((j).job_id) AS locked
+ # FROM que_jobs AS j;
+ #
+ # The CTE will initially produce an "anchor" from the non-recursive term
+ # (i.e. before the `UNION`), and then use it as the contents of the
+ # working table as it continues to iterate through `que_jobs` looking for
+ # a lock. The jobs table has a sort on (priority, run_at, job_id) which
+ # allows it to walk the jobs table in a stable manner. As noted above, the
+ # recursion examines one job at a time so that it only ever acquires a
+ # single lock.
+ #
+ # The recursion has two possible end conditions:
+ #
+ # 1. If a lock *can* be acquired, it bubbles up to the top-level `SELECT`
+ # outside of the `job` CTE which stops recursion because it is
+ # constrained with a `LIMIT` of 1.
+ #
+ # 2. If a lock *cannot* be acquired, the recursive term of the expression
+ # (i.e. what's after the `UNION`) will return an empty result set
+ # because there are no more candidates left that could possibly be
+ # locked. This empty result automatically ends recursion.
+ #
+ # Note that this query can be easily modified to lock any number of jobs
+ # by tweaking the LIMIT clause in the main SELECT statement.
+ #
+ # [1] http://www.postgresql.org/docs/devel/static/queries-with.html
+ #
+ # Thanks to RhodiumToad in #postgresql for help with the original version
+ # of the job lock CTE.
  :lock_job => %{
- WITH RECURSIVE job AS (
+ WITH RECURSIVE jobs AS (
  SELECT (j).*, pg_try_advisory_lock((j).job_id) AS locked
  FROM (
  SELECT j
@@ -20,18 +59,18 @@ module Que
  FROM que_jobs AS j
  WHERE queue = $1::text
  AND run_at <= now()
- AND (priority, run_at, job_id) > (job.priority, job.run_at, job.job_id)
+ AND (priority, run_at, job_id) > (jobs.priority, jobs.run_at, jobs.job_id)
  ORDER BY priority, run_at, job_id
  LIMIT 1
  ) AS j
- FROM job
- WHERE NOT job.locked
+ FROM jobs
+ WHERE jobs.job_id IS NOT NULL
  LIMIT 1
  ) AS t1
  )
  )
  SELECT queue, priority, run_at, job_id, job_class, args, error_count
- FROM job
+ FROM jobs
  WHERE locked
  LIMIT 1
  }.freeze,
data/lib/que/version.rb CHANGED
@@ -1,3 +1,3 @@
  module Que
- Version = '0.10.0'
+ Version = '0.11.0'
  end
data/lib/que/worker.rb CHANGED
@@ -119,6 +119,7 @@ module Que
 
  class << self
  attr_reader :mode, :wake_interval, :worker_count
+ attr_accessor :queue_name
 
  # In order to work in a forking webserver, we need to be able to accept
  # worker_count and wake_interval settings without actually instantiating
@@ -162,7 +163,7 @@ module Que
 
  def set_up_workers
  if worker_count > workers.count
- workers.push(*(worker_count - workers.count).times.map{new(ENV['QUE_QUEUE'] || '')})
+ workers.push(*(worker_count - workers.count).times.map{new(queue_name || '')})
  elsif worker_count < workers.count
  workers.pop(workers.count - worker_count).each(&:stop).each(&:wait_until_stopped)
  end
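With this change the worker pool reads its queue from `Que.queue_name` rather than touching `ENV['QUE_QUEUE']` directly (the rake task above still maps that variable onto the setting). A minimal sketch of pointing in-process workers at a named queue - the queue name is just an example:

```ruby
Que.queue_name   = 'emails' # workers will only lock jobs enqueued on this queue
Que.worker_count = 4
Que.mode         = :async
```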
data/lib/que.rb CHANGED
@@ -16,9 +16,38 @@ module Que
  JSON_MODULE = JSON
  end
 
+ HASH_DEFAULT_PROC = proc { |hash, key| hash[key.to_s] if Symbol === key }
+
+ INDIFFERENTIATOR = proc do |object|
+ case object
+ when Array
+ object.each(&INDIFFERENTIATOR)
+ when Hash
+ object.default_proc = HASH_DEFAULT_PROC
+ object.each { |key, value| object[key] = INDIFFERENTIATOR.call(value) }
+ object
+ else
+ object
+ end
+ end
+
+ SYMBOLIZER = proc do |object|
+ case object
+ when Hash
+ object.keys.each do |key|
+ object[key.to_sym] = SYMBOLIZER.call(object.delete(key))
+ end
+ object
+ when Array
+ object.map! { |e| SYMBOLIZER.call(e) }
+ else
+ object
+ end
+ end
+
  class << self
  attr_accessor :error_handler
- attr_writer :logger, :adapter, :log_formatter
+ attr_writer :logger, :adapter, :log_formatter, :disable_prepared_statements, :json_converter
 
  def connection=(connection)
  self.adapter =
@@ -96,6 +125,19 @@ module Que
  @log_formatter ||= JSON_MODULE.method(:dump)
  end
 
+ def disable_prepared_statements
+ @disable_prepared_statements || false
+ end
+
+ def constantize(camel_cased_word)
+ if camel_cased_word.respond_to?(:constantize)
+ # Use ActiveSupport's version if it exists.
+ camel_cased_word.constantize
+ else
+ camel_cased_word.split('::').inject(Object, &:const_get)
+ end
+ end
+
  # A helper method to manage transactions, used mainly by the migration
  # system. It's available for general use, but if you're using an ORM that
  # provides its own transaction helper, be sure to use that instead, or the
@@ -122,8 +164,12 @@ module Que
  end
  end
 
+ def json_converter
+ @json_converter ||= INDIFFERENTIATOR
+ end
+
  # Copy some of the Worker class' config methods here for convenience.
- [:mode, :mode=, :worker_count, :worker_count=, :wake_interval, :wake_interval=, :wake!, :wake_all!].each do |meth|
+ [:mode, :mode=, :worker_count, :worker_count=, :wake_interval, :wake_interval=, :queue_name, :queue_name=, :wake!, :wake_all!].each do |meth|
  define_method(meth) { |*args| Worker.send(meth, *args) }
  end
  end
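`Que.json_converter` is the proc applied to each row Que reads back from the database (see `output.map!(&Que.json_converter)` in the adapter above), so it controls the hashes your job's `run` method receives. A rough sketch of how the two built-in procs behave - the hashes below are example data only:

```ruby
# INDIFFERENTIATOR (the default outside Rails) keeps string keys but installs
# a default_proc so symbol lookups fall through to the string key:
row = {"job_class" => "MyJob", "args" => [{"user_id" => 1}]}
Que::INDIFFERENTIATOR.call(row)
row[:job_class]              # => "MyJob"
row["args"].first[:user_id]  # => 1

# SYMBOLIZER rewrites the keys to symbols in place instead:
row = {"job_class" => "MyJob", "args" => [{"user_id" => 1}]}
Que::SYMBOLIZER.call(row)
row  # => {:job_class => "MyJob", :args => [{:user_id => 1}]}
```

The Railtie sets the converter to `:with_indifferent_access.to_proc`, and assigning `Que.json_converter = Que::SYMBOLIZER` would give fully symbolized hashes instead.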
data/que.gemspec CHANGED
@@ -14,7 +14,7 @@ Gem::Specification.new do |spec|
  spec.license = 'MIT'
 
  spec.files = `git ls-files`.split($/)
- spec.executables = spec.files.grep(%r{^bin/}) { |f| File.basename(f) }
+ spec.executables = ['que']
  spec.test_files = spec.files.grep(%r{^(test|spec|features)/})
  spec.require_paths = ['lib']
 
@@ -67,10 +67,17 @@ unless defined?(RUBY_ENGINE) && RUBY_ENGINE == 'jruby'
  end
 
  it "should instantiate args as ActiveSupport::HashWithIndifferentAccess" do
- ArgsJob.enqueue :param => 2
- Que::Job.work
- $passed_args.first[:param].should == 2
- $passed_args.first.should be_an_instance_of ActiveSupport::HashWithIndifferentAccess
+ begin
+ # Mimic the setting in the Railtie.
+ Que.json_converter = :with_indifferent_access.to_proc
+
+ ArgsJob.enqueue :param => 2
+ Que::Job.work
+ $passed_args.first[:param].should == 2
+ $passed_args.first.should be_an_instance_of ActiveSupport::HashWithIndifferentAccess
+ ensure
+ Que.json_converter = Que::INDIFFERENTIATOR
+ end
  end
 
  it "should support Rails' special extensions for times" do
@@ -119,5 +126,25 @@ unless defined?(RUBY_ENGINE) && RUBY_ENGINE == 'jruby'
  Que.adapter.should be_in_transaction
  end
  end
+
+ it "should not leak connections to other databases when using ActiveRecord's multiple database support" do
+ class SecondDatabaseModel < ActiveRecord::Base
+ establish_connection(QUE_URL)
+ end
+
+ SecondDatabaseModel.clear_active_connections!
+ SecondDatabaseModel.connection_handler.active_connections?.should == false
+
+ class SecondDatabaseModelJob < Que::Job
+ def run(*args)
+ SecondDatabaseModel.connection.execute("SELECT 1")
+ end
+ end
+
+ SecondDatabaseModelJob.enqueue
+ Que::Job.work
+
+ SecondDatabaseModel.connection_handler.active_connections?.should == false
+ end
  end
  end