que 0.10.0 → 0.11.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: ea48a3d241852be82ac3310d9a9b77dd5c752c66
- data.tar.gz: 9c959569e9b5eb520dffde4a21f08875d9efbbce
+ metadata.gz: 51b50d40120fb319646d1b216aa7f67931e9614b
+ data.tar.gz: c82fb2957389d088f3d6190311f9dae376fe82f6
  SHA512:
- metadata.gz: 794d3b3a6e55beaa75a8142324a8eac2512505114b401aa1b306ef20e0e08f58913a8bb6685dd599a4bcadbdfec295c97637b22b6ddfdf8d2862225444e76ced
- data.tar.gz: b186100af249195025f731e551155dcdcd9e4797a7164d0c71a9604842e5a3d604bd47f6b5331e0773bdfad1018332d21463266675251afeb5034407fc7dcae6
+ metadata.gz: 20c1f09a17e33aa0589ff01ef0983cf2000e9ae0be3594126a3c13aca10cca5d2b236f9296b17b8111d560fe2c62bc1d17924a3a00242ce31e768f95cdda39c8
+ data.tar.gz: e7f761b3622ae87e88f1ef85f81ebe8597ad82c4b1b07ce5aa00c0dc454cea08175ebf0fa2e01ebe0d7500220ee55f0035c8772945cd9ea95d63dfc5ca83c0f7
data/CHANGELOG.md CHANGED
@@ -1,3 +1,17 @@
+ ### 0.11.0 (2015-09-04)
+
+ * A command-line program has been added that can be used to work jobs in a more flexible manner than the previous rake task. Run `que -h` for more information.
+
+ * The `rake que:work` rake task that was specific to Rails has been removed in favor of the CLI, and the various QUE_* environment variables no longer have any effect.
+
+ * The worker pool will no longer start automatically in the same process when running the rails server - this behavior was too prone to breakage. If you'd like to recreate the old behavior, you can manually set `Que.mode = :async` in your app whenever conditions are appropriate (classes have loaded, a database connection has been established, and the process will not be forking).
+
+ * Add a Que.disable_prepared_transactions= configuration option, to make it easier to use tools like pgbouncer. (#110)
+
+ * Add a Que.json_converter= option, to configure how arguments are transformed before being passed to the job. By default this is set to the `Que::INDIFFERENTIATOR` proc, which provides simple indifferent access (via strings or symbols) to args hashes. If you're using Rails, the default is to convert the args to HashWithIndifferentAccess instead. You can also pass it the Que::SYMBOLIZER proc, which will destructively convert all keys in the args hash to symbols (this will probably be the default in Que 1.0). If you want to define a custom converter, you will usually want to pass this option a proc, and you'll probably want it to be recursive. See the implementations of Que::INDIFFERENTIATOR and Que::SYMBOLIZER for examples. (#113)
+
+ * When using Que with ActiveRecord, workers now call `ActiveRecord::Base.clear_active_connections!` between jobs. This cleans up connections that ActiveRecord leaks when it is used to access multiple databases. (#116)
+
  ### 0.10.0 (2015-03-18)

  * When working jobs via the rake task, Rails applications are now eager-loaded if present, to avoid problems with multithreading and autoloading. (#96) (hmarr)
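The changelog above recommends making custom `Que.json_converter=` procs recursive. As a rough standalone sketch of what such a recursive symbolizing converter might look like (this is an assumption for illustration, not Que's actual `Que::SYMBOLIZER` implementation):

```ruby
# Hypothetical sketch of a recursive key-symbolizing converter, in the
# spirit of the Que::SYMBOLIZER proc described above. It walks hashes
# and arrays, converting every hash key to a symbol.
SYMBOLIZER = lambda do |object|
  case object
  when Hash
    object.each_with_object({}) do |(key, value), hash|
      hash[key.to_sym] = SYMBOLIZER.call(value)
    end
  when Array
    object.map { |element| SYMBOLIZER.call(element) }
  else
    object
  end
end

args = SYMBOLIZER.call("user_id" => 5, "options" => {"retries" => 3})
# args == {:user_id=>5, :options=>{:retries=>3}}
```

A converter like this could then be assigned to the `Que.json_converter=` option described above.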
data/README.md CHANGED
@@ -23,6 +23,8 @@ Que also includes a worker pool, so that multiple threads can process jobs in th

  Que is tested on Ruby 2.0, Rubinius and JRuby (with the `jruby-pg` gem, which is [not yet functional with ActiveRecord](https://github.com/chanks/que/issues/4#issuecomment-29561356)). It requires Postgres 9.2+ for the JSON datatype.

+ **Please note** - Que's job table undergoes a lot of churn when it is under high load, and like any heavily-written table, is susceptible to bloat and slowness if Postgres isn't able to clean it up. The most common cause of this is long-running transactions, so it's recommended to try to keep all transactions against the database housing Que's job table as short as possible. This is good advice to remember for any high-activity database, but bears emphasizing when using tables that undergo a lot of writes.
+

  ## Installation

@@ -98,11 +100,12 @@ ChargeCreditCard.enqueue current_user.id, :credit_card_id => card.id, :run_at =>

  To determine what happens when a job is queued, you can set Que's mode. There are a few options for the mode:

- * `Que.mode = :off` - In this mode, queueing a job will simply insert it into the database - the current process will make no effort to run it. You should use this if you want to use a dedicated process to work tasks (there's a rake task to do this, see below). This is the default when running `bin/rails console`.
+ * `Que.mode = :off` - In this mode, queueing a job will simply insert it into the database - the current process will make no effort to run it. You should use this if you want to use a dedicated process to work tasks (there's a rake task included that will do this, `rake que:work`). This is the default when running `bin/rails console`.
  * `Que.mode = :async` - In this mode, a pool of background workers is spun up, each running in their own thread. This is the default when running `bin/rails server`. See the docs for [more information on managing workers](https://github.com/chanks/que/blob/master/docs/managing_workers.md).
  * `Que.mode = :sync` - In this mode, any jobs you queue will be run in the same thread, synchronously (that is, `MyJob.enqueue` runs the job and won't return until it's completed). This makes your application's behavior easier to test, so it's the default in the test environment.

- If you're using ActiveRecord to dump your database's schema, you'll probably want to [set schema_format to :sql](http://guides.rubyonrails.org/migrations.html#types-of-schema-dumps) so that Que's table structure is managed correctly.
+ **If you're using ActiveRecord to dump your database's schema, [set your schema_format to :sql](http://guides.rubyonrails.org/migrations.html#types-of-schema-dumps) so that Que's table structure is managed correctly.** (You can use a schema_format of :ruby if you want, but keep in mind that this is strongly advised against, as some parts of Que will not work.)
+

  ## Related Projects

@@ -112,15 +115,19 @@ If you're using ActiveRecord to dump your database's schema, you'll probably wan

  If you have a project that uses or relates to Que, feel free to submit a PR adding it to the list!

- ## Contributing

- 1. Fork it
- 2. Create your feature branch (`git checkout -b my-new-feature`)
- 3. Commit your changes (`git commit -am 'Add some feature'`)
- 4. Push to the branch (`git push origin my-new-feature`)
- 5. Create new Pull Request
+ ## Community and Contributing
+
+ * For bugs in the library, please feel free to [open an issue](https://github.com/chanks/que/issues/new).
+ * For general discussion and questions/concerns that don't relate to obvious bugs, try posting on the [que-talk Google Group](https://groups.google.com/forum/#!forum/que-talk).
+ * For contributions, pull requests submitted via Github are welcome.
+
+ Regarding contributions, one of the project's priorities is to keep Que as simple, lightweight and dependency-free as possible, and pull requests that change too much or wouldn't be useful to the majority of Que's users have a good chance of being rejected. If you're thinking of submitting a pull request that adds a new feature, consider starting a discussion in [que-talk](https://groups.google.com/forum/#!forum/que-talk) first about what it would do and how it would be implemented. If it's a sufficiently large feature, or if most of Que's users wouldn't find it useful, it may be best implemented as a standalone gem, like some of the related projects above.
+
+
+ ### Specs

- A note on running specs - Que's worker system is multithreaded and therefore prone to race conditions (especially on Rubinius). As such, if you've touched that code, a single spec run passing isn't a guarantee that any changes you've made haven't introduced bugs. One thing I like to do before pushing changes is rerun the specs many times and watching for hangs. You can do this from the command line with something like:
+ A note on running specs - Que's worker system is multithreaded and therefore prone to race conditions (especially on interpreters without a global lock, like Rubinius or JRuby). As such, if you've touched that code, a single spec run passing isn't a guarantee that any changes you've made haven't introduced bugs. One thing I like to do before pushing changes is to rerun the specs many times and watch for hangs. You can do this from the command line with something like:

  for i in {1..1000}; do rspec -b --seed $i; done

data/bin/que ADDED
@@ -0,0 +1,85 @@
+ #!/usr/bin/env ruby
+
+ require 'optparse'
+ require 'ostruct'
+ require 'logger'
+
+ options = OpenStruct.new
+
+ OptionParser.new do |opts|
+   opts.banner = 'usage: que [options] file/to/require ...'
+
+   opts.on('-w', '--worker-count [COUNT]', Integer, "Set number of workers in process (default: 4)") do |worker_count|
+     options.worker_count = worker_count
+   end
+
+   opts.on('-i', '--wake-interval [INTERVAL]', Float, "Set maximum interval between polls of the job queue (in seconds) (default: 0.1)") do |wake_interval|
+     options.wake_interval = wake_interval
+   end
+
+   opts.on('-l', '--log-level [LEVEL]', String, "Set level of Que's logger (debug, info, warn, error, fatal) (default: info)") do |log_level|
+     options.log_level = log_level
+   end
+
+   opts.on('-q', '--queue-name [NAME]', String, "Set the name of the queue to work jobs from (default: the default queue)") do |queue_name|
+     options.queue_name = queue_name
+   end
+
+   opts.on('-v', '--version', "Show Que version") do
+     $stdout.puts "Que version #{Que::Version}"
+     exit 0
+   end
+
+   opts.on('-h', '--help', "Show help text") do
+     $stdout.puts opts
+     exit 0
+   end
+ end.parse!(ARGV)
+
+ if ARGV.length.zero?
+   $stdout.puts <<-OUTPUT
+ You didn't include any Ruby files to require!
+ Que needs to be able to load your application before it can process jobs.
+ (Hint: If you're using Rails, try `que ./config/environment.rb`)
+ (Or use `que -h` for a list of options)
+   OUTPUT
+   exit 1
+ end
+
+ ARGV.each do |file|
+   begin
+     require file
+   rescue LoadError
+     $stdout.puts "Could not load file '#{file}'"
+   end
+ end
+
+ Que.logger ||= Logger.new(STDOUT)
+
+ begin
+   if log_level = options.log_level
+     Que.logger.level = Logger.const_get(log_level.upcase)
+   end
+ rescue NameError
+   $stdout.puts "Bad logging level: #{log_level}"
+   exit 1
+ end
+
+ Que.queue_name = options.queue_name || Que.queue_name || nil
+ Que.worker_count = options.worker_count || Que.worker_count || 4
+ Que.wake_interval = options.wake_interval || Que.wake_interval || 0.1
+ Que.mode = :async
+
+ stop = false
+ %w(INT TERM).each { |signal| trap(signal) { stop = true } }
+
+ loop do
+   sleep 0.01
+   break if stop
+ end
+
+ $stdout.puts
+ $stdout.puts "Finishing Que's current jobs before exiting..."
+ Que.worker_count = 0
+ Que.mode = :off
+ $stdout.puts "Que's jobs finished, exiting..."
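The tail of the script above is Que's graceful-shutdown pattern: trap INT/TERM to flip a flag, poll the flag in a loop, then wind the worker pool down. A standalone sketch of just that trap-and-poll loop (no Que involved; here a helper thread signals our own process so the example terminates on its own, which is an artifact of the demo, not part of the pattern):

```ruby
# Standalone sketch of the trap-and-poll shutdown loop used in bin/que.
# In real use the signal would arrive from outside (e.g. Ctrl-C or a
# process supervisor); the helper thread below just simulates that.
stop = false
%w(INT TERM).each { |signal| trap(signal) { stop = true } }

Thread.new do
  sleep 0.05
  Process.kill('INT', Process.pid) # simulate an external Ctrl-C
end

loop do
  sleep 0.01 # stay responsive without busy-waiting
  break if stop
end

puts "Received signal, shutting down cleanly..."
```

Polling a flag (rather than doing real work inside the trap handler) matters because Ruby restricts what may run inside a signal handler; setting a boolean is always safe.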
@@ -4,17 +4,21 @@ If you're using both Rails and ActiveRecord, the README describes how to get sta

  If you're using ActiveRecord outside of Rails, you'll need to tell Que to piggyback on its connection pool after you've connected to the database:

- ActiveRecord::Base.establish_connection(ENV['DATABASE_URL'])
+ ```ruby
+ ActiveRecord::Base.establish_connection(ENV['DATABASE_URL'])

- require 'que'
- Que.connection = ActiveRecord
+ require 'que'
+ Que.connection = ActiveRecord
+ ```

  Then you can queue jobs just as you would in Rails:

- ActiveRecord::Base.transaction do
-   @user = User.create(params[:user])
-   SendRegistrationEmail.enqueue :user_id => @user.id
- end
+ ```ruby
+ ActiveRecord::Base.transaction do
+   @user = User.create(params[:user])
+   SendRegistrationEmail.enqueue :user_id => @user.id
+ end
+ ```

  There are other docs to read if you're using [Sequel](https://github.com/chanks/que/blob/master/docs/using_sequel.md) or [plain Postgres connections](https://github.com/chanks/que/blob/master/docs/using_plain_connections.md) (with no ORM at all) instead of ActiveRecord.

@@ -22,62 +26,75 @@ There are other docs to read if you're using [Sequel](https://github.com/chanks/

  If you want to run a worker pool in your web process and you're using a forking webserver like Phusion Passenger (in smart spawning mode), Unicorn or Puma in some configurations, you'll want to set `Que.mode = :off` in your application configuration and only start up the worker pool in the child processes after the DB connection has been reestablished. So, for Puma:

- # config/puma.rb
- on_worker_boot do
-   ActiveRecord::Base.establish_connection
+ ```ruby
+ # config/puma.rb
+ on_worker_boot do
+   ActiveRecord::Base.establish_connection

-   Que.mode = :async
- end
+   Que.mode = :async
+ end
+ ```

  And for Unicorn:

- # config/unicorn.rb
- after_fork do |server, worker|
-   ActiveRecord::Base.establish_connection
+ ```ruby
+ # config/unicorn.rb
+ after_fork do |server, worker|
+   ActiveRecord::Base.establish_connection

-   Que.mode = :async
- end
+   Que.mode = :async
+ end
+ ```

  And for Phusion Passenger:

- # config.ru
- if defined?(PhusionPassenger)
-   PhusionPassenger.on_event(:starting_worker_process) do |forked|
-     if forked
-       Que.mode = :async
-     end
-   end
+ ```ruby
+ # config.ru
+ if defined?(PhusionPassenger)
+   PhusionPassenger.on_event(:starting_worker_process) do |forked|
+     if forked
+       Que.mode = :async
      end
-
+   end
+ end
+ ```

  ### Managing the Jobs Table

  After you've connected Que to the database, you can manage the jobs table:

- # Create/update the jobs table to the latest schema version:
- Que.migrate!
+ ```ruby
+ # Create/update the jobs table to the latest schema version:
+ Que.migrate!
+ ```

  You'll want to migrate to a specific version if you're using migration files, to ensure that they work the same way even when you upgrade Que in the future:

- # Update the schema to version #3.
- Que.migrate! :version => 3
+ ```ruby
+ # Update the schema to version #3.
+ Que.migrate! :version => 3

- # To reverse the migration, drop the jobs table entirely:
- Que.migrate! :version => 0
+ # To reverse the migration, drop the jobs table entirely:
+ Que.migrate! :version => 0
+ ```

  There's also a helper method to clear all jobs from the jobs table:

- Que.clear!
+ ```ruby
+ Que.clear!
+ ```

  ### Other Setup

  You'll need to set Que's mode manually:

- # Start the worker pool:
- Que.mode = :async
+ ```ruby
+ # Start the worker pool:
+ Que.mode = :async

- # Or, when testing:
- Que.mode = :sync
+ # Or, when testing:
+ Que.mode = :sync
+ ```

  Be sure to read the docs on [managing workers](https://github.com/chanks/que/blob/master/docs/managing_workers.md) for more information on using the worker pool.

@@ -2,43 +2,47 @@

  One of Que's goals is to be easily extensible and hackable (and if anyone has any suggestions on ways to accomplish that, please [open an issue](https://github.com/chanks/que/issues)). This document is meant to demonstrate some of the ways Que can be used to accomplish different tasks that it's not already designed for.

+ Some of these features may be moved into core Que at some point, depending on how commonly useful they are.
+

  ### Recurring Jobs

  Que's support for scheduling jobs makes it easy to implement reliable recurring jobs. For example, suppose you want to run a job every hour that processes the users created in that time:

- class CronJob < Que::Job
-   # Default repetition interval in seconds. Can be overridden in
-   # subclasses. Can use 1.minute if using Rails.
-   INTERVAL = 60
+ ```ruby
+ class CronJob < Que::Job
+   # Default repetition interval in seconds. Can be overridden in
+   # subclasses. Can use 1.minute if using Rails.
+   INTERVAL = 60

-   attr_reader :start_at, :end_at, :run_again_at, :time_range
+   attr_reader :start_at, :end_at, :run_again_at, :time_range

-   def _run
-     args = attrs[:args].first
-     @start_at, @end_at = Time.at(args.delete('start_at')), Time.at(args.delete('end_at'))
-     @run_again_at = @end_at + self.class::INTERVAL
-     @time_range = @start_at...@end_at
+   def _run
+     args = attrs[:args].first
+     @start_at, @end_at = Time.at(args.delete('start_at')), Time.at(args.delete('end_at'))
+     @run_again_at = @end_at + self.class::INTERVAL
+     @time_range = @start_at...@end_at

-     super
+     super

-     args['start_at'] = @end_at.to_f
-     args['end_at'] = @run_again_at.to_f
-     self.class.enqueue(args, run_at: @run_again_at)
-   end
- end
+     args['start_at'] = @end_at.to_f
+     args['end_at'] = @run_again_at.to_f
+     self.class.enqueue(args, run_at: @run_again_at)
+   end
+ end

- class MyCronJob < CronJob
-   INTERVAL = 3600
+ class MyCronJob < CronJob
+   INTERVAL = 3600

-   def run(args)
-     User.where(created_at: time_range).each { ... }
-   end
- end
+   def run(args)
+     User.where(created_at: time_range).each { ... }
+   end
+ end

- # To enqueue:
- tf = Time.now
- t0 = Time.now - 3600
- MyCronJob.enqueue :start_at => t0.to_f, :end_at => tf.to_f
+ # To enqueue:
+ tf = Time.now
+ t0 = Time.now - 3600
+ MyCronJob.enqueue :start_at => t0.to_f, :end_at => tf.to_f
+ ```

  Note that instead of using Time.now in our database query, and requeueing the job at 1.hour.from_now, we use job arguments to track start and end times. This lets us correct for delays in running the job. Suppose that there's a backlog of priority jobs, or that the worker briefly goes down, and this job, which was supposed to run at 11:00 a.m. isn't run until 11:05 a.m. A lazier implementation would look for users created after 1.hour.ago, and miss those that signed up between 10:00 a.m. and 10:05 a.m.

@@ -52,7 +56,9 @@ Finally, by passing both the start and end times for the period to be processed,

  DelayedJob offers a simple API for delaying methods to objects:

- @user.delay.activate!(@device)
+ ```ruby
+ @user.delay.activate!(@device)
+ ```

  The API is pleasant, but implementing it requires storing marshalled Ruby objects in the database, which is both inefficient and prone to bugs - for example, if you deploy an update that changes the name of an instance variable (a contained, internal change that might seem completely innocuous), the marshalled objects in the database will retain the old instance variable name and will behave unexpectedly when unmarshalled into the new Ruby code.

@@ -60,27 +66,29 @@ This is the danger of mixing the ephemeral state of a Ruby object in memory with

  That said, if you want to queue jobs in the DelayedJob style, that can be done relatively easily:

- class Delayed < Que::Job
-   def run(receiver, method, args)
-     Marshal.load(receiver).send method, *Marshal.load(args)
-   end
- end
-
- class DelayedAction
-   def initialize(receiver)
-     @receiver = receiver
-   end
-
-   def method_missing(method, *args)
-     Delayed.enqueue Marshal.dump(@receiver), method, Marshal.dump(args)
-   end
- end
-
- class Object
-   def delay
-     DelayedAction.new(self)
-   end
- end
+ ```ruby
+ class Delayed < Que::Job
+   def run(receiver, method, args)
+     Marshal.load(receiver).send method, *Marshal.load(args)
+   end
+ end
+
+ class DelayedAction
+   def initialize(receiver)
+     @receiver = receiver
+   end
+
+   def method_missing(method, *args)
+     Delayed.enqueue Marshal.dump(@receiver), method, Marshal.dump(args)
+   end
+ end
+
+ class Object
+   def delay
+     DelayedAction.new(self)
+   end
+ end
+ ```

  You can replace Marshal with YAML if you like.

@@ -88,50 +96,105 @@ You can replace Marshal with YAML if you like.

  You may find it a hassle to keep an individual class file for each type of job. QueueClassic has a simpler design, wherein you simply give it a class method to call, like:

- QC.enqueue("Kernel.puts", "hello world")
+ ```ruby
+ QC.enqueue("Kernel.puts", "hello world")
+ ```

  You can mimic this style with Que by using a simple job class:

- class Command < Que::Job
-   def run(method, *args)
-     receiver, message = method.split('.')
-     Object.const_get(receiver).send(message, *args)
-   end
- end
+ ```ruby
+ class Command < Que::Job
+   def run(method, *args)
+     receiver, message = method.split('.')
+     Object.const_get(receiver).send(message, *args)
+   end
+ end

- # Then:
+ # Then:

- Command.enqueue "Kernel.puts", "hello world"
+ Command.enqueue "Kernel.puts", "hello world"
+ ```


  ### Retaining Finished Jobs

  Que deletes jobs from the queue as they are worked, in order to keep the `que_jobs` table and index small and efficient. If you have a need to hold onto finished jobs, the recommended way to do this is to add a second table to hold them, and then insert them there as they are deleted from the queue. You can use Ruby's inheritance mechanics to do this cleanly:

- Que.execute "CREATE TABLE finished_jobs AS SELECT * FROM que_jobs LIMIT 0"
- # Or, better, use a proper CREATE TABLE with not-null constraints, and add whatever indexes you like.
+ ```ruby
+ Que.execute "CREATE TABLE finished_jobs AS SELECT * FROM que_jobs LIMIT 0"
+ # Or, better, use a proper CREATE TABLE with not-null constraints, and add whatever indexes you like.

- class MyJobClass < Que::Job
-   def destroy
-     Que.execute "INSERT INTO finished_jobs SELECT * FROM que_jobs WHERE queue = $1::text AND priority = $2::integer AND run_at = $3::timestamptz AND job_id = $4::bigint", @attrs.values_at(:queue, :priority, :run_at, :job_id)
-     super
-   end
- end
+ class MyJobClass < Que::Job
+   def destroy
+     Que.execute "INSERT INTO finished_jobs SELECT * FROM que_jobs WHERE queue = $1::text AND priority = $2::integer AND run_at = $3::timestamptz AND job_id = $4::bigint", @attrs.values_at(:queue, :priority, :run_at, :job_id)
+     super
+   end
+ end
+ ```

  Then just have your job classes inherit from MyJobClass instead of Que::Job. If you need to query the jobs table and you want to include both finished and unfinished jobs, you might use:

- Que.execute "CREATE VIEW all_jobs AS SELECT * FROM que_jobs UNION ALL SELECT * FROM finished_jobs"
- Que.execute "SELECT * FROM all_jobs"
+ ```ruby
+ Que.execute "CREATE VIEW all_jobs AS SELECT * FROM que_jobs UNION ALL SELECT * FROM finished_jobs"
+ Que.execute "SELECT * FROM all_jobs"
+ ```

  Alternately, if you want a more foolproof solution and you're not scared of PostgreSQL, you can use a trigger:

- CREATE FUNCTION please_save_my_job()
- RETURNS trigger
- LANGUAGE plpgsql
- AS $$
- BEGIN
-   INSERT INTO finished_jobs SELECT (OLD).*;
-   RETURN OLD;
- END;
- $$;
-
- CREATE TRIGGER keep_all_my_old_jobs BEFORE DELETE ON que_jobs FOR EACH ROW EXECUTE PROCEDURE please_save_my_job();
+ ```sql
+ CREATE FUNCTION please_save_my_job()
+ RETURNS trigger
+ LANGUAGE plpgsql
+ AS $$
+ BEGIN
+   INSERT INTO finished_jobs SELECT (OLD).*;
+   RETURN OLD;
+ END;
+ $$;
+
+ CREATE TRIGGER keep_all_my_old_jobs BEFORE DELETE ON que_jobs FOR EACH ROW EXECUTE PROCEDURE please_save_my_job();
+ ```
+
+
157
+ ### Not Retrying Certain Failed Jobs
158
+
159
+ By default, when jobs fail, Que reschedules them to be retried later. If instead you'd like certain jobs to not be retried, and instead move them elsewhere to be examined later, you can accomplish that easily. First, we need a place for the failed jobs to be stored:
160
+
161
+ ```sql
162
+ CREATE TABLE failed_jobs AS SELECT * FROM que_jobs LIMIT 0
163
+ ```
164
+
165
+ Then, create a module that you can use in the jobs you don't want to retry:
166
+
167
+ ```ruby
168
+ module SkipRetries
169
+ def run(*args)
170
+ super
171
+ rescue
172
+ sql = <<-SQL
173
+ WITH failed AS (
174
+ DELETE
175
+ FROM que_jobs
176
+ WHERE queue = $1::text
177
+ AND priority = $2::smallint
178
+ AND run_at = $3::timestamptz
179
+ AND job_id = $4::bigint
180
+ RETURNING *
181
+ )
182
+ INSERT INTO failed_jobs
183
+ SELECT * FROM failed;
184
+ SQL
185
+
186
+ Que.execute sql, @attrs.values_at(:queue, :priority, :run_at, :job_id)
187
+
188
+ raise # Reraises caught error.
189
+ end
190
+ end
191
+
192
+ class RunOnceJob < Que::Job
193
+ prepend SkipRetries
194
+
195
+ def run(*args)
196
+ # Do something - if this job runs an error it'll be moved to the
197
+ # failed_jobs table and not retried.
198
+ end
199
+ end
200
+ ```
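The SkipRetries module above relies on `Module#prepend`: the prepended `run` sits in front of the job's own `run` in the method lookup chain, so its `rescue` sees the exception first. A minimal standalone sketch of that mechanism (no Que or database involved; the class and module names here are made up for illustration):

```ruby
# Standalone sketch of the prepend-based wrapping SkipRetries uses:
# the prepended module's #run is called first, invokes the real #run
# via super, and intercepts anything it raises.
module RecordFailure
  def run(*args)
    super
  rescue => error
    @last_failure = error.message
    raise # Reraise, as SkipRetries does after moving the job row.
  end
end

class FlakyJob
  prepend RecordFailure
  attr_reader :last_failure

  def run(*args)
    raise "boom"
  end
end

job = FlakyJob.new
begin
  job.run
rescue RuntimeError
  # The wrapper recorded the failure before reraising.
end
job.last_failure # => "boom"
```

`prepend` (rather than `include`) is what makes this work: an included module would sit *behind* the class in the lookup chain and its `run` would never be called.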
@@ -4,40 +4,44 @@ If an error is raised and left uncaught by your job, Que will save the error mes

  If a given job fails repeatedly, Que will retry it at exponentially-increasing intervals equal to (failure_count^4 + 3) seconds. This means that a job will be retried 4 seconds after its first failure, 19 seconds after its second, 84 seconds after its third, 259 seconds after its fourth, and so on until it succeeds. This pattern is very similar to DelayedJob's. Alternately, you can define your own retry logic by setting an interval to delay each time, or a callable that accepts the number of failures and returns an interval:

- class MyJob < Que::Job
-   # Just retry a failed job every 5 seconds:
-   @retry_interval = 5
+ ```ruby
+ class MyJob < Que::Job
+   # Just retry a failed job every 5 seconds:
+   @retry_interval = 5

-   # Always retry this job immediately (not recommended, or transient
-   # errors will spam your error reporting):
-   @retry_interval = 0
+   # Always retry this job immediately (not recommended, or transient
+   # errors will spam your error reporting):
+   @retry_interval = 0

-   # Increase the delay by 30 seconds every time this job fails:
-   @retry_interval = proc { |count| count * 30 }
- end
+   # Increase the delay by 30 seconds every time this job fails:
+   @retry_interval = proc { |count| count * 30 }
+ end
+ ```

  Unlike DelayedJob, however, there is currently no maximum number of failures after which jobs will be deleted. Que's assumption is that if a job is erroring perpetually (and not just transiently), you will want to take action to get the job working properly rather than simply losing it silently.

  If you're using an error notification system (highly recommended, of course), you can hook Que into it by setting a callable as the error handler:

- Que.error_handler = proc do |error, job|
-   # Do whatever you want with the error object or job row here.
-
-   # Note that the job passed is not the actual job object, but the hash
-   # representing the job row in the database, which looks like:
-
-   # {
-   #   "queue" => "my_queue",
-   #   "priority" => 100,
-   #   "run_at" => 2015-03-06 11:07:08 -0500,
-   #   "job_id" => 65,
-   #   "job_class" => "MyJob",
-   #   "args" => ['argument', 78],
-   #   "error_count" => 0
-   # }
-
-   # This is done because the job may not have been able to be deserialized
-   # properly, if the name of the job class was changed or the job is being
-   # retrieved and worked by the wrong app. The job argument may also be
-   # nil, if there was a connection failure or something similar.
- end
+ ```ruby
+ Que.error_handler = proc do |error, job|
+   # Do whatever you want with the error object or job row here.
+
+   # Note that the job passed is not the actual job object, but the hash
+   # representing the job row in the database, which looks like:
+
+   # {
+   #   "queue" => "my_queue",
+   #   "priority" => 100,
+   #   "run_at" => 2015-03-06 11:07:08 -0500,
+   #   "job_id" => 65,
+   #   "job_class" => "MyJob",
+   #   "args" => ['argument', 78],
+   #   "error_count" => 0
+   # }
+
+   # This is done because the job may not have been able to be deserialized
+   # properly, if the name of the job class was changed or the job is being
+   # retrieved and worked by the wrong app. The job argument may also be
+   # nil, if there was a connection failure or something similar.
+ end
+ ```
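Since the handler receives a plain hash (or nil) rather than a job object, it's easy to exercise in isolation. A hedged sketch of such a handler, with the message format invented for illustration (a real handler would hand the string to an error-reporting service rather than returning it):

```ruby
# Standalone sketch of an error handler shaped like the one above.
# It must tolerate job being nil, per the docs, since the row may not
# have been retrievable when the error occurred.
error_handler = proc do |error, job|
  job_desc = job ? "job #{job['job_id']} (#{job['job_class']})" : "an unknown job"
  "#{error.class}: #{error.message} while working #{job_desc}"
end

error_handler.call(RuntimeError.new("oops"), "job_id" => 65, "job_class" => "MyJob")
# => "RuntimeError: oops while working job 65 (MyJob)"

error_handler.call(RuntimeError.new("connection lost"), nil)
# => "RuntimeError: connection lost while working an unknown job"
```

The nil branch matters: as the comments above note, the handler may be invoked after a connection failure, before any job row was fetched.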