que 0.2.0 → 0.3.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: f76b4a74e17281ea1007a2c313e146bcda43e1fb
- data.tar.gz: 6ff1046d4cf009062327e79fb9be628032b252ce
+ metadata.gz: eca6d367104b9c0dcb8c5fb591641bebdc7c4a8f
+ data.tar.gz: a71620a15051793942059b7c423a2f78cc3b6f16
  SHA512:
- metadata.gz: 319ea6bea73cdd606b4c3103beee120564aa29976255b2eb14992029b2f13768e33164b5134ad5e0b39a038086380294f02d7226a5c2e1f5d092ef32ef03a2e0
- data.tar.gz: 44144ce58a99d38ced64a4e6259bafa66c7d6373ad125844efc27f45c990c764ad1e04e8e1e1b33f834e9b9be6ae3e812f2cfd3f9ce65303d19a070070b63832
+ metadata.gz: d59625b0b10cca06e54f9a6714bc5cb701d4895936ad9a5ff4a6500ab1f165f0ad5f3c293fded447979e3aa34379480591518e0819160c857a2a22a754c70d8b
+ data.tar.gz: 92741560f486b993a950538cb8187fa7183ef02822d386479b978401d5fc08d5750a93418b44c68fa66eb5ace6d90691fb9c9c86e472caad59e5650cab3b27cb
@@ -1,3 +1,15 @@
+ ### 0.3.0 (2013-12-21)
+
+ * Add Que.stop!, which immediately kills all jobs being worked in the process.
+
+   This can leave database connections and such in an unpredictable state, and so should only be used when the process is exiting.
+
+ * Use Que.stop! to safely handle processes that exit while Que is running.
+
+   Previously, a job that was in the middle of a transaction when the process was killed with SIGINT or SIGTERM would have had its work committed prematurely.
+
+ * Clean up internals and hammer out several race conditions.
+
  ### 0.2.0 (2013-11-30)

  * Officially support JRuby 1.7.5+. Earlier versions may work.
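
For context, the new Que.stop! call is meant to be registered for process shutdown. A minimal sketch of the pattern (the railtie and rake task diffs below show the actual wiring introduced in this release):

    # Interrupt any jobs still being worked when the process exits.
    at_exit { Que.stop! }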
data/README.md CHANGED
@@ -1,44 +1,28 @@
  # Que

- Que is a queue for Ruby applications that manages jobs using PostgreSQL's advisory locks. There are several advantages to this design:
+ **TL;DR: Que is a high-performance alternative to DelayedJob or QueueClassic that improves the reliability of your application by helping you keep your jobs [consistent](https://en.wikipedia.org/wiki/ACID#Consistency) with the rest of your data.**

- * **Safety** - If a Ruby process dies, its jobs won't be lost, or left in a locked or ambiguous state - they immediately become available for any other worker to pick up.
- * **Efficiency** - Locking a job doesn't incur a disk write or hold open a transaction.
- * **Concurrency** - There's no locked_at column or SELECT FOR UPDATE-style locking, so workers don't block each other when locking jobs.
+ Que is a queue for Ruby and PostgreSQL that manages jobs using [advisory locks](http://www.postgresql.org/docs/current/static/explicit-locking.html#ADVISORY-LOCKS), which gives it several advantages over other RDBMS-backed queues:

- Additionally, there are the general benefits of storing jobs in Postgres rather than a dedicated queue:
+ * **Concurrency** - Workers don't block each other when trying to lock jobs, as often occurs with "SELECT FOR UPDATE"-style locking. This allows for very high throughput with a large number of workers.
+ * **Efficiency** - Locks are held in memory, so locking a job doesn't incur a disk write. These first two points are what limit performance with other queues - all workers trying to lock jobs have to wait behind one that's persisting its UPDATE on a locked_at column to disk (and the disks of however many other servers your database is replicating to). Under heavy load, Que's bottleneck is CPU, not I/O.
+ * **Safety** - If a Ruby process dies, the jobs it is working won't be lost, or left in a locked or ambiguous state - they immediately become available for any other worker to pick up.

- * **Transactional control** - Queue a job along with other changes to your database, and it'll commit or rollback with everything else.
- * **Atomic backups** - Your jobs and data can be backed up together and restored as a snapshot. If your jobs relate to your data (and they usually do), this is important if you don't want to lose anything during a restoration.
- * **Fewer dependencies** - If you're already using Postgres (and you probably should be), a separate queue is another moving part that can break.
+ Additionally, there are the general benefits of storing jobs in Postgres, alongside the rest of your data, rather than in Redis or a dedicated queue:

- Que's primary goal is reliability. When it's stable, you should be able to leave your application running indefinitely without worrying about jobs being lost due to a lack of transactional support, or left in limbo due to a crashing process. Que works hard to make sure that jobs you queue are performed exactly once.
+ * **Transactional Control** - Queue a job along with other changes to your database, and it'll commit or rollback with everything else. If you're using ActiveRecord or Sequel, Que can piggyback on their connections, so setup is simple and jobs are protected by the transactions you're already using.
+ * **Atomic Backups** - Your jobs and data can be backed up together and restored as a snapshot. If your jobs relate to your data (and they usually do), there's no risk of jobs falling through the cracks during a recovery.
+ * **Fewer Dependencies** - If you're already using Postgres (and you probably should be), a separate queue is another moving part that can break.

- Que's secondary goal is performance. It won't be able to match the speed or throughput of a dedicated queue, or maybe even a Redis-backed queue, but it should be plenty fast for most use cases. It also includes a worker pool, so that multiple threads can process jobs in the same process. It can even do this in the background of your web process - if you're running on Heroku, for example, you won't need to run a separate worker dyno.
+ Que's primary goal is reliability. When it's stable, you should be able to leave your application running indefinitely without worrying about jobs being lost due to a lack of transactional support, or left in limbo due to a crashing process. Que does everything it can to ensure that jobs you queue are performed exactly once (though the occasional repetition of a job can be impossible to avoid - see the wiki page on [how to write a reliable job](https://github.com/chanks/que/wiki/Writing-Reliable-Jobs)).

- The rakefile includes a benchmark that tries to compare the performance and concurrency of Que's locking mechanism to that of DelayedJob and QueueClassic. On my i5 quad-core laptop, the results are along the lines of:
+ Que's secondary goal is performance. It won't be able to match the speed or throughput of a dedicated queue, or maybe even a Redis-backed queue, but it should be fast enough for most use cases. In [benchmarks](https://github.com/chanks/queue-shootout) on an AWS c3.8xlarge instance, Que approaches 10,000 jobs per second, or about twenty times the throughput of DelayedJob or QueueClassic. You are encouraged to try things out on your own production hardware, though.

- ~/que $ rake benchmark_queues
- Benchmarking 1000 jobs, 10 workers and synchronous_commit = on...
- Benchmarking delayed_job... 1000 jobs in 30.086127964 seconds = 33 jobs per second
- Benchmarking queue_classic... 1000 jobs in 19.642309724 seconds = 51 jobs per second
- Benchmarking que... 1000 jobs in 2.31483287 seconds = 432 jobs per second
- Benchmarking que_lateral... 1000 jobs in 2.383887915 seconds = 419 jobs per second
+ Que also includes a worker pool, so that multiple threads can process jobs in the same process. It can even do this in the background of your web process - if you're running on Heroku, for example, you won't need to run a separate worker dyno.

- Or, minus the I/O limitations of my 5400 rpm hard drive:
+ *Please be careful when running Que in production. It's still very new compared to other RDBMS-backed queues, and there may be issues that haven't been ironed out yet. Bug reports are welcome.*

- ~/que $ SYNCHRONOUS_COMMIT=off rake benchmark_queues
- Benchmarking 1000 jobs, 10 workers and synchronous_commit = off...
- Benchmarking delayed_job... 1000 jobs in 4.906474583 seconds = 204 jobs per second
- Benchmarking queue_classic... 1000 jobs in 1.587542394 seconds = 630 jobs per second
- Benchmarking que... 1000 jobs in 0.39063824 seconds = 2560 jobs per second
- Benchmarking que_lateral... 1000 jobs in 0.392068154 seconds = 2551 jobs per second
-
- As always, this is a single naive benchmark that doesn't represent anything real, take it with a grain of salt, try it for yourself, etc.
-
- **Que was extracted from an app of mine that ran in production for a few months. That queue worked well, but Que has been adapted somewhat from that design in order to support multiple ORMs and other features. Please don't trust Que with your production data until we've all tried to break it a few times.**
-
- Right now, Que is only tested on Ruby 2.0 - it may work on other versions. It requires Postgres 9.2+ for the JSON type. The benchmark requires Postgres 9.3, since it also tests a variant of the typical locking query that uses the new LATERAL syntax.
+ Que is tested on Ruby 2.0, Rubinius and JRuby (with the `jruby-pg` gem, which is [not yet functional with ActiveRecord](https://github.com/chanks/que/issues/4#issuecomment-29561356)). It requires Postgres 9.2+ for the JSON datatype.

  ## Installation

@@ -58,7 +42,7 @@ Or install it yourself as:

  The following assumes you're using Rails 4.0. Que hasn't been tested with previous versions of Rails.

- First, generate a migration for the jobs table.
+ First, generate a migration for the job table.

  rails generate que:install
  rake db:migrate
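
The generator's output isn't shown in this diff, but based on the :create_table SQL later in this changeset, the migration it produces sets up a table along these lines (a sketch only; the generated file may differ):

    class CreateQueJobs < ActiveRecord::Migration
      def up
        # Same definition as the :create_table statement in lib/que's SQL module.
        execute <<-SQL
          CREATE TABLE que_jobs
          (
            priority    integer     NOT NULL DEFAULT 1,
            run_at      timestamptz NOT NULL DEFAULT now(),
            job_id      bigserial   NOT NULL,
            job_class   text        NOT NULL,
            args        json        NOT NULL DEFAULT '[]'::json,
            error_count integer     NOT NULL DEFAULT 0,
            last_error  text,
            CONSTRAINT que_jobs_pkey PRIMARY KEY (priority, run_at, job_id)
          )
        SQL
      end

      def down
        drop_table :que_jobs
      end
    end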
@@ -71,7 +55,7 @@ Create a class for each type of job you want to run:
  @default_priority = 3
  @default_run_at = proc { 1.minute.from_now }

- def run(user_id, card_id)
+ def run(user_id, card_id, your_options = {})
  # Do stuff.

  ActiveRecord::Base.transaction do
@@ -87,24 +71,24 @@ Create a class for each type of job you want to run:
  end
  end

- Queue your job. Again, it's best to do this in a transaction with other changes you're making.
+ Queue your job. Again, it's best to do this in a transaction with other changes you're making. Also note that any arguments you pass will be serialized to JSON and back again, so stick to simple types (strings, integers, floats, hashes, and arrays).

  ActiveRecord::Base.transaction do
  # Persist credit card information
  card = CreditCard.create(params[:credit_card])
- ChargeCreditCard.queue(current_user.id, card.id)
+ ChargeCreditCard.queue(current_user.id, card.id, :your_custom_option => 'whatever')
  end
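
Because the arguments round-trip through JSON, symbols and symbol keys come back as strings when the job runs. A plain-Ruby illustration of the serialization the note above describes:

    require 'json'

    args = [42, {:your_custom_option => 'whatever'}]
    # Symbol keys become string keys after the round trip.
    JSON.parse(JSON.generate(args)) # => [42, {"your_custom_option"=>"whatever"}]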

  You can also schedule it to run at a specific time, or with a specific priority:

  # 1 is high priority, 5 is low priority.
- ChargeCreditCard.queue current_user.id, card.id, :run_at => 1.day.from_now, :priority => 5
+ ChargeCreditCard.queue current_user.id, card.id, :your_custom_option => 'whatever', :run_at => 1.day.from_now, :priority => 5

- There are a few ways to work jobs. In development and production, the default is for Que to run a pool of workers to process jobs in their own background threads. If you like, you can disable this behavior when configuring your Rails app:
+ To determine what happens when a job is queued, you can set Que's mode with `Que.mode = :off` or `config.que.mode = :off` in your application configuration. There are a few options for the mode (see the configuration sketch after this list):

- config.que.mode = :off
-
- You can also change the mode at any time with Que.mode = :off. The other options are :async and :sync. :async runs the background workers, while :sync will run any jobs you queue synchronously (that is, MyJob.queue runs the job immediately and won't return until it's completed). This makes your application's behavior easier to test, so it's the default in the test environment.
+ * `:off` - In this mode, queueing a job will simply insert it into the database - the current process will make no effort to run it. You should use this if you want to use a dedicated process to work tasks (there's a rake task to do this, see below). This is the default when running `rails console` in the development or production environments.
+ * `:async` - In this mode, a pool of background workers is spun up, each running in its own thread. They will intermittently check for new jobs. This is the default when running `rails server` in the development or production environments. By default, there are 4 workers and they'll check for a new job every 5 seconds. You can modify these options with `Que.worker_count = 8` or `config.que.worker_count = 8` and `Que.sleep_period = 1` or `config.que.sleep_period = 1`.
+ * `:sync` - In this mode, any jobs you queue will be run in the same thread, synchronously (that is, `MyJob.queue` runs the job and won't return until it's completed). This makes your application's behavior easier to test, so it's the default in the test environment.
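
Putting those options together, a typical setup in a Rails initializer or environment file might look like this (a sketch; each setting can equivalently go through `config.que.*`):

    Que.mode         = :async  # :off, :async, or :sync
    Que.worker_count = 8       # defaults to 4 workers
    Que.sleep_period = 1       # seconds between checks for new jobs; defaults to 5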

  If you don't want to run workers in your web process, you can also work jobs in a rake task, similar to how other queueing systems work:

@@ -114,25 +98,16 @@ If you don't want to run workers in your web process, you can also work jobs in
  # Or configure the number of workers.
  WORKER_COUNT=8 rake que:work

- # If your app code isn't thread-safe, you can stick to one worker.
+ # If your app code isn't thread-safe, be sure to stick to one worker.
  WORKER_COUNT=1 rake que:work

- If an error causes a job to fail, Que will repeat that job at intervals that increase exponentially with each error (using the same algorithm as DelayedJob). You can also hook Que into whatever error notification system you're using:
+ If an error causes a job to fail, Que will repeat that job at exponentially-increasing intervals, similar to DelayedJob (the job will be retried at 4 seconds, 19 seconds, 84 seconds, 259 seconds...). You can also hook Que into whatever error notification system you're using:

  config.que.error_handler = proc do |error|
  # Do whatever you want with the error object.
  end
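
The retry intervals quoted above come from the delayed_job-style formula that appears later in this diff (`count ** 4 + 3` seconds, where count is the number of times the job has failed). A quick sketch of the arithmetic, not Que's actual code:

    def retry_interval(error_count)
      error_count ** 4 + 3 # seconds until the next attempt
    end

    (1..4).map { |n| retry_interval(n) } # => [4, 19, 84, 259]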

- You can find documentation on more issues at the project's [Github wiki](https://github.com/chanks/que/wiki).
-
- ## TODO
-
- These aren't promises, just ideas for possible features:
-
- * Use LISTEN/NOTIFY to check for new jobs (or simpler, just wake a worker in the same process after a transaction commits a new job).
- * Multiple queues (in multiple tables?)
- * Integration with ActionMailer for easier mailings.
- * Add options for max_run_time and max_attempts. Make them specific to job classes.
+ You can find more documentation on the [Github wiki](https://github.com/chanks/que/wiki).

  ## Contributing

@@ -141,3 +116,13 @@ These aren't promises, just ideas for possible features:
  3. Commit your changes (`git commit -am 'Add some feature'`)
  4. Push to the branch (`git push origin my-new-feature`)
  5. Create new Pull Request
+
+ A note on running specs - Que's worker system is multithreaded and therefore prone to race conditions (especially on Rubinius). As such, if you've touched that code, a single spec run passing isn't a guarantee that any changes you've made haven't introduced bugs. One thing I like to do before pushing changes is to rerun the specs many times and watch for hangs. You can do this from the command line with something like:
+
+ for i in {1..1000}; do rspec -b --seed $i; done
+
+ This will iterate the specs one thousand times, each with a different ordering. If the specs hang, note what the seed number was on that iteration. For example, if the previous specs finished with "Randomized with seed 328", you know that there's a hang with seed 329, and you can narrow it down to a specific spec with:
+
+ for i in {1..1000}; do LOG_SPEC=true rspec -b --seed 329; done
+
+ Note that we iterate because there's no guarantee that the hang would reappear with a single additional run, so we need to rerun the specs until it reappears. The LOG_SPEC parameter will output the name and file location of each spec before it is run, so you can easily tell which spec is hanging, and you can continue narrowing things down from there.
@@ -6,7 +6,7 @@ module Que
  class InstallGenerator < Rails::Generators::Base
  include Rails::Generators::Migration

- namespace "que:install"
+ namespace 'que:install'
  self.source_paths << File.join(File.dirname(__FILE__), 'templates')
  desc "Generates a migration to add Que's job table."

data/lib/que.rb CHANGED
@@ -50,9 +50,9 @@ module Que
  logger.send level, "[Que] #{text}" if logger
  end

- # Duplicate some Worker config methods to the Que module for convenience.
- [:mode, :mode=, :worker_count=, :sleep_period, :sleep_period=].each do |meth|
- define_method(meth){|*args| Worker.send(meth, *args)}
+ # Copy some of the Worker class' config methods here for convenience.
+ [:mode, :mode=, :worker_count=, :sleep_period, :sleep_period=, :stop!].each do |meth|
+ define_method(meth) { |*args| Worker.send(meth, *args) }
  end
  end
  end
@@ -7,7 +7,7 @@ module Que
  end

  # Subclasses should define their own run methods, but keep an empty one
- # here so we can just do Que::Job.queue in testing.
+ # here so that Que::Job.queue can queue an empty job in testing.
  def run(*args)
  end

@@ -54,11 +54,11 @@ module Que
  end

  def work
- # Job.work will typically be called in a loop, where we'd sleep when
- # there's no more work to be done, so its return value should reflect
- # whether we should hit the database again or not. So, return truthy
- # if we worked a job or encountered a typical error while working a
- # job, and falsy if we found nothing to do or hit a connection error.
+ # Job.work is typically called in a loop, where we sleep when there's
+ # no more work to be done, so its return value should reflect whether
+ # we should look for another job or not. So, return truthy if we
+ # worked a job or encountered a typical error while working a job, and
+ # falsy if we found nothing to do or hit a connection error.

  # Since we're taking session-level advisory locks, we have to hold the
  # same connection throughout the process of getting a job, working it,
@@ -66,16 +66,16 @@ module Que
  Que.adapter.checkout do
  begin
  if row = Que.execute(:lock_job).first
- # Edge case: It's possible to have grabbed a job that's already
- # been worked, if the SELECT took its MVCC snapshot while the
- # job was processing, but didn't attempt the advisory lock until
- # it was finished. Now that we have the job lock, we know that a
+ # Edge case: It's possible for the lock_job query to have
+ # grabbed a job that's already been worked, if it took its MVCC
+ # snapshot while the job was processing, but didn't attempt the
+ # advisory lock until it was finished. Since we have the lock, a
  # previous worker would have deleted it by now, so we just
  # double check that it still exists before working it.

  # Note that there is currently no spec for this behavior, since
  # I'm not sure how to reliably commit a transaction that deletes
- # the job in a separate thread between this lock and check.
+ # the job in a separate thread between lock_job and check_job.
  return true if Que.execute(:check_job, [row['priority'], row['run_at'], row['job_id']]).none?

  run_job(row)
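
As the comments above describe, Job.work reports through its return value whether the caller should immediately look for another job (truthy) or back off (falsy). A hypothetical polling loop using that contract, for illustration only; the actual Worker class isn't part of this diff:

    loop do
      # Sleep between polls only when there was nothing to do or we hit a
      # connection error; otherwise go straight back for the next job.
      sleep Que.sleep_period unless Que::Job.work
    end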
@@ -86,25 +86,27 @@ module Que
  rescue => error
  begin
  if row
- # Borrowed the exponential backoff formula and error data format from delayed_job.
+ # Borrowed the backoff formula and error data format from delayed_job.
  count = row['error_count'].to_i + 1
- run_at = Time.now + (count ** 4 + 3)
+ run_at = count ** 4 + 3
  message = "#{error.message}\n#{error.backtrace.join("\n")}"
  Que.execute :set_error, [count, run_at, message, row['priority'], row['run_at'], row['job_id']]
  end
  rescue
- # If we can't reach the DB for some reason, too bad, but don't
- # let it crash the work loop.
+ # If we can't reach the database for some reason, too bad, but
+ # don't let it crash the work loop.
  end

  if Que.error_handler
+ # Similarly, protect the work loop from a failure of the error handler.
  Que.error_handler.call(error) rescue nil
  end

  # If it's a garden variety error, we can just return true, pick up
  # another job, no big deal. If it's a PG::Error, though, assume
  # it's a disconnection or something and that we shouldn't just hit
- # the database again right away.
+ # the database again right away. We could be a lot more
+ # sophisticated about what errors we delay for, though.
  return !error.is_a?(PG::Error)
  ensure
  # Clear the advisory lock we took when locking the job. Important
@@ -1,19 +1,23 @@
  module Que
  class Railtie < Rails::Railtie
  config.que = Que
- config.que.connection = ::ActiveRecord if defined?(::ActiveRecord)
- config.que.mode = :sync if Rails.env.test?
+
+ Que.mode = :sync if Rails.env.test?
+ Que.connection = ::ActiveRecord if defined?(::ActiveRecord)

  rake_tasks do
  load 'que/rake_tasks.rb'
  end

- initializer "que.setup" do
- ActiveSupport.on_load(:after_initialize) do
+ initializer 'que.setup' do
+ ActiveSupport.on_load :after_initialize do
  Que.logger ||= Rails.logger

  # Only start up the worker pool if running as a server.
  Que.mode ||= :async if defined? Rails::Server
+
+ # When the process exits, safely interrupt any jobs that are still running.
+ at_exit { Que.stop! }
  end
  end
  end
@@ -1,21 +1,26 @@
- require 'logger'
-
  namespace :que do
  desc "Process Que's jobs using a worker pool"
  task :work => :environment do
+ require 'logger'
+
  Que.logger = Logger.new(STDOUT)
  Que.mode = :async
  Que.worker_count = (ENV['WORKER_COUNT'] || 4).to_i

- %w(INT TERM).each do |signal|
- trap signal do
- puts "SIG#{signal} caught, finishing current jobs and shutting down..."
- Que.mode = :off
- $stop = true
- end
+ # When changing how signals are caught, be sure to test the behavior with
+ # the rake task in tasks/safe_shutdown.rb.
+ at_exit do
+ puts "Stopping Que..."
+ Que.stop!
  end

- loop { sleep 0.01; break if $stop }
+ stop = false
+ trap('INT'){stop = true}
+
+ loop do
+ sleep 0.01
+ break if stop
+ end
  end

  desc "Create Que's job table"
@@ -1,7 +1,7 @@
  module Que
  SQL = {
- # Thanks to RhodiumToad in #postgresql for the job lock CTE and its lateral
- # variant. They were modified only slightly from his design.
+ # Thanks to RhodiumToad in #postgresql for the job lock CTE. It was
+ # modified only slightly from his design.
  :lock_job => (
  <<-SQL
  WITH RECURSIVE cte AS (
@@ -29,64 +29,13 @@ module Que
  ) AS t1
  )
  )
- SELECT job_id, priority, run_at, args, job_class, error_count
+ SELECT priority, run_at, job_id, job_class, args, error_count
  FROM cte
  WHERE locked
  LIMIT 1
  SQL
  ).freeze,

- # Here's an alternate scheme using LATERAL, which will work in Postgres 9.3+.
- # Basically the same, but benchmark to see if it's faster/just as reliable.
-
- # WITH RECURSIVE cte AS (
- # SELECT *, pg_try_advisory_lock(s.job_id) AS locked
- # FROM (
- # SELECT *
- # FROM que_jobs
- # WHERE run_at <= now()
- # ORDER BY priority, run_at, job_id
- # LIMIT 1
- # ) s
- # UNION ALL (
- # SELECT j.*, pg_try_advisory_lock(j.job_id) AS locked
- # FROM (
- # SELECT *
- # FROM cte
- # WHERE NOT locked
- # ) t,
- # LATERAL (
- # SELECT *
- # FROM que_jobs
- # WHERE run_at <= now()
- # AND (priority, run_at, job_id) > (t.priority, t.run_at, t.job_id)
- # ORDER BY priority, run_at, job_id
- # LIMIT 1
- # ) j
- # )
- # )
- # SELECT *
- # FROM cte
- # WHERE locked
- # LIMIT 1
-
- :create_table => (
- <<-SQL
- CREATE TABLE que_jobs
- (
- priority integer NOT NULL DEFAULT 1,
- run_at timestamptz NOT NULL DEFAULT now(),
- job_id bigserial NOT NULL,
- job_class text NOT NULL,
- args json NOT NULL DEFAULT '[]'::json,
- error_count integer NOT NULL DEFAULT 0,
- last_error text,
-
- CONSTRAINT que_jobs_pkey PRIMARY KEY (priority, run_at, job_id)
- )
- SQL
- ).freeze,
-
  :check_job => (
  <<-SQL
  SELECT 1 AS one
@@ -101,7 +50,7 @@ module Que
  <<-SQL
  UPDATE que_jobs
  SET error_count = $1::integer,
- run_at = $2::timestamptz,
+ run_at = now() + $2::integer * '1 second'::interval,
  last_error = $3::text
  WHERE priority = $4::integer
  AND run_at = $5::timestamptz
@@ -116,6 +65,23 @@ module Que
  AND run_at = $2::timestamptz
  AND job_id = $3::bigint
  SQL
+ ).freeze,
+
+ :create_table => (
+ <<-SQL
+ CREATE TABLE que_jobs
+ (
+ priority integer NOT NULL DEFAULT 1,
+ run_at timestamptz NOT NULL DEFAULT now(),
+ job_id bigserial NOT NULL,
+ job_class text NOT NULL,
+ args json NOT NULL DEFAULT '[]'::json,
+ error_count integer NOT NULL DEFAULT 0,
+ last_error text,
+
+ CONSTRAINT que_jobs_pkey PRIMARY KEY (priority, run_at, job_id)
+ )
+ SQL
  ).freeze
  }
  end