que 0.4.0 → 0.5.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
-   metadata.gz: 221284cc1195f925ca4ad320ddc6ea16e2441a5f
-   data.tar.gz: 6233e8871fb93b5e2f738653e9070c019ecf1baa
+   metadata.gz: 71cd78e2494e219cb26e3513ff4e41bbb335161c
+   data.tar.gz: 8d51de1cebe1eb810f1acf87e2a7b540a72aadfc
  SHA512:
-   metadata.gz: 9f85215c53bf87781cb503747738446878414dae0ac1afdb44536870f0df81a0d0761441b6a3863865fcc8d30a35ba52951399e0a656adfdfb54cdd1d0355ffb
-   data.tar.gz: 6d51c478c98f5fca91afd67c2c07779543fbae45cc7162cf041c94aeb8921e18b4df7a07238d42a3f20dd4d473c22c01727e92182d3b0bd5c332d08048920231
+   metadata.gz: 1a1d708935bfd81d8cf3c52732a5448a804dc7650741a0313f9879998f6eafd21fa5d2a32f7d9b9297041b4f783895c7f6305c0e742836dda894650fe8b01d7c
+   data.tar.gz: bac784c3fb91f11386c0b78cde93cd61754b0e66f03d00a646c3c3a9152f0509269595f55ffc7441aa62b7bc9926e39e672644d6f6dd2f39c88fd1bb7da17453
data/.travis.yml ADDED
@@ -0,0 +1,15 @@
+ language: ruby
+ rvm:
+   - "1.9.3"
+   - "2.0"
+   - "2.1"
+   - "rbx-2.1.1"
+   - "jruby-1.7.5"
+ before_script:
+   - psql -c 'create database "que-test"' -U postgres
+   - bundle exec ruby -r sequel -r ./lib/que -e 'Que.connection=Sequel.connect("postgres://localhost/que-test"); Que.migrate!'
+
+ script: "./spec/travis.rb"
+
+ addons:
+   postgresql: 9.3
data/CHANGELOG.md CHANGED
@@ -1,4 +1,26 @@
- ### Unreleased
+ ### 0.5.0 (2014-01-14)
+
+ * When running a worker pool inside your web process on ActiveRecord, Que will now wake a worker once a transaction containing a queued job is committed.
+
+ * The `que:work` rake task now has a default wake_interval of 0.1 seconds, since it relies exclusively on polling to pick up jobs. You can set a QUE_WAKE_INTERVAL environment variable to change this. The environment variable that sets the worker pool size for the rake task has also been changed from WORKER_COUNT to QUE_WORKER_COUNT.
+
+ * Officially support Ruby 1.9.3. Note that due to the Thread#kill problems (see "Remove Que.stop!" below) there's a danger of data corruption when running under 1.9, though.
+
+ * The default priority for jobs is now 100 (it was 1 before). As always (and as in delayed_job), a lower priority number means the job is more important. You can migrate the schema to version 2 to set the new default value on the que_jobs table, though that's only necessary if you're doing your own INSERTs - if you use `MyJob.queue`, it's already taken care of.
+
+ * Added a migration system to make it easier to change the schema when updating Que. You can now write, for example, `Que.migrate!(:version => 2)` in your migrations. Migrations are run transactionally.
+
+ * The logging format has changed to be more easily machine-readable. You can also now customize the logging format by assigning a callable to Que.log_formatter=. See the new doc on [logging](https://github.com/chanks/que/blob/master/docs/logging.md) for details. The default logger level is INFO - for less critical information (such as when no jobs were found to be available, or when a job-lock race condition has been detected and avoided) you can set the QUE_LOG_LEVEL environment variable to DEBUG.
+
+ * MultiJson is now a soft dependency. Que will use it if it is available, but it is not required.
+
+ * Remove Que.stop!.
+
+   Using Thread#raise to kill workers is a bad idea - the results are unpredictable and nearly impossible to spec reliably. Its purpose was to prevent premature commits in ActiveRecord/Sequel when a thread is killed during shutdown, but it's possible to detect that situation on Ruby 2.0+, so this is really better handled by the ORMs directly. See the pull requests for [Sequel](https://github.com/jeremyevans/sequel/pull/752) and [ActiveRecord](https://github.com/rails/rails/pull/13656).
+
+   Now, when a process exits, if the worker pool is running (whether in a rake task or in a web process), the exit will be stalled until all workers have finished their current jobs. If you have long-running jobs, this may take a long time. If you need the process to exit immediately, you can SIGKILL without any threat of committing prematurely.
+
+ ### 0.4.0 (2014-01-05)
 
  * Que.wake_all! was added, as a simple way to wake up all workers in the pool.
 
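The two most consequential additions above - the migration system and the pluggable log formatter - boil down to a couple of method calls. A minimal sketch, using only the APIs named in the changelog (the formatter body itself is illustrative):

    # Bring the que_jobs table to a specific schema version (runs in a transaction).
    Que.migrate! :version => 2

    # Replace the default JSON log output with any callable that takes a hash.
    Que.log_formatter = proc do |data|
      "#{data[:event]} handled by thread #{data[:thread]}"
    end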
data/README.md CHANGED
@@ -6,7 +6,7 @@ Que is a queue for Ruby and PostgreSQL that manages jobs using [advisory locks](
 
  * **Concurrency** - Workers don't block each other when trying to lock jobs, as often occurs with "SELECT FOR UPDATE"-style locking. This allows for very high throughput with a large number of workers.
  * **Efficiency** - Locks are held in memory, so locking a job doesn't incur a disk write. These first two points are what limit performance with other queues - all workers trying to lock jobs have to wait behind one that's persisting its UPDATE on a locked_at column to disk (and the disks of however many other servers your database is synchronously replicating to). Under heavy load, Que's bottleneck is CPU, not I/O.
- * **Safety** - If a Ruby process dies, the jobs it is working won't be lost, or left in a locked or ambiguous state - they immediately become available for any other worker to pick up.
+ * **Safety** - If a Ruby process dies, the jobs it's working won't be lost, or left in a locked or ambiguous state - they immediately become available for any other worker to pick up.
 
  Additionally, there are the general benefits of storing jobs in Postgres, alongside the rest of your data, rather than in Redis or a dedicated queue:
 
@@ -14,13 +14,13 @@ Additionally, there are the general benefits of storing jobs in Postgres, alongs
  * **Atomic Backups** - Your jobs and data can be backed up together and restored as a snapshot. If your jobs relate to your data (and they usually do), there's no risk of jobs falling through the cracks during a recovery.
  * **Fewer Dependencies** - If you're already using Postgres (and you probably should be), a separate queue is another moving part that can break.
 
- Que's primary goal is reliability. When it's stable, you should be able to leave your application running indefinitely without worrying about jobs being lost due to a lack of transactional support, or left in limbo due to a crashing process. Que does everything it can to ensure that jobs you queue are performed exactly once (though the occasional repetition of a job can be impossible to avoid - see the docs on [how to write a reliable job](https://github.com/chanks/que/blob/master/docs/writing_reliable_jobs.md)).
+ Que's primary goal is reliability. You should be able to leave your application running indefinitely without worrying about jobs being lost due to a lack of transactional support, or left in limbo due to a crashing process. Que does everything it can to ensure that jobs you queue are performed exactly once (though the occasional repetition of a job can be impossible to avoid - see the docs on [how to write a reliable job](https://github.com/chanks/que/blob/master/docs/writing_reliable_jobs.md)).
 
  Que's secondary goal is performance. It won't be able to match the speed or throughput of a dedicated queue, or maybe even a Redis-backed queue, but it should be fast enough for most use cases. In [benchmarks of RDBMS queues](https://github.com/chanks/queue-shootout) using PostgreSQL 9.3 on an AWS c3.8xlarge instance, Que approaches 10,000 jobs per second, or about twenty times the throughput of DelayedJob or QueueClassic. You are encouraged to try things out on your own production hardware, though.
 
  Que also includes a worker pool, so that multiple threads can process jobs in the same process. It can even do this in the background of your web process - if you're running on Heroku, for example, you don't need to run a separate worker dyno.
 
- *Please be careful when running Que in production. It's still very new compared to other RDBMS-backed queues, and there may be issues that haven't been ironed out yet. Bug reports are welcome.*
+ *Please keep an eye out for problems when running Que in production. It's still new compared to other RDBMS-backed queues, and there may be issues that haven't been ironed out yet. Bug reports are welcome.*
 
  Que is tested on Ruby 2.0, Rubinius and JRuby (with the `jruby-pg` gem, which is [not yet functional with ActiveRecord](https://github.com/chanks/que/issues/4#issuecomment-29561356)). It requires Postgres 9.2+ for the JSON datatype.
 
@@ -40,7 +40,7 @@ Or install it yourself as:
 
  ## Usage
 
- The following assumes you're using Rails 4.0 and ActiveRecord. *Que hasn't been tested with versions of Rails before 4.0, and may or may not work with them.* For more information, or instructions on using Que outside of Rails or with Sequel or no ORM, see the [documentation](https://github.com/chanks/que/blob/master/docs).
+ The following assumes you're using Rails 4.0 and ActiveRecord. *Que hasn't been tested with versions of Rails before 4.0, and may or may not work with them.* See the [/docs directory](https://github.com/chanks/que/blob/master/docs) for instructions on using Que [outside of Rails](https://github.com/chanks/que/blob/master/docs/advanced_setup.md), and with [Sequel](https://github.com/chanks/que/blob/master/docs/using_sequel.md) or [no ORM](https://github.com/chanks/que/blob/master/docs/using_plain_connections.md), among other things.
 
  First, generate and run a migration for the job table.
 
@@ -51,15 +51,19 @@ Create a class for each type of job you want to run:
 
    # app/jobs/charge_credit_card.rb
    class ChargeCreditCard < Que::Job
-     # Default options for this job. These may be omitted.
-     @default_priority = 3
+     # Default settings for this job. These are optional - without them, jobs
+     # will default to priority 1 and run immediately.
+     @default_priority = 10
      @default_run_at = proc { 1.minute.from_now }
 
-     def run(user_id, card_id, your_options = {})
+     def run(user_id, options)
        # Do stuff.
+       user = User[user_id]
+       card = CreditCard[options[:credit_card_id]]
 
        ActiveRecord::Base.transaction do
          # Write any changes you'd like to the database.
+         user.update_attributes :charged_at => Time.now
 
          # It's best to destroy the job in the same transaction as any other
          # changes you make. Que will destroy the job for you after the run
@@ -76,18 +80,18 @@ Queue your job. Again, it's best to do this in a transaction with other changes
    ActiveRecord::Base.transaction do
      # Persist credit card information
      card = CreditCard.create(params[:credit_card])
-     ChargeCreditCard.queue(current_user.id, card.id, :your_custom_option => 'whatever')
+     ChargeCreditCard.queue(current_user.id, :credit_card_id => card.id)
    end
 
- You can also schedule it to run at a specific time, or with a specific priority:
+ You can also add options to run the job after a specific time, or with a specific priority:
 
-   # 1 is high priority, 5 is low priority.
-   ChargeCreditCard.queue current_user.id, card.id, :your_custom_option => 'whatever', :run_at => 1.day.from_now, :priority => 5
+   # The default priority is 100, and a lower number means a higher priority. 5 would be very important.
+   ChargeCreditCard.queue current_user.id, :credit_card_id => card.id, :run_at => 1.day.from_now, :priority => 5
 
  To determine what happens when a job is queued, you can set Que's mode in your application configuration. There are a few options for the mode:
 
- * `config.que.mode = :off` - In this mode, queueing a job will simply insert it into the database - the current process will make no effort to run it. You should use this if you want to use a dedicated process to work tasks (there's a rake task to do this, see below). This is the default when running `rails console` in the development or production environments.
- * `config.que.mode = :async` - In this mode, a pool of background workers is spun up, each running in their own thread. This is the default when running `rails server` in the development or production environments. See the docs for [more information on managing workers](https://github.com/chanks/que/blob/master/docs/managing_workers.md).
+ * `config.que.mode = :off` - In this mode, queueing a job will simply insert it into the database - the current process will make no effort to run it. You should use this if you want to use a dedicated process to work tasks (there's a rake task to do this, see below). This is the default when running `rails console`.
+ * `config.que.mode = :async` - In this mode, a pool of background workers is spun up, each running in their own thread. This is the default when running `rails server`. See the docs for [more information on managing workers](https://github.com/chanks/que/blob/master/docs/managing_workers.md).
  * `config.que.mode = :sync` - In this mode, any jobs you queue will be run in the same thread, synchronously (that is, `MyJob.queue` runs the job and won't return until it's completed). This makes your application's behavior easier to test, so it's the default in the test environment.
 
  ## Contributing
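For context, here is roughly where the mode settings described above would live in a Rails app - a sketch only, with an illustrative application name and environment files:

    # config/environments/production.rb (illustrative path)
    MyApp::Application.configure do
      # Work queued jobs in-process with a pool of background worker threads.
      config.que.mode = :async
    end

    # config/environments/test.rb (illustrative path)
    MyApp::Application.configure do
      # Run jobs synchronously as they are queued, which is easier to test against.
      config.que.mode = :sync
    end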
@@ -22,22 +22,24 @@ There are other docs to read if you're using [Sequel](https://github.com/chanks/
 
  After you've connected Que to the database, you can manage the jobs table:
 
-   # Create the jobs table:
-   Que.create!
+   # Create/update the jobs table to the latest schema version:
+   Que.migrate!
 
-   # Clear the jobs table of all jobs:
-   Que.clear!
+ You'll want to migrate to a specific version if you're using migration files, to ensure that they work the same way even when you upgrade Que in the future:
 
-   # Drop the jobs table:
-   Que.drop!
+   # Update the schema to version #2.
+   Que.migrate! :version => 2
 
- ### Other Setup
+   # To reverse the migration, drop the jobs table entirely:
+   Que.migrate! :version => 0
 
- You can give Que a logger to use if you like:
+ There's also a helper method to clear the jobs table:
 
-   Que.logger = Logger.new(STDOUT)
+   Que.clear!
+
+ ### Other Setup
 
- You'll also need to set Que's mode manually:
+ You'll need to set Que's mode manually:
 
    # Start the worker pool:
    Que.mode = :async
@@ -47,4 +49,4 @@ You'll also need to set Que's mode manually:
 
  Be sure to read the docs on [managing workers](https://github.com/chanks/que/blob/master/docs/managing_workers.md) for more information on using the worker pool.
 
- You may also want to set up an [error handler](https://github.com/chanks/que/blob/master/docs/error_handling.md) to track errors raised by jobs.
+ You'll also want to set up [logging](https://github.com/chanks/que/blob/master/docs/logging.md) and an [error handler](https://github.com/chanks/que/blob/master/docs/error_handling.md) to track errors raised by jobs.
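Putting the pieces from this doc together, a bare-bones non-Rails setup might look like the following sketch (the Sequel connection URL is illustrative; `Que.connection=` and the other calls are the ones shown above and in the .travis.yml added by this release):

    require 'sequel'
    require 'que'

    Que.connection = Sequel.connect('postgres://localhost/my_app')  # illustrative URL
    Que.migrate! :version => 2   # create/update the que_jobs table
    Que.mode = :async            # start the worker pool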
@@ -0,0 +1,113 @@
+ ## Customizing Que
+
+ One of Que's goals is to be easily extensible and hackable (and if anyone has any suggestions on ways to accomplish that, please [open an issue](https://github.com/chanks/que/issues)). This document is meant to demonstrate some of the ways Que can be used to accomplish tasks that it wasn't already designed for.
+
+ ### Recurring Jobs
+
+ Que's support for scheduling jobs makes it easy to implement reliable recurring jobs. For example, suppose you want to run a job every hour that processes the users created in that time:
+
+   class Cron < Que::Job
+     def run
+       users = User.where(:created_at => @attrs[:run_at]...(@attrs[:run_at] + 1.hour))
+       # Do something with users.
+
+       ActiveRecord::Base.transaction do
+         destroy
+         self.class.queue :run_at => @attrs[:run_at] + 1.hour
+       end
+     end
+   end
+
+ Note that instead of using Time.now in our database query, and requeueing the job at 1.hour.from_now, we use the run_at of the current job as our timestamp. This corrects for delays in running the job. Suppose that there's a backlog of priority jobs, or that the worker briefly goes down, and this job, which was supposed to run at 11:00 a.m., isn't run until 11:05 a.m. A lazier implementation would look for users created after 1.hour.ago, and miss those who signed up between 10:00 a.m. and 10:05 a.m.
+
+ This also compensates for clock drift: `Time.now` on one of your application servers may not match `Time.now` on another, which in turn may not match `now()` on your database server. The best way to stay reliable is to have a single authoritative source on what the current time is, and your best source for authoritative information is always your database (this is why Que uses Postgres' `now()` function when locking jobs, by the way).
+
+ Note also the use of the triple-dot range, which results in a query like `SELECT "users".* FROM "users" WHERE ("users"."created_at" >= '2014-01-08 10:00:00.000000' AND "users"."created_at" < '2014-01-08 11:00:00.000000')` instead of a BETWEEN condition. This ensures that a user created at exactly 11:00 a.m. isn't processed twice, once each by the jobs starting at 10 a.m. and 11 a.m.
+
+ ### DelayedJob-style Jobs
+
+ DelayedJob offers a simple API for delaying method calls on your objects:
+
+   @user.delay.activate!(@device)
+
+ The API is pleasant, but implementing it requires storing marshalled Ruby objects in the database, which is both inefficient and prone to bugs - for example, if you deploy an update that changes the name of an instance variable (a contained, internal change that might seem completely innocuous), the marshalled objects in the database will retain the old instance variable name and will behave unexpectedly when unmarshalled into the new Ruby code.
+
+ This is the danger of mixing the ephemeral state of a Ruby object in memory with the more permanent state of a database row. The advantage of Que's API is that, since your arguments are forced through a JSON serialization/deserialization process, it becomes your responsibility when designing a job class to establish an API for yourself (what the arguments to the job are and what they mean) that you will have to stick to in the future.
+
+ That said, if you want to queue jobs in the DelayedJob style, that can be done relatively easily:
+
+   class Delayed < Que::Job
+     def run(receiver, method, args)
+       Marshal.load(receiver).send method, *Marshal.load(args)
+     end
+   end
+
+   class DelayedAction
+     def initialize(receiver)
+       @receiver = receiver
+     end
+
+     def method_missing(method, *args)
+       Delayed.queue Marshal.dump(@receiver), method, Marshal.dump(args)
+     end
+   end
+
+   class Object
+     def delay
+       DelayedAction.new(self)
+     end
+   end
+
+ You can replace Marshal with YAML if you like.
+
+ ### QueueClassic-style Jobs
+
+ You may find it a hassle to keep an individual class file for each type of job. QueueClassic has a simpler design, wherein you simply give it a class method to call, like:
+
+   QC.enqueue("Kernel.puts", "hello world")
+
+ You can mimic this style with Que by using a simple job class:
+
+   class Command < Que::Job
+     def run(method, *args)
+       receiver, message = method.split('.')
+       Object.const_get(receiver).send(message, *args)
+     end
+   end
+
+   # Then:
+
+   Command.queue "Kernel.puts", "hello world"
+
+ ### Retaining Finished Jobs
+
+ Que deletes jobs from the queue as they are worked, in order to keep the `que_jobs` table and index small and efficient. If you have a need to hold onto finished jobs, the recommended way to do this is to add a second table to hold them, and then insert them there as they are deleted from the queue. You can use Ruby's inheritance mechanics to do this cleanly:
+
+   Que.execute "CREATE TABLE finished_jobs AS SELECT * FROM que_jobs LIMIT 0"
+   # Or, better, use a proper CREATE TABLE with not-null constraints, and add whatever indexes you like.
+
+   class MyJobClass < Que::Job
+     def destroy
+       Que.execute "INSERT INTO finished_jobs SELECT * FROM que_jobs WHERE priority = $1::integer AND run_at = $2::timestamptz AND job_id = $3::bigint", @attrs.values_at(:priority, :run_at, :job_id)
+       super
+     end
+   end
+
+ Then just have your job classes inherit from MyJobClass instead of Que::Job. If you need to query the jobs table and you want to include both finished and unfinished jobs, you might use:
+
+   Que.execute "CREATE VIEW all_jobs AS SELECT * FROM que_jobs UNION ALL SELECT * FROM finished_jobs"
+   Que.execute "SELECT * FROM all_jobs"
+
+ Alternately, if you want a more foolproof solution and you're not scared of PostgreSQL, you can use a trigger:
+
+   CREATE FUNCTION please_save_my_job()
+   RETURNS trigger
+   LANGUAGE plpgsql
+   AS $$
+     BEGIN
+       INSERT INTO finished_jobs SELECT (OLD).*;
+       RETURN OLD;
+     END;
+   $$;
+
+   CREATE TRIGGER keep_all_my_old_jobs BEFORE DELETE ON que_jobs FOR EACH ROW EXECUTE PROCEDURE please_save_my_job();
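One detail the recurring-job example near the top of this new doc leaves implicit is how the cycle gets started. Something along these lines would seed it - a sketch only, and `beginning_of_hour` assumes ActiveSupport is loaded, which the example's `1.hour` already relies on:

    # Queue the first Cron job on an hour boundary; each run then re-queues
    # itself one hour after its own run_at, as shown in the example above.
    Cron.queue :run_at => Time.now.beginning_of_hour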
@@ -2,7 +2,7 @@
 
  If an error is raised and left uncaught by your job, Que will save the error message and backtrace to the database and schedule the job to be retried later.
 
- If a given job fails repeatedly, Que will retry it at exponentially-increasing intervals equal to (failure_count[^4^] + 3) seconds. This means that a job will be retried 4 seconds after its first failure, 19 seconds after its second, 84 seconds after its third, 259 seconds after its fourth, and so on until it succeeds. This pattern is very similar to DelayedJob's.
+ If a given job fails repeatedly, Que will retry it at exponentially-increasing intervals equal to (failure_count^4 + 3) seconds. This means that a job will be retried 4 seconds after its first failure, 19 seconds after its second, 84 seconds after its third, 259 seconds after its fourth, and so on until it succeeds. This pattern is very similar to DelayedJob's.
 
  Unlike DelayedJob, however, there is currently no maximum number of failures after which jobs will be deleted. Que's assumption is that if a job is erroring perpetually (and not just transiently), you will want to take action to get the job working properly rather than simply losing it silently.
 
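The retry intervals quoted above follow directly from the formula; a quick check in Ruby:

    (1..4).map { |failure_count| failure_count**4 + 3 }
    #=> [4, 19, 84, 259]  # seconds after the 1st, 2nd, 3rd and 4th failures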
data/docs/logging.md ADDED
@@ -0,0 +1,42 @@
+ ## Logging
+
+ By default, Que logs important information in JSON to either Rails' logger (when running in a Rails web process) or STDOUT (when running as a rake task). So, your logs will look something like:
+
+   I, [2014-01-12T05:07:31.094201 #4687] INFO -- : {"lib":"que","thread":104928,"event":"job_worked","elapsed":0.01045,"job":{"priority":"1","run_at":"2014-01-12 05:07:31.081877+00","job_id":"4","job_class":"MyJob","args":[],"error_count":"0"}}
+
+ Of course you can have it log wherever you like:
+
+   Que.logger = Logger.new(...)
+
+   # Or, in your Rails configuration:
+
+   config.que.logger = Logger.new(...)
+
+ You can use Que's logger in your jobs anywhere you like:
+
+   class MyJob
+     def run
+       Que.log :my_output => "my string"
+     end
+   end
+
+   #=> I, [2014-01-12T05:13:11.006776 #4914] INFO -- : {"lib":"que","thread":24960,"my_output":"my string"}
+
+ Que will always add a 'lib' key, so you can easily filter its output from that of other sources, and the object_id of the thread that emitted the log, so you can follow the actions of a particular worker if you wish. You can also pass a :level key to set the level of the output:
+
+   Que.log :level => :debug, :my_output => 'my string'
+   #=> D, [2014-01-12T05:16:15.221941 #5088] DEBUG -- : {"lib":"que","thread":24960,"my_output":"my string"}
+
+ If you don't like JSON, you can also customize the format of the logging output by passing a callable object (such as a proc) to Que.log_formatter=. The proc should take a hash (the keys are symbols) and return a string. The keys and values are just as you would expect from the JSON output:
+
+   Que.log_formatter = proc do |data|
+     "Thread number #{data[:thread]} experienced a #{data[:event]}"
+   end
+
+ If the log formatter returns nil or false, nothing will be logged at all. You could use this to narrow down what you want to emit, for example:
+
+   Que.log_formatter = proc do |data|
+     if ['job_worked', 'job_unavailable'].include?(data[:event])
+       JSON.dump(data)
+     end
+   end
@@ -65,3 +65,9 @@ Regardless of the `wake_interval` setting, you can always wake workers manually:
    Que.wake_all!
 
  `Que.wake_all!` is helpful if there are no jobs available and all your workers go to sleep, and then you queue a large number of jobs. Typically, it will take a little while for the entire pool of workers to get going again - a new one will wake up every `wake_interval` seconds, so it will take up to `wake_interval * worker_count` seconds for all of them to get going. `Que.wake_all!` can get them all moving immediately.
+
+ ### Connection Pool Size
+
+ For the job locking system to work properly, each worker thread needs to reserve a database connection from the connection pool for the period of time between when it locks a job and when it releases that lock (which won't happen until the job has been finished and deleted from the queue).
+
+ So, for example, if you're running 6 workers in a rake task, you'll want to make sure that whatever connection pool Que is using (usually ActiveRecord's) has a maximum size of at least 6. If you're running those workers in a web process, you'll want the size to be at least 6 plus however many connections you expect your application to need for serving web requests (which may only be one if you're using Rails in single-threaded mode, or many more if you're running a threaded web server like Puma).
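As a concrete illustration of the sizing advice above (the values and connection URL are only examples): with Sequel you would raise the pool ceiling when connecting, and with ActiveRecord the equivalent is the pool setting in database.yml:

    # 6 worker threads plus a few connections left over for web requests.
    DB = Sequel.connect('postgres://localhost/my_app', :max_connections => 10)
    Que.connection = DB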
data/docs/using_sequel.md CHANGED
@@ -23,5 +23,5 @@ Then you can safely use the same database object to transactionally protect your
    # In your controller action:
    DB.transaction do
      @user = User.create(params[:user])
-     SendRegistrationEmail.queue :user_id => @user.id
+     MyJob.queue :user_id => @user.id
    end
@@ -1,9 +1,11 @@
  class AddQue < ActiveRecord::Migration
    def self.up
-     Que.create!
+     # The current version as of this migration's creation.
+     Que.migrate! :version => 2
    end
 
    def self.down
-     Que.drop!
+     # Completely removes Que's job queue.
+     Que.migrate! :version => 0
    end
  end
data/lib/que.rb CHANGED
@@ -1,13 +1,24 @@
+ require 'time' # For Time#iso8601
+
  module Que
-   autoload :Adapters, 'que/adapters/base'
-   autoload :Job,      'que/job'
-   autoload :SQL,      'que/sql'
-   autoload :Version,  'que/version'
-   autoload :Worker,   'que/worker'
+   autoload :Adapters,   'que/adapters/base'
+   autoload :Job,        'que/job'
+   autoload :Migrations, 'que/migrations'
+   autoload :SQL,        'que/sql'
+   autoload :Version,    'que/version'
+   autoload :Worker,     'que/worker'
+
+   begin
+     require 'multi_json'
+     JSON_MODULE = MultiJson
+   rescue LoadError
+     require 'json'
+     JSON_MODULE = JSON
+   end
 
    class << self
      attr_accessor :logger, :error_handler
-     attr_writer :adapter
+     attr_writer :adapter, :log_formatter
 
      def adapter
        @adapter || raise("Que connection not established!")
@@ -27,12 +38,18 @@ module Que
        end
      end
 
+     # Have to support create! and drop! in old migrations. They just created
+     # and dropped the bare table.
      def create!
-       execute SQL[:create_table]
+       migrate! :version => 1
      end
 
      def drop!
-       execute "DROP TABLE que_jobs"
+       migrate! :version => 0
+     end
+
+     def migrate!(version = {:version => Migrations::CURRENT_VERSION})
+       Migrations.migrate!(version)
      end
 
      def clear!
@@ -54,8 +71,17 @@ module Que
        end.to_a
      end
 
-     def log(level, text)
-       logger.send level, "[Que] #{text}" if logger
+     def log(data)
+       level = data.delete(:level) || :info
+       data = {:lib => 'que', :thread => Thread.current.object_id}.merge(data)
+
+       if logger && output = log_formatter.call(data)
+         logger.send level, output
+       end
+     end
+
+     def log_formatter
+       @log_formatter ||= JSON_MODULE.method(:dump)
      end
 
      # Helper for making hashes indifferently-accessible, even when nested
@@ -79,7 +105,7 @@ module Que
      end
 
      # Copy some of the Worker class' config methods here for convenience.
-     [:mode, :mode=, :worker_count, :worker_count=, :wake_interval, :wake_interval=, :stop!, :wake!, :wake_all!].each do |meth|
+     [:mode, :mode=, :worker_count, :worker_count=, :wake_interval, :wake_interval=, :wake!, :wake_all!].each do |meth|
        define_method(meth) { |*args| Worker.send(meth, *args) }
      end
    end
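Taken together, the reworked `Que.log` above means callers now pass a hash rather than a level and a string. A small sketch of the new call style, consistent with the logging doc added in this release (the keys and event names shown are examples):

    Que.log :event => 'my_custom_event', :elapsed => 0.01
    Que.log :level => :debug, :event => 'noisy_event'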