que 0.10.0 → 0.11.0

@@ -6,24 +6,26 @@ In order to remain simple and compatible with any ORM (or no ORM at all), Que is
 
 You can call `Que.job_stats` to return some aggregate data on the types of jobs currently in the queue. Example output:
 
- [
-   {
-     "job_class"=>"ChargeCreditCard",
-     "count"=>"10",
-     "count_working"=>"4",
-     "count_errored"=>"2",
-     "highest_error_count"=>"5",
-     "oldest_run_at"=>"2014-01-04 21:24:55.817129+00"
-   },
-   {
-     "job_class"=>"SendRegistrationEmail",
-     "count"=>"8",
-     "count_working"=>"0",
-     "count_errored"=>"0",
-     "highest_error_count"=>"0",
-     "oldest_run_at"=>"2014-01-04 22:24:55.81532+00"
-   }
- ]
+ ```ruby
+ [
+   {
+     "job_class"=>"ChargeCreditCard",
+     "count"=>"10",
+     "count_working"=>"4",
+     "count_errored"=>"2",
+     "highest_error_count"=>"5",
+     "oldest_run_at"=>"2014-01-04 21:24:55.817129+00"
+   },
+   {
+     "job_class"=>"SendRegistrationEmail",
+     "count"=>"8",
+     "count_working"=>"0",
+     "count_errored"=>"0",
+     "highest_error_count"=>"0",
+     "oldest_run_at"=>"2014-01-04 22:24:55.81532+00"
+   }
+ ]
+ ```
 
 This tells you that, for instance, there are ten ChargeCreditCard jobs in the queue, four of which are currently being worked, and two of which have experienced errors. One of them has started to process but experienced an error five times. The oldest_run_at is helpful for determining how long jobs have been sitting around, if you have a backlog.
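
As a sketch of how this output might be consumed (the rows below are copied from the example above, and the "now" timestamp is invented for illustration), you can treat it as an ordinary array of hashes. Note that every value comes back from Postgres as a string:

```ruby
require "time"

# Sample rows mirroring the `Que.job_stats` example output above.
stats = [
  {"job_class"=>"ChargeCreditCard", "count"=>"10", "count_working"=>"4",
   "count_errored"=>"2", "highest_error_count"=>"5",
   "oldest_run_at"=>"2014-01-04 21:24:55.817129+00"},
  {"job_class"=>"SendRegistrationEmail", "count"=>"8", "count_working"=>"0",
   "count_errored"=>"0", "highest_error_count"=>"0",
   "oldest_run_at"=>"2014-01-04 22:24:55.81532+00"}
]

# Job classes that have experienced errors (remember to cast the strings):
errored = stats.select { |s| s["count_errored"].to_i > 0 }.map { |s| s["job_class"] }
#=> ["ChargeCreditCard"]

# Age of the oldest queued job, in seconds, relative to a made-up "now":
now = Time.parse("2014-01-04 22:30:00+00")
backlog_seconds = (now - stats.map { |s| Time.parse(s["oldest_run_at"]) }.min).to_i
#=> 3904
```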
 
@@ -31,24 +33,26 @@ This tells you that, for instance, there are ten ChargeCreditCard jobs in the qu
 
 You can call `Que.worker_states` to return some information on every worker touching the queue (not just those in the current process). Example output:
 
- [
-   {
-     "priority"=>"2",
-     "run_at"=>"2014-01-04 22:35:55.772324+00",
-     "job_id"=>"4592",
-     "job_class"=>"ChargeCreditCard",
-     "args"=>"[345,56]",
-     "error_count"=>"0",
-     "last_error"=>nil,
-     "pg_backend_pid"=>"1175",
-     "pg_state"=>"idle",
-     "pg_state_changed_at"=>"2014-01-04 22:35:55.777785+00",
-     "pg_last_query"=>"SELECT * FROM users",
-     "pg_last_query_started_at"=>"2014-01-04 22:35:55.777519+00",
-     "pg_transaction_started_at"=>nil,
-     "pg_waiting_on_lock"=>"f"
-   }
- ]
+ ```ruby
+ [
+   {
+     "priority"=>"2",
+     "run_at"=>"2014-01-04 22:35:55.772324+00",
+     "job_id"=>"4592",
+     "job_class"=>"ChargeCreditCard",
+     "args"=>"[345,56]",
+     "error_count"=>"0",
+     "last_error"=>nil,
+     "pg_backend_pid"=>"1175",
+     "pg_state"=>"idle",
+     "pg_state_changed_at"=>"2014-01-04 22:35:55.777785+00",
+     "pg_last_query"=>"SELECT * FROM users",
+     "pg_last_query_started_at"=>"2014-01-04 22:35:55.777519+00",
+     "pg_transaction_started_at"=>nil,
+     "pg_waiting_on_lock"=>"f"
+   }
+ ]
+ ```
 
  In this case, there is only one worker currently working the queue. The first seven fields are the attributes of the job it is currently running. The next seven fields are information about that worker's Postgres connection, and are taken from `pg_stat_activity` - see [Postgres' documentation](http://www.postgresql.org/docs/current/static/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW) for more information on interpreting these fields.
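
As a sketch, you could scan this output for workers whose connections are blocked on a lock. The rows below are abridged and partly invented (the second worker is hypothetical), but keep the shape of the example above; note that Postgres booleans such as `pg_waiting_on_lock` arrive as the strings "t" and "f":

```ruby
# Abridged, partly hypothetical rows in the shape of `Que.worker_states` output.
workers = [
  {"job_id"=>"4592", "job_class"=>"ChargeCreditCard",
   "pg_state"=>"idle", "pg_waiting_on_lock"=>"f"},
  {"job_id"=>"4593", "job_class"=>"RefundCreditCard",
   "pg_state"=>"active", "pg_waiting_on_lock"=>"t"}
]

# Postgres booleans arrive as "t"/"f" strings, so compare against "t":
blocked = workers.select { |w| w["pg_waiting_on_lock"] == "t" }
blocked.map { |w| w["job_id"] }
#=> ["4593"]
```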
 
@@ -64,37 +68,47 @@ In this case, there is only one worker currently working the queue. The first se
 
 If you want to query the jobs table yourself to see what's been queued or to check the state of various jobs, you can always use Que to execute whatever SQL you want:
 
- Que.execute("select count(*) from que_jobs") #=> [{"count"=>"492"}]
+ ```ruby
+ Que.execute("select count(*) from que_jobs") #=> [{"count"=>"492"}]
+ ```
 
 If you want to use ActiveRecord's features when querying, you can define your own model around Que's job table:
 
- class QueJob < ActiveRecord::Base
- end
+ ```ruby
+ class QueJob < ActiveRecord::Base
+ end
 
- # Or:
+ # Or:
 
- class MyJob < ActiveRecord::Base
-   self.table_name = :que_jobs
- end
+ class MyJob < ActiveRecord::Base
+   self.table_name = :que_jobs
+ end
+ ```
 
 Then you can query just as you would with any other model. Since the jobs table has a composite primary key, however, you probably won't be able to update or destroy jobs this way.
 
 If you're using Sequel, you can use the same technique:
 
- class QueJob < Sequel::Model
- end
+ ```ruby
+ class QueJob < Sequel::Model
+ end
 
- # Or:
+ # Or:
 
- class MyJob < Sequel::Model(:que_jobs)
- end
+ class MyJob < Sequel::Model(:que_jobs)
+ end
+ ```
 
 And note that Sequel *does* support composite primary keys:
 
- job = QueJob.where(:job_class => "ChargeCreditCard").first
- job.priority = 1
- job.save
+ ```ruby
+ job = QueJob.where(:job_class => "ChargeCreditCard").first
+ job.priority = 1
+ job.save
+ ```
 
 Or, you can just use Sequel's dataset methods:
 
- DB[:que_jobs].where{priority > 3}.all
+ ```ruby
+ DB[:que_jobs].where{priority > 3}.all
+ ```
data/docs/logging.md CHANGED
@@ -2,37 +2,49 @@
 
 By default, Que logs important information in JSON to either Rails' logger (when running in a Rails web process) or STDOUT (when running as a rake task). So, your logs will look something like:
 
- I, [2014-01-12T05:07:31.094201 #4687] INFO -- : {"lib":"que","thread":104928,"event":"job_worked","elapsed":0.01045,"job":{"priority":"1","run_at":"2014-01-12 05:07:31.081877+00","job_id":"4","job_class":"MyJob","args":[],"error_count":"0"}}
+ ```
+ I, [2014-01-12T05:07:31.094201 #4687] INFO -- : {"lib":"que","thread":104928,"event":"job_worked","elapsed":0.01045,"job":{"priority":"1","run_at":"2014-01-12 05:07:31.081877+00","job_id":"4","job_class":"MyJob","args":[],"error_count":"0"}}
+ ```
 
 Of course you can have it log wherever you like:
 
- Que.logger = Logger.new(...)
+ ```ruby
+ Que.logger = Logger.new(...)
+ ```
 
 You can use Que's logger in your jobs anywhere you like:
 
- class MyJob
-   def run
-     Que.log :my_output => "my string"
-   end
- end
+ ```ruby
+ class MyJob
+   def run
+     Que.log :my_output => "my string"
+   end
+ end
 
- #=> I, [2014-01-12T05:13:11.006776 #4914] INFO -- : {"lib":"que","thread":24960,"my_output":"my string"}
+ #=> I, [2014-01-12T05:13:11.006776 #4914] INFO -- : {"lib":"que","thread":24960,"my_output":"my string"}
+ ```
 
 Que will always add a 'lib' key, so you can easily filter its output from that of other sources, as well as the object_id of the thread that emitted the log, so you can follow the actions of a particular worker if you wish. You can also pass a :level key to set the level of the output:
 
- Que.log :level => :debug, :my_output => 'my string'
- #=> D, [2014-01-12T05:16:15.221941 #5088] DEBUG -- : {"lib":"que","thread":24960,"my_output":"my string"}
+ ```ruby
+ Que.log :level => :debug, :my_output => 'my string'
+ #=> D, [2014-01-12T05:16:15.221941 #5088] DEBUG -- : {"lib":"que","thread":24960,"my_output":"my string"}
+ ```
 
 If you don't like JSON, you can also customize the format of the logging output by passing a callable object (such as a proc) to `Que.log_formatter=`. The proc should take a hash (the keys are symbols) and return a string. The keys and values are just as you would expect from the JSON output:
 
- Que.log_formatter = proc do |data|
-   "Thread number #{data[:thread]} experienced a #{data[:event]}"
- end
+ ```ruby
+ Que.log_formatter = proc do |data|
+   "Thread number #{data[:thread]} experienced a #{data[:event]}"
+ end
+ ```
 
 If the log formatter returns nil or false, nothing will be logged at all. You could use this to narrow down what you want to emit, for example:
 
- Que.log_formatter = proc do |data|
-   if ['job_worked', 'job_unavailable'].include?(data[:event])
-     JSON.dump(data)
-   end
- end
+ ```ruby
+ Que.log_formatter = proc do |data|
+   if [:job_worked, :job_unavailable].include?(data[:event])
+     JSON.dump(data)
+   end
+ end
+ ```
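
A formatter like the one above can be exercised without Que at all, since it is just a proc from a hash to a string or nil; returning nil means nothing gets logged:

```ruby
require "json"

# The same kind of filtering formatter as above, called directly:
formatter = proc do |data|
  if [:job_worked, :job_unavailable].include?(data[:event])
    JSON.dump(data)
  end
end

formatter.call(:lib => "que", :event => :job_worked)
#=> "{\"lib\":\"que\",\"event\":\"job_worked\"}"

formatter.call(:lib => "que", :event => :job_errored)
#=> nil (so this event would not be logged)
```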
@@ -2,7 +2,7 @@
 
 Que provides a pool of workers to process jobs in a multithreaded fashion - this allows you to save memory by working many jobs simultaneously in the same process.
 
- When the worker pool is active (as it is by default when running `rails server`, or when you set Que.mode = :async), the default number of workers is 4. This is fine for most use cases, but the ideal number for your app will depend on your interpreter and what types of jobs you're running.
+ When the worker pool is active (as it is by default when running `rails server`, or when you set `Que.mode = :async`), the default number of workers is 4. This is fine for most use cases, but the ideal number for your app will depend on your interpreter and what types of jobs you're running.
 
 Ruby MRI has a global interpreter lock (GIL), which prevents it from using more than one CPU core at a time. Having multiple workers running makes sense if your jobs tend to spend a lot of time in I/O (waiting on complex database queries, sending emails, making HTTP requests, etc.), as most jobs do. However, if your jobs are doing a lot of work in Ruby, they'll be spending a lot of time blocking each other, and having too many workers running will just slow everything down.
 
@@ -10,42 +10,56 @@ JRuby and Rubinius, on the other hand, have no global interpreter lock, and so c
 
 You can change the number of workers in the pool whenever you like by setting the `worker_count` option:
 
- Que.worker_count = 8
+ ```ruby
+ Que.worker_count = 8
+ ```
 
 ### Working Jobs Via Rake Task
 
 If you don't want to burden your web processes with too much work and want to run workers in a background process instead, similar to how most other queues work, you can:
 
- # Run a pool of 4 workers:
- rake que:work
+ ```shell
+ # Run a pool of 4 workers:
+ rake que:work
 
- # Or configure the number of workers:
- QUE_WORKER_COUNT=8 rake que:work
+ # Or configure the number of workers:
+ QUE_WORKER_COUNT=8 rake que:work
+ ```
 
 Other options available via environment variables are `QUE_QUEUE` to determine which named queue jobs are pulled from, and `QUE_WAKE_INTERVAL` to determine how long workers will wait to poll again when there are no jobs available. For example, to run 2 workers that run jobs from the "other_queue" queue and wait a half-second between polls, you could do:
 
- QUE_QUEUE=other_queue QUE_WORKER_COUNT=2 QUE_WAKE_INTERVAL=0.5 rake que:work
+ ```shell
+ QUE_QUEUE=other_queue QUE_WORKER_COUNT=2 QUE_WAKE_INTERVAL=0.5 rake que:work
+ ```
 
 ### Thread-Unsafe Application Code
 
 If your application code is not thread-safe, you won't want any workers to be processing jobs while anything else is happening in the Ruby process. So, you'll want to turn the worker pool off by default:
 
- Que.mode = :off
+ ```ruby
+ Que.mode = :off
+ ```
 
 This will prevent Que from trying to process jobs in the background of your web processes. In order to actually work jobs, you'll want to run a single worker at a time, and to do so via a separate rake task, like so:
 
- QUE_WORKER_COUNT=1 rake que:work
+ ```shell
+ QUE_WORKER_COUNT=1 rake que:work
+ ```
 
 ### The Wake Interval
 
 If a worker checks the job queue and finds no jobs ready for it to work, it will fall asleep. In order to make sure that newly-available jobs don't go unworked, a worker is awoken every so often to check for available work. By default, this happens every five seconds, but you can make it happen more or less often by setting a custom `wake_interval`:
 
- Que.wake_interval = 2 # In Rails, 2.seconds also works fine.
+ ```ruby
+ Que.wake_interval = 2 # In Rails, 2.seconds also works fine.
+ ```
 
 You can also choose to never let workers wake up on their own:
 
- # Never wake up any workers:
- Que.wake_interval = nil
+ ```ruby
+ # Never wake up any workers:
+ Que.wake_interval = nil
+ ```
 
 If you do this, though, you'll need to wake workers manually.
 
@@ -53,11 +67,13 @@ If you do this, though, you'll need to wake workers manually.
 
 Regardless of the `wake_interval` setting, you can always wake workers manually:
 
- # Wake up a single worker to check the queue for work:
- Que.wake!
+ ```ruby
+ # Wake up a single worker to check the queue for work:
+ Que.wake!
 
- # Wake up all workers in this process to check for work:
- Que.wake_all!
+ # Wake up all workers in this process to check for work:
+ Que.wake_all!
+ ```
 
 `Que.wake_all!` is helpful if there are no jobs available and all your workers go to sleep, and then you queue a large number of jobs. Typically, it will take a little while for the entire pool of workers to get going again - a new one will wake up every `wake_interval` seconds, so it will take up to `wake_interval * worker_count` seconds for all of them to get going. `Que.wake_all!` can get them all moving immediately.
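
The worst-case ramp-up described here is simple arithmetic; a sketch using the defaults cited in this document (a five-second wake interval and a pool of four workers):

```ruby
wake_interval = 5  # seconds - Que's default
worker_count  = 4  # Que's default pool size

# Worst case for a fully asleep pool to resume on its own,
# with one worker waking up per wake_interval:
worst_case_seconds = wake_interval * worker_count
#=> 20
```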
 
data/docs/migrating.md CHANGED
@@ -2,25 +2,29 @@
 
 Some new releases of Que may require updates to the database schema. It's recommended that you integrate these updates alongside your other database migrations. For example, when Que released version 0.6.0, the schema version was updated from 2 to 3. If you're running ActiveRecord, you could make a migration to perform this upgrade like so:
 
- class UpdateQue < ActiveRecord::Migration
-   def self.up
-     Que.migrate! :version => 3
-   end
+ ```ruby
+ class UpdateQue < ActiveRecord::Migration
+   def self.up
+     Que.migrate! :version => 3
+   end
 
-   def self.down
-     Que.migrate! :version => 2
-   end
- end
+   def self.down
+     Que.migrate! :version => 2
+   end
+ end
+ ```
 
 This will make sure that your database schema stays consistent with your codebase. If you're looking for something quicker and dirtier, you can always manually migrate in a console session:
 
- # Change schema to version 3.
- Que.migrate! :version => 3
+ ```ruby
+ # Change schema to version 3.
+ Que.migrate! :version => 3
 
- # Update to whatever the latest schema version is.
- Que.migrate!
+ # Update to whatever the latest schema version is.
+ Que.migrate!
 
- # Check your current schema version.
- Que.db_version #=> 3
+ # Check your current schema version.
+ Que.db_version #=> 3
+ ```
 
 Note that you can remove Que from your database completely by migrating to version 0.
@@ -2,20 +2,26 @@
 
 Que supports the use of multiple queues in a single job table. This feature is intended to support the case where multiple applications (with distinct codebases) are sharing the same database. For instance, you might have a separate Ruby application that handles only processing credit cards. In that case, you can run that application's workers against a specific queue:
 
- QUE_QUEUE=credit_cards rake que:work
+ ```shell
+ QUE_QUEUE=credit_cards rake que:work
+ ```
 
 Then you can set jobs to be enqueued in that queue specifically:
 
- ProcessCreditCard.enqueue current_user.id, :queue => 'credit_cards'
+ ```ruby
+ ProcessCreditCard.enqueue current_user.id, :queue => 'credit_cards'
 
- # Or:
+ # Or:
 
- class ProcessCreditCard < Que::Job
-   # Set a default queue for this job class; this can be overridden by
-   # passing the :queue parameter to enqueue like above.
-   @queue = 'credit_cards'
- end
+ class ProcessCreditCard < Que::Job
+   # Set a default queue for this job class; this can be overridden by
+   # passing the :queue parameter to enqueue like above.
+   @queue = 'credit_cards'
+ end
+ ```
 
 In some cases, the ProcessCreditCard class may not be defined in the application that is enqueueing the job. In that case, you can specify the job class as a string:
 
- Que.enqueue current_user.id, :job_class => 'ProcessCreditCard', :queue => 'credit_cards'
+ ```ruby
+ Que.enqueue current_user.id, :job_class => 'ProcessCreditCard', :queue => 'credit_cards'
+ ```
@@ -2,49 +2,55 @@
 
 If you're not using an ORM like ActiveRecord or Sequel, you can have Que access jobs using a plain Postgres connection:
 
- require 'uri'
- require 'pg'
+ ```ruby
+ require 'uri'
+ require 'pg'
 
- uri = URI.parse(ENV['DATABASE_URL'])
+ uri = URI.parse(ENV['DATABASE_URL'])
 
- Que.connection = PG::Connection.open :host => uri.host,
-                                      :user => uri.user,
-                                      :password => uri.password,
-                                      :port => uri.port || 5432,
-                                      :dbname => uri.path[1..-1]
+ Que.connection = PG::Connection.open :host => uri.host,
+                                      :user => uri.user,
+                                      :password => uri.password,
+                                      :port => uri.port || 5432,
+                                      :dbname => uri.path[1..-1]
+ ```
 
 If you want to be able to use multithreading to run multiple jobs simultaneously in the same process, though, you'll need the ConnectionPool gem (be sure to add `gem 'connection_pool'` to your Gemfile):
 
- require 'uri'
- require 'pg'
- require 'connection_pool'
+ ```ruby
+ require 'uri'
+ require 'pg'
+ require 'connection_pool'
 
- uri = URI.parse(ENV['DATABASE_URL'])
+ uri = URI.parse(ENV['DATABASE_URL'])
 
- Que.connection = ConnectionPool.new :size => 10 do
-   PG::Connection.open :host => uri.host,
-                       :user => uri.user,
-                       :password => uri.password,
-                       :port => uri.port || 5432,
-                       :dbname => uri.path[1..-1]
- end
+ Que.connection = ConnectionPool.new :size => 10 do
+   PG::Connection.open :host => uri.host,
+                       :user => uri.user,
+                       :password => uri.password,
+                       :port => uri.port || 5432,
+                       :dbname => uri.path[1..-1]
+ end
+ ```
 
 Be sure to pick your pool size carefully - if you use 10 for the size, you'll incur the overhead of having 10 connections open to Postgres even if you never use more than a couple of them.
 
 The Pond gem doesn't have this drawback - it is very similar to ConnectionPool, but establishes connections lazily (add `gem 'pond'` to your Gemfile):
 
- require 'uri'
- require 'pg'
- require 'pond'
-
- uri = URI.parse(ENV['DATABASE_URL'])
-
- Que.connection = Pond.new :maximum_size => 10 do
-   PG::Connection.open :host => uri.host,
-                       :user => uri.user,
-                       :password => uri.password,
-                       :port => uri.port || 5432,
-                       :dbname => uri.path[1..-1]
- end
+ ```ruby
+ require 'uri'
+ require 'pg'
+ require 'pond'
+
+ uri = URI.parse(ENV['DATABASE_URL'])
+
+ Que.connection = Pond.new :maximum_size => 10 do
+   PG::Connection.open :host => uri.host,
+                       :user => uri.user,
+                       :password => uri.password,
+                       :port => uri.port || 5432,
+                       :dbname => uri.path[1..-1]
+ end
+ ```
 
 Please be aware that if you're using ActiveRecord or Sequel to manage your data, there's no reason for you to be using any of these methods - it's less efficient (unnecessary connections will waste memory on your database server) and you lose the reliability benefits of wrapping jobs in the same transactions as the rest of your data.
data/docs/using_sequel.md CHANGED
@@ -2,26 +2,30 @@
 
 If you're using Sequel, with or without Rails, you'll need to give Que a specific database instance to use:
 
- DB = Sequel.connect(ENV['DATABASE_URL'])
- Que.connection = DB
+ ```ruby
+ DB = Sequel.connect(ENV['DATABASE_URL'])
+ Que.connection = DB
+ ```
 
 Then you can safely use the same database object to transactionally protect your jobs:
 
- class MyJob < Que::Job
-   def run
-     # Do stuff.
+ ```ruby
+ class MyJob < Que::Job
+   def run
+     # Do stuff.
 
-     DB.transaction do
-       # Make changes to the database.
+     DB.transaction do
+       # Make changes to the database.
 
-       # Destroying this job will be protected by the same transaction.
-       destroy
-     end
-   end
+       # Destroying this job will be protected by the same transaction.
+       destroy
     end
+   end
+ end
 
- # In your controller action:
- DB.transaction do
-   @user = User.create(params[:user])
-   MyJob.enqueue :user_id => @user.id
- end
+ # In your controller action:
+ DB.transaction do
+   @user = User.create(params[:user])
+   MyJob.enqueue :user_id => @user.id
+ end
+ ```