que 0.5.0 → 0.6.0
- checksums.yaml +4 -4
- data/.gitignore +1 -1
- data/.travis.yml +1 -1
- data/CHANGELOG.md +21 -1
- data/Gemfile +5 -0
- data/README.md +7 -6
- data/docs/advanced_setup.md +14 -4
- data/docs/customizing_que.md +4 -4
- data/docs/error_handling.md +13 -1
- data/docs/managing_workers.md +2 -2
- data/docs/migrating.md +26 -0
- data/docs/multiple_queues.md +13 -0
- data/docs/shutting_down_safely.md +7 -0
- data/docs/writing_reliable_jobs.md +43 -0
- data/lib/generators/que/templates/add_que.rb +1 -1
- data/lib/que.rb +27 -41
- data/lib/que/adapters/base.rb +75 -4
- data/lib/que/job.rb +45 -28
- data/lib/que/migrations.rb +3 -2
- data/lib/que/migrations/{1-down.sql → 1/down.sql} +0 -0
- data/lib/que/migrations/{1-up.sql → 1/up.sql} +0 -0
- data/lib/que/migrations/{2-down.sql → 2/down.sql} +0 -0
- data/lib/que/migrations/{2-up.sql → 2/up.sql} +0 -0
- data/lib/que/migrations/3/down.sql +5 -0
- data/lib/que/migrations/3/up.sql +5 -0
- data/lib/que/sql.rb +24 -17
- data/lib/que/version.rb +1 -1
- data/lib/que/worker.rb +6 -5
- data/spec/adapters/active_record_spec.rb +6 -6
- data/spec/adapters/sequel_spec.rb +4 -4
- data/spec/gemfiles/Gemfile1 +18 -0
- data/spec/gemfiles/Gemfile2 +18 -0
- data/spec/support/helpers.rb +2 -1
- data/spec/support/shared_examples/adapter.rb +7 -3
- data/spec/support/shared_examples/multi_threaded_adapter.rb +2 -2
- data/spec/travis.rb +12 -4
- data/spec/unit/customization_spec.rb +148 -0
- data/spec/unit/{queue_spec.rb → enqueue_spec.rb} +115 -14
- data/spec/unit/logging_spec.rb +3 -2
- data/spec/unit/migrations_spec.rb +3 -2
- data/spec/unit/pool_spec.rb +30 -6
- data/spec/unit/run_spec.rb +12 -0
- data/spec/unit/states_spec.rb +29 -31
- data/spec/unit/stats_spec.rb +16 -14
- data/spec/unit/work_spec.rb +120 -25
- data/spec/unit/worker_spec.rb +55 -9
- data/tasks/safe_shutdown.rb +1 -1
- metadata +30 -17
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: f698e236aff233a785bc44075257a17c1ae35e3b
+  data.tar.gz: 0916c6cbb864a9c148bb43e9765cd569330abdf9
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 4bad7602663617ced6acf5e76cd041c8f6ec7ed0339b7706ca2121e305cb61bb4a55b2bdc1752c7f5f14ad2147e6e65df7e6d1a0192ca5bb25d12dd02f123b90
+  data.tar.gz: e7719dc683f696209c4f5dcdb07495693bb891d2a6dbace5ec7be9954037234024b0d246c8394a62f75272dc03acdcbbfea43f517acd9669467a352672a410bb
data/.gitignore
CHANGED
data/.travis.yml
CHANGED
data/CHANGELOG.md
CHANGED
@@ -1,6 +1,26 @@
+### 0.6.0 (2014-02-04)
+
+* **A schema upgrade to version 3 is required for this release.** See [the migration doc](https://github.com/chanks/que/blob/master/docs/migrating.md) for information if you're upgrading from a previous release.
+
+* You can now run a job's logic directly (without enqueueing it) like `MyJob.run(arg1, arg2, :other_arg => arg3)`. This is useful when a job class encapsulates logic that you want to invoke without involving the entire queue.
+
+* You can now check the current version of Que's database schema with `Que.db_version`.
+
+* The method for enqueuing a job has been renamed from `MyJob.queue` to `MyJob.enqueue`, since we were beginning to use the word 'queue' in a LOT of places. `MyJob.queue` still works, but it may be removed at some point.
+
+* The variables for setting the defaults for a given job class have been changed from `@default_priority` to `@priority` and `@default_run_at` to `@run_at`. The old variables still work, but like `Job.queue`, they may be removed at some point.
+
+* Log lines now include the machine's hostname, since a pid alone may not uniquely identify a process.
+
+* Multiple queues are now supported. See [the docs](https://github.com/chanks/que/blob/master/docs/multiple_queues.md) for details. (chanks, joevandyk)
+
+* Rubinius 2.2 is now supported. (brixen)
+
+* Job classes may now define their own logic for determining the retry interval when a job raises an error. See [error handling](https://github.com/chanks/que/blob/master/docs/error_handling.md) for more information.
+
 ### 0.5.0 (2014-01-14)
 
-* When running a worker pool inside your web process on ActiveRecord, Que will now wake a worker once a transaction containing a queued job is committed.
+* When running a worker pool inside your web process on ActiveRecord, Que will now wake a worker once a transaction containing a queued job is committed. (joevandyk, chanks)
 
 * The `que:work` rake task now has a default wake_interval of 0.1 seconds, since it relies exclusively on polling to pick up jobs. You can set a QUE_WAKE_INTERVAL environment variable to change this. The environment variable to set a size for the worker pool in the rake task has also been changed from WORKER_COUNT to QUE_WORKER_COUNT.
 
data/Gemfile
CHANGED
data/README.md
CHANGED
@@ -13,6 +13,7 @@ Additionally, there are the general benefits of storing jobs in Postgres, alongs
 * **Transactional Control** - Queue a job along with other changes to your database, and it'll commit or rollback with everything else. If you're using ActiveRecord or Sequel, Que can piggyback on their connections, so setup is simple and jobs are protected by the transactions you're already using.
 * **Atomic Backups** - Your jobs and data can be backed up together and restored as a snapshot. If your jobs relate to your data (and they usually do), there's no risk of jobs falling through the cracks during a recovery.
 * **Fewer Dependencies** - If you're already using Postgres (and you probably should be), a separate queue is another moving part that can break.
+* **Security** - Postgres' support for SSL connections keeps your data safe in transport, for added protection when you're running workers on cloud platforms that you can't completely control.
 
 Que's primary goal is reliability. You should be able to leave your application running indefinitely without worrying about jobs being lost due to a lack of transactional support, or left in limbo due to a crashing process. Que does everything it can to ensure that jobs you queue are performed exactly once (though the occasional repetition of a job can be impossible to avoid - see the docs on [how to write a reliable job](https://github.com/chanks/que/blob/master/docs/writing_reliable_jobs.md)).
 
@@ -52,9 +53,9 @@ Create a class for each type of job you want to run:
     # app/jobs/charge_credit_card.rb
     class ChargeCreditCard < Que::Job
       # Default settings for this job. These are optional - without them, jobs
-      # will default to priority
-      @
-      @
+      # will default to priority 100 and run immediately.
+      @priority = 10
+      @run_at = proc { 1.minute.from_now }
 
       def run(user_id, options)
         # Do stuff.
@@ -80,19 +81,19 @@ Queue your job. Again, it's best to do this in a transaction with other changes
     ActiveRecord::Base.transaction do
       # Persist credit card information
       card = CreditCard.create(params[:credit_card])
-      ChargeCreditCard.
+      ChargeCreditCard.enqueue(current_user.id, :credit_card_id => card.id)
     end
 
 You can also add options to run the job after a specific time, or with a specific priority:
 
     # The default priority is 100, and a lower number means a higher priority. 5 would be very important.
-    ChargeCreditCard.
+    ChargeCreditCard.enqueue current_user.id, :credit_card_id => card.id, :run_at => 1.day.from_now, :priority => 5
 
 To determine what happens when a job is queued, you can set Que's mode in your application configuration. There are a few options for the mode:
 
 * `config.que.mode = :off` - In this mode, queueing a job will simply insert it into the database - the current process will make no effort to run it. You should use this if you want to use a dedicated process to work tasks (there's a rake task to do this, see below). This is the default when running `rails console`.
 * `config.que.mode = :async` - In this mode, a pool of background workers is spun up, each running in their own thread. This is the default when running `rails server`. See the docs for [more information on managing workers](https://github.com/chanks/que/blob/master/docs/managing_workers.md).
-* `config.que.mode = :sync` - In this mode, any jobs you queue will be run in the same thread, synchronously (that is, `MyJob.
+* `config.que.mode = :sync` - In this mode, any jobs you queue will be run in the same thread, synchronously (that is, `MyJob.enqueue` runs the job and won't return until it's completed). This makes your application's behavior easier to test, so it's the default in the test environment.
 
 ## Contributing
 
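The priority semantics mentioned in the README hunk above (lower number means higher priority, with scheduled time as a natural tiebreaker) can be illustrated with a small plain-Ruby sketch; no Que is required, and the in-memory job hashes below are hypothetical stand-ins, not Que's actual data structures:

```ruby
# A hypothetical in-memory job list. In Que, priority 5 beats priority 100,
# and among equal priorities the job scheduled to run earliest goes first.
jobs = [
  {name: 'newsletter',  priority: 100, run_at: 30},
  {name: 'charge_card', priority: 5,   run_at: 20},
  {name: 'cleanup',     priority: 100, run_at: 10},
]

# Sort the way a worker would pick work: lowest priority number first,
# then earliest run_at among ties.
order = jobs.sort_by { |j| [j[:priority], j[:run_at]] }.map { |j| j[:name] }
puts order.inspect  # => ["charge_card", "cleanup", "newsletter"]
```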
data/docs/advanced_setup.md
CHANGED
@@ -13,11 +13,21 @@ Then you can queue jobs just as you would in Rails:
 
     ActiveRecord::Base.transaction do
       @user = User.create(params[:user])
-      SendRegistrationEmail.
+      SendRegistrationEmail.enqueue :user_id => @user.id
     end
 
 There are other docs to read if you're using [Sequel](https://github.com/chanks/que/blob/master/docs/using_sequel.md) or [plain Postgres connections](https://github.com/chanks/que/blob/master/docs/using_plain_connections.md) (with no ORM at all) instead of ActiveRecord.
 
+### Forking Servers
+
+If you want to run a worker pool in your web process and you're using a forking webserver like Unicorn or Puma in some configurations, you'll want to set `Que.mode = :off` in your application configuration and only start up the worker pool in the child processes. So, for Puma:
+
+    # config/puma.rb
+    on_worker_boot do
+      # Reestablish your database connection, etc...
+      Que.mode = :async
+    end
+
 ### Managing the Jobs Table
 
 After you've connected Que to the database, you can manage the jobs table:
@@ -27,13 +37,13 @@ After you've connected Que to the database, you can manage the jobs table:
 
 You'll want to migrate to a specific version if you're using migration files, to ensure that they work the same way even when you upgrade Que in the future:
 
-    # Update the schema to version #
-    Que.migrate! :version =>
+    # Update the schema to version #3.
+    Que.migrate! :version => 3
 
     # To reverse the migration, drop the jobs table entirely:
     Que.migrate! :version => 0
 
-There's also a helper method to clear the jobs table:
+There's also a helper method to clear all jobs from the jobs table:
 
     Que.clear!
 
data/docs/customizing_que.md
CHANGED
@@ -13,7 +13,7 @@ Que's support for scheduling jobs makes it easy to implement reliable recurring
 
       ActiveRecord::Base.transaction do
         destroy
-        self.class.
+        self.class.enqueue :run_at => @attrs[:run_at] + 1.hour
       end
     end
   end
@@ -48,7 +48,7 @@ That said, if you want to queue jobs in the DelayedJob style, that can be done r
       end
 
       def method_missing(method, *args)
-        Delayed.
+        Delayed.enqueue Marshal.dump(@receiver), method, Marshal.dump(args)
       end
     end
 
@@ -77,7 +77,7 @@ You can mimic this style with Que by using a simple job class:
 
     # Then:
 
-    Command.
+    Command.enqueue "Kernel.puts", "hello world"
 
 ### Retaining Finished Jobs
 
@@ -88,7 +88,7 @@ Que deletes jobs from the queue as they are worked, in order to keep the `que_jo
 
     class MyJobClass < Que::Job
       def destroy
-        Que.execute "INSERT INTO finished_jobs SELECT * FROM que_jobs WHERE
+        Que.execute "INSERT INTO finished_jobs SELECT * FROM que_jobs WHERE queue = $1::text AND priority = $2::integer AND run_at = $3::timestamptz AND job_id = $4::bigint", @attrs.values_at(:queue, :priority, :run_at, :job_id)
         super
       end
     end
data/docs/error_handling.md
CHANGED
@@ -2,7 +2,19 @@
 
 If an error is raised and left uncaught by your job, Que will save the error message and backtrace to the database and schedule the job to be retried later.
 
-If a given job fails repeatedly, Que will retry it at exponentially-increasing intervals equal to (failure_count^4 + 3) seconds. This means that a job will be retried 4 seconds after its first failure, 19 seconds after its second, 84 seconds after its third, 259 seconds after its fourth, and so on until it succeeds. This pattern is very similar to DelayedJob's.
+If a given job fails repeatedly, Que will retry it at exponentially-increasing intervals equal to (failure_count^4 + 3) seconds. This means that a job will be retried 4 seconds after its first failure, 19 seconds after its second, 84 seconds after its third, 259 seconds after its fourth, and so on until it succeeds. This pattern is very similar to DelayedJob's. Alternately, you can define your own retry logic by setting an interval to delay each time, or a callable that accepts the number of failures and returns an interval:
+
+    class MyJob < Que::Job
+      # Just retry a failed job every 5 seconds:
+      @retry_interval = 5
+
+      # Always retry this job immediately (not recommended, or transient
+      # errors will spam your error reporting):
+      @retry_interval = 0
+
+      # Increase the delay by 30 seconds every time this job fails:
+      @retry_interval = proc { |count| count * 30 }
+    end
 
 Unlike DelayedJob, however, there is currently no maximum number of failures after which jobs will be deleted. Que's assumption is that if a job is erroring perpetually (and not just transiently), you will want to take action to get the job working properly rather than simply losing it silently.
 
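The default backoff formula quoted in the error_handling hunk above, and the way an integer-or-callable `@retry_interval` setting could be resolved, can be checked in plain Ruby without Que. `retry_interval` and `resolve_interval` here are hypothetical helpers for illustration, not Que's internal API:

```ruby
# Que's documented default backoff: (failure_count^4 + 3) seconds.
def retry_interval(failure_count)
  failure_count**4 + 3
end

# 1st through 4th failures, matching the numbers quoted in the doc.
puts (1..4).map { |n| retry_interval(n) }.inspect  # => [4, 19, 84, 259]

# A @retry_interval setting may be a plain number or a callable taking
# the failure count; a worker could resolve it like this (sketch only).
def resolve_interval(setting, failure_count)
  setting.respond_to?(:call) ? setting.call(failure_count) : setting
end

puts resolve_interval(5, 3)                    # => 5
puts resolve_interval(proc { |c| c * 30 }, 3)  # => 90
```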
data/docs/managing_workers.md
CHANGED
@@ -23,7 +23,7 @@ If you don't want to burden your web processes with too much work and want to ru
 
     rake que:work
 
     # Or configure the number of workers:
-
+    QUE_WORKER_COUNT=8 rake que:work
 
 ### Thread-Unsafe Application Code
 
@@ -36,7 +36,7 @@ If your application code is not thread-safe, you won't want any workers to be pr
 
 This will prevent Que from trying to process jobs in the background of your web processes. In order to actually work jobs, you'll want to run a single worker at a time, and to do so via a separate rake task, like so:
 
-
+    QUE_WORKER_COUNT=1 rake que:work
 
 ### The Wake Interval
 
data/docs/migrating.md
ADDED
@@ -0,0 +1,26 @@
+## Migrating
+
+Some new releases of Que may require updates to the database schema. It's recommended that you integrate these updates alongside your other database migrations. For example, when Que released version 0.6.0, the schema version was updated from 2 to 3. If you're running ActiveRecord, you could make a migration to perform this upgrade like so:
+
+    class UpdateQue < ActiveRecord::Migration
+      def self.up
+        Que.migrate! :version => 3
+      end
+
+      def self.down
+        Que.migrate! :version => 2
+      end
+    end
+
+This will make sure that your database schema stays consistent with your codebase. If you're looking for something quicker and dirtier, you can always manually migrate in a console session:
+
+    # Change schema to version 3.
+    Que.migrate! :version => 3
+
+    # Update to whatever the latest schema version is.
+    Que.migrate!
+
+    # Check your current schema version.
+    Que.db_version #=> 3
+
+Note that you can remove Que from your database completely by migrating to version 0.
data/docs/multiple_queues.md
ADDED
@@ -0,0 +1,13 @@
+## Multiple Queues
+
+Que supports the use of multiple queues in a single job table. This feature is intended to support the case where multiple applications (with distinct codebases) are sharing the same database. For instance, you might have a separate Ruby application that handles only processing credit cards. In that case, you can run that application's workers against a specific queue:
+
+    QUE_QUEUE=credit_cards rake que:work
+
+Then you can set jobs to be enqueued in that queue specifically:
+
+    ProcessCreditCard.enqueue current_user.id, :queue => 'credit_cards'
+
+In some cases, the ProcessCreditCard class may not be defined in the application that is enqueueing the job. In that case, you can specify the job class as a string:
+
+    Que.enqueue current_user.id, :job_class => 'ProcessCreditCard', :queue => 'credit_cards'
data/docs/shutting_down_safely.md
ADDED
@@ -0,0 +1,7 @@
+## Shutting Down Safely
+
+To ensure safe operation, Que needs to be very careful in how it shuts down. When a Ruby process ends normally, it calls Thread#kill on any threads that are still running - unfortunately, if a thread is in the middle of a transaction when this happens, there is a risk that it will be prematurely committed, resulting in data corruption. See [here](http://blog.headius.com/2008/02/ruby-threadraise-threadkill-timeoutrb.html) and [here](http://coderrr.wordpress.com/2011/05/03/beware-of-threadkill-or-your-activerecord-transactions-are-in-danger-of-being-partially-committed/) for more detail on this.
+
+To prevent this, Que will block a Ruby process from exiting until all jobs it is working have completed normally. Unfortunately, if you have long-running jobs, this may take a very long time (and if something goes wrong with a job's logic, it may never happen). The solution in this case is SIGKILL - luckily, Ruby processes that are killed via SIGKILL will end without calling Thread#kill on their running threads. This is safer than exiting normally - when PostgreSQL loses the connection it will simply roll back the open transaction, if any, and unlock the job so it can be retried later by another worker. Be sure to read [Writing Reliable Jobs](https://github.com/chanks/que/blob/master/docs/writing_reliable_jobs.md) for information on how to design your jobs to fail safely.
+
+So, be prepared to use SIGKILL on your Ruby processes if they run for too long. For example, Heroku takes a good approach to this - when Heroku's platform is shutting down a process, it sends SIGTERM, waits ten seconds, then sends SIGKILL if the process still hasn't exited. This is a nice compromise - it will give each of your currently running jobs ten seconds to complete, and any jobs that haven't finished by then will be interrupted and retried later.
data/docs/writing_reliable_jobs.md
CHANGED
@@ -60,3 +60,46 @@ Finally, there are some jobs where you won't want to write to the database at al
     end
 
 In this case, we don't have any way to prevent the occasional double-sending of an email. But, for ease of use, you can leave out the transaction and the `destroy` call entirely - Que will recognize that the job wasn't destroyed and will clean it up for you.
+
+### Timeouts
+
+Long-running jobs aren't necessarily a problem in Que, since the overhead of an individual job isn't that big (just an open PG connection and an advisory lock held in memory). But jobs that hang indefinitely can tie up a worker and [block the Ruby process from exiting gracefully](https://github.com/chanks/que/blob/master/docs/shutting_down_safely.md), which is a pain.
+
+Que doesn't offer a general way to kill jobs that have been running too long, because that currently can't be done safely in Ruby. Typically, one would use Ruby's Timeout module for this sort of thing, but wrapping a database transaction inside a timeout introduces a risk of premature commits, which can corrupt your data. See [here](http://blog.headius.com/2008/02/ruby-threadraise-threadkill-timeoutrb.html) and [here](http://coderrr.wordpress.com/2011/05/03/beware-of-threadkill-or-your-activerecord-transactions-are-in-danger-of-being-partially-committed/) for detail on why this is.
+
+However, if there's part of your job that is prone to hang (due to an API call or other HTTP request that never returns, for example), you can timeout those individual parts of your job relatively safely. For example, consider a job that needs to make an HTTP request and then write to the database:
+
+    require 'net/http'
+
+    class ScrapeStuff < Que::Job
+      def run(domain_to_scrape, path_to_scrape)
+        result = Net::HTTP.get(domain_to_scrape, path_to_scrape)
+
+        ActiveRecord::Base.transaction do
+          # Insert result...
+
+          destroy
+        end
+      end
+    end
+
+That request could take a very long time, or never return at all. Let's wrap it in a five-second timeout:
+
+    require 'net/http'
+    require 'timeout'
+
+    class ScrapeStuff < Que::Job
+      def run(domain_to_scrape, path_to_scrape)
+        result = Timeout.timeout(5) { Net::HTTP.get(domain_to_scrape, path_to_scrape) }
+
+        ActiveRecord::Base.transaction do
+          # Insert result...
+
+          destroy
+        end
+      end
+    end
+
+Now, if the request takes more than five seconds, a `Timeout::Error` will be raised and Que will just retry the job later. This solution isn't perfect, since Timeout uses Thread#kill under the hood, which can lead to unpredictable behavior. But it's separate from our transaction, so there's no risk of losing data - even a catastrophic error that left Net::HTTP in a bad state would be fixable by restarting the process.
+
+Finally, remember that if you're using a library that offers its own timeout functionality, that's usually preferable to using the Timeout module.
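The Timeout pattern shown in the writing_reliable_jobs hunk above can be exercised standalone. In this sketch the doc's slow `Net::HTTP.get` call is replaced with a `sleep` so it runs without a network; `slow_request` is a hypothetical stand-in, not part of Que:

```ruby
require 'timeout'

# Stand-in for a network call that hangs indefinitely; sleep makes this
# runnable without a network. Name and behavior are hypothetical.
def slow_request
  sleep 5
  'response body'
end

result =
  begin
    Timeout.timeout(0.2) { slow_request }
    :completed
  rescue Timeout::Error
    # In Que, the uncaught Timeout::Error would mark the job as errored
    # and schedule a retry; here we just record that the hang was cut short.
    :timed_out
  end

puts result  # => timed_out
```

Note that the timeout wraps only the hang-prone call, not a database transaction, which is exactly the safety property the doc argues for.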
data/lib/que.rb
CHANGED
@@ -1,4 +1,4 @@
-require '
+require 'socket' # For hostname
 
 module Que
   autoload :Adapters, 'que/adapters/base'
@@ -20,10 +20,6 @@ module Que
     attr_accessor :logger, :error_handler
     attr_writer :adapter, :log_formatter
 
-    def adapter
-      @adapter || raise("Que connection not established!")
-    end
-
     def connection=(connection)
       self.adapter = if connection.to_s == 'ActiveRecord'
         Adapters::ActiveRecord.new
@@ -38,18 +34,12 @@ module Que
       end
     end
 
-
-
-    def create!
-      migrate! :version => 1
-    end
-
-    def drop!
-      migrate! :version => 0
+    def adapter
+      @adapter || raise("Que connection not established!")
     end
 
-    def
-
+    def execute(*args)
+      adapter.execute(*args)
     end
 
     def clear!
@@ -64,16 +54,32 @@ module Que
       execute :worker_states
     end
 
-
-
-
-
-
+    # Give us a cleaner interface when specifying a job_class as a string.
+    def enqueue(*args)
+      Job.enqueue(*args)
+    end
+
+    def db_version
+      Migrations.db_version
+    end
+
+    def migrate!(version = {:version => Migrations::CURRENT_VERSION})
+      Migrations.migrate!(version)
+    end
+
+    # Have to support create! and drop! in old migrations. They just created
+    # and dropped the bare table.
+    def create!
+      migrate! :version => 1
+    end
+
+    def drop!
+      migrate! :version => 0
     end
 
     def log(data)
       level = data.delete(:level) || :info
-      data = {:lib => 'que', :thread => Thread.current.object_id}.merge(data)
+      data = {:lib => 'que', :hostname => Socket.gethostname, :thread => Thread.current.object_id}.merge(data)
 
       if logger && output = log_formatter.call(data)
         logger.send level, output
@@ -84,26 +90,6 @@ module Que
       @log_formatter ||= JSON_MODULE.method(:dump)
     end
 
-    # Helper for making hashes indifferently-accessible, even when nested
-    # within each other and within arrays.
-    def indifferentiate(object)
-      case object
-      when Hash
-        h = if {}.respond_to?(:with_indifferent_access) # Better support for Rails.
-          {}.with_indifferent_access
-        else
-          Hash.new { |hash, key| hash[key.to_s] if Symbol === key }
-        end
-
-        object.each { |k, v| h[k] = indifferentiate(v) }
-        h
-      when Array
-        object.map { |v| indifferentiate(v) }
-      else
-        object
-      end
-    end
-
     # Copy some of the Worker class' config methods here for convenience.
     [:mode, :mode=, :worker_count, :worker_count=, :wake_interval, :wake_interval=, :wake!, :wake_all!].each do |meth|
       define_method(meth) { |*args| Worker.send(meth, *args) }