que 0.14.3 → 1.0.0.beta

Files changed (102)
  1. checksums.yaml +5 -5
  2. data/.gitignore +2 -0
  3. data/CHANGELOG.md +108 -14
  4. data/LICENSE.txt +1 -1
  5. data/README.md +49 -45
  6. data/bin/command_line_interface.rb +239 -0
  7. data/bin/que +8 -82
  8. data/docs/README.md +2 -0
  9. data/docs/active_job.md +6 -0
  10. data/docs/advanced_setup.md +7 -64
  11. data/docs/command_line_interface.md +45 -0
  12. data/docs/error_handling.md +65 -18
  13. data/docs/inspecting_the_queue.md +30 -80
  14. data/docs/job_helper_methods.md +27 -0
  15. data/docs/logging.md +3 -22
  16. data/docs/managing_workers.md +6 -61
  17. data/docs/middleware.md +15 -0
  18. data/docs/migrating.md +4 -7
  19. data/docs/multiple_queues.md +8 -4
  20. data/docs/shutting_down_safely.md +1 -1
  21. data/docs/using_plain_connections.md +39 -15
  22. data/docs/using_sequel.md +5 -3
  23. data/docs/writing_reliable_jobs.md +15 -24
  24. data/lib/que.rb +98 -182
  25. data/lib/que/active_job/extensions.rb +97 -0
  26. data/lib/que/active_record/connection.rb +51 -0
  27. data/lib/que/active_record/model.rb +48 -0
  28. data/lib/que/connection.rb +179 -0
  29. data/lib/que/connection_pool.rb +78 -0
  30. data/lib/que/job.rb +107 -156
  31. data/lib/que/job_cache.rb +240 -0
  32. data/lib/que/job_methods.rb +168 -0
  33. data/lib/que/listener.rb +176 -0
  34. data/lib/que/locker.rb +466 -0
  35. data/lib/que/metajob.rb +47 -0
  36. data/lib/que/migrations.rb +24 -17
  37. data/lib/que/migrations/4/down.sql +48 -0
  38. data/lib/que/migrations/4/up.sql +265 -0
  39. data/lib/que/poller.rb +267 -0
  40. data/lib/que/rails/railtie.rb +14 -0
  41. data/lib/que/result_queue.rb +35 -0
  42. data/lib/que/sequel/model.rb +51 -0
  43. data/lib/que/utils/assertions.rb +62 -0
  44. data/lib/que/utils/constantization.rb +19 -0
  45. data/lib/que/utils/error_notification.rb +68 -0
  46. data/lib/que/utils/freeze.rb +20 -0
  47. data/lib/que/utils/introspection.rb +50 -0
  48. data/lib/que/utils/json_serialization.rb +21 -0
  49. data/lib/que/utils/logging.rb +78 -0
  50. data/lib/que/utils/middleware.rb +33 -0
  51. data/lib/que/utils/queue_management.rb +18 -0
  52. data/lib/que/utils/transactions.rb +34 -0
  53. data/lib/que/version.rb +1 -1
  54. data/lib/que/worker.rb +128 -167
  55. data/que.gemspec +13 -2
  56. metadata +37 -80
  57. data/.rspec +0 -2
  58. data/.travis.yml +0 -64
  59. data/Gemfile +0 -24
  60. data/docs/customizing_que.md +0 -200
  61. data/lib/generators/que/install_generator.rb +0 -24
  62. data/lib/generators/que/templates/add_que.rb +0 -13
  63. data/lib/que/adapters/active_record.rb +0 -40
  64. data/lib/que/adapters/base.rb +0 -133
  65. data/lib/que/adapters/connection_pool.rb +0 -16
  66. data/lib/que/adapters/pg.rb +0 -21
  67. data/lib/que/adapters/pond.rb +0 -16
  68. data/lib/que/adapters/sequel.rb +0 -20
  69. data/lib/que/railtie.rb +0 -16
  70. data/lib/que/rake_tasks.rb +0 -59
  71. data/lib/que/sql.rb +0 -170
  72. data/spec/adapters/active_record_spec.rb +0 -175
  73. data/spec/adapters/connection_pool_spec.rb +0 -22
  74. data/spec/adapters/pg_spec.rb +0 -41
  75. data/spec/adapters/pond_spec.rb +0 -22
  76. data/spec/adapters/sequel_spec.rb +0 -57
  77. data/spec/gemfiles/Gemfile.current +0 -19
  78. data/spec/gemfiles/Gemfile.old +0 -19
  79. data/spec/gemfiles/Gemfile.older +0 -19
  80. data/spec/gemfiles/Gemfile.oldest +0 -19
  81. data/spec/spec_helper.rb +0 -129
  82. data/spec/support/helpers.rb +0 -25
  83. data/spec/support/jobs.rb +0 -35
  84. data/spec/support/shared_examples/adapter.rb +0 -42
  85. data/spec/support/shared_examples/multi_threaded_adapter.rb +0 -46
  86. data/spec/unit/configuration_spec.rb +0 -31
  87. data/spec/unit/connection_spec.rb +0 -14
  88. data/spec/unit/customization_spec.rb +0 -251
  89. data/spec/unit/enqueue_spec.rb +0 -245
  90. data/spec/unit/helper_spec.rb +0 -12
  91. data/spec/unit/logging_spec.rb +0 -101
  92. data/spec/unit/migrations_spec.rb +0 -84
  93. data/spec/unit/pool_spec.rb +0 -365
  94. data/spec/unit/run_spec.rb +0 -14
  95. data/spec/unit/states_spec.rb +0 -50
  96. data/spec/unit/stats_spec.rb +0 -46
  97. data/spec/unit/transaction_spec.rb +0 -36
  98. data/spec/unit/work_spec.rb +0 -596
  99. data/spec/unit/worker_spec.rb +0 -167
  100. data/tasks/benchmark.rb +0 -3
  101. data/tasks/rspec.rb +0 -14
  102. data/tasks/safe_shutdown.rb +0 -67
data/docs/job_helper_methods.md ADDED
@@ -0,0 +1,27 @@
+ ## Job Helper Methods
+
+ There are a number of instance methods on Que::Job that you can use in your jobs, preferably in transactions. See [Writing Reliable Jobs](/writing_reliable_jobs.md) for more information on where to use these methods.
+
+ ### destroy
+
+ This method deletes the job from the queue table, ensuring that it won't be worked a second time.
+
+ ### finish
+
+ This method marks the current job as finished, ensuring that it won't be worked a second time. This is like destroy, in that it finalizes a job, but this method leaves the job in the table, in case you want to query it later.
+
+ ### expire
+
+ This method marks the current job as expired. It will be left in the table and won't be retried, but it will be easy to query for expired jobs. This method is called if the job exceeds its maximum_retry_count.
+
+ ### retry_in
+
+ This method marks the current job to be retried later. You can pass a numeric to this method, in which case that is the number of seconds after which it can be retried (`retry_in(10)`, `retry_in(0.5)`), or, if you're using ActiveSupport, you can pass in a duration object (`retry_in(10.minutes)`). This automatically happens, with an exponentially-increasing interval, when the job encounters an error.
+
+ ### error_count
+
+ This method returns the total number of times the job has errored, in case you want to modify the job's behavior after it has failed a given number of times.
+
+ ### default_resolve_action
+
+ If you don't perform a resolve action (destroy, finish, expire, retry_in) while the job is worked, Que will call this method for you. By default it simply calls `destroy`, but you can override it in your Job subclasses if you wish - for example, to call `finish`, or to invoke some more complicated logic.
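The exponentially-increasing retry interval mentioned under `retry_in` can be sketched in plain Ruby. Note that the exact formula below (`error_count ** 4 + 3` seconds) is an illustrative assumption, not something stated in this diff - check Que's error-handling docs for the actual default.

```ruby
# Illustrative sketch of an exponentially-increasing retry interval,
# similar in spirit to what Que applies when a job errors.
# The formula (error_count ** 4 + 3 seconds) is an assumption here.
def retry_interval(error_count)
  error_count ** 4 + 3
end

retry_interval(1) # => 4 seconds
retry_interval(3) # => 84 seconds
```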
data/docs/logging.md CHANGED
@@ -3,7 +3,7 @@
  By default, Que logs important information in JSON to either Rails' logger (when running in a Rails web process) or STDOUT (when running via the `que` executable). So, your logs will look something like:
 
  ```
- I, [2014-01-12T05:07:31.094201 #4687] INFO -- : {"lib":"que","thread":104928,"event":"job_worked","elapsed":0.01045,"job":{"priority":"1","run_at":"2014-01-12 05:07:31.081877+00","job_id":"4","job_class":"MyJob","args":[],"error_count":"0"}}
+ I, [2017-08-12T05:07:31.094201 #4687] INFO -- : {"lib":"que","hostname":"lovelace","pid":21626,"thread":21471100,"event":"job_worked","job_id":6157665,"elapsed":0.531411}
  ```
 
  Of course you can have it log wherever you like:
@@ -12,26 +12,7 @@ Of course you can have it log wherever you like:
  Que.logger = Logger.new(...)
  ```
 
- You can use Que's logger in your jobs anywhere you like:
-
- ```ruby
- class MyJob
-   def run
-     Que.log :my_output => "my string"
-   end
- end
-
- #=> I, [2014-01-12T05:13:11.006776 #4914] INFO -- : {"lib":"que","thread":24960,"my_output":"my string"}
- ```
-
- Que will always add a 'lib' key, so you can easily filter its output from that of other sources, and the object_id of the thread that emitted the log, so you can follow the actions of a particular worker if you wish. You can also pass a :level key to set the level of the output:
-
- ```ruby
- Que.log :level => :debug, :my_output => 'my string'
- #=> D, [2014-01-12T05:16:15.221941 #5088] DEBUG -- : {"lib":"que","thread":24960,"my_output":"my string"}
- ```
-
- If you don't like JSON, you can also customize the format of the logging output by passing a callable object (such as a proc) to Que.log_formatter=. The proc should take a hash (the keys are symbols) and return a string. The keys and values are just as you would expect from the JSON output:
+ If you don't like logging in JSON, you can also customize the format of the logging output by passing a callable object (such as a proc) to Que.log_formatter=. The proc should take a hash (the keys are symbols) and return a string. The keys and values are just as you would expect from the JSON output:
 
  ```ruby
  Que.log_formatter = proc do |data|
@@ -39,7 +20,7 @@ Que.log_formatter = proc do |data|
  end
  ```
 
- If the log formatter returns nil or false, a nothing will be logged at all. You could use this to narrow down what you want to emit, for example:
+ If the log formatter returns nil or false, nothing will be logged at all. You could use this to narrow down what you want to emit, for example:
 
  ```ruby
  Que.log_formatter = proc do |data|
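The `Que.log_formatter=` hook described in this hunk receives a symbol-keyed hash and returns a string (or nil/false to suppress the line). A standalone sketch of such a formatter, runnable without Que itself - the event names and keys here mirror the sample log line above but are otherwise illustrative:

```ruby
# Standalone sketch of a Que-style log formatter: a callable that takes
# a symbol-keyed hash and returns a string, or nil to drop the line.
formatter = proc do |data|
  next nil unless data[:event] == :job_worked # emit only this event
  "#{data[:event]}: job #{data[:job_id]} in #{data[:elapsed]}s"
end

formatter.call({event: :job_worked, job_id: 42, elapsed: 0.5})
# => "job_worked: job 42 in 0.5s"
formatter.call({event: :job_errored, job_id: 43})
# => nil (this line would be suppressed)
```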
data/docs/managing_workers.md CHANGED
@@ -1,80 +1,25 @@
  ## Managing Workers
 
- Que provides a pool of workers to process jobs in a multithreaded fashion - this allows you to save memory by working many jobs simultaneously in the same process.
+ Que uses a multithreaded pool of workers to run jobs in parallel - this allows you to save memory by working many jobs simultaneously in the same process. The `que` executable starts up a pool of 6 workers by default. This is fine for most use cases, but the ideal number for your app will depend on your interpreter and what types of jobs you're running.
 
- When the worker pool is active (when you set `Que.mode = :async`), the default number of workers is 4. This is fine for most use cases, but the ideal number for your app will depend on your interpreter and what types of jobs you're running.
-
- Ruby MRI has a global interpreter lock (GIL), which prevents it from using more than one CPU core at a time. Having multiple workers running makes sense if your jobs tend to spend a lot of time in I/O (waiting on complex database queries, sending emails, making HTTP requests, etc.), as most jobs do. However, if your jobs are doing a lot of work in Ruby, they'll be spending a lot of time blocking each other, and having too many workers running will just slow everything down.
-
- JRuby and Rubinius, on the other hand, have no global interpreter lock, and so can make use of multiple CPU cores - you could potentially set the number of workers very high for them. You should experiment to find the best setting for your use case.
-
- You can change the number of workers in the pool whenever you like by setting the `worker_count` option:
-
- ```ruby
- Que.worker_count = 8
- ```
+ Ruby MRI has a global interpreter lock (GIL), which prevents it from using more than one CPU core at a time. Having multiple workers running makes sense if your jobs tend to spend a lot of time in I/O (waiting on complex database queries, sending emails, making HTTP requests, etc.), as most jobs do. However, if your jobs are doing a lot of work in Ruby, they'll be spending a lot of time blocking each other, and having too many workers running will cause you to lose efficiency to context-switching. So, you'll want to choose the appropriate number of workers for your use case.
 
  ### Working Jobs Via Executable
 
- If you don't want to burden your web processes with too much work and want to run workers in a background process instead, similar to how most other queues work, you can:
-
  ```shell
- # Run a pool of 4 workers:
+ # Run a pool of 6 workers:
  que
 
  # Or configure the number of workers:
- que --worker-count 8
+ que --worker-count 10
  ```
 
- See `que -h` for a list of command-line options.
+ See `que -h` for a complete list of command-line options.
 
  ### Thread-Unsafe Application Code
 
- If your application code is not thread-safe, you won't want any workers to be processing jobs while anything else is happening in the Ruby process. So, you'll want to turn the worker pool off by default:
-
- ```ruby
- Que.mode = :off
- ```
-
- This will prevent Que from trying to process jobs in the background of your web processes. In order to actually work jobs, you'll want to run a single worker at a time, and to do so via a separate process, like so:
+ If your application code is not thread-safe, you won't want any workers to be processing jobs while anything else is happening in the Ruby process. So, you'll want to run a single worker at a time, like so:
 
  ```shell
  que --worker-count 1
  ```
-
- ### The Wake Interval
-
- If a worker checks the job queue and finds no jobs ready for it to work, it will fall asleep. In order to make sure that newly-available jobs don't go unworked, a worker is awoken every so often to check for available work. By default, this happens every five seconds, but you can make it happen more or less often by setting a custom wake_interval:
-
- ```ruby
- Que.wake_interval = 2 # In Rails, 2.seconds also works fine.
- ```
-
- You can also choose to never let workers wake up on their own:
-
- ```ruby
- # Never wake up any workers:
- Que.wake_interval = nil
- ```
-
- If you do this, though, you'll need to wake workers manually.
-
- ### Manually Waking Workers
-
- Regardless of the `wake_interval` setting, you can always wake workers manually:
-
- ```ruby
- # Wake up a single worker to check the queue for work:
- Que.wake!
-
- # Wake up all workers in this process to check for work:
- Que.wake_all!
- ```
-
- `Que.wake_all!` is helpful if there are no jobs available and all your workers go to sleep, and then you queue a large number of jobs. Typically, it will take a little while for the entire pool of workers to get going again - a new one will wake up every `wake_interval` seconds, but it will take up to `wake_interval * worker_count` seconds for all of them to get going. `Que.wake_all!` can get them all moving immediately.
-
- ### Connection Pool Size
-
- For the job locking system to work properly, each worker thread needs to reserve a database connection from the connection pool for the period of time between when it locks a job and when it releases that lock (which won't happen until the job has been finished and deleted from the queue).
-
- So, for example, if you're running 6 workers via the executable, you'll want to make sure that whatever connection pool Que is using (usually ActiveRecord's) has a maximum size of at least 6. If you're running those workers in a web process, you'll want the size to be at least 6 plus however many connections you expect your application to need for serving web requests (which may only be one if you're using Rails in single-threaded mode, or many more if you're running a threaded web server like Puma).
data/docs/middleware.md ADDED
@@ -0,0 +1,15 @@
+ ## Defining Middleware For Jobs
+
+ You can define middleware to wrap jobs. For example:
+
+ ```ruby
+ Que.middleware.push(
+   -> (job, &block) {
+     # Do stuff with the job object - report on it, count time elapsed, etc.
+     block.call
+     nil # Doesn't matter what's returned.
+   }
+ )
+ ```
+
+ This API is experimental for the 1.0 beta and may change.
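The wrapping behavior this new doc describes can be demonstrated in plain Ruby without Que: each middleware receives the job and a block, and may run code before and after calling it. This is only a sketch of the pattern - the `trace` bookkeeping and the reduce-based chaining are illustrative, not Que's implementation.

```ruby
# Plain-Ruby sketch of a middleware stack wrapping a job.
middleware = []
middleware.push(
  -> (job, &block) {
    job[:trace] << :before # e.g. start a timer, open a span
    block.call
    job[:trace] << :after  # e.g. report elapsed time
  }
)

work = ->(job) { job[:trace] << :worked }

# Wrap the innermost action in each middleware, outermost first.
invoke = middleware.reverse.reduce(work) do |inner, mw|
  ->(job) { mw.call(job) { inner.call(job) } }
end

job = { trace: [] }
invoke.call(job)
job[:trace] # => [:before, :worked, :after]
```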
data/docs/migrating.md CHANGED
@@ -3,13 +3,13 @@
  Some new releases of Que may require updates to the database schema. It's recommended that you integrate these updates alongside your other database migrations. For example, when Que released version 0.6.0, the schema version was updated from 2 to 3. If you're running ActiveRecord, you could make a migration to perform this upgrade like so:
 
  ```ruby
- class UpdateQue < ActiveRecord::Migration
+ class UpdateQue < ActiveRecord::Migration[5.0]
    def self.up
-     Que.migrate! :version => 3
+     Que.migrate! version: 3
    end
 
    def self.down
-     Que.migrate! :version => 2
+     Que.migrate! version: 2
    end
  end
  ```
@@ -18,10 +18,7 @@ This will make sure that your database schema stays consistent with your codebase.
 
  ```ruby
  # Change schema to version 3.
- Que.migrate! :version => 3
-
- # Update to whatever the latest schema version is.
- Que.migrate!
+ Que.migrate! version: 3
 
  # Check your current schema version.
  Que.db_version #=> 3
data/docs/multiple_queues.md CHANGED
@@ -1,27 +1,31 @@
  ## Multiple Queues
 
- Que supports the use of multiple queues in a single job table. This feature is intended to support the case where multiple applications (with distinct codebases) are sharing the same database. For instance, you might have a separate Ruby application that handles only processing credit cards. In that case, you can run that application's workers against a specific queue:
+ Que supports the use of multiple queues in a single job table. Please note that this feature is intended to support the case where multiple codebases are sharing the same job queue - if you want to support jobs of differing priorities, the numeric priority system offers better flexibility and performance.
+
+ For instance, you might have a separate Ruby application that handles only processing credit cards. In that case, you can run that application's workers against a specific queue:
 
  ```shell
  que --queue-name credit_cards
+ # The -q flag is equivalent, and either can be passed multiple times.
+ que -q default -q credit_cards
  ```
 
  Then you can set jobs to be enqueued in that queue specifically:
 
  ```ruby
- ProcessCreditCard.enqueue current_user.id, :queue => 'credit_cards'
+ ProcessCreditCard.enqueue current_user.id, queue: 'credit_cards'
 
  # Or:
 
  class ProcessCreditCard < Que::Job
    # Set a default queue for this job class; this can be overridden by
    # passing the :queue parameter to enqueue like above.
-   @queue = 'credit_cards'
+   self.queue = 'credit_cards'
  end
  ```
 
  In some cases, the ProcessCreditCard class may not be defined in the application that is enqueueing the job. In that case, you can specify the job class as a string:
 
  ```ruby
- Que.enqueue current_user.id, :job_class => 'ProcessCreditCard', :queue => 'credit_cards'
+ Que.enqueue current_user.id, job_class: 'ProcessCreditCard', queue: 'credit_cards'
  ```
data/docs/shutting_down_safely.md CHANGED
@@ -2,6 +2,6 @@
 
  To ensure safe operation, Que needs to be very careful in how it shuts down. When a Ruby process ends normally, it calls Thread#kill on any threads that are still running - unfortunately, if a thread is in the middle of a transaction when this happens, there is a risk that it will be prematurely committed, resulting in data corruption. See [here](http://blog.headius.com/2008/02/ruby-threadraise-threadkill-timeoutrb.html) and [here](http://coderrr.wordpress.com/2011/05/03/beware-of-threadkill-or-your-activerecord-transactions-are-in-danger-of-being-partially-committed/) for more detail on this.
 
- To prevent this, Que will block a Ruby process from exiting until all jobs it is working have completed normally. Unfortunately, if you have long-running jobs, this may take a very long time (and if something goes wrong with a job's logic, it may never happen). The solution in this case is SIGKILL - luckily, Ruby processes that are killed via SIGKILL will end without using Thread#kill on their running threads. This is safer than exiting normally - when PostgreSQL loses the connection it will simply roll back the open transaction, if any, and unlock the job so it can be retried later by another worker. Be sure to read [Writing Reliable Jobs](https://github.com/chanks/que/blob/master/docs/writing_reliable_jobs.md) for information on how to design your jobs to fail safely.
+ To prevent this, Que will block the worker process from exiting until all jobs it is working have completed normally. Unfortunately, if you have long-running jobs, this may take a very long time (and if something goes wrong with a job's logic, it may never happen). The solution in this case is SIGKILL - luckily, Ruby processes that are killed via SIGKILL will end without using Thread#kill on their running threads. This is safer than exiting normally - when PostgreSQL loses the connection it will simply roll back the open transaction, if any, and unlock the job so it can be retried later by another worker. Be sure to read [Writing Reliable Jobs](https://github.com/chanks/que/blob/master/docs/writing_reliable_jobs.md) for information on how to design your jobs to fail safely.
 
  So, be prepared to use SIGKILL on your Ruby processes if they run for too long. For example, Heroku takes a good approach to this - when Heroku's platform is shutting down a process, it sends SIGTERM, waits ten seconds, then sends SIGKILL if the process still hasn't exited. This is a nice compromise - it will give each of your currently running jobs ten seconds to complete, and any jobs that haven't finished by then will be interrupted and retried later.
data/docs/using_plain_connections.md CHANGED
@@ -1,6 +1,10 @@
  ## Using Plain Postgres Connections
 
- If you're not using an ORM like ActiveRecord or Sequel, you can use one of two gems to manage a connection pool for your PG connections. The first is the ConnectionPool gem (be sure to add `gem 'connection_pool'` to your Gemfile):
+ If you're not using an ORM like ActiveRecord or Sequel, you can use a distinct connection pool to manage your Postgres connections. Please be aware that if you **are** using ActiveRecord or Sequel, there's no reason for you to be using any of these methods - it's less efficient (unnecessary connections will waste memory on your database server) and you lose the reliability benefits of wrapping jobs in the same transactions as the rest of your data.
+
+ ## Using ConnectionPool or Pond
+
+ Support for two connection pool gems is included in Que. The first is the ConnectionPool gem (be sure to add `gem 'connection_pool'` to your Gemfile):
 
  ```ruby
  require 'uri'
@@ -9,13 +13,14 @@ require 'connection_pool'
 
  uri = URI.parse(ENV['DATABASE_URL'])
 
- Que.connection = ConnectionPool.new :size => 10 do
-   PG::Connection.open :host => uri.host,
-                       :user => uri.user,
-                       :password => uri.password,
-                       :port => uri.port || 5432,
-                       :dbname => uri.path[1..-1]
- end
+ Que.connection = ConnectionPool.new(size: 10) do
+   PG::Connection.open(
+     host: uri.host,
+     user: uri.user,
+     password: uri.password,
+     port: uri.port || 5432,
+     dbname: uri.path[1..-1]
+   )
+ end
  ```
 
  Be sure to pick your pool size carefully - if you use 10 for the size, you'll incur the overhead of having 10 connections open to Postgres even if you never use more than a couple of them.
@@ -29,13 +34,32 @@ require 'pond'
 
  uri = URI.parse(ENV['DATABASE_URL'])
 
- Que.connection = Pond.new :maximum_size => 10 do
-   PG::Connection.open :host => uri.host,
-                       :user => uri.user,
-                       :password => uri.password,
-                       :port => uri.port || 5432,
-                       :dbname => uri.path[1..-1]
+ Que.connection = Pond.new(maximum_size: 10) do
+   PG::Connection.open(
+     host: uri.host,
+     user: uri.user,
+     password: uri.password,
+     port: uri.port || 5432,
+     dbname: uri.path[1..-1]
+   )
+ end
+ ```
+
+ ## Using Any Other Connection Pool
+
+ You can use any other in-process connection pool by defining access to it in a proc that's passed to `Que.connection_proc = proc`. The proc you pass should accept a block and call it with a connection object. For instance, Que's built-in interface to Sequel's connection pool is basically implemented like:
+
+ ```ruby
+ Que.connection_proc = proc do |&block|
+   DB.synchronize do |connection|
+     block.call(connection)
+   end
  end
  ```
 
- Please be aware that if you're using ActiveRecord or Sequel to manage your data, there's no reason for you to be using any of these methods - it's less efficient (unnecessary connections will waste memory on your database server) and you lose the reliability benefits of wrapping jobs in the same transactions as the rest of your data. In general, your app should probably be using a connection pool, and Que should probably hook into whatever connection pool you're already using.
+ This proc must meet a few requirements:
+ - The yielded object must be an instance of `PG::Connection`.
+ - It must be reentrant - if it is called with a block, and then called again inside that block, it must return the same object. For example, in `proc.call{|conn1| proc.call{|conn2| conn1.object_id == conn2.object_id}}` the innermost condition must be true.
+ - It must lock the connection object and prevent any other thread from accessing it for the duration of the block.
+
+ If any of these conditions aren't met, Que will raise an error.
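The reentrancy requirement described above can be demonstrated with a self-contained pool sketch: nested calls on the same thread must yield the same connection object. `FakeConn` and the thread-local checkout scheme are illustrative stand-ins, not Que's or any real pool's implementation.

```ruby
# Sketch of a reentrant connection-checkout proc, in the shape
# Que.connection_proc expects (accepts a block, yields a connection).
FakeConn = Class.new # stand-in for PG::Connection

pool_proc = proc do |&block|
  if (conn = Thread.current[:conn])
    block.call(conn)                # already checked out: reuse it
  else
    begin
      Thread.current[:conn] = FakeConn.new
      block.call(Thread.current[:conn])
    ensure
      Thread.current[:conn] = nil   # release on the way out
    end
  end
end

# The nested call sees the exact same object as the outer call:
same = pool_proc.call { |c1| pool_proc.call { |c2| c1.equal?(c2) } }
same # => true
```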
data/docs/using_sequel.md CHANGED
@@ -11,7 +11,7 @@ Then you can safely use the same database object to transactionally protect your jobs:
 
  ```ruby
  class MyJob < Que::Job
-   def run
+   def run(user_id:)
      # Do stuff.
 
      DB.transaction do
@@ -23,9 +23,11 @@ class MyJob < Que::Job
    end
  end
 
- # In your controller action:
+ # Or, in your controller action:
  DB.transaction do
    @user = User.create(params[:user])
-   MyJob.enqueue :user_id => @user.id
+   MyJob.enqueue user_id: @user.id
  end
  ```
+
+ Sequel automatically wraps model persistence actions (create, update, destroy) in transactions, so you can simply call #enqueue methods from your models' callbacks, if you wish.
@@ -2,7 +2,7 @@
2
2
 
3
3
  Que does everything it can to ensure that jobs are worked exactly once, but if something bad happens when a job is halfway completed, there's no way around it - the job will need be repeated over again from the beginning, probably by a different worker. When you're writing jobs, you need to be prepared for this to happen.
4
4
 
5
- The safest type of job is one that reads in data, either from the database or from external APIs, then does some number crunching and writes the results to the database. These jobs are easy to make safe - simply write the results to the database inside a transaction, and also have the job destroy itself inside that transaction, like so:
5
+ The safest type of job is one that reads in data, either from the database or from external APIs, then does some number crunching and writes the results to the database. These jobs are easy to make safe - simply write the results to the database inside a transaction, and also destroy the job inside that transaction, like so:
6
6
 
7
7
  ```ruby
8
8
  class UpdateWidgetPrice < Que::Job
@@ -12,9 +12,9 @@ class UpdateWidgetPrice < Que::Job
12
12
 
13
13
  ActiveRecord::Base.transaction do
14
14
  # Make changes to the database.
15
- widget.update :price => price
15
+ widget.update price: price
16
16
 
17
- # Destroy the job.
17
+ # Mark the job as destroyed, so it doesn't run again.
18
18
  destroy
19
19
  end
20
20
  end
@@ -28,10 +28,10 @@ The more difficult type of job is one that makes changes that can't be controlle
28
28
  ```ruby
29
29
  class ChargeCreditCard < Que::Job
30
30
  def run(user_id, credit_card_id)
31
- CreditCardService.charge(credit_card_id, :amount => "$10.00")
31
+ CreditCardService.charge(credit_card_id, amount: "$10.00")
32
32
 
33
33
  ActiveRecord::Base.transaction do
34
- User.where(:id => user_id).update_all :charged_at => Time.now
34
+ User.where(id: user_id).update_all charged_at: Time.now
35
35
  destroy
36
36
  end
37
37
  end
@@ -44,11 +44,11 @@ What if the process abruptly dies after we tell the provider to charge the credi
44
44
  class ChargeCreditCard < Que::Job
45
45
  def run(user_id, credit_card_id)
46
46
  unless CreditCardService.check_for_previous_charge(credit_card_id)
47
- CreditCardService.charge(credit_card_id, :amount => "$10.00")
47
+ CreditCardService.charge(credit_card_id, amount: "$10.00")
48
48
  end
49
49
 
50
50
  ActiveRecord::Base.transaction do
51
- User.where(:id => user_id).update_all :charged_at => Time.now
51
+ User.where(id: user_id).update_all charged_at: Time.now
52
52
  destroy
53
53
  end
54
54
  end
@@ -71,18 +71,14 @@ In this case, we don't have any no way to prevent the occasional double-sending
71
71
 
72
72
  ### Timeouts
73
73
 
74
- Long-running jobs aren't necessarily a problem in Que, since the overhead of an individual job isn't that big (just an open PG connection and an advisory lock held in memory). But jobs that hang indefinitely can tie up a worker and [block the Ruby process from exiting gracefully](https://github.com/chanks/que/blob/master/docs/shutting_down_safely.md), which is a pain.
74
+ Long-running jobs aren't necessarily a problem for the database, since the overhead of an individual job is very small (just an advisory lock held in memory). But jobs that hang indefinitely can tie up a worker and [block the Ruby process from exiting gracefully](https://github.com/chanks/que/blob/master/docs/shutting_down_safely.md), which is a pain.
75
75
 
76
- Que doesn't offer a general way to kill jobs that have been running too long, because that currently can't be done safely in Ruby. Typically, one would use Ruby's Timeout module for this sort of thing, but wrapping a database transaction inside a timeout introduces a risk of premature commits, which can corrupt your data. See [here](http://blog.headius.com/2008/02/ruby-threadraise-threadkill-timeoutrb.html) and [here](http://coderrr.wordpress.com/2011/05/03/beware-of-threadkill-or-your-activerecord-transactions-are-in-danger-of-being-partially-committed/) for detail on why this is.
77
-
78
- However, if there's part of your job that is prone to hang (due to an API call or other HTTP request that never returns, for example), you can timeout those individual parts of your job relatively safely. For example, consider a job that needs to make an HTTP request and then write to the database:
+ If there's part of your job that is prone to hang (due to an API call or other HTTP request that never returns, for example), you can (and should) timeout those parts of your job. For example, consider a job that needs to make an HTTP request and then write to the database:
 
  ```ruby
- require 'net/http'
-
  class ScrapeStuff < Que::Job
- def run(domain_to_scrape, path_to_scrape)
- result = Net::HTTP.get(domain_to_scrape, path_to_scrape)
+ def run(url_to_scrape)
+ result = YourHTTPLibrary.get(url_to_scrape)
 
  ActiveRecord::Base.transaction do
  # Insert result...
@@ -93,15 +89,12 @@ class ScrapeStuff < Que::Job
  end
  ```
 
- That request could take a very long time, or never return at all. Let's wrap it in a five-second timeout:
+ That request could take a very long time, or never return at all. Let's use the timeout feature that almost all HTTP libraries offer some version of:
 
  ```ruby
- require 'net/http'
- require 'timeout'
-
  class ScrapeStuff < Que::Job
- def run(domain_to_scrape, path_to_scrape)
- result = Timeout.timeout(5){Net::HTTP.get(domain_to_scrape, path_to_scrape)}
+ def run(url_to_scrape)
+ result = YourHTTPLibrary.get(url_to_scrape, timeout: 5)
 
  ActiveRecord::Base.transaction do
  # Insert result...
@@ -112,6 +105,4 @@ class ScrapeStuff < Que::Job
  end
  ```
 
- Now, if the request takes more than five seconds, a `Timeout::Error` will be raised and Que will just retry the job later. This solution isn't perfect, since Timeout uses Thread#kill under the hood, which can lead to unpredictable behavior. But it's separate from our transaction, so there's no risk of losing data - even a catastrophic error that left Net::HTTP in a bad state would be fixable by restarting the process.
-
- Finally, remember that if you're using a library that offers its own timeout functionality, that's usually preferable to using the Timeout module.
+ Now, if the request takes more than five seconds, an error will be raised (probably - check your library's documentation) and Que will just retry the job later.
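To make the failure mode concrete, here is a minimal, self-contained sketch using Ruby's stdlib `Timeout` module to stand in for an HTTP library's own timeout option (which, as the doc notes, is preferable in real code). `hanging_request` is a hypothetical stand-in for a call that never returns; the point is that the timeout raises before any database write happens, so Que would simply record the error and retry the job later:

```ruby
require 'timeout'

# Hypothetical stand-in for an HTTP call that never returns.
def hanging_request
  sleep
end

begin
  # A real HTTP library's own timeout option is preferable; Timeout.timeout
  # is used here only to keep the example self-contained and runnable.
  Timeout.timeout(0.1) { hanging_request }
rescue Timeout::Error => e
  # Inside a Que job, an error raised here bubbles up and the job is retried.
  puts "request timed out: #{e.class}"
end
```

Because the timeout wraps only the request, not the transaction, a timed-out request never leaves a half-written database record behind.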
data/lib/que.rb CHANGED
@@ -1,200 +1,116 @@
  # frozen_string_literal: true
 
- require 'socket' # For hostname
- require 'json'
+ require 'forwardable'
+ require 'socket' # For Socket.gethostname
 
  module Que
- autoload :Adapters, 'que/adapters/base'
- autoload :Job, 'que/job'
- autoload :Migrations, 'que/migrations'
- autoload :SQL, 'que/sql'
- autoload :Version, 'que/version'
- autoload :Worker, 'que/worker'
-
- HASH_DEFAULT_PROC = proc { |hash, key| hash[key.to_s] if Symbol === key }
-
- INDIFFERENTIATOR = proc do |object|
- case object
- when Array
- object.each(&INDIFFERENTIATOR)
- when Hash
- object.default_proc = HASH_DEFAULT_PROC
- object.each { |key, value| object[key] = INDIFFERENTIATOR.call(value) }
- object
- else
- object
- end
- end
-
- SYMBOLIZER = proc do |object|
- case object
- when Hash
- object.keys.each do |key|
- object[key.to_sym] = SYMBOLIZER.call(object.delete(key))
- end
- object
- when Array
- object.map! { |e| SYMBOLIZER.call(e) }
- else
- object
- end
- end
+ CURRENT_HOSTNAME = Socket.gethostname.freeze
+ DEFAULT_QUEUE = 'default'.freeze
+ TIME_REGEX = /\A\d{4}\-\d{2}\-\d{2}T\d{2}:\d{2}:\d{2}.\d{6}Z\z/
+ CONFIG_MUTEX = Mutex.new
+ MAXIMUM_PRIORITY = 32767
+
+ class Error < StandardError; end
+
+ # Store SQL strings frozen, with squashed whitespace so logs read better.
+ SQL = {}
+ def SQL.[]=(k,v); super(k, v.strip.gsub(/\s+/, ' ').freeze); end
+
+ # Load up modules that allow registration before modules that use it.
+ require_relative 'que/listener'
+
+ # Load utilities before main logic that will use them.
+ require_relative 'que/utils/assertions'
+ require_relative 'que/utils/constantization'
+ require_relative 'que/utils/error_notification'
+ require_relative 'que/utils/freeze'
+ require_relative 'que/utils/introspection'
+ require_relative 'que/utils/json_serialization'
+ require_relative 'que/utils/logging'
+ require_relative 'que/utils/middleware'
+ require_relative 'que/utils/queue_management'
+ require_relative 'que/utils/transactions'
+
+ require_relative 'que/connection'
+ require_relative 'que/connection_pool'
+ require_relative 'que/job_methods'
+ require_relative 'que/job'
+ require_relative 'que/job_cache'
+ require_relative 'que/locker'
+ require_relative 'que/metajob'
+ require_relative 'que/migrations'
+ require_relative 'que/poller'
+ require_relative 'que/result_queue'
+ require_relative 'que/version'
+ require_relative 'que/worker'
 
  class << self
- attr_accessor :error_notifier
- attr_writer :logger, :adapter, :log_formatter, :use_prepared_statements, :json_converter
-
- def connection=(connection)
- self.adapter =
- if connection.to_s == 'ActiveRecord'
- Adapters::ActiveRecord.new
+ include Utils::Assertions
+ include Utils::Constantization
+ include Utils::ErrorNotification
+ include Utils::Freeze
+ include Utils::Introspection
+ include Utils::JSONSerialization
+ include Utils::Logging
+ include Utils::Middleware
+ include Utils::QueueManagement
+ include Utils::Transactions
+
+ extend Forwardable
+
+ # Copy some commonly-used methods here, for convenience.
+ def_delegators :pool, :execute, :checkout, :in_transaction?
+ def_delegators Job, :enqueue, :run_synchronously, :run_synchronously=
+ def_delegators Migrations, :db_version, :migrate!
+
+ # Global configuration logic.
+ attr_accessor :use_prepared_statements
+ attr_writer :default_queue
+
+ def default_queue
+ @default_queue || DEFAULT_QUEUE
+ end
+
+ # Support simple integration with many common connection pools.
+ def connection=(conn)
+ self.connection_proc =
+ if conn.to_s == 'ActiveRecord'
+ # Load and setup AR compatibility.
+ require_relative 'que/active_record/connection'
+ m = Que::ActiveRecord::Connection::Middleware
+ middleware << m unless middleware.include?(m)
+ Que::ActiveRecord::Connection.method(:checkout)
  else
- case connection.class.to_s
- when 'Sequel::Postgres::Database' then Adapters::Sequel.new(connection)
- when 'ConnectionPool' then Adapters::ConnectionPool.new(connection)
- when 'PG::Connection' then Adapters::PG.new(connection)
- when 'Pond' then Adapters::Pond.new(connection)
- when 'NilClass' then connection
- else raise "Que connection not recognized: #{connection.inspect}"
+ case conn.class.to_s
+ when 'Sequel::Postgres::Database' then conn.method(:synchronize)
+ when 'Pond' then conn.method(:checkout)
+ when 'ConnectionPool' then conn.method(:with)
+ when 'NilClass' then conn
+ else raise Error, "Unsupported connection: #{conn.class}"
  end
  end
  end
 
- def adapter
- @adapter || raise("Que connection not established!")
- end
-
- def execute(*args)
- adapter.execute(*args)
- end
-
- def clear!
- execute "DELETE FROM que_jobs"
- end
-
- def job_stats
- execute :job_stats
- end
-
- def worker_states
- adapter.checkout do |conn|
- if conn.server_version >= 90600
- execute :worker_states_96
- else
- execute :worker_states_95
- end
- end
- end
-
- # Give us a cleaner interface when specifying a job_class as a string.
- def enqueue(*args)
- Job.enqueue(*args)
- end
-
- def db_version
- Migrations.db_version
- end
-
- def migrate!(version = {:version => Migrations::CURRENT_VERSION})
- Migrations.migrate!(version)
- end
-
- # Have to support create! and drop! in old migrations. They just created
- # and dropped the bare table.
- def create!
- migrate! :version => 1
- end
-
- def drop!
- migrate! :version => 0
+ # Integrate Que with any connection pool by passing it a reentrant block
+ # that locks and yields a Postgres connection.
+ def connection_proc=(connection_proc)
+ @pool = connection_proc && ConnectionPool.new(&connection_proc)
  end
 
- def log(data)
- level = data.delete(:level) || :info
- data = {:lib => 'que', :hostname => Socket.gethostname, :pid => Process.pid, :thread => Thread.current.object_id}.merge(data)
-
- if (l = logger) && output = log_formatter.call(data)
- l.send level, output
- end
- end
-
- def logger
- @logger.respond_to?(:call) ? @logger.call : @logger
- end
-
- def log_formatter
- @log_formatter ||= JSON.method(:dump)
- end
-
- def use_prepared_statements
- setting = @use_prepared_statements
- setting.nil? ? true : setting
- end
-
- def disable_prepared_statements
- warn "Que.disable_prepared_statements has been deprecated, please update your code to invert the result of Que.disable_prepared_statements instead. This shim will be removed in Que version 1.0.0."
- !use_prepared_statements
- end
-
- def disable_prepared_statements=(setting)
- warn "Que.disable_prepared_statements= has been deprecated, please update your code to pass the inverted value to Que.use_prepared_statements= instead. This shim will be removed in Que version 1.0.0."
- self.use_prepared_statements = !setting
- end
-
- def error_handler
- warn "Que.error_handler has been renamed to Que.error_notifier, please update your code. This shim will be removed in Que version 1.0.0."
- error_notifier
+ # How to actually access Que's established connection pool.
+ def pool
+ @pool || raise(Error, "Que connection not established!")
  end
 
- def error_handler=(p)
- warn "Que.error_handler= has been renamed to Que.error_notifier=, please update your code. This shim will be removed in Que version 1.0.0."
- self.error_notifier = p
- end
-
- def constantize(camel_cased_word)
- if camel_cased_word.respond_to?(:constantize)
- # Use ActiveSupport's version if it exists.
- camel_cased_word.constantize
- else
- camel_cased_word.split('::').inject(Object, &:const_get)
- end
- end
-
- # A helper method to manage transactions, used mainly by the migration
- # system. It's available for general use, but if you're using an ORM that
- # provides its own transaction helper, be sure to use that instead, or the
- # two may interfere with one another.
- def transaction
- adapter.checkout do
- if adapter.in_transaction?
- yield
- else
- begin
- execute "BEGIN"
- yield
- rescue => error
- raise
- ensure
- # Handle a raised error or a killed thread.
- if error || Thread.current.status == 'aborting'
- execute "ROLLBACK"
- else
- execute "COMMIT"
- end
- end
- end
- end
- end
-
- def json_converter
- @json_converter ||= INDIFFERENTIATOR
- end
-
- # Copy some of the Worker class' config methods here for convenience.
- [:mode, :mode=, :worker_count, :worker_count=, :wake_interval, :wake_interval=, :queue_name, :queue_name=, :wake!, :wake_all!].each do |meth|
- define_method(meth) { |*args| Worker.send(meth, *args) }
- end
+ # Set the current pool. Helpful for specs, but probably shouldn't be used
+ # generally.
+ attr_writer :pool
  end
+
+ # Set config defaults.
+ self.use_prepared_statements = true
  end
 
- require 'que/railtie' if defined? Rails::Railtie
+ # Load Rails features as appropriate.
+ require_relative 'que/rails/railtie' if defined?(::Rails::Railtie)
+ require_relative 'que/active_job/extensions' if defined?(::ActiveJob)
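The `SQL` store added in the new `lib/que.rb` is a compact pattern worth noting: overriding `[]=` on a single Hash instance normalizes every value on the way in, so multi-line SQL heredocs come out stripped, single-line, and frozen. Here's a standalone sketch of the same technique (the `:clear` key and the query text are illustrative examples, not necessarily Que's actual entries):

```ruby
# A Hash whose writer strips, squashes whitespace, and freezes each value,
# mirroring the SQL store defined in lib/que.rb so logged SQL reads on one line.
SQL = {}
def SQL.[]=(key, value)
  super(key, value.strip.gsub(/\s+/, ' ').freeze)
end

SQL[:clear] = <<-QUERY
  DELETE
    FROM que_jobs
QUERY

puts SQL[:clear]          # => "DELETE FROM que_jobs"
puts SQL[:clear].frozen?  # => true
```

Because only the writer is overridden, reads keep plain Hash semantics, and every stored string is guaranteed frozen and whitespace-normalized by the time anything executes or logs it.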