queue_classic 1.0.2 → 2.0.0rc1

@@ -1,22 +1,25 @@
  module QC
  class Worker
 
- MAX_LOCK_ATTEMPTS = (ENV["QC_MAX_LOCK_ATTEMPTS"] || 5).to_i
-
- def initialize
+ def initialize(q_name, top_bound, fork_worker, listening_worker, max_attempts)
  log("worker initialized")
- log("worker running exp. backoff algorith max_attempts=#{MAX_LOCK_ATTEMPTS}")
  @running = true
 
- @queue = QC::Queue.new(ENV["QUEUE"])
- log("worker table=#{@queue.database.table_name}")
+ @queue = Queue.new(q_name)
+ log("worker queue=#{@queue.name}")
+
+ @top_bound = top_bound
+ log("worker top_bound=#{@top_bound}")
 
- @fork_worker = ENV["QC_FORK_WORKER"] == "true"
+ @fork_worker = fork_worker
  log("worker fork=#{@fork_worker}")
 
- @listening_worker = ENV["QC_LISTENING_WORKER"] == "true"
+ @listening_worker = listening_worker
  log("worker listen=#{@listening_worker}")
 
+ @max_attempts = max_attempts
+ log("max lock attempts =#{@max_attempts}")
+
  handle_signals
  end
 
@@ -45,6 +48,9 @@ module QC
  end
  end
 
+ # This method should be overridden if
+ # your worker is forking and you need to
+ # re-establish database connections
  def setup_child
  log("forked worker running setup")
  end
@@ -70,16 +76,17 @@ module QC
  def work
  log("worker start working")
  if job = lock_job
- log("worker locked job=#{job.id}")
+ log("worker locked job=#{job[:id]}")
  begin
- job.work
- log("worker finished job=#{job.id}")
+ call(job).tap do
+ log("worker finished job=#{job[:id]}")
+ end
  rescue Object => e
- log("worker failed job=#{job.id} exception=#{e.inspect}")
- handle_failure(job,e)
+ log("worker failed job=#{job[:id]} exception=#{e.inspect}")
+ handle_failure(job, e)
  ensure
- @queue.delete(job)
- log("worker deleted job=#{job.id}")
+ @queue.delete(job[:id])
+ log("worker deleted job=#{job[:id]}")
  end
  end
  end
@@ -89,17 +96,17 @@ module QC
  attempts = 0
  job = nil
  until job
- job = @queue.dequeue
+ job = @queue.lock(@top_bound)
  if job.nil?
  log("worker missed lock attempt=#{attempts}")
  attempts += 1
- if attempts < MAX_LOCK_ATTEMPTS
+ if attempts < @max_attempts
  seconds = 2**attempts
  wait(seconds)
  log("worker tries again")
  next
  else
- log("worker reached max attempts. max=#{MAX_LOCK_ATTEMPTS}")
+ log("worker reached max attempts. max=#{@max_attempts}")
  break
  end
  else
@@ -109,13 +116,20 @@ module QC
  job
  end
 
+ def call(job)
+ args = job[:args]
+ klass = eval(job[:method].split(".").first)
+ message = job[:method].split(".").last
+ klass.send(message, *args)
+ end
+
  def wait(t)
  if can_listen?
  log("worker waiting on LISTEN")
- @queue.database.listen
- @queue.database.wait_for_notify(t)
- @queue.database.unlisten
- @queue.database.drain_notify
+ Conn.listen
+ Conn.wait_for_notify(t)
+ Conn.unlisten
+ Conn.drain_notify
  log("worker finished LISTEN")
  else
  log("worker sleeps seconds=#{t}")
@@ -133,7 +147,7 @@ module QC
  end
 
  def log(msg)
- Logger.puts(msg)
+ Log.info(msg)
  end
 
  end
data/readme.md CHANGED

@@ -1,6 +1,6 @@
  # queue_classic
 
- v1.0.2
+ v2.0.0rc1
 
  queue_classic is a PostgreSQL-backed queueing library that is focused on
  concurrent job locking, minimizing database load & providing a simple &
@@ -13,46 +13,141 @@ queue_classic features:
  * JSON encoding for jobs
  * Forking workers
  * Postgres' rock-solid locking mechanism
- * Fuzzy-FIFO support (1)
+ * Fuzzy-FIFO support [academic paper](http://www.cs.tau.ac.il/~shanir/nir-pubs-web/Papers/Lock_Free.pdf)
  * Long term support
 
- 1.Theory found here: http://www.cs.tau.ac.il/~shanir/nir-pubs-web/Papers/Lock_Free.pdf
-
  ## Proven
 
- I wrote queue_classic to solve a production problem. My problem was that I needed a
- queueing system that wouldn't fall over should I decide to press it nor should it freak out
- if I attached 100 workers to it. However, my problem didn't warrant adding an additional service.
- I was already using PostgreSQL to manage my application's data, why not use PostgreSQL to pass some messages?
- PostgreSQL was already handling thousands of reads and writes per second anyways. Why not add 35 more
- reads/writes per second to my established performance metric?
+ Queue_classic was designed out of necessity. I needed a message queue that was
+ fast, reliable, and low maintenance. It was built upon PostgreSQL out of a motivation
+ of not wanting to add a Redis or 0MQ service to my network of services. It boasts
+ a small API and very few features. It was designed to be simple. Thus, if you need
+ advanced queueing features, queue_classic is not for you; try 0MQ, RabbitMQ, or Redis.
+ But if you are already running a PostgreSQL database, and you need a simple mechanism to
+ distribute jobs to worker processes, then queue_classic is exactly what you need to be using.
+
+ ### Heroku Postgres
+
+ The Heroku Postgres team uses queue_classic to monitor the health of
+ customer databases. They process 200 jobs per second using a [fugu](https://postgres.heroku.com/pricing)
+ database. They chose queue_classic because of its simplicity and reliability.
 
- queue_classic handles over **3,000,000** jobs per day. It does this on Heroku's Ronin Database.
+ ### Cloudapp
 
- ## Quick Start
+ Larry uses queue_classic to deliver Cloudapp's push notifications and to collect file meta-data from S3.
+ Cloudapp processes nearly 14 jobs per second.
 
- See doc/installation.md for Rails instructions
+ ```
+ I haven't even touched QC since setting it up.
+ The best queue is the one you don't have to hand hold.
+
+ -- Larry Marburger
+ ```
+
+ ## Setup
+
+ In addition to installing the rubygem, you will need to prepare your database.
+ Database preparation includes creating a table and loading PL/pgSQL functions.
+ You can issue the database preparation commands using **PSQL(1)** or place them in a
+ database migration.
+
+ ### Quick Start
 
  ```bash
  $ createdb queue_classic_test
- $ psql queue_classic_test
- psql- CREATE TABLE queue_classic_jobs (id serial, details text, locked_at timestamp);
- psql- CREATE INDEX queue_classic_jobs_id_idx ON queue_classic_jobs (id);
+ $ psql queue_classic_test -c "CREATE TABLE queue_classic_jobs (id serial, q_name varchar(255), method varchar(255), args text, locked_at timestamp);"
  $ export QC_DATABASE_URL="postgres://username:password@localhost/queue_classic_test"
  $ gem install queue_classic
- $ ruby -r queue_classic -e "QC::Database.new.load_functions"
+ $ ruby -r queue_classic -e "QC::Queries.load_functions"
  $ ruby -r queue_classic -e "QC.enqueue('Kernel.puts', 'hello world')"
  $ ruby -r queue_classic -e "QC::Worker.new.start"
  ```
 
+ ### Ruby on Rails Setup
+
+ **Gemfile**
+
+ ```ruby
+ source :rubygems
+ gem "queue_classic", "2.0.0rc1"
+ ```
+
+ **Rakefile**
+
+ ```ruby
+ require "queue_classic"
+ require "queue_classic/tasks"
+ ```
+
+ **config/initializers/queue_classic.rb**
+
+ ```ruby
+ # Optional if you have this set in your shell environment or use Heroku.
+ ENV["DATABASE_URL"] = "postgres://username:password@localhost/database_name"
+ ```
+
+ **db/migrations/add_queue_classic.rb**
+
+ ```ruby
+ class CreateJobsTable < ActiveRecord::Migration
+
+ def self.up
+ create_table :queue_classic_jobs do |t|
+ t.string :q_name
+ t.string :method
+ t.text :args
+ t.timestamp :locked_at
+ end
+ add_index :queue_classic_jobs, :id
+ require "queue_classic"
+ QC::Queries.load_functions
+ end
+
+ def self.down
+ drop_table :queue_classic_jobs
+ require "queue_classic"
+ QC::Queries.drop_functions
+ end
+
+ end
+ ```
+
+ ### Sequel Setup
+
+ **db/migrations/1_add_queue_classic.rb**
+
+ ```ruby
+ Sequel.migration do
+ up do
+ create_table :queue_classic_jobs do
+ primary_key :id
+ String :q_name
+ String :method
+ String :args
+ Time :locked_at
+ end
+ require "queue_classic"
+ QC::Queries.load_functions
+ end
+
+ down do
+ drop_table :queue_classic_jobs
+ require "queue_classic"
+ QC::Queries.drop_functions
+ end
+ end
+ ```
+
  ## Configure
 
  ```bash
- # Enable logging.
- $VERBOSE
+ # Log level.
+ # export QC_LOG_LEVEL=`ruby -r "logger" -e "puts Logger::ERROR"`
+ $QC_LOG_LEVEL
 
  # Specifies the database that queue_classic will rely upon.
- $QC_DATABASE_URL || $DATABASE_URL
+ # queue_classic will try and use QC_DATABASE_URL before it uses DATABASE_URL.
+ $QC_DATABASE_URL
+ $DATABASE_URL
 
  # Fuzzy-FIFO
  # For strict FIFO set to 1. Otherwise, worker will
@@ -61,7 +156,8 @@ $QC_DATABASE_URL || $DATABASE_URL
  $QC_TOP_BOUND
 
  # If you want your worker to fork a new
- # child process for each job, set this var to 'true'
+ # UNIX process for each job, set this var to 'true'
+ #
  # Default: false
  $QC_FORK_WORKER
 
@@ -69,22 +165,422 @@ $QC_FORK_WORKER
  # if you want high throughput don't use Kernel.sleep
  # use LISTEN/NOTIFY sleep. When set to true, the worker's
  # sleep will be preempted by insertion into the queue.
+ #
  # Default: false
  $QC_LISTENING_WORKER
 
  # The worker uses an exp backoff algorithm. The base of
- # the exponent is 2. This var determines the max power of the
- # exp.
+ # the exponent is 2. This var determines the max power of the exp.
+ #
  # Default: 5 which implies max sleep time of 2^(5-1) => 16 seconds
  $QC_MAX_LOCK_ATTEMPTS
 
  # This var is important for consumers of the queue.
  # If you have configured many queues, this var will
  # instruct the worker to bind to a particular queue.
- # Default: queue_classic_jobs --which is the default queue table.
+ #
+ # Default: queue_classic_jobs
  $QUEUE
  ```
 
+ ## Usage
+
+ Users of queue_classic will be producing jobs (enqueue) or
+ consuming jobs (lock then delete).
+
+ ### Producer
+
+ You certainly don't need the queue_classic rubygem to put a job in the queue.
+
+ ```bash
+ $ psql queue_classic_test -c "INSERT INTO queue_classic_jobs (q_name, method, args) VALUES ('default', 'Kernel.puts', '[\"hello world\"]');"
+ ```
+
+ However, the rubygem will take care of converting your args to JSON and it will also dispatch
+ PUB/SUB notifications if the feature is enabled. It will also manage a connection to the database
+ that is independent of any other connection you may have in your application. Note: if your
+ queue table is in your application's database, your application's process will have 2 connections
+ to the database: one for your application and another for queue_classic.
+
+ The Ruby API for producing jobs is pretty simple:
+
+ ```ruby
+ # This method has no arguments.
+ QC.enqueue("Time.now")
+
+ # This method has 1 argument.
+ QC.enqueue("Kernel.puts", "hello world")
+
+ # This method has 2 arguments.
+ QC.enqueue("Kernel.printf", "hello %s", "world")
+
+ # This method has a hash argument.
+ QC.enqueue("Kernel.puts", {"hello" => "world"})
+
+ # This method has an array argument.
+ QC.enqueue("Kernel.puts", ["hello", "world"])
+ ```
+
+ The basic idea is that all arguments should be easily encoded to JSON. OkJson
+ is used to encode the arguments, so the arguments can be anything that OkJson can encode.
+
+ ```ruby
+ # Won't work!
+ OkJson.encode({:test => "test"})
+
+ # OK
+ OkJson.encode({"test" => "test"})
+ ```
+
+ To see more information on usage, take a look at the test files in the source code. Also,
+ read up on [OkJson](https://github.com/kr/okjson).
+
+ #### Multiple Queues
+
+ The table containing the jobs has a column named *q_name*. This column
+ is the abstraction queue_classic uses to represent multiple queues. This allows
+ the programmer to place triggers and indices on distinct queues.
+
+ ```ruby
+ # attach to the priority_queue. this will insert
+ # jobs with the column q_name = 'priority_queue'
+ p_queue = QC::Queue.new("priority_queue")
+
+ # This method has no arguments.
+ p_queue.enqueue("Time.now")
+
+ # This method has 1 argument.
+ p_queue.enqueue("Kernel.puts", "hello world")
+
+ # This method has 2 arguments.
+ p_queue.enqueue("Kernel.printf", "hello %s", "world")
+
+ # This method has a hash argument.
+ p_queue.enqueue("Kernel.puts", {"hello" => "world"})
+
+ # This method has an array argument.
+ p_queue.enqueue("Kernel.puts", ["hello", "world"])
+ ```
+
+ This code example shows how to produce jobs into a custom queue. To consume
+ jobs from the custom queue, be sure to set the `$QUEUE`
+ var to the q_name in the worker's UNIX environment.
+
+ ### Consumer
+
+ Now that you have some jobs in your queue, you probably want to work them.
+ Let's find out how... If you are using a Rakefile and have included `queue_classic/tasks`,
+ then you can enter the following command to start a worker:
+
+ #### Rake Task
+
+ To work jobs from the default queue:
+
+ ```bash
+ $ bundle exec rake qc:work
+ ```
+
+ To work jobs from a custom queue:
+
+ ```bash
+ $ QUEUE="p_queue" bundle exec rake qc:work
+ ```
+
+ #### Bin File
+
+ The approach that I take when building simple Ruby programs and Sinatra apps is to
+ create an executable file that starts the worker. Start by making a bin directory
+ in your project's root directory. Then add a file called worker.
+
+ **bin/worker**
+
+ ```ruby
+ #!/usr/bin/env ruby
+ # encoding: utf-8
+
+ trap('INT') {exit}
+ trap('TERM') {exit}
+
+ require "your_app"
+ require "queue_classic"
+ worker = QC::Worker.new(q_name, top_bound, fork_worker, listening_worker, max_attempts)
+ worker.start
+ ```
+
+ #### Subclass QC::Worker
+
+ Now that we have seen how to run a worker process, let's take a look at how to customize a worker.
+ The class `QC::Worker` will probably suit most of your needs; however, there are some mechanisms
+ that you will want to override. For instance, if you are using a forking worker, you will need to
+ open a new database connection in the child process that is doing your work. Also, you may want to
+ define how a failed job should behave. The default failure handler will simply print the job to stdout.
+ You can define a failure method that will enqueue the job again, or move it to another table, etc.
+
+ ```ruby
+ require "queue_classic"
+
+ class MyWorker < QC::Worker
+
+ # retry the job
+ def handle_failure(job, exception)
+ @queue.enqueue(job[:method], job[:args])
+ end
+
+ # the forked proc needs a new db connection
+ def setup_child
+ ActiveRecord::Base.establish_connection
+ end
+
+ end
+ ```
+
+ Notice that we have access to the `@queue` instance variable. Read the tests
+ and the worker class for more information on what you can do inside of the worker.
+
+ **bin/worker**
+
+ ```ruby
+ #!/usr/bin/env ruby
+ # encoding: utf-8
+
+ trap('INT') {exit}
+ trap('TERM') {exit}
+
+ require "your_app"
+ require "queue_classic"
+ require "my_worker"
+
+ worker = MyWorker.new(q_name, top_bound, fork_worker, listening_worker, max_attempts)
+ worker.start
+ ```
+
+ #### QC::Worker Details
+
+ ##### General Idea
+
+ The worker class (QC::Worker) is designed to be extended via inheritance. Any of
+ its methods should be considered for extension. There are a few in particular
+ that act as stubs in hopes that the user will override them. Such methods
+ include `handle_failure()` and `setup_child()`. See the section near the bottom
+ for a detailed description of how to subclass the worker.
+
+ ##### Algorithm
+
+ When we ask the worker to start, it will enter a loop with a stop condition
+ dependent upon a method named `running?`. On each pass through the loop, the worker will
+ attempt to select and lock a job. If it cannot lock a job on its first attempt, it will
+ use an exponential back-off technique to try again.
+
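
In condensed form, the locking loop looks roughly like this. This is a sketch of the idea described above, not the gem's exact source:

```ruby
# Sketch: keep trying to lock a job, backing off exponentially between misses.
attempts = 0
job = nil
until job
  job = @queue.lock(@top_bound)   # nil when no job could be locked
  if job.nil?
    attempts += 1
    break if attempts >= @max_attempts
    wait(2**attempts)             # 2, 4, 8, ... seconds between attempts
  end
end
```
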
+ ##### Signals
+
+ *INT, TERM*: Both of these signals will ensure that the `running?` method returns
+ false. If the worker is waiting, as it does per the exponential back-off
+ technique, then a second signal must be sent.
+
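
One plausible shape for such a signal handler is sketched below; the gem's actual `handle_signals` may differ in detail:

```ruby
# Sketch: the first INT/TERM asks the loop to stop after the current
# iteration; a second one terminates the process outright.
def handle_signals
  %w(INT TERM).each do |sig|
    trap(sig) do
      if @running
        @running = false   # running? now returns false
      else
        exit               # second signal while waiting: exit immediately
      end
    end
  end
end
```
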
379
+ ##### Forking
380
+
381
+ There are many reasons why you would and would not want your worker to fork.
382
+ An argument against forking may be that you want low latency in your job
383
+ execution. An argument in favor of forking is that your jobs leak memory and do
384
+ all sorts of crazy things, thus warranting the cleanup that fork allows.
385
+ Nevertheless, forking is not enabled by default. To instruct your worker to
386
+ fork, ensure the following shell variable is set:
387
+
388
+ ```bash
389
+ $ export QC_FORK_WORKER='true'
390
+ ```
391
+
392
+ One last note on forking. It is often the case that after Ruby forks a process,
393
+ some sort of setup needs to be done. For instance, you may want to re-establish
394
+ a database connection, or get a new file descriptor. queue_classic's worker
395
+ provides a hook that is called immediately after `Kernel.fork`. To use this hook
396
+ subclass the worker and override `setup_child()`.
397
+
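
Roughly, a forked work cycle has the following shape. This is a sketch with illustrative method names (`fork_and_work`, `work_job`), not the gem's source:

```ruby
# Sketch: the parent forks, the child runs setup_child before touching the
# job, and the parent waits for the child to finish.
def fork_and_work(job)
  pid = fork do
    setup_child     # e.g. re-establish your database connection
    work_job(job)   # stand-in for "actually run the job"
  end
  Process.wait(pid)
end
```
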
+ ##### LISTEN/NOTIFY
+
+ The exponential back-off algorithm will require our worker to wait if it does
+ not succeed in locking a job. How we wait is something that can vary. PostgreSQL
+ has a wonderful feature that we can use to wait intelligently. Processes can LISTEN on a channel and be
+ alerted to notifications. queue_classic uses this feature to block until a
+ notification is received. If this feature is disabled, the worker will call
+ `Kernel.sleep(t)` where t is set by our exponential back-off algorithm. However,
+ if we are using LISTEN/NOTIFY then we can enter a type of sleep that can be
+ interrupted by a NOTIFY. For example, say we just started to wait for 2 seconds.
+ After the first millisecond of waiting, a job was enqueued. With LISTEN/NOTIFY
+ enabled, our worker would immediately preempt the wait and attempt to lock the job. This
+ allows our worker to be much more responsive. In the case there is no
+ notification, the worker will quit waiting after the timeout has expired.
+
+ LISTEN/NOTIFY is disabled by default but can be enabled by setting the following shell variable:
+
+ ```bash
+ $ export QC_LISTENING_WORKER='true'
+ ```
+
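
The mechanism can be seen with the pg gem directly. A minimal sketch; the channel name `queue_classic_jobs` is an assumption here, and queue_classic wraps all of this for you:

```ruby
require "pg"

conn = PG.connect(dbname: "queue_classic_test")
conn.exec("LISTEN queue_classic_jobs")   # channel name is an assumption

# Blocks for up to 8 seconds, but returns as soon as a NOTIFY arrives.
channel = conn.wait_for_notify(8)
puts(channel ? "woken early, try to lock a job" : "timed out, keep backing off")

conn.exec("UNLISTEN *")
```
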
+ ##### Failure
+
+ I bet your worker will encounter a job that raises an exception. Queue_classic
+ thinks that you should know about this exception by means of your established
+ exception tracker (e.g. Hoptoad, Exceptional). To that end, queue_classic offers
+ a method that you can override. This method will be passed 2 arguments: the
+ job and the exception instance. Here are a few examples of things you might want
+ to do inside `handle_failure()`.
+
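
For example, a failure handler might report the error to whatever exception tracker you already run and then put the job back on the queue. A minimal sketch; `ErrorTracker` is a stand-in for your own notifier, not part of queue_classic:

```ruby
require "queue_classic"

class MyWorker < QC::Worker
  def handle_failure(job, exception)
    log("worker failed job method=#{job[:method]} error=#{exception.inspect}")
    ErrorTracker.notify(exception)             # stand-in for Hoptoad, Exceptional, etc.
    @queue.enqueue(job[:method], job[:args])   # retry by re-enqueueing the job
  end
end
```
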
+ ## Tips and Tricks
+
+ ### Running Synchronously for tests
+
+ Author: [@em_csquared](https://twitter.com/#!/em_csquared)
+
+ I was testing some code that started out handling some work in a web request and
+ wanted to move that work over to a queue. After completing a red-green-refactor,
+ I did not want my tests to have to worry about workers or even hit the database.
+
+ Turns out it's easy to get QueueClassic to just work in a synchronous way with:
+
+ ```ruby
+ def QC.enqueue(function_call, *args)
+ eval("#{function_call} *args")
+ end
+ ```
+
+ Now you can test QueueClassic as if it was calling your method directly!
+
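
With that override in place, a test can assert on the side effect immediately. A sketch using MiniTest; `Invoice` is an illustrative class from your own application:

```ruby
require "minitest/autorun"

class InvoiceTest < MiniTest::Unit::TestCase
  def test_charge_runs_inline
    QC.enqueue("Invoice.charge", 42)   # runs Invoice.charge(42) right away
    assert Invoice.find(42).charged?
  end
end
```
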
+ ### Dispatching new jobs to workers without new code
+
+ Author: [@ryandotsmith (ace hacker)](https://twitter.com/#!/ryandotsmith)
+
+ The other day I found myself in a position in which I needed to delete a few
+ thousand records. The tough part of this situation is that I needed to ensure
+ the ActiveRecord callbacks were run on these objects, thus making a simple SQL
+ statement unfeasible. Also, I didn't want to wait all day to select and destroy
+ these objects. queue_classic to the rescue! (no pun intended)
+
+ The API of queue_classic enables you to quickly dispatch jobs to workers. In my
+ case I wanted to call `Invoice.destroy(id)` a few thousand times. I fired up a
+ Heroku console session and executed this line:
+
+ ```ruby
+ Invoice.find(:all, :select => "id", :conditions => "some condition").map {|i| QC.enqueue("Invoice.destroy", i.id) }
+ ```
+
+ With the help of 20 workers I was able to destroy all of these records
+ (preserving their callbacks) in a few minutes.
+
+ ### Enqueueing batches of jobs
+
+ Author: [@ryandotsmith (ace hacker)](https://twitter.com/#!/ryandotsmith)
+
+ I have seen several cases where the application will enqueue jobs in batches. For instance, you may be sending
+ 1,000 emails out. In this case, it would be foolish to do 1,000 individual transactions. Instead, you want to open
+ a new transaction, enqueue all of your jobs and then commit the transaction. This will save tons of time in the
+ database.
+
+ To achieve this we will create a helper method:
+
+ ```ruby
+ def qc_txn
+ begin
+ QC.database.execute("BEGIN")
+ yield
+ QC.database.execute("COMMIT")
+ rescue Exception
+ QC.database.execute("ROLLBACK")
+ raise
+ end
+ end
+ ```
+
+ Now in your application code you can do something like:
+
+ ```ruby
+ qc_txn do
+ Account.all.each do |act|
+ QC.enqueue("Emailer.send_notice", act.id)
+ end
+ end
+ ```
+
+ ### Scheduling Jobs
+
+ Author: [@ryandotsmith (ace hacker)](https://twitter.com/#!/ryandotsmith)
+
+ Many popular queueing solutions provide support for scheduling. Features like
+ Redis-Scheduler and the run_at column in DJ are very important to the web
+ application developer. While queue_classic does not offer any sort of scheduling
+ features, I do not discount the importance of the concept. However, it is my
+ belief that a scheduler has no place in a queueing library; to that end, I will
+ show you how to schedule jobs using queue_classic and the clockwork gem.
+
+ #### Example
+
+ In this example, we are working with a system that needs to compute a sales
+ summary at the end of each day. Let's say that we need to compute a summary for
+ each sales employee in the system.
+
+ Instead of enqueueing jobs with run_at set to 24-hour intervals,
+ we will define a clock process to enqueue the jobs at a specified
+ time on each day. Let us create a file and call it clock.rb:
+
+ ```ruby
+ handler {|job| QC.enqueue(job)}
+ every(1.day, "SalesSummaryGenerator.build_daily_report", :at => "01:00")
+ ```
+
+ To start our scheduler, we will use the clockwork bin:
+
+ ```bash
+ $ clockwork clock.rb
+ ```
+
+ Now each day at 01:00 we will be sending the build_daily_report message to our
+ SalesSummaryGenerator class.
+
+ I found this abstraction quite powerful and easy to understand. Like
+ queue_classic, the clockwork gem is simple to understand and has 0 dependencies.
+ In production, I create a Heroku process type called clock. This is typically
+ what my Procfile looks like:
+
+ ```
+ worker: rake jobs:work
+ clock: clockwork clock.rb
+ ```
+
+ ## Upgrading From Older Versions
+
+ ### 0.2.X to 0.3.X
+
+ * Deprecated QC.queue_length in favor of QC.length
+ * Locking functions need to be loaded into the database via `$ rake qc:load_functions`
+
+ Also, the default queue is no longer named jobs,
+ it is named queue_classic_jobs. Renaming the table is the only change that needs to be made.
+
+ ```bash
+ $ psql your_database -c "ALTER TABLE jobs RENAME TO queue_classic_jobs;"
+ ```
+
+ Or if you are using Rails' Migrations:
+
+ ```ruby
+ class RenameJobsTable < ActiveRecord::Migration
+
+ def self.up
+ rename_table :jobs, :queue_classic_jobs
+ remove_index :jobs, :id
+ add_index :queue_classic_jobs, :id
+ end
+
+ def self.down
+ rename_table :queue_classic_jobs, :jobs
+ remove_index :queue_classic_jobs, :id
+ add_index :jobs, :id
+ end
+
+ end
+ ```
+
  ## Hacking on queue_classic
 
  ### Dependencies
@@ -101,10 +597,3 @@ $ createdb queue_classic_test
  $ export QC_DATABASE_URL="postgres://username:pass@localhost/queue_classic_test"
  $ rake
  ```
-
- ## Other Resources
-
- * [Discussion Group](http://groups.google.com/group/queue_classic "discussion group")
- * [Documentation](https://github.com/ryandotsmith/queue_classic/tree/master/doc)
- * [Example Rails App](https://github.com/ryandotsmith/queue_classic_example)
- * [Slide Deck](http://dl.dropbox.com/u/1579953/talks/queue_classic.pdf)