mathie-delayed_job 1.8.4

@@ -0,0 +1 @@
+ *.gem
@@ -0,0 +1,20 @@
+ Copyright (c) 2005 Tobias Luetke
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
@@ -0,0 +1,190 @@
+ h1. Fork note
+
+ This is a temporary fork, backporting recruitmilitary's identifier patch to
+ 1.8.4 stable. It'll go away again once we don't need to deploy with this
+ combination!
+
+ h1. Delayed::Job
+
+ Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background.
+
+ It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks. Amongst those tasks are:
+
+ * sending massive newsletters
+ * image resizing
+ * http downloads
+ * updating smart collections
+ * updating solr, our search server, after product changes
+ * batch imports
+ * spam checks
+
+ h2. Installation
+
+ To install as a gem, add the following to @config/environment.rb@:
+
+ <pre>
+ config.gem 'collectiveidea-delayed_job', :lib => 'delayed_job',
+   :source => 'http://gems.github.com'
+ </pre>
+
+ Rake tasks are not automatically loaded from gems, so you'll need to add the following to your Rakefile:
+
+ <pre>
+ begin
+   require 'delayed/tasks'
+ rescue LoadError
+   STDERR.puts "Run `rake gems:install` to install delayed_job"
+ end
+ </pre>
+
+ To install as a plugin:
+
+ <pre>
+ script/plugin install git://github.com/collectiveidea/delayed_job.git
+ </pre>
+
+ After delayed_job is installed, run:
+
+ <pre>
+ script/generate delayed_job
+ rake db:migrate
+ </pre>
+
+ h2. Upgrading to 1.8
+
+ If you are upgrading from a previous release, you will need to generate the new @script/delayed_job@:
+
+ <pre>
+ script/generate delayed_job --skip-migration
+ </pre>
+
+ h2. Queuing Jobs
+
+ Call @#send_later(method, params)@ on any object and it will be processed in the background.
+
+ <pre>
+ # without delayed_job
+ Notifier.deliver_signup(@user)
+
+ # with delayed_job
+ Notifier.send_later :deliver_signup, @user
+ </pre>
+
+ If a method should always be run in the background, you can call @#handle_asynchronously@ after the method declaration:
+
+ <pre>
+ class Device
+   def deliver
+     # long running method
+   end
+   handle_asynchronously :deliver
+ end
+
+ device = Device.new
+ device.deliver
+ </pre>
+
+ h2. Running Jobs
+
+ @script/delayed_job@ can be used to manage a background process which will start working off jobs.
+
+ <pre>
+ $ RAILS_ENV=production script/delayed_job start
+ $ RAILS_ENV=production script/delayed_job stop
+
+ # Runs two workers in separate processes.
+ $ RAILS_ENV=production script/delayed_job -n 2 start
+ $ RAILS_ENV=production script/delayed_job stop
+ </pre>
+
+ Workers can be running on any computer, as long as they have access to the database and their clocks are in sync. Keep in mind that each worker will check the database at least every 5 seconds.
+
+ You can also invoke @rake jobs:work@ which will start working off jobs. You can cancel the rake task with @CTRL-C@.
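+
+ If you just want to drain the queue once, without a daemon, here is a minimal sketch using the @Delayed::Job.work_off@ method defined in @lib/delayed/job.rb@ (run it from @script/console@, for example):
+
+ <pre>
+ # Work off up to 100 pending jobs in the current process and report the outcome.
+ # work_off returns an array of [successes, failures].
+ successes, failures = Delayed::Job.work_off(100)
+ puts "#{successes} jobs succeeded, #{failures} jobs failed"
+ </pre>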
+
+ h2. Custom Jobs
+
+ Jobs are simple ruby objects with a method called perform. Any object which responds to perform can be stuffed into the jobs table. Job objects are serialized to yaml so that they can later be resurrected by the job runner.
+
+ <pre>
+ class NewsletterJob < Struct.new(:text, :emails)
+   def perform
+     emails.each { |e| NewsletterMailer.deliver_text_to_email(text, e) }
+   end
+ end
+
+ Delayed::Job.enqueue NewsletterJob.new('lorem ipsum...', Customers.find(:all).collect(&:email))
+ </pre>
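+
+ @Delayed::Job.enqueue@ also accepts an optional priority and run_at time as its second and third arguments (see the @enqueue@ method in @lib/delayed/job.rb@), so a job can jump the queue or be scheduled for later. For example:
+
+ <pre>
+ # Priority 1 runs ahead of the default priority 0; run_at delays the job by an hour.
+ emails = Customers.find(:all).collect(&:email)
+ Delayed::Job.enqueue NewsletterJob.new('lorem ipsum...', emails), 1, 1.hour.from_now
+ </pre>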
+
+ h2. Gory Details
+
+ The library revolves around a delayed_jobs table which looks as follows:
+
+   create_table :delayed_jobs, :force => true do |table|
+     table.integer :priority, :default => 0 # Allows some jobs to jump to the front of the queue
+     table.integer :attempts, :default => 0 # Provides for retries, but still fail eventually.
+     table.text :handler # YAML-encoded string of the object that will do work
+     table.text :last_error # reason for last failure (See Note below)
+     table.datetime :run_at # When to run. Could be Time.now for immediately, or sometime in the future.
+     table.datetime :locked_at # Set when a client is working on this object
+     table.datetime :failed_at # Set when all retries have failed (actually, by default, the record is deleted instead)
+     table.string :locked_by # Who is working on this object (if locked)
+     table.timestamps
+   end
+
+ On failure, the job is scheduled again in 5 seconds + N ** 4, where N is the number of retries.
+
+ The default MAX_ATTEMPTS is 25. After this, the job is either deleted (default), or left in the database with "failed_at" set.
+ With the default of 25 attempts, the last retry will be 20 days later, with the last interval being almost 100 hours.
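+
+ To get a feel for how that backs off, here is a quick standalone sketch (not part of the library) that prints the approximate delay added before each retry:
+
+ <pre>
+ # Delay in seconds before retry N, following the 5 + N**4 rule above.
+ (1..25).each do |n|
+   secs = 5 + n ** 4
+   printf "retry %2d: ~%6d seconds (%.1f hours)\n", n, secs, secs / 3600.0
+ end
+ </pre>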
+
+ The default MAX_RUN_TIME is 4.hours. If your job takes longer than that, another computer could pick it up. It's up to you to
+ make sure your job doesn't exceed this time. You should set this to the longest time you think the job could take.
+
+ By default, delayed_job will delete failed jobs (and it always deletes successful jobs). If you want to keep failed jobs, set
+ Delayed::Job.destroy_failed_jobs = false. The failed jobs will be marked with a non-null failed_at.
+
+ Here is an example of changing job parameters in Rails:
+
+ <pre>
+ # config/initializers/delayed_job_config.rb
+ Delayed::Job.destroy_failed_jobs = false
+ silence_warnings do
+   Delayed::Job.const_set("MAX_ATTEMPTS", 3)
+   Delayed::Job.const_set("MAX_RUN_TIME", 5.minutes)
+ end
+ </pre>
+
+ h3. Cleaning up
+
+ You can invoke @rake jobs:clear@ to delete all jobs in the queue.
+
+ h2. Mailing List
+
+ Join us on the mailing list at http://groups.google.com/group/delayed_job
+
+ h2. How to contribute
+
+ If you find what looks like a bug:
+
+ # Check the GitHub issue tracker to see if anyone else has had the same issue.
+ http://github.com/collectiveidea/delayed_job/issues/
+ # If you don't see anything, create an issue with information on how to reproduce it.
+
+ If you want to contribute an enhancement or a fix:
+
+ # Fork the project on GitHub.
+ http://github.com/collectiveidea/delayed_job/
+ # Make your changes with tests.
+ # Commit the changes without making changes to the Rakefile, VERSION, or any other files that aren't related to your enhancement or fix.
+ # Send a pull request.
+
+ h3. Changes
+
+ * 1.7.0: Added failed_at column which can optionally be set after a certain amount of failed job attempts. By default failed job attempts are destroyed after about a month.
+
+ * 1.6.0: Renamed locked_until to locked_at. We now store when we start a given job instead of how long it will be locked by the worker. This allows us to get a reading on how long a job took to execute.
+
+ * 1.5.0: Job runners can now be run in parallel. Two new database columns are needed: locked_until and locked_by. This allows us to use pessimistic locking instead of relying on row level locks. This enables us to run as many worker processes as we need to speed up queue processing.
+
+ * 1.2.0: Added #send_later to Object for simpler job creation
+
+ * 1.0.0: Initial release
@@ -0,0 +1,34 @@
+ # -*- encoding: utf-8 -*-
+ begin
+   require 'jeweler'
+ rescue LoadError
+   puts "Jeweler not available. Install it with: sudo gem install technicalpickles-jeweler -s http://gems.github.com"
+   exit 1
+ end
+
+ Jeweler::Tasks.new do |s|
+   s.name = "mathie-delayed_job"
+   s.summary = "Database-backed asynchronous priority queue system -- Extracted from Shopify"
+   s.email = "tobi@leetsoft.com"
+   s.homepage = "http://github.com/mathie/delayed_job"
+   s.description = "Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background. It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks."
+   s.authors = ["Brandon Keepers", "Tobias Lütke"]
+
+   s.has_rdoc = true
+   s.rdoc_options = ["--main", "README.textile", "--inline-source", "--line-numbers"]
+   s.extra_rdoc_files = ["README.textile"]
+
+   s.test_files = Dir['spec/**/*']
+ end
+
+ require 'spec/rake/spectask'
+
+ task :default => :spec
+
+ desc 'Run the specs'
+ Spec::Rake::SpecTask.new(:spec) do |t|
+   t.libs << 'lib'
+   t.pattern = 'spec/**/*_spec.rb'
+   t.verbose = true
+ end
+
data/VERSION ADDED
@@ -0,0 +1 @@
+ 1.8.4
@@ -0,0 +1,14 @@
+ # an example Monit configuration file for delayed_job
+ # See: http://stackoverflow.com/questions/1226302/how-to-monitor-delayedjob-with-monit/1285611
+ #
+ # To use:
+ # 1. copy to /var/www/apps/{app_name}/shared/delayed_job.monitrc
+ # 2. replace {app_name} as appropriate
+ # 3. add this to your /etc/monit/monitrc
+ #
+ #   include /var/www/apps/{app_name}/shared/delayed_job.monitrc
+
+ check process delayed_job
+   with pidfile /var/www/apps/{app_name}/shared/pids/delayed_job.pid
+   start program = "RAILS_ENV=production /var/www/apps/{app_name}/current/script/delayed_job start"
+   stop program = "RAILS_ENV=production /var/www/apps/{app_name}/current/script/delayed_job stop"
@@ -0,0 +1,22 @@
+ class DelayedJobGenerator < Rails::Generator::Base
+   default_options :skip_migration => false
+
+   def manifest
+     record do |m|
+       m.template 'script', 'script/delayed_job', :chmod => 0755
+       unless options[:skip_migration]
+         m.migration_template "migration.rb", 'db/migrate',
+           :migration_file_name => "create_delayed_jobs"
+       end
+     end
+   end
+
+   protected
+
+   def add_options!(opt)
+     opt.separator ''
+     opt.separator 'Options:'
+     opt.on("--skip-migration", "Don't generate a migration") { |v| options[:skip_migration] = v }
+   end
+
+ end
@@ -0,0 +1,20 @@
+ class CreateDelayedJobs < ActiveRecord::Migration
+   def self.up
+     create_table :delayed_jobs, :force => true do |table|
+       table.integer :priority, :default => 0 # Allows some jobs to jump to the front of the queue
+       table.integer :attempts, :default => 0 # Provides for retries, but still fail eventually.
+       table.text :handler # YAML-encoded string of the object that will do work
+       table.text :last_error # reason for last failure (See Note below)
+       table.datetime :run_at # When to run. Could be Time.now for immediately, or sometime in the future.
+       table.datetime :locked_at # Set when a client is working on this object
+       table.datetime :failed_at # Set when all retries have failed (actually, by default, the record is deleted instead)
+       table.string :locked_by # Who is working on this object (if locked)
+       table.timestamps
+     end
+
+   end
+
+   def self.down
+     drop_table :delayed_jobs
+   end
+ end
@@ -0,0 +1,5 @@
+ #!/usr/bin/env ruby
+
+ require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment'))
+ require 'delayed/command'
+ Delayed::Command.new(ARGV).daemonize
data/init.rb ADDED
@@ -0,0 +1 @@
+ require File.dirname(__FILE__) + '/lib/delayed_job'
@@ -0,0 +1,90 @@
+ require 'rubygems'
+ require 'daemons'
+ require 'optparse'
+
+ module Delayed
+   class Command
+     attr_accessor :worker_count
+
+     def initialize(args)
+       @files_to_reopen = []
+       @options = {:quiet => true}
+
+       @worker_count = 1
+
+       opts = OptionParser.new do |opts|
+         opts.banner = "Usage: #{File.basename($0)} [options] start|stop|restart|run"
+
+         opts.on('-h', '--help', 'Show this message') do
+           puts opts
+           exit 1
+         end
+         opts.on('-e', '--environment=NAME', 'Specifies the environment to run this delayed jobs under (test/development/production).') do |e|
+           STDERR.puts "The -e/--environment option has been deprecated and has no effect. Use RAILS_ENV and see http://github.com/collectiveidea/delayed_job/issues/#issue/7"
+         end
+         opts.on('--min-priority N', 'Minimum priority of jobs to run.') do |n|
+           @options[:min_priority] = n
+         end
+         opts.on('--max-priority N', 'Maximum priority of jobs to run.') do |n|
+           @options[:max_priority] = n
+         end
+         opts.on('-i', '--identifier=n', 'A numeric identifier for the worker.') do |n|
+           @options[:identifier] = n
+         end
+         opts.on('-n', '--number_of_workers=workers', "Number of unique workers to spawn") do |worker_count|
+           @worker_count = worker_count.to_i rescue 1
+         end
+       end
+       @args = opts.parse!(args)
+     end
+
+     def daemonize
+       ObjectSpace.each_object(File) do |file|
+         @files_to_reopen << file unless file.closed?
+       end
+
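+       # The identifier patch backported by this fork: a single worker started with
+       # --identifier gets a stable, user-chosen daemon name (delayed_job.<identifier>)
+       # instead of the default index-based name.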
+       if @worker_count > 1 && @options[:identifier]
+         raise ArgumentError, 'Cannot specify both --number-of-workers and --identifier'
+       elsif @worker_count == 1 && @options[:identifier]
+         process_name = "delayed_job.#{@options[:identifier]}"
+         run_process(process_name)
+       else
+         worker_count.times do |worker_index|
+           process_name = worker_count == 1 ? "delayed_job" : "delayed_job.#{worker_index}"
+           run_process(process_name)
+         end
+       end
+     end
+
+     def run_process(process_name)
+       Daemons.run_proc(process_name, :dir => "#{RAILS_ROOT}/tmp/pids", :dir_mode => :normal, :ARGV => @args) do |*args|
+         run process_name
+       end
+     end
+
+     def run(worker_name = nil)
+       Dir.chdir(RAILS_ROOT)
+
+       # Re-open file handles
+       @files_to_reopen.each do |file|
+         begin
+           file.reopen File.join(RAILS_ROOT, 'log', 'delayed_job.log'), 'w+'
+           file.sync = true
+         rescue ::Exception
+         end
+       end
+
+       Delayed::Worker.logger = Rails.logger
+       ActiveRecord::Base.connection.reconnect!
+
+       Delayed::Job.worker_name = "#{worker_name} #{Delayed::Job.worker_name}"
+
+       Delayed::Worker.new(@options).start
+     rescue => e
+       Rails.logger.fatal e
+       STDERR.puts e.message
+       exit 1
+     end
+
+   end
+ end
@@ -0,0 +1,270 @@
+ require 'timeout'
+
+ module Delayed
+
+   class DeserializationError < StandardError
+   end
+
+   # A job object that is persisted to the database.
+   # Contains the work object as a YAML field.
+   class Job < ActiveRecord::Base
+     MAX_ATTEMPTS = 25
+     MAX_RUN_TIME = 4.hours
+     set_table_name :delayed_jobs
+
+     # By default failed jobs are destroyed after too many attempts.
+     # If you want to keep them around (perhaps to inspect the reason
+     # for the failure), set this to false.
+     cattr_accessor :destroy_failed_jobs
+     self.destroy_failed_jobs = true
+
+     # Every worker has a unique name which by default is the pid of the process.
+     # There are some advantages to overriding this with something which survives worker restarts:
+     # Workers can safely resume working on tasks which are locked by themselves. The worker will assume that it crashed before.
+     cattr_accessor :worker_name
+     self.worker_name = "host:#{Socket.gethostname} pid:#{Process.pid}" rescue "pid:#{Process.pid}"
+
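+     # A job is runnable when it is due (run_at has passed), not marked failed, and
+     # either unlocked, locked longer ago than max_run_time, or locked by this worker.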
+     NextTaskSQL = '(run_at <= ? AND (locked_at IS NULL OR locked_at < ?) OR (locked_by = ?)) AND failed_at IS NULL'
+     NextTaskOrder = 'priority DESC, run_at ASC'
+
+     ParseObjectFromYaml = /\!ruby\/\w+\:([^\s]+)/
+
+     cattr_accessor :min_priority, :max_priority
+     self.min_priority = nil
+     self.max_priority = nil
+
+     # When a worker is exiting, make sure we don't have any locked jobs.
+     def self.clear_locks!
+       update_all("locked_by = null, locked_at = null", ["locked_by = ?", worker_name])
+     end
+
+     def failed?
+       failed_at
+     end
+     alias_method :failed, :failed?
+
+     def payload_object
+       @payload_object ||= deserialize(self['handler'])
+     end
+
+     def name
+       @name ||= begin
+         payload = payload_object
+         if payload.respond_to?(:display_name)
+           payload.display_name
+         else
+           payload.class.name
+         end
+       end
+     end
+
+     def payload_object=(object)
+       self['handler'] = object.to_yaml
+     end
+
+     # Reschedule the job in the future (when a job fails).
+     # Uses an exponential scale depending on the number of failed attempts.
+     def reschedule(message, backtrace = [], time = nil)
+       if (self.attempts += 1) < MAX_ATTEMPTS
+         time ||= Job.db_time_now + (attempts ** 4) + 5
+
+         self.run_at = time
+         self.last_error = message + "\n" + backtrace.join("\n")
+         self.unlock
+         save!
+       else
+         logger.info "* [JOB] PERMANENTLY removing #{self.name} because of #{attempts} consecutive failures."
+         destroy_failed_jobs ? destroy : update_attribute(:failed_at, Time.now)
+       end
+     end
+
+
+     # Try to run one job. Returns true/false (work done/work failed) or nil if job can't be locked.
+     def run_with_lock(max_run_time, worker_name)
+       logger.info "* [JOB] acquiring lock on #{name}"
+       unless lock_exclusively!(max_run_time, worker_name)
+         # We did not get the lock, some other worker process must have
+         logger.warn "* [JOB] failed to acquire exclusive lock for #{name}"
+         return nil # no work done
+       end
+
+       begin
+         runtime = Benchmark.realtime do
+           Timeout.timeout(max_run_time.to_i) { invoke_job }
+           destroy
+         end
+         # TODO: warn if runtime > max_run_time ?
+         logger.info "* [JOB] #{name} completed after %.4f" % runtime
+         return true # did work
+       rescue Exception => e
+         reschedule e.message, e.backtrace
+         log_exception(e)
+         return false # work failed
+       end
+     end
+
+     # Add a job to the queue
+     def self.enqueue(*args, &block)
+       object = block_given? ? EvaledJob.new(&block) : args.shift
+
+       unless object.respond_to?(:perform) || block_given?
+         raise ArgumentError, 'Cannot enqueue items which do not respond to perform'
+       end
+
+       priority = args.first || 0
+       run_at = args[1]
+
+       Job.create(:payload_object => object, :priority => priority.to_i, :run_at => run_at)
+     end
+
+     # Find a few candidate jobs to run (in case some immediately get locked by others).
+     def self.find_available(limit = 5, max_run_time = MAX_RUN_TIME)
+
+       time_now = db_time_now
+
+       sql = NextTaskSQL.dup
+
+       conditions = [time_now, time_now - max_run_time, worker_name]
+
+       if self.min_priority
+         sql << ' AND (priority >= ?)'
+         conditions << min_priority
+       end
+
+       if self.max_priority
+         sql << ' AND (priority <= ?)'
+         conditions << max_priority
+       end
+
+       conditions.unshift(sql)
+
+       ActiveRecord::Base.silence do
+         find(:all, :conditions => conditions, :order => NextTaskOrder, :limit => limit)
+       end
+     end
+
+     # Run the next job we can get an exclusive lock on.
+     # If no jobs are left we return nil
+     def self.reserve_and_run_one_job(max_run_time = MAX_RUN_TIME)
+
+       # We get up to 5 jobs from the db. In case we cannot get exclusive access to a job we try the next.
+       # this leads to a more even distribution of jobs across the worker processes
+       find_available(5, max_run_time).each do |job|
+         t = job.run_with_lock(max_run_time, worker_name)
+         return t unless t == nil # return if we did work (good or bad)
+       end
+
+       nil # we didn't do any work, all 5 were not lockable
+     end
+
+     # Lock this job for this worker.
+     # Returns true if we have the lock, false otherwise.
+     def lock_exclusively!(max_run_time, worker = worker_name)
+       now = self.class.db_time_now
+       affected_rows = if locked_by != worker
+         # We don't own this job so we will update the locked_by name and the locked_at
+         self.class.update_all(["locked_at = ?, locked_by = ?", now, worker], ["id = ? and (locked_at is null or locked_at < ?) and (run_at <= ?)", id, (now - max_run_time.to_i), now])
+       else
+         # We already own this job, this may happen if the job queue crashes.
+         # Simply resume and update the locked_at
+         self.class.update_all(["locked_at = ?", now], ["id = ? and locked_by = ?", id, worker])
+       end
+       if affected_rows == 1
+         self.locked_at = now
+         self.locked_by = worker
+         return true
+       else
+         return false
+       end
+     end
+
+     # Unlock this job (note: not saved to DB)
+     def unlock
+       self.locked_at = nil
+       self.locked_by = nil
+     end
+
+     # This is a good hook if you need to report job processing errors in additional or different ways
+     def log_exception(error)
+       logger.error "* [JOB] #{name} failed with #{error.class.name}: #{error.message} - #{attempts} failed attempts"
+       logger.error(error)
+     end
+
+     # Do num jobs and return stats on success/failure.
+     # Exit early if interrupted.
+     def self.work_off(num = 100)
+       success, failure = 0, 0
+
+       num.times do
+         case self.reserve_and_run_one_job
+         when true
+           success += 1
+         when false
+           failure += 1
+         else
+           break # leave if no work could be done
+         end
+         break if $exit # leave if we're exiting
+       end
+
+       return [success, failure]
+     end
+
+     # Moved into its own method so that new_relic can trace it.
+     def invoke_job
+       payload_object.perform
+     end
+
+     private
+
+     def deserialize(source)
+       handler = YAML.load(source) rescue nil
+
+       unless handler.respond_to?(:perform)
+         if handler.nil? && source =~ ParseObjectFromYaml
+           handler_class = $1
+         end
+         attempt_to_load(handler_class || handler.class)
+         handler = YAML.load(source)
+       end
+
+       return handler if handler.respond_to?(:perform)
+
+       raise DeserializationError,
+         'Job failed to load: Unknown handler. Try to manually require the appropriate file.'
+     rescue TypeError, LoadError, NameError => e
+       raise DeserializationError,
+         "Job failed to load: #{e.message}. Try to manually require the required file."
+     end
+
+     # Constantize the object so that ActiveSupport can attempt
+     # its auto loading magic. Will raise LoadError if not successful.
+     def attempt_to_load(klass)
+       klass.constantize
+     end
+
+     # Get the current time (GMT or local depending on DB)
+     # Note: This does not ping the DB to get the time, so all your clients
+     # must have synchronized clocks.
+     def self.db_time_now
+       (ActiveRecord::Base.default_timezone == :utc) ? Time.now.utc : Time.now
+     end
+
+     protected
+
+     def before_save
+       self.run_at ||= self.class.db_time_now
+     end
+
+   end
+
+   class EvaledJob
+     def initialize
+       @job = yield
+     end
+
+     def perform
+       eval(@job)
+     end
+   end
+ end