delayed_job_with_server_id 1.8.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/.gitignore ADDED
@@ -0,0 +1 @@
1
+ *.gem
data/MIT-LICENSE ADDED
@@ -0,0 +1,20 @@
1
+ Copyright (c) 2005 Tobias Luetke
2
+
3
+ Permission is hereby granted, free of charge, to any person obtaining
4
+ a copy of this software and associated documentation files (the
5
+ "Software"), to deal in the Software without restriction, including
6
+ without limitation the rights to use, copy, modify, merge, publish,
7
+ distribute, sublicense, and/or sell copies of the Software, and to
8
+ permit persons to whom the Software is furnished to do so, subject to
9
+ the following conditions:
10
+
11
+ The above copyright notice and this permission notice shall be
12
+ included in all copies or substantial portions of the Software.
13
+
14
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
15
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
16
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
17
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
18
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
19
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
20
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.textile ADDED
@@ -0,0 +1,127 @@
1
+ h1. Delayed::Job
2
+
3
+ Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background.
4
+
5
+ It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks. Amongst those tasks are:
6
+
7
+ * sending massive newsletters
8
+ * image resizing
9
+ * http downloads
10
+ * updating smart collections
11
+ * updating solr, our search server, after product changes
12
+ * batch imports
13
+ * spam checks
14
+
15
+ h2. Installation
16
+
17
+ To install as a gem, add the following to @config/environment.rb@:
18
+
19
+ <pre>
20
+ config.gem 'delayed_job'
21
+ </pre>
22
+
23
+ Rake tasks are not automatically loaded from gems, so you'll need to add the following to your Rakefile:
24
+
25
+ <pre>
26
+ begin
27
+ require 'delayed/tasks'
28
+ rescue LoadError
29
+ STDERR.puts "Run `rake gems:install` to install delayed_job"
30
+ end
31
+ </pre>
32
+
33
+ To install as a plugin:
34
+
35
+ <pre>
36
+ script/plugin install git://github.com/collectiveidea/delayed_job.git
37
+ </pre>
38
+
39
+ After delayed_job is installed, run:
40
+
41
+ <pre>
42
+ script/generate delayed_job
43
+ rake db:migrate
44
+ </pre>
45
+
46
+ h2. Queuing Jobs
47
+
48
+ Call @#send_later(method, params)@ on any object and it will be processed in the background.
49
+
50
+ <pre>
51
+ # without delayed_job
52
+ Notifier.deliver_signup(@user)
53
+
54
+ # with delayed_job
55
+ Notifier.send_later :deliver_signup, @user
56
+ </pre>
57
+
58
+ If a method should always be run in the background, you can call @#handle_asynchronously@ after the method declaration:
59
+
60
+ <pre>
61
+ class Device
62
+ def deliver
63
+ # long running method
64
+ end
65
+ handle_asynchronously :deliver
66
+ end
67
+
68
+ device = Device.new
69
+ device.deliver
70
+ </pre>
71
+
72
+ h2. Running Jobs
73
+
74
+ Workers can run on any computer, as long as they have access to the database and their clocks are in sync. Keep in mind that each worker will check the database at least every 5 seconds.
75
+
76
+ You can invoke @rake jobs:work@ which will start working off jobs. You can cancel the rake task with @CTRL-C@.
77
+ If a worker should run only the jobs assigned to a specific server, pass its id: @rake jobs:work SERVER_ID="the_server_id"@
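For illustration only (not part of the gem's files; the server id "reports-01" is hypothetical), a dedicated worker and a generic worker can be started side by side:

<pre>
# runs only jobs enqueued with server_id "reports-01" (hypothetical id)
rake jobs:work SERVER_ID="reports-01"

# runs only jobs enqueued without a server_id
rake jobs:work
</pre>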
78
+
79
+ h2. Custom Jobs
80
+
81
+ Jobs are simple ruby objects with a method called perform. Any object which responds to perform can be stuffed into the jobs table. Job objects are serialized to yaml so that they can later be resurrected by the job runner.
82
+
83
+ <pre>
84
+ class User
85
+ def self.run_job
86
+ # long-running work goes here
87
+ end
88
+ end
89
+
90
+ class TestDelayedJob
91
+ def perform
92
+ User.run_job
93
+ end
94
+ end
95
+
96
+ # Delayed::Job.enqueue(job, priority, run_at, server_id)
97
+ Delayed::Job.enqueue(TestDelayedJob.new, 0, Time.now, "the_server_id")
98
+ </pre>
99
+
100
+ h2. Gory Details
101
+
102
+ We add "server" as a new attribute for the delayed_jobs table.
103
+
104
+ <pre>
105
+ create_table :delayed_jobs, :force => true do |table|
106
+ table.integer :priority, :default => 0 # Allows some jobs to jump to the front of the queue
107
+ table.integer :attempts, :default => 0 # Provides for retries, but still fails eventually.
108
+ table.text :handler # YAML-encoded string of the object that will do work
109
+ table.text :last_error # reason for last failure (See Note below)
110
+ table.datetime :run_at # When to run. Could be Time.zone.now for immediately, or sometime in the future.
111
+ table.datetime :locked_at # Set when a client is working on this object
112
+ table.datetime :failed_at # Set when all retries have failed (actually, by default, the record is deleted instead)
113
+ table.string :locked_by # Who is working on this object (if locked)
114
+ table.string :server
115
+ table.timestamps
116
+ end
117
+ </pre>
118
+
119
+ If a server_id is given to a worker, that worker will handle only the jobs marked with the same server_id.
120
+ If no server_id is given, the worker will handle only those jobs whose server_id is nil.
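As a minimal sketch (illustrative only, reusing the hypothetical "reports-01" id and the @TestDelayedJob@ class from above), the server argument passed to @Delayed::Job.enqueue@ determines which kind of worker picks the job up:

<pre>
# picked up only by a worker started with SERVER_ID="reports-01" (hypothetical id)
Delayed::Job.enqueue(TestDelayedJob.new, 0, Time.now, "reports-01")

# server argument omitted, so the job's server is nil and only
# workers started without SERVER_ID will run it
Delayed::Job.enqueue(TestDelayedJob.new, 0, Time.now)
</pre>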
121
+
122
+
123
+ h3. Cleaning up
124
+
125
+ You can invoke @rake jobs:clear@ to delete all jobs in the queue.
126
+
127
+
data/Rakefile ADDED
@@ -0,0 +1,34 @@
1
+ # -*- encoding: utf-8 -*-
2
+ begin
3
+ require 'jeweler'
4
+ rescue LoadError
5
+ puts "Jeweler not available. Install it with: sudo gem install jeweler"
6
+ exit 1
7
+ end
8
+
9
+ Jeweler::Tasks.new do |s|
10
+ s.name = "delayed_job"
11
+ s.summary = "Database-backed asynchronous priority queue system -- Extracted from Shopify"
12
+ s.email = "tobi@leetsoft.com"
13
+ s.homepage = "http://github.com/collectiveidea/delayed_job"
14
+ s.description = "Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background. It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks."
15
+ s.authors = ["Brandon Keepers", "Tobias Lütke"]
16
+
17
+ s.has_rdoc = true
18
+ s.rdoc_options = ["--main", "README.textile", "--inline-source", "--line-numbers"]
19
+ s.extra_rdoc_files = ["README.textile"]
20
+
21
+ s.test_files = Dir['spec/**/*']
22
+ end
23
+
24
+ require 'spec/rake/spectask'
25
+
26
+ task :default => :spec
27
+
28
+ desc 'Run the specs'
29
+ Spec::Rake::SpecTask.new(:spec) do |t|
30
+ t.libs << 'lib'
31
+ t.pattern = 'spec/**/*_spec.rb'
32
+ t.verbose = true
33
+ end
34
+
data/VERSION ADDED
@@ -0,0 +1 @@
1
+ 1.8.5
data/contrib/delayed_job.monitrc ADDED
@@ -0,0 +1,14 @@
1
+ # an example Monit configuration file for delayed_job
2
+ # See: http://stackoverflow.com/questions/1226302/how-to-monitor-delayedjob-with-monit/1285611
3
+ #
4
+ # To use:
5
+ # 1. copy to /var/www/apps/{app_name}/shared/delayed_job.monitrc
6
+ # 2. replace {app_name} as appropriate
7
+ # 3. add this to your /etc/monit/monitrc
8
+ #
9
+ # include /var/www/apps/{app_name}/shared/delayed_job.monitrc
10
+
11
+ check process delayed_job
12
+ with pidfile /var/www/apps/{app_name}/shared/pids/delayed_job.pid
13
+ start program = "/usr/bin/env RAILS_ENV=production /var/www/apps/{app_name}/current/script/delayed_job start"
14
+ stop program = "/usr/bin/env RAILS_ENV=production /var/www/apps/{app_name}/current/script/delayed_job stop"
data/delayed_job.gemspec ADDED
@@ -0,0 +1,67 @@
1
+ # Generated by jeweler
2
+ # DO NOT EDIT THIS FILE DIRECTLY
3
+ # Instead, edit Jeweler::Tasks in Rakefile, and run the gemspec command
4
+ # -*- encoding: utf-8 -*-
5
+
6
+ Gem::Specification.new do |s|
7
+ s.name = "delayed_job_with_server_id"
8
+ s.version = "1.8.5"
9
+
10
+ s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
11
+ s.authors = ["Rofaida Awad", "Mostafa Ragab", "Brandon Keepers", "Tobias L\303\274tke"]
12
+ s.date = "2010-10-24"
13
+ s.description = %q{Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background. It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks.}
14
+ s.email = "ragab.mostafa@gmail.com"
15
+ s.extra_rdoc_files = [
16
+ "README.textile"
17
+ ]
18
+ s.files = [
19
+ ".gitignore",
20
+ "MIT-LICENSE",
21
+ "README.textile",
22
+ "Rakefile",
23
+ "VERSION",
24
+ "contrib/delayed_job.monitrc",
25
+ "delayed_job.gemspec",
26
+ "generators/delayed_job/delayed_job_generator.rb",
27
+ "generators/delayed_job/templates/migration.rb",
28
+ "generators/delayed_job/templates/script",
29
+ "init.rb",
30
+ "lib/delayed/command.rb",
31
+ "lib/delayed/job.rb",
32
+ "lib/delayed/message_sending.rb",
33
+ "lib/delayed/performable_method.rb",
34
+ "lib/delayed/recipes.rb",
35
+ "lib/delayed/tasks.rb",
36
+ "lib/delayed/worker.rb",
37
+ "lib/delayed_job.rb",
38
+ "recipes/delayed_job.rb",
39
+ "spec/database.rb",
40
+ "spec/delayed_method_spec.rb",
41
+ "spec/job_spec.rb",
42
+ "spec/story_spec.rb",
43
+ "tasks/jobs.rake"
44
+ ]
45
+ s.homepage = %q{http://github.com/collectiveidea/delayed_job}
46
+ s.rdoc_options = ["--main", "README.textile", "--inline-source", "--line-numbers"]
47
+ s.require_paths = ["lib"]
48
+ s.rubygems_version = %q{1.3.5}
49
+ s.summary = %q{Database-backed asynchronous priority queue system -- Extracted from Shopify}
50
+ s.test_files = [
51
+ "spec/database.rb",
52
+ "spec/delayed_method_spec.rb",
53
+ "spec/job_spec.rb",
54
+ "spec/story_spec.rb"
55
+ ]
56
+
57
+ if s.respond_to? :specification_version then
58
+ current_version = Gem::Specification::CURRENT_SPECIFICATION_VERSION
59
+ s.specification_version = 3
60
+
61
+ if Gem::Version.new(Gem::RubyGemsVersion) >= Gem::Version.new('1.2.0') then
62
+ else
63
+ end
64
+ else
65
+ end
66
+ end
67
+
data/generators/delayed_job/delayed_job_generator.rb ADDED
@@ -0,0 +1,22 @@
1
+ class DelayedJobGenerator < Rails::Generator::Base
2
+ default_options :skip_migration => false
3
+
4
+ def manifest
5
+ record do |m|
6
+ m.template 'script', 'script/delayed_job', :chmod => 0755
7
+ unless options[:skip_migration]
8
+ m.migration_template "migration.rb", 'db/migrate',
9
+ :migration_file_name => "create_delayed_jobs"
10
+ end
11
+ end
12
+ end
13
+
14
+ protected
15
+
16
+ def add_options!(opt)
17
+ opt.separator ''
18
+ opt.separator 'Options:'
19
+ opt.on("--skip-migration", "Don't generate a migration") { |v| options[:skip_migration] = v }
20
+ end
21
+
22
+ end
data/generators/delayed_job/templates/migration.rb ADDED
@@ -0,0 +1,21 @@
1
+ class CreateDelayedJobs < ActiveRecord::Migration
2
+ def self.up
3
+ create_table :delayed_jobs, :force => true do |table|
4
+ table.integer :priority, :default => 0 # Allows some jobs to jump to the front of the queue
5
+ table.integer :attempts, :default => 0 # Provides for retries, but still fails eventually.
6
+ table.text :handler # YAML-encoded string of the object that will do work
7
+ table.text :last_error # reason for last failure (See Note below)
8
+ table.datetime :run_at # When to run. Could be Time.zone.now for immediately, or sometime in the future.
9
+ table.datetime :locked_at # Set when a client is working on this object
10
+ table.datetime :failed_at # Set when all retries have failed (actually, by default, the record is deleted instead)
11
+ table.string :locked_by # Who is working on this object (if locked)
12
+ table.string :server # Which server should be working on this job
13
+ table.timestamps
14
+ end
15
+
16
+ end
17
+
18
+ def self.down
19
+ drop_table :delayed_jobs
20
+ end
21
+ end
data/generators/delayed_job/templates/script ADDED
@@ -0,0 +1,5 @@
1
+ #!/usr/bin/env ruby
2
+
3
+ require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment'))
4
+ require 'delayed/command'
5
+ Delayed::Command.new(ARGV).daemonize
data/init.rb ADDED
@@ -0,0 +1 @@
1
+ require File.dirname(__FILE__) + '/lib/delayed_job'
data/lib/delayed/command.rb ADDED
@@ -0,0 +1,79 @@
1
+ require 'rubygems'
2
+ require 'daemons'
3
+ require 'optparse'
4
+
5
+ module Delayed
6
+ class Command
7
+ attr_accessor :worker_count
8
+
9
+ def initialize(args)
10
+ @files_to_reopen = []
11
+ @options = {:quiet => true}
12
+
13
+ @worker_count = 1
14
+
15
+ opts = OptionParser.new do |opts|
16
+ opts.banner = "Usage: #{File.basename($0)} [options] start|stop|restart|run"
17
+
18
+ opts.on('-h', '--help', 'Show this message') do
19
+ puts opts
20
+ exit 1
21
+ end
22
+ opts.on('-e', '--environment=NAME', 'Specifies the environment to run these delayed jobs under (test/development/production).') do |e|
23
+ STDERR.puts "The -e/--environment option has been deprecated and has no effect. Use RAILS_ENV and see http://github.com/collectiveidea/delayed_job/issues/#issue/7"
24
+ end
25
+ opts.on('--min-priority N', 'Minimum priority of jobs to run.') do |n|
26
+ @options[:min_priority] = n
27
+ end
28
+ opts.on('--max-priority N', 'Maximum priority of jobs to run.') do |n|
29
+ @options[:max_priority] = n
30
+ end
31
+ opts.on('-n', '--number_of_workers=workers', "Number of unique workers to spawn") do |worker_count|
32
+ @worker_count = worker_count.to_i rescue 1
33
+ end
34
+ end
35
+ @args = opts.parse!(args)
36
+ end
37
+
38
+ def daemonize
39
+ ObjectSpace.each_object(File) do |file|
40
+ @files_to_reopen << file unless file.closed?
41
+ end
42
+
43
+ worker_count.times do |worker_index|
44
+ process_name = worker_count == 1 ? "delayed_job" : "delayed_job.#{worker_index}"
45
+ Daemons.run_proc(process_name, :dir => "#{RAILS_ROOT}/tmp/pids", :dir_mode => :normal, :ARGV => @args) do |*args|
46
+ run process_name
47
+ end
48
+ end
49
+ end
50
+
51
+ def run(worker_name = nil)
52
+ Dir.chdir(RAILS_ROOT)
53
+
54
+ # Re-open file handles
55
+ @files_to_reopen.each do |file|
56
+ begin
57
+ file.reopen File.join(RAILS_ROOT, 'log', 'delayed_job.log'), 'a+'
58
+ file.sync = true
59
+ rescue ::Exception
60
+ end
61
+ end
62
+
63
+ Delayed::Worker.logger = Rails.logger
64
+ if Delayed::Worker.logger.respond_to? :auto_flushing=
65
+ Delayed::Worker.logger.auto_flushing = true
66
+ end
67
+ ActiveRecord::Base.connection.reconnect!
68
+
69
+ Delayed::Job.worker_name = "#{worker_name} #{Delayed::Job.worker_name}"
70
+
71
+ Delayed::Worker.new(@options).start
72
+ rescue => e
73
+ Rails.logger.fatal e
74
+ STDERR.puts e.message
75
+ exit 1
76
+ end
77
+
78
+ end
79
+ end
data/lib/delayed/job.rb ADDED
@@ -0,0 +1,321 @@
1
+ require 'timeout'
2
+
3
+ module Delayed
4
+
5
+ class DeserializationError < StandardError
6
+ end
7
+
8
+ # A job object that is persisted to the database.
9
+ # Contains the work object as a YAML field.
10
+ class Job < ActiveRecord::Base
11
+ @@max_attempts = 25
12
+ @@max_run_time = 4.hours
13
+
14
+ cattr_accessor :max_attempts, :max_run_time
15
+
16
+ set_table_name :delayed_jobs
17
+
18
+ # By default failed jobs are destroyed after too many attempts.
19
+ # If you want to keep them around (perhaps to inspect the reason
20
+ # for the failure), set this to false.
21
+ cattr_accessor :destroy_failed_jobs
22
+ self.destroy_failed_jobs = true
23
+
24
+ # Every worker has a unique name which by default is the pid of the process.
25
+ # There are some advantages to overriding this with something which survives worker restarts:
26
+ # Workers can safely resume working on tasks which are locked by themselves. The worker will assume that it crashed before.
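# (illustrative, not part of the gem) e.g.: Delayed::Job.worker_name = "app-server-1 worker"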
27
+ @@worker_name = nil
28
+
29
+ def self.worker_name
30
+ return @@worker_name unless @@worker_name.nil?
31
+ "host:#{Socket.gethostname} pid:#{Process.pid}" rescue "pid:#{Process.pid}"
32
+ end
33
+
34
+ def self.worker_name=(val)
35
+ @@worker_name = val
36
+ end
37
+
38
+ def worker_name
39
+ self.class.worker_name
40
+ end
41
+
42
+ def worker_name=(val)
43
+ @@worker_name = val
44
+ end
45
+
46
+ NextTaskSQL = '(run_at <= ? AND (locked_at IS NULL OR locked_at < ?) OR (locked_by = ?)) AND failed_at IS NULL'
47
+ NextTaskOrder = 'priority DESC, run_at ASC'
48
+
49
+ ParseObjectFromYaml = /\!ruby\/\w+\:([^\s]+)/
50
+
51
+ cattr_accessor :min_priority, :max_priority, :server_id
52
+ self.min_priority = nil
53
+ self.max_priority = nil
54
+ self.server_id = nil
55
+
56
+ # When a worker is exiting, make sure we don't have any locked jobs.
57
+ def self.clear_locks!
58
+ update_all("locked_by = null, locked_at = null", ["locked_by = ?", worker_name])
59
+ end
60
+
61
+ def failed?
62
+ failed_at
63
+ end
64
+ alias_method :failed, :failed?
65
+
66
+ def payload_object
67
+ @payload_object ||= deserialize(self['handler'])
68
+ end
69
+
70
+ def name
71
+ @name ||= begin
72
+ payload = payload_object
73
+ if payload.respond_to?(:display_name)
74
+ payload.display_name
75
+ else
76
+ payload.class.name
77
+ end
78
+ end
79
+ end
80
+
81
+ def payload_object=(object)
82
+ self['handler'] = object.to_yaml
83
+ end
84
+
85
+ # Reschedule the job in the future (when a job fails).
86
+ # Uses an exponential scale depending on the number of failed attempts.
87
+ def reschedule(message, backtrace = [], time = nil)
88
+ self.last_error = message + "\n" + backtrace.join("\n")
89
+
90
+ if (self.attempts += 1) < max_attempts
91
+ time ||= Job.db_time_now + (attempts ** 4) + 5
92
+
93
+ self.run_at = time
94
+ self.unlock
95
+ save!
96
+ else
97
+ logger.info "* [JOB] PERMANENTLY removing #{self.name} because of #{attempts} consecutive failures."
98
+ destroy_failed_jobs ? destroy : update_attribute(:failed_at, Delayed::Job.db_time_now)
99
+ end
100
+ end
101
+
102
+
103
+ # Try to run one job. Returns true/false (work done/work failed) or nil if job can't be locked.
104
+ def run_with_lock(max_run_time, worker_name)
105
+ logger.info "* [JOB] acquiring lock on #{name}"
106
+ unless lock_exclusively!(max_run_time, worker_name)
107
+ # We did not get the lock, some other worker process must have
108
+ logger.warn "* [JOB] failed to acquire exclusive lock for #{name}"
109
+ return nil # no work done
110
+ end
111
+
112
+ begin
113
+ runtime = Benchmark.realtime do
114
+ Timeout.timeout(max_run_time.to_i) { invoke_job }
115
+ destroy
116
+ end
117
+ # TODO: warn if runtime > max_run_time ?
118
+ logger.info "* [JOB] #{name} completed after %.4f" % runtime
119
+ return true # did work
120
+ rescue Exception => e
121
+ reschedule e.message, e.backtrace
122
+ log_exception(e)
123
+ return false # work failed
124
+ end
125
+ end
126
+
127
+ # Add a job to the queue
128
+ def self.enqueue(*args, &block)
129
+ object = block_given? ? EvaledJob.new(&block) : args.shift
130
+
131
+ unless object.respond_to?(:perform) || block_given?
132
+ raise ArgumentError, 'Cannot enqueue items which do not respond to perform'
133
+ end
134
+
135
+ priority = args.first || 0
136
+ run_at = args[1]
137
+ server = args[2]
138
+
139
+ Job.create(:payload_object => object, :priority => priority.to_i, :run_at => run_at, :server => server)
140
+ end
141
+
142
+ # Find a few candidate jobs to run (in case some immediately get locked by others).
143
+ def self.find_available(limit = 5, max_run_time = max_run_time)
144
+
145
+ time_now = db_time_now
146
+
147
+ sql = NextTaskSQL.dup
148
+
149
+ conditions = [time_now, time_now - max_run_time, worker_name]
150
+
151
+ if self.min_priority
152
+ sql << ' AND (priority >= ?)'
153
+ conditions << min_priority
154
+ end
155
+
156
+ if self.max_priority
157
+ sql << ' AND (priority <= ?)'
158
+ conditions << max_priority
159
+ end
160
+
161
+ if self.server_id
162
+ sql << ' AND server = ?'
163
+ conditions << server_id
164
+ else
165
+ sql << ' AND server is ?'
166
+ conditions << nil
167
+ end
168
+
169
+
170
+ conditions.unshift(sql)
171
+
172
+ ActiveRecord::Base.silence do
173
+ find(:all, :conditions => conditions, :order => NextTaskOrder, :limit => limit)
174
+ end
175
+ end
176
+
177
+ # Run the next job we can get an exclusive lock on.
178
+ # If no jobs are left we return nil
179
+ def self.reserve_and_run_one_job(max_run_time = max_run_time)
180
+
181
+ # We get up to 5 jobs from the db. In case we cannot get exclusive access to a job we try the next.
182
+ # this leads to a more even distribution of jobs across the worker processes
183
+ find_available(5, max_run_time).each do |job|
184
+ t = job.run_with_lock(max_run_time, worker_name)
185
+ return t unless t == nil # return if we did work (good or bad)
186
+ end
187
+
188
+ nil # we didn't do any work, all 5 were not lockable
189
+ end
190
+
191
+ # Run a specific job.
192
+ # Returns true/false (work done/work failed) or nil if the job was not run.
193
+ def self.work_on(id, server_id, max_run_time = self.max_run_time)
194
+ # Run the job only if it is assigned to this server (matching server_id)
195
+ # or to no server at all; jobs carrying a different server_id are skipped.
196
+ job = find(id)
197
+ if job
198
+ if job.server == nil || job.server == server_id # the job may be run by this or any server.
199
+ t = job.run_with_lock(max_run_time, worker_name)
200
+ else
201
+ puts "Job #{id} is not for server #{server_id}."
202
+ return nil
203
+ end
204
+ end
205
+
206
+ return t unless t == nil # return if we did work (good or bad)
207
+
208
+ nil # the job was not run
209
+ end
210
+
211
+ # Lock this job for this worker.
212
+ # Returns true if we have the lock, false otherwise.
213
+ def lock_exclusively!(max_run_time, worker = worker_name)
214
+ now = self.class.db_time_now
215
+ affected_rows = if locked_by != worker
216
+ # We don't own this job so we will update the locked_by name and the locked_at
217
+ self.class.update_all(["locked_at = ?, locked_by = ?", now, worker], ["id = ? and (locked_at is null or locked_at < ?) and (run_at <= ?)", id, (now - max_run_time.to_i), now])
218
+ else
219
+ # We already own this job, this may happen if the job queue crashes.
220
+ # Simply resume and update the locked_at
221
+ self.class.update_all(["locked_at = ?", now], ["id = ? and locked_by = ?", id, worker])
222
+ end
223
+ if affected_rows == 1
224
+ self.locked_at = now
225
+ self.locked_by = worker
226
+ return true
227
+ else
228
+ return false
229
+ end
230
+ end
231
+
232
+ # Unlock this job (note: not saved to DB)
233
+ def unlock
234
+ self.locked_at = nil
235
+ self.locked_by = nil
236
+ end
237
+
238
+ # This is a good hook if you need to report job processing errors in additional or different ways
239
+ def log_exception(error)
240
+ logger.error "* [JOB] #{name} failed with #{error.class.name}: #{error.message} - #{attempts} failed attempts"
241
+ logger.error(error)
242
+ end
243
+
244
+ # Do num jobs and return stats on success/failure.
245
+ # Exit early if interrupted.
246
+ def self.work_off(num = 100)
247
+ success, failure = 0, 0
248
+
249
+ num.times do
250
+ case self.reserve_and_run_one_job
251
+ when true
252
+ success += 1
253
+ when false
254
+ failure += 1
255
+ else
256
+ break # leave if no work could be done
257
+ end
258
+ break if $exit # leave if we're exiting
259
+ end
260
+
261
+ return [success, failure]
262
+ end
263
+
264
+ # Moved into its own method so that new_relic can trace it.
265
+ def invoke_job
266
+ payload_object.perform
267
+ end
268
+
269
+ private
270
+
271
+ def deserialize(source)
272
+ handler = YAML.load(source) rescue nil
273
+
274
+ unless handler.respond_to?(:perform)
275
+ if handler.nil? && source =~ ParseObjectFromYaml
276
+ handler_class = $1
277
+ end
278
+ attempt_to_load(handler_class || handler.class)
279
+ handler = YAML.load(source)
280
+ end
281
+
282
+ return handler if handler.respond_to?(:perform)
283
+
284
+ raise DeserializationError,
285
+ 'Job failed to load: Unknown handler. Try to manually require the appropriate file.'
286
+ rescue TypeError, LoadError, NameError => e
287
+ raise DeserializationError,
288
+ "Job failed to load: #{e.message}. Try to manually require the required file."
289
+ end
290
+
291
+ # Constantize the object so that ActiveSupport can attempt
292
+ # its auto loading magic. Will raise LoadError if not successful.
293
+ def attempt_to_load(klass)
294
+ klass.constantize
295
+ end
296
+
297
+ # Get the current time (GMT or local depending on DB)
298
+ # Note: This does not ping the DB to get the time, so all your clients
299
+ # must have synchronized clocks.
300
+ def self.db_time_now
301
+ (ActiveRecord::Base.default_timezone == :utc) ? Time.now.utc : Time.zone.now
302
+ end
303
+
304
+ protected
305
+
306
+ def before_save
307
+ self.run_at ||= self.class.db_time_now
308
+ end
309
+
310
+ end
311
+
312
+ class EvaledJob
313
+ def initialize
314
+ @job = yield
315
+ end
316
+
317
+ def perform
318
+ eval(@job)
319
+ end
320
+ end
321
+ end