collectiveidea-delayed_job 1.8.0

data/.gitignore ADDED
@@ -0,0 +1 @@
+ *.gem
data/MIT-LICENSE ADDED
@@ -0,0 +1,20 @@
+ Copyright (c) 2005 Tobias Luetke
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.textile ADDED
@@ -0,0 +1,107 @@
+ h1. Delayed::Job
+
+ Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background.
+
+ It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks. Amongst those tasks are:
+
+ * sending massive newsletters
+ * image resizing
+ * http downloads
+ * updating smart collections
+ * updating solr, our search server, after product changes
+ * batch imports
+ * spam checks
+
+ h2. Setup
+
+ The library revolves around a delayed_jobs table which looks as follows:
+
+   create_table :delayed_jobs, :force => true do |table|
+     table.integer  :priority, :default => 0      # Allows some jobs to jump to the front of the queue
+     table.integer  :attempts, :default => 0      # Provides for retries, but still fail eventually.
+     table.text     :handler                      # YAML-encoded string of the object that will do work
+     table.string   :last_error                   # reason for last failure (See Note below)
+     table.datetime :run_at                       # When to run. Could be Time.now for immediately, or sometime in the future.
+     table.datetime :locked_at                    # Set when a client is working on this object
+     table.datetime :failed_at                    # Set when all retries have failed (actually, by default, the record is deleted instead)
+     table.string   :locked_by                    # Who is working on this object (if locked)
+     table.timestamps
+   end
+
+ On failure, the job is scheduled again in 5 seconds + N ** 4, where N is the number of retries.
+
+ The default MAX_ATTEMPTS is 25. After this, the job is either deleted (default) or left in the database with "failed_at" set.
+ With the default of 25 attempts, the last retry will be 20 days later, with the last interval being almost 100 hours.
+
+ The default MAX_RUN_TIME is 4.hours. If your job takes longer than that, another computer could pick it up. It's up to you to
+ make sure your job doesn't exceed this time. You should set this to the longest time you think the job could take.
+
+ By default, it will delete failed jobs (and it always deletes successful jobs). If you want to keep failed jobs, set
+ Delayed::Job.destroy_failed_jobs = false. The failed jobs will be marked with a non-null failed_at.
+
+ Here is an example of changing job parameters in Rails:
+
+   # config/initializers/delayed_job_config.rb
+   Delayed::Job.destroy_failed_jobs = false
+   silence_warnings do
+     Delayed::Job.const_set("MAX_ATTEMPTS", 3)
+     Delayed::Job.const_set("MAX_RUN_TIME", 5.minutes)
+   end
+
+ Note: If your error messages are long, consider changing the last_error field to a :text instead of a :string (255 character limit).
+
+
+ h2. Usage
+
+ Jobs are simple ruby objects with a method called perform. Any object which responds to perform can be stuffed into the jobs table.
+ Job objects are serialized to yaml so that they can later be resurrected by the job runner.
+
+   class NewsletterJob < Struct.new(:text, :emails)
+     def perform
+       emails.each { |e| NewsletterMailer.deliver_text_to_email(text, e) }
+     end
+   end
+
+   Delayed::Job.enqueue NewsletterJob.new('lorem ipsum...', Customers.find(:all).collect(&:email))
+
+ There is also a second way to get jobs in the queue: send_later.
+
+
+   BatchImporter.new(Shop.find(1)).send_later(:import_massive_csv, massive_csv)
+
+
+ This will simply create a Delayed::PerformableMethod job in the jobs table which serializes all the parameters you pass to it. There are some special smarts for active record objects
+ which are stored as their text representation and loaded from the database fresh when the job is actually run later.
+
+
+ h2. Running the jobs
+
+ Run @script/generate delayed_job@ to add @script/delayed_job@. This script can then be used to manage a process which will start working off jobs.
+
+   # Runs two workers in separate processes.
+   $ ruby script/delayed_job -e production -n 2 start
+   $ ruby script/delayed_job -e production stop
+
+ You can invoke @rake jobs:work@ which will start working off jobs. You can cancel the rake task with @CTRL-C@.
+
+ Workers can be running on any computer, as long as they have access to the database and their clock is in sync. You can even
+ run multiple workers per computer, but you must give each one a unique name. (TODO: put in an example)
+ Keep in mind that each worker will check the database at least every 5 seconds.
+
+ Note: The rake task will exit if the database has any network connectivity problems.
+
+ h3. Cleaning up
+
+ You can invoke @rake jobs:clear@ to delete all jobs in the queue.
+
+ h3. Changes
+
+ * 1.7.0: Added failed_at column which can optionally be set after a certain amount of failed job attempts. By default failed job attempts are destroyed after about a month.
+
+ * 1.6.0: Renamed locked_until to locked_at. We now store when we started a given job instead of how long it will be locked by the worker. This allows us to get a reading on how long a job took to execute.
+
+ * 1.5.0: Job runners can now be run in parallel. Two new database columns are needed: locked_until and locked_by. This allows us to use pessimistic locking instead of relying on row level locks. This enables us to run as many worker processes as we need to speed up queue processing.
+
+ * 1.2.0: Added #send_later to Object for simpler job creation
+
+ * 1.0.0: Initial release
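The retry schedule described in the Setup section can be checked with a short standalone sketch (plain Ruby, no database; the MAX_ATTEMPTS constant here simply mirrors the library default):

```ruby
# Sketch of the retry backoff: after the Nth failure the job is
# rescheduled N**4 + 5 seconds into the future (see Job#reschedule),
# for up to MAX_ATTEMPTS attempts in total.
MAX_ATTEMPTS = 25

# Delay before each retry; N is the attempt counter at failure time (0-based).
intervals = (0...MAX_ATTEMPTS).map { |n| n ** 4 + 5 }

last_interval_hours = intervals.last / 3600.0          # ~92 hours ("almost 100")
total_wait_days     = intervals.inject(:+) / 86_400.0  # ~20 days overall
```

Summing the 25 intervals confirms the README's claim: the final interval is about 92 hours and the last retry lands roughly 20 days after the first failure.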
data/Rakefile ADDED
@@ -0,0 +1,22 @@
+ # -*- encoding: utf-8 -*-
+ begin
+   require 'jeweler'
+ rescue LoadError
+   puts "Jeweler not available. Install it with: sudo gem install technicalpickles-jeweler -s http://gems.github.com"
+   exit 1
+ end
+
+ Jeweler::Tasks.new do |s|
+   s.name     = "delayed_job"
+   s.summary  = "Database-backed asynchronous priority queue system -- Extracted from Shopify"
+   s.email    = "tobi@leetsoft.com"
+   s.homepage = "http://github.com/tobi/delayed_job/tree/master"
+   s.description = "Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background. It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks."
+   s.authors  = ["Tobias Lütke"]
+
+   s.has_rdoc = true
+   s.rdoc_options = ["--main", "README.textile", "--inline-source", "--line-numbers"]
+   s.extra_rdoc_files = ["README.textile"]
+
+   s.test_files = Dir['spec/**/*']
+ end
data/VERSION ADDED
@@ -0,0 +1 @@
+ 1.8.0
data/delayed_job.gemspec ADDED
@@ -0,0 +1,61 @@
+ # -*- encoding: utf-8 -*-
+
+ Gem::Specification.new do |s|
+   s.name = %q{delayed_job}
+   s.version = "1.8.0"
+
+   s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
+   s.authors = ["Tobias L\303\274tke"]
+   s.date = %q{2009-07-19}
+   s.description = %q{Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background. It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks.}
+   s.email = %q{tobi@leetsoft.com}
+   s.extra_rdoc_files = [
+     "README.textile"
+   ]
+   s.files = [
+     ".gitignore",
+     "MIT-LICENSE",
+     "README.textile",
+     "Rakefile",
+     "VERSION",
+     "delayed_job.gemspec",
+     "generators/delayed_job/delayed_job_generator.rb",
+     "generators/delayed_job/templates/migration.rb",
+     "generators/delayed_job/templates/script",
+     "init.rb",
+     "lib/delayed/command.rb",
+     "lib/delayed/job.rb",
+     "lib/delayed/message_sending.rb",
+     "lib/delayed/performable_method.rb",
+     "lib/delayed/worker.rb",
+     "lib/delayed_job.rb",
+     "recipes/delayed_job.rb",
+     "spec/database.rb",
+     "spec/delayed_method_spec.rb",
+     "spec/job_spec.rb",
+     "spec/story_spec.rb",
+     "tasks/jobs.rake",
+     "tasks/tasks.rb"
+   ]
+   s.homepage = %q{http://github.com/tobi/delayed_job/tree/master}
+   s.rdoc_options = ["--main", "README.textile", "--inline-source", "--line-numbers"]
+   s.require_paths = ["lib"]
+   s.rubygems_version = %q{1.3.3}
+   s.summary = %q{Database-backed asynchronous priority queue system -- Extracted from Shopify}
+   s.test_files = [
+     "spec/database.rb",
+     "spec/delayed_method_spec.rb",
+     "spec/job_spec.rb",
+     "spec/story_spec.rb"
+   ]
+
+   if s.respond_to? :specification_version then
+     current_version = Gem::Specification::CURRENT_SPECIFICATION_VERSION
+     s.specification_version = 3
+
+     if Gem::Version.new(Gem::RubyGemsVersion) >= Gem::Version.new('1.2.0') then
+     else
+     end
+   else
+   end
+ end
data/generators/delayed_job/delayed_job_generator.rb ADDED
@@ -0,0 +1,11 @@
+ class DelayedJobGenerator < Rails::Generator::Base
+
+   def manifest
+     record do |m|
+       m.template 'script', 'script/delayed_job', :chmod => 0755
+       m.migration_template "migration.rb", 'db/migrate',
+         :migration_file_name => "create_delayed_jobs"
+     end
+   end
+
+ end
data/generators/delayed_job/templates/migration.rb ADDED
@@ -0,0 +1,20 @@
+ class CreateDelayedJobs < ActiveRecord::Migration
+   def self.up
+     create_table :delayed_jobs, :force => true do |table|
+       table.integer  :priority, :default => 0      # Allows some jobs to jump to the front of the queue
+       table.integer  :attempts, :default => 0      # Provides for retries, but still fail eventually.
+       table.text     :handler                      # YAML-encoded string of the object that will do work
+       table.text     :last_error                   # reason for last failure (See Note below)
+       table.datetime :run_at                       # When to run. Could be Time.now for immediately, or sometime in the future.
+       table.datetime :locked_at                    # Set when a client is working on this object
+       table.datetime :failed_at                    # Set when all retries have failed (actually, by default, the record is deleted instead)
+       table.string   :locked_by                    # Who is working on this object (if locked)
+       table.timestamps
+     end
+
+   end
+
+   def self.down
+     drop_table :delayed_jobs
+   end
+ end
data/generators/delayed_job/templates/script ADDED
@@ -0,0 +1,7 @@
+ #!/usr/bin/env ruby
+
+ # Daemons sets pwd to /, so we have to explicitly set RAILS_ROOT
+ RAILS_ROOT = File.expand_path(File.join(File.dirname(__FILE__), '..'))
+
+ require File.join(File.dirname(__FILE__), *%w(.. vendor plugins delayed_job lib delayed command))
+ Delayed::Command.new(ARGV).daemonize
data/init.rb ADDED
@@ -0,0 +1 @@
+ require File.dirname(__FILE__) + '/lib/delayed_job'
data/lib/delayed/command.rb ADDED
@@ -0,0 +1,65 @@
+ require 'rubygems'
+ require 'daemons'
+ require 'optparse'
+
+ module Delayed
+   class Command
+     attr_accessor :worker_count
+
+     def initialize(args)
+       @options = {:quiet => true}
+       @worker_count = 1
+
+       opts = OptionParser.new do |opts|
+         opts.banner = "Usage: #{File.basename($0)} [options] start|stop|restart|run"
+
+         opts.on('-h', '--help', 'Show this message') do
+           puts opts
+           exit 1
+         end
+         opts.on('-e', '--environment=NAME', 'Specifies the environment to run delayed jobs under (test/development/production).') do |e|
+           ENV['RAILS_ENV'] = e
+         end
+         opts.on('--min-priority N', 'Minimum priority of jobs to run.') do |n|
+           @options[:min_priority] = n
+         end
+         opts.on('--max-priority N', 'Maximum priority of jobs to run.') do |n|
+           @options[:max_priority] = n
+         end
+         opts.on('-n', '--number_of_workers=workers', "Number of unique workers to spawn") do |worker_count|
+           @worker_count = worker_count.to_i rescue 1
+         end
+       end
+       @args = opts.parse!(args)
+     end
+
+     def daemonize
+       worker_count.times do |worker_index|
+         process_name = worker_count == 1 ? "delayed_job" : "delayed_job.#{worker_index}"
+         Daemons.run_proc(process_name, :dir => "#{RAILS_ROOT}/tmp/pids", :dir_mode => :normal, :ARGV => @args) do |*args|
+           run process_name
+         end
+       end
+     end
+
+     def run(worker_name = nil)
+       Dir.chdir(RAILS_ROOT)
+       require File.join(RAILS_ROOT, 'config', 'environment')
+
+       # Replace the default logger
+       logger = Logger.new(File.join(RAILS_ROOT, 'log', 'delayed_job.log'))
+       logger.level = ActiveRecord::Base.logger.level
+       ActiveRecord::Base.logger = logger
+       ActiveRecord::Base.clear_active_connections!
+       Delayed::Worker.logger = logger
+       Delayed::Job.worker_name = "#{worker_name} #{Delayed::Job.worker_name}"
+
+       Delayed::Worker.new(@options).start
+     rescue => e
+       logger.fatal e
+       STDERR.puts e.message
+       exit 1
+     end
+
+   end
+ end
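The flag handling in Delayed::Command above can be illustrated with a reduced standalone sketch (a hypothetical simplification: it drops the daemons wiring, the -h flag, and the priority options, keeping only the environment and worker-count flags):

```ruby
require 'optparse'

# Minimal sketch of the option parsing done in Delayed::Command#initialize.
worker_count = 1
environment  = nil

parser = OptionParser.new do |opts|
  opts.on('-e', '--environment=NAME') { |e| environment = e }
  opts.on('-n', '--number_of_workers=workers') { |n| worker_count = n.to_i }
end

# parse! consumes recognized flags and leaves the command word behind,
# which is what lets "start"/"stop" reach Daemons untouched.
args = parser.parse!(%w(-e production -n 2 start))
```

After parsing, `worker_count` is 2, `environment` is "production", and `args` is `["start"]`, mirroring the README's `script/delayed_job -e production -n 2 start` invocation.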
data/lib/delayed/job.rb ADDED
@@ -0,0 +1,271 @@
+ require 'timeout'
+
+ module Delayed
+
+   class DeserializationError < StandardError
+   end
+
+   # A job object that is persisted to the database.
+   # Contains the work object as a YAML field.
+   class Job < ActiveRecord::Base
+     MAX_ATTEMPTS = 25
+     MAX_RUN_TIME = 4.hours
+     set_table_name :delayed_jobs
+
+     # By default failed jobs are destroyed after too many attempts.
+     # If you want to keep them around (perhaps to inspect the reason
+     # for the failure), set this to false.
+     cattr_accessor :destroy_failed_jobs
+     self.destroy_failed_jobs = true
+
+     # Every worker has a unique name which by default is the pid of the process.
+     # There are some advantages to overriding this with something which survives worker restarts:
+     # Workers can safely resume working on tasks which are locked by themselves. The worker will assume that it crashed before.
+     cattr_accessor :worker_name
+     self.worker_name = "host:#{Socket.gethostname} pid:#{Process.pid}" rescue "pid:#{Process.pid}"
+
+     NextTaskSQL   = '(run_at <= ? AND (locked_at IS NULL OR locked_at < ?) OR (locked_by = ?)) AND failed_at IS NULL'
+     NextTaskOrder = 'priority DESC, run_at ASC'
+
+     ParseObjectFromYaml = /\!ruby\/\w+\:([^\s]+)/
+
+     cattr_accessor :min_priority, :max_priority
+     self.min_priority = nil
+     self.max_priority = nil
+
+     # When a worker is exiting, make sure we don't have any locked jobs.
+     def self.clear_locks!
+       update_all("locked_by = null, locked_at = null", ["locked_by = ?", worker_name])
+     end
+
+     def failed?
+       failed_at
+     end
+     alias_method :failed, :failed?
+
+     def payload_object
+       @payload_object ||= deserialize(self['handler'])
+     end
+
+     def name
+       @name ||= begin
+         payload = payload_object
+         if payload.respond_to?(:display_name)
+           payload.display_name
+         else
+           payload.class.name
+         end
+       end
+     end
+
+     def payload_object=(object)
+       self['handler'] = object.to_yaml
+     end
+
+     # Reschedule the job in the future (when a job fails).
+     # Uses an exponential scale depending on the number of failed attempts.
+     def reschedule(message, backtrace = [], time = nil)
+       if self.attempts < MAX_ATTEMPTS
+         time ||= Job.db_time_now + (attempts ** 4) + 5
+
+         self.attempts += 1
+         self.run_at = time
+         self.last_error = message + "\n" + backtrace.join("\n")
+         self.unlock
+         save!
+       else
+         logger.info "* [JOB] PERMANENTLY removing #{self.name} because of #{attempts} consecutive failures."
+         destroy_failed_jobs ? destroy : update_attribute(:failed_at, Time.now)
+       end
+     end
+
+
+     # Try to run one job. Returns true/false (work done/work failed) or nil if job can't be locked.
+     def run_with_lock(max_run_time, worker_name)
+       logger.info "* [JOB] acquiring lock on #{name}"
+       unless lock_exclusively!(max_run_time, worker_name)
+         # We did not get the lock, some other worker process must have
+         logger.warn "* [JOB] failed to acquire exclusive lock for #{name}"
+         return nil # no work done
+       end
+
+       begin
+         runtime = Benchmark.realtime do
+           Timeout.timeout(max_run_time.to_i) { invoke_job }
+           destroy
+         end
+         # TODO: warn if runtime > max_run_time ?
+         logger.info "* [JOB] #{name} completed after %.4f" % runtime
+         return true  # did work
+       rescue Exception => e
+         reschedule e.message, e.backtrace
+         log_exception(e)
+         return false # work failed
+       end
+     end
+
+     # Add a job to the queue
+     def self.enqueue(*args, &block)
+       object = block_given? ? EvaledJob.new(&block) : args.shift
+
+       unless object.respond_to?(:perform) || block_given?
+         raise ArgumentError, 'Cannot enqueue items which do not respond to perform'
+       end
+
+       priority = args.first || 0
+       run_at = args[1]
+
+       Job.create(:payload_object => object, :priority => priority.to_i, :run_at => run_at)
+     end
+
+     # Find a few candidate jobs to run (in case some immediately get locked by others).
+     def self.find_available(limit = 5, max_run_time = MAX_RUN_TIME)
+
+       time_now = db_time_now
+
+       sql = NextTaskSQL.dup
+
+       conditions = [time_now, time_now - max_run_time, worker_name]
+
+       if self.min_priority
+         sql << ' AND (priority >= ?)'
+         conditions << min_priority
+       end
+
+       if self.max_priority
+         sql << ' AND (priority <= ?)'
+         conditions << max_priority
+       end
+
+       conditions.unshift(sql)
+
+       ActiveRecord::Base.silence do
+         find(:all, :conditions => conditions, :order => NextTaskOrder, :limit => limit)
+       end
+     end
+
+     # Run the next job we can get an exclusive lock on.
+     # If no jobs are left we return nil
+     def self.reserve_and_run_one_job(max_run_time = MAX_RUN_TIME)
+
+       # We get up to 5 jobs from the db. In case we cannot get exclusive access to a job we try the next.
+       # This leads to a more even distribution of jobs across the worker processes.
+       find_available(5, max_run_time).each do |job|
+         t = job.run_with_lock(max_run_time, worker_name)
+         return t unless t == nil # return if we did work (good or bad)
+       end
+
+       nil # we didn't do any work, all 5 were not lockable
+     end
+
+     # Lock this job for this worker.
+     # Returns true if we have the lock, false otherwise.
+     def lock_exclusively!(max_run_time, worker = worker_name)
+       now = self.class.db_time_now
+       affected_rows = if locked_by != worker
+         # We don't own this job so we will update the locked_by name and the locked_at
+         self.class.update_all(["locked_at = ?, locked_by = ?", now, worker], ["id = ? and (locked_at is null or locked_at < ?) and (run_at <= ?)", id, (now - max_run_time.to_i), now])
+       else
+         # We already own this job, this may happen if the job queue crashes.
+         # Simply resume and update the locked_at
+         self.class.update_all(["locked_at = ?", now], ["id = ? and locked_by = ?", id, worker])
+       end
+       if affected_rows == 1
+         self.locked_at = now
+         self.locked_by = worker
+         return true
+       else
+         return false
+       end
+     end
+
+     # Unlock this job (note: not saved to DB)
+     def unlock
+       self.locked_at = nil
+       self.locked_by = nil
+     end
+
+     # This is a good hook if you need to report job processing errors in additional or different ways
+     def log_exception(error)
+       logger.error "* [JOB] #{name} failed with #{error.class.name}: #{error.message} - #{attempts} failed attempts"
+       logger.error(error)
+     end
+
+     # Do num jobs and return stats on success/failure.
+     # Exit early if interrupted.
+     def self.work_off(num = 100)
+       success, failure = 0, 0
+
+       num.times do
+         case self.reserve_and_run_one_job
+         when true
+           success += 1
+         when false
+           failure += 1
+         else
+           break # leave if no work could be done
+         end
+         break if $exit # leave if we're exiting
+       end
+
+       return [success, failure]
+     end
+
+     # Moved into its own method so that new_relic can trace it.
+     def invoke_job
+       payload_object.perform
+     end
+
+     private
+
+     def deserialize(source)
+       handler = YAML.load(source) rescue nil
+
+       unless handler.respond_to?(:perform)
+         if handler.nil? && source =~ ParseObjectFromYaml
+           handler_class = $1
+         end
+         attempt_to_load(handler_class || handler.class)
+         handler = YAML.load(source)
+       end
+
+       return handler if handler.respond_to?(:perform)
+
+       raise DeserializationError,
+         'Job failed to load: Unknown handler. Try to manually require the appropriate file.'
+     rescue TypeError, LoadError, NameError => e
+       raise DeserializationError,
+         "Job failed to load: #{e.message}. Try to manually require the required file."
+     end
+
+     # Constantize the object so that ActiveSupport can attempt
+     # its auto loading magic. Will raise LoadError if not successful.
+     def attempt_to_load(klass)
+       klass.constantize
+     end
+
+     # Get the current time (GMT or local depending on DB)
+     # Note: This does not ping the DB to get the time, so all your clients
+     # must have synchronized clocks.
+     def self.db_time_now
+       (ActiveRecord::Base.default_timezone == :utc) ? Time.now.utc : Time.now
+     end
+
+     protected
+
+     def before_save
+       self.run_at ||= self.class.db_time_now
+     end
+
+   end
+
+   class EvaledJob
+     def initialize
+       @job = yield
+     end
+
+     def perform
+       eval(@job)
+     end
+   end
+ end
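The deserialization fallback in Job#deserialize leans on the ParseObjectFromYaml regexp to dig the class name out of the raw YAML when the constant is not yet loaded, so ActiveSupport can try to autoload it. Just that extraction can be sketched standalone (the sample YAML string below is a hypothetical handler, not taken from the source):

```ruby
# The regexp used by Job#deserialize to recover a handler's class name
# from its raw YAML representation (e.g. "!ruby/struct:NewsletterJob").
ParseObjectFromYaml = /\!ruby\/\w+\:([^\s]+)/

# Hypothetical serialized handler, as object.to_yaml would emit it.
source = "--- !ruby/struct:NewsletterJob\ntext: hello\n"

# The first capture group is the constant name to attempt_to_load.
class_name = source =~ ParseObjectFromYaml ? $1 : nil
```

Here `class_name` comes out as "NewsletterJob", which deserialize then passes to `attempt_to_load` (i.e. `constantize`) before retrying `YAML.load`.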