xspond-delayed_job 1.8.5

data/.gitignore ADDED
@@ -0,0 +1 @@
+ *.gem
data/MIT-LICENSE ADDED
@@ -0,0 +1,20 @@
+ Copyright (c) 2005 Tobias Luetke
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.textile ADDED
@@ -0,0 +1,185 @@
+ h1. Delayed::Job
+
+ Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background.
+
+ It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks. Amongst those tasks are:
+
+ * sending massive newsletters
+ * image resizing
+ * HTTP downloads
+ * updating smart collections
+ * updating Solr, our search server, after product changes
+ * batch imports
+ * spam checks
+
+ h2. Installation
+
+ To install as a gem, add the following to @config/environment.rb@:
+
+ <pre>
+ config.gem 'collectiveidea-delayed_job', :lib => 'delayed_job',
+            :source => 'http://gems.github.com'
+ </pre>
+
+ Rake tasks are not automatically loaded from gems, so you'll need to add the following to your Rakefile:
+
+ <pre>
+ begin
+   require 'delayed/tasks'
+ rescue LoadError
+   STDERR.puts "Run `rake gems:install` to install delayed_job"
+ end
+ </pre>
+
+ To install as a plugin:
+
+ <pre>
+ script/plugin install git://github.com/collectiveidea/delayed_job.git
+ </pre>
+
+ After delayed_job is installed, run:
+
+ <pre>
+ script/generate delayed_job
+ rake db:migrate
+ </pre>
+
+ h2. Upgrading to 1.8
+
+ If you are upgrading from a previous release, you will need to generate the new @script/delayed_job@:
+
+ <pre>
+ script/generate delayed_job --skip-migration
+ </pre>
+
+ h2. Queuing Jobs
+
+ Call @#send_later(method, params)@ on any object and it will be processed in the background.
+
+ <pre>
+ # without delayed_job
+ Notifier.deliver_signup(@user)
+
+ # with delayed_job
+ Notifier.send_later :deliver_signup, @user
+ </pre>
+
+ If a method should always be run in the background, you can call @#handle_asynchronously@ after the method declaration:
+
+ <pre>
+ class Device
+   def deliver
+     # long running method
+   end
+   handle_asynchronously :deliver
+ end
+
+ device = Device.new
+ device.deliver
+ </pre>
+
+ h2. Running Jobs
+
+ @script/delayed_job@ can be used to manage a background process which will start working off jobs.
+
+ <pre>
+ $ RAILS_ENV=production script/delayed_job start
+ $ RAILS_ENV=production script/delayed_job stop
+
+ # Runs two workers in separate processes.
+ $ RAILS_ENV=production script/delayed_job -n 2 start
+ $ RAILS_ENV=production script/delayed_job stop
+ </pre>
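+
+ The bundled @script/delayed_job@ wrapper (@Delayed::Command@, included further down) also accepts @--min-priority@ and @--max-priority@ options to bound the priority of jobs a worker will run. For example, to work only jobs with a priority of at least 5 (an illustrative value):
+
+ <pre>
+ $ RAILS_ENV=production script/delayed_job --min-priority 5 start
+ </pre>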
+
+ Workers can be running on any computer, as long as they have access to the database and their clock is in sync. Keep in mind that each worker will check the database at least every 5 seconds.
+
+ You can also invoke @rake jobs:work@ which will start working off jobs. You can cancel the rake task with @CTRL-C@.
+
+ h2. Custom Jobs
+
+ Jobs are simple Ruby objects with a method called perform. Any object which responds to perform can be stuffed into the jobs table. Job objects are serialized to YAML so that they can later be resurrected by the job runner.
+
+ <pre>
+ class NewsletterJob < Struct.new(:text, :emails)
+   def perform
+     emails.each { |e| NewsletterMailer.deliver_text_to_email(text, e) }
+   end
+ end
+
+ Delayed::Job.enqueue NewsletterJob.new('lorem ipsum...', Customers.find(:all).collect(&:email))
+ </pre>
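+
+ @Delayed::Job.enqueue@ also takes an optional priority and an optional @run_at@ time after the job object (see the @enqueue@ method in the @Delayed::Job@ source included below). Jobs are worked in descending priority order, so a sketch of queuing the newsletter above with priority 5, delayed by one day, would be (illustrative values):
+
+ <pre>
+ Delayed::Job.enqueue NewsletterJob.new('lorem ipsum...', Customers.find(:all).collect(&:email)), 5, 1.day.from_now
+ </pre>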
+
+ h2. Gory Details
+
+ The library revolves around a delayed_jobs table which looks as follows:
+
+ <pre>
+ create_table :delayed_jobs, :force => true do |table|
+   table.integer :priority, :default => 0 # Allows some jobs to jump to the front of the queue
+   table.integer :attempts, :default => 0 # Provides for retries, but still fail eventually.
+   table.text :handler # YAML-encoded string of the object that will do work
+   table.text :last_error # reason for last failure (See Note below)
+   table.datetime :run_at # When to run. Could be Time.zone.now for immediately, or sometime in the future.
+   table.datetime :locked_at # Set when a client is working on this object
+   table.datetime :failed_at # Set when all retries have failed (actually, by default, the record is deleted instead)
+   table.string :locked_by # Who is working on this object (if locked)
+   table.timestamps
+ end
+ </pre>
+
+ On failure, the job is scheduled again in 5 seconds + N ** 4, where N is the number of retries.
+
+ The default @Job::max_attempts@ is 25. After this, the job is either deleted (default) or left in the database with "failed_at" set.
+ With the default of 25 attempts, the last retry will be 20 days later, with the last interval being almost 100 hours.
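+
+ To get a feel for that back-off, here is a small standalone Ruby sketch (not part of the gem) that evaluates the formula above for each of the 24 reschedules that happen before the 25th failed attempt removes the job:
+
+ <pre>
+ total = 0
+ (1..24).each do |n|
+   delay = n ** 4 + 5  # seconds until the next retry after the nth failure
+   total += delay
+   puts "failure %2d: retry in ~%.1f hours (cumulative ~%.1f days)" % [n, delay / 3600.0, total / 86400.0]
+ end
+ </pre>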
+
+ The default @Job::max_run_time@ is 4.hours. If your job takes longer than that, another computer could pick it up. It's up to you to
+ make sure your job doesn't exceed this time; set it to the longest time you think the job could take.
+
+ By default, delayed_job will delete failed jobs (and it always deletes successful jobs). If you want to keep failed jobs, set
+ @Delayed::Job.destroy_failed_jobs = false@. The failed jobs will be marked with a non-null failed_at.
+
+ Here is an example of changing job parameters in Rails:
+
+ <pre>
+ # config/initializers/delayed_job_config.rb
+ Delayed::Job.destroy_failed_jobs = false
+ silence_warnings do
+   Delayed::Worker::sleep_delay = 60
+   Delayed::Job::max_attempts = 3
+   Delayed::Job::max_run_time = 5.minutes
+ end
+ </pre>
+
+ h3. Cleaning up
+
+ You can invoke @rake jobs:clear@ to delete all jobs in the queue.
+
+ h2. Mailing List
+
+ Join us on the mailing list at http://groups.google.com/group/delayed_job
+
+ h2. How to contribute
+
+ If you find what looks like a bug:
+
+ # Check the GitHub issue tracker to see if anyone else has had the same issue.
+ http://github.com/collectiveidea/delayed_job/issues/
+ # If you don't see anything, create an issue with information on how to reproduce it.
+
+ If you want to contribute an enhancement or a fix:
+
+ # Fork the project on GitHub.
+ http://github.com/collectiveidea/delayed_job/
+ # Make your changes with tests.
+ # Commit the changes without making changes to the Rakefile, VERSION, or any other files that aren't related to your enhancement or fix.
+ # Send a pull request.
+
+ h3. Changes
+
+ * 1.7.0: Added failed_at column which can optionally be set after a certain amount of failed job attempts. By default failed job attempts are destroyed after about a month.
+
+ * 1.6.0: Renamed locked_until to locked_at. We now store when we start a given job instead of how long it will be locked by the worker. This allows us to get a reading on how long a job took to execute.
+
+ * 1.5.0: Job runners can now be run in parallel. Two new database columns are needed: locked_until and locked_by. This allows us to use pessimistic locking instead of relying on row level locks. This enables us to run as many worker processes as we need to speed up queue processing.
+
+ * 1.2.0: Added #send_later to Object for simpler job creation
+
+ * 1.0.0: Initial release
data/Rakefile ADDED
@@ -0,0 +1,36 @@
+ # -*- encoding: utf-8 -*-
+ begin
+   require 'jeweler'
+ rescue LoadError
+   puts "Jeweler not available. Install it with: sudo gem install technicalpickles-jeweler -s http://gems.github.com"
+   exit 1
+ end
+
+ Jeweler::Tasks.new do |s|
+   s.name = "xspond-delayed_job"
+   s.summary = "Database-backed asynchronous priority queue system -- Extracted from Shopify"
+   s.email = "tobi@leetsoft.com"
+   s.homepage = "http://github.com/xspond/delayed_job"
+   s.description = "Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background. It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks."
+   s.authors = ["Brandon Keepers", "Tobias Lütke", "David Genord II"]
+
+   s.has_rdoc = true
+   s.rdoc_options = ["--main", "README.textile", "--inline-source", "--line-numbers"]
+   s.extra_rdoc_files = ["README.textile"]
+
+   s.test_files = Dir['spec/**/*']
+ end
+
+ Jeweler::GemcutterTasks.new
+
+ require 'spec/rake/spectask'
+
+ task :default => :spec
+
+ desc 'Run the specs'
+ Spec::Rake::SpecTask.new(:spec) do |t|
+   t.libs << 'lib'
+   t.pattern = 'spec/**/*_spec.rb'
+   t.verbose = true
+ end
+
data/VERSION ADDED
@@ -0,0 +1 @@
+ 1.8.5
@@ -0,0 +1,14 @@
+ # an example Monit configuration file for delayed_job
+ # See: http://stackoverflow.com/questions/1226302/how-to-monitor-delayedjob-with-monit/1285611
+ #
+ # To use:
+ # 1. copy to /var/www/apps/{app_name}/shared/delayed_job.monitrc
+ # 2. replace {app_name} as appropriate
+ # 3. add this to your /etc/monit/monitrc
+ #
+ #   include /var/www/apps/{app_name}/shared/delayed_job.monitrc
+
+ check process delayed_job
+   with pidfile /var/www/apps/{app_name}/shared/pids/delayed_job.pid
+   start program = "RAILS_ENV=production /var/www/apps/{app_name}/current/script/delayed_job start"
+   stop program = "RAILS_ENV=production /var/www/apps/{app_name}/current/script/delayed_job stop"
@@ -0,0 +1,22 @@
+ class DelayedJobGenerator < Rails::Generator::Base
+   default_options :skip_migration => false
+
+   def manifest
+     record do |m|
+       m.template 'script', 'script/delayed_job', :chmod => 0755
+       unless options[:skip_migration]
+         m.migration_template "migration.rb", 'db/migrate',
+           :migration_file_name => "create_delayed_jobs"
+       end
+     end
+   end
+
+   protected
+
+   def add_options!(opt)
+     opt.separator ''
+     opt.separator 'Options:'
+     opt.on("--skip-migration", "Don't generate a migration") { |v| options[:skip_migration] = v }
+   end
+
+ end
@@ -0,0 +1,20 @@
+ class CreateDelayedJobs < ActiveRecord::Migration
+   def self.up
+     create_table :delayed_jobs, :force => true do |table|
+       table.integer :priority, :default => 0 # Allows some jobs to jump to the front of the queue
+       table.integer :attempts, :default => 0 # Provides for retries, but still fail eventually.
+       table.text :handler # YAML-encoded string of the object that will do work
+       table.text :last_error # reason for last failure (See Note below)
+       table.datetime :run_at # When to run. Could be Time.zone.now for immediately, or sometime in the future.
+       table.datetime :locked_at # Set when a client is working on this object
+       table.datetime :failed_at # Set when all retries have failed (actually, by default, the record is deleted instead)
+       table.string :locked_by # Who is working on this object (if locked)
+       table.timestamps
+     end
+
+   end
+
+   def self.down
+     drop_table :delayed_jobs
+   end
+ end
@@ -0,0 +1,5 @@
+ #!/usr/bin/env ruby
+
+ require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment'))
+ require 'delayed/command'
+ Delayed::Command.new(ARGV).daemonize
data/init.rb ADDED
@@ -0,0 +1 @@
+ require File.dirname(__FILE__) + '/lib/delayed_job'
@@ -0,0 +1,83 @@
+ require 'rubygems'
+ require 'daemons'
+ require 'optparse'
+
+ module Delayed
+   class Command
+     def initialize(args)
+       @files_to_reopen = []
+       @options = {:quiet => true, :worker_count => 1}
+
+       opts = OptionParser.new do |opts|
+         opts.banner = "Usage: #{File.basename($0)} [options] start|stop|restart|run"
+
+         opts.on('-h', '--help', 'Show this message') do
+           puts opts
+           exit 1
+         end
+         opts.on('-e', '--environment=NAME', 'Specifies the environment to run this delayed jobs under (test/development/production).') do |e|
+           STDERR.puts "The -e/--environment option has been deprecated and has no effect. Use RAILS_ENV and see http://github.com/collectiveidea/delayed_job/issues/#issue/7"
+         end
+         opts.on('--min-priority N', 'Minimum priority of jobs to run.') do |n|
+           @options[:min_priority] = n
+         end
+         opts.on('--max-priority N', 'Maximum priority of jobs to run.') do |n|
+           @options[:max_priority] = n
+         end
+         opts.on('-n', '--number_of_workers=workers', "Number of unique workers to spawn") do |worker_count|
+           begin
+             @options[:worker_count] = worker_count.to_i if worker_count.to_i > 0
+           rescue
+           end
+         end
+       end
+       @args = opts.parse!(args)
+     end
+
+     def daemonize
+       ObjectSpace.each_object(File) do |file|
+         @files_to_reopen << file unless file.closed?
+       end
+
+       process_name = 'delayed_job'
+       Daemons.run_proc(process_name, :dir => "#{RAILS_ROOT}/tmp/pids", :dir_mode => :normal, :ARGV => @args) do |*args|
+         run process_name
+       end
+     end
+
+     def logger
+       if defined?(Rails) && Rails.respond_to?(:logger)
+         Rails.logger
+       elsif defined?(RAILS_DEFAULT_LOGGER)
+         RAILS_DEFAULT_LOGGER
+       elsif defined?(Merb) && Merb.respond_to?(:logger)
+         Merb.logger
+       end
+     end
+
+     def run(worker_name = nil)
+       Dir.chdir(RAILS_ROOT)
+
+       # Re-open file handles
+       @files_to_reopen.each do |file|
+         begin
+           file.reopen File.join(RAILS_ROOT, 'log', 'delayed_job.log'), 'a+'
+           file.sync = true
+         rescue ::Exception
+         end
+       end
+
+       Delayed::Worker.logger = logger if logger
+       ActiveRecord::Base.connection.reconnect!
+
+       worker = Delayed::Worker.new(@options)
+       worker.name_prefix = "#{worker_name} "
+       worker.start
+     rescue => e
+       logger.fatal e if logger
+       STDERR.puts e.message
+       exit 1
+     end
+
+   end
+ end
@@ -0,0 +1,224 @@
+ require 'timeout'
+
+ module Delayed
+
+   class DeserializationError < StandardError
+   end
+
+   # A job object that is persisted to the database.
+   # Contains the work object as a YAML field.
+   class Job < ActiveRecord::Base
+     @@max_attempts = 25
+     @@max_run_time = 4.hours
+
+     cattr_accessor :max_attempts, :max_run_time
+
+     set_table_name :delayed_jobs
+
+     # By default failed jobs are destroyed after too many attempts.
+     # If you want to keep them around (perhaps to inspect the reason
+     # for the failure), set this to false.
+     cattr_accessor :destroy_failed_jobs
+     self.destroy_failed_jobs = true
+
+     named_scope :ready_to_run, lambda {|worker_name, max_run_time|
+       {:conditions => ['(run_at <= ? AND (locked_at IS NULL OR locked_at < ?) OR locked_by = ?) AND failed_at IS NULL', db_time_now, db_time_now - max_run_time, worker_name]}
+     }
+     named_scope :by_priority, :order => 'priority DESC, run_at ASC'
+
+     ParseObjectFromYaml = /\!ruby\/\w+\:([^\s]+)/
+
+     cattr_accessor :min_priority, :max_priority
+     self.min_priority = nil
+     self.max_priority = nil
+
+     # When a worker is exiting, make sure we don't have any locked jobs.
+     def self.clear_locks!(worker_name)
+       update_all("locked_by = null, locked_at = null", ["locked_by = ?", worker_name])
+     end
+
+     def failed?
+       failed_at
+     end
+     alias_method :failed, :failed?
+
+     def payload_object
+       @payload_object ||= deserialize(self['handler'])
+     end
+
+     def name
+       @name ||= begin
+         payload = payload_object
+         if payload.respond_to?(:display_name)
+           payload.display_name
+         else
+           payload.class.name
+         end
+       end
+     end
+
+     def payload_object=(object)
+       self['handler'] = object.to_yaml
+     end
+
+     # Reschedule the job in the future (when a job fails).
+     # Uses an exponential scale depending on the number of failed attempts.
+     def reschedule(message, backtrace = [], time = nil)
+       self.last_error = message + "\n" + backtrace.join("\n")
+
+       if (self.attempts += 1) < max_attempts
+         time ||= Job.db_time_now + (attempts ** 4) + 5
+
+         self.run_at = time
+         self.unlock
+         save!
+       else
+         logger.info "* [JOB] PERMANENTLY removing #{self.name} because of #{attempts} consecutive failures."
+         destroy_failed_jobs ? destroy : update_attribute(:failed_at, Delayed::Job.db_time_now)
+       end
+     end
+
+
+     # Try to lock and run job. Returns true/false (work done/work failed) or nil if job can't be locked.
+     def run_with_lock(max_run_time, worker_name)
+       logger.info "* [JOB] acquiring lock on #{name}"
+       if lock_exclusively!(max_run_time, worker_name)
+         run(max_run_time)
+       else
+         # We did not get the lock, some other worker process must have
+         logger.warn "* [JOB] failed to acquire exclusive lock for #{name}"
+         nil # no work done
+       end
+     end
+
+     # Try to run job. Returns true/false (work done/work failed)
+     def run(max_run_time)
+       runtime = Benchmark.realtime do
+         Timeout.timeout(max_run_time.to_i) { invoke_job }
+         destroy
+       end
+       # TODO: warn if runtime > max_run_time ?
+       logger.info "* [JOB] #{name} completed after %.4f" % runtime
+       return true # did work
+     rescue Exception => e
+       reschedule e.message, e.backtrace
+       log_exception(e)
+       return false # work failed
+     end
+
+     # Add a job to the queue
+     def self.enqueue(*args, &block)
+       object = block_given? ? EvaledJob.new(&block) : args.shift
+
+       unless object.respond_to?(:perform) || block_given?
+         raise ArgumentError, 'Cannot enqueue items which do not respond to perform'
+       end
+
+       priority = args.first || 0
+       run_at = args[1]
+
+       Job.create(:payload_object => object, :priority => priority.to_i, :run_at => run_at)
+     end
+
+     # Find a few candidate jobs to run (in case some immediately get locked by others).
+     def self.find_available(worker_name, limit = 5, max_run_time = max_run_time)
+       scope = self.ready_to_run(worker_name, max_run_time)
+       scope = scope.scoped(:conditions => ['priority >= ?', min_priority]) if min_priority
+       scope = scope.scoped(:conditions => ['priority <= ?', max_priority]) if max_priority
+
+       ActiveRecord::Base.silence do
+         scope.by_priority.all(:limit => limit)
+       end
+     end
+
+     # Lock this job for this worker.
+     # Returns true if we have the lock, false otherwise.
+     def lock_exclusively!(max_run_time, worker)
+       now = self.class.db_time_now
+       affected_rows = if locked_by != worker
+         # We don't own this job so we will update the locked_by name and the locked_at
+         self.class.update_all(["locked_at = ?, locked_by = ?", now, worker], ["id = ? and (locked_at is null or locked_at < ?) and (run_at <= ?)", id, (now - max_run_time.to_i), now])
+       else
+         # We already own this job, this may happen if the job queue crashes.
+         # Simply resume and update the locked_at
+         self.class.update_all(["locked_at = ?", now], ["id = ? and locked_by = ?", id, worker])
+       end
+       if affected_rows == 1
+         self.locked_at = now
+         self.locked_by = worker
+         return true
+       else
+         return false
+       end
+     end
+
+     # Unlock this job (note: not saved to DB)
+     def unlock
+       self.locked_at = nil
+       self.locked_by = nil
+     end
+
+     # This is a good hook if you need to report job processing errors in additional or different ways
+     def log_exception(error)
+       logger.error "* [JOB] #{name} failed with #{error.class.name}: #{error.message} - #{attempts} failed attempts"
+       logger.error(error)
+     end
+
+     # Moved into its own method so that new_relic can trace it.
+     def invoke_job
+       payload_object.perform
+     end
+
+     private
+
+     def deserialize(source)
+       handler = YAML.load(source) rescue nil
+
+       unless handler.respond_to?(:perform)
+         if handler.nil? && source =~ ParseObjectFromYaml
+           handler_class = $1
+         end
+         attempt_to_load(handler_class || handler.class)
+         handler = YAML.load(source)
+       end
+
+       return handler if handler.respond_to?(:perform)
+
+       raise DeserializationError,
+         'Job failed to load: Unknown handler. Try to manually require the appropriate file.'
+     rescue TypeError, LoadError, NameError => e
+       raise DeserializationError,
+         "Job failed to load: #{e.message}. Try to manually require the required file."
+     end
+
+     # Constantize the object so that ActiveSupport can attempt
+     # its auto loading magic. Will raise LoadError if not successful.
+     def attempt_to_load(klass)
+       klass.constantize
+     end
+
+     # Get the current time (GMT or local depending on DB)
+     # Note: This does not ping the DB to get the time, so all your clients
+     # must have synchronized clocks.
+     def self.db_time_now
+       (ActiveRecord::Base.default_timezone == :utc) ? Time.now.utc : Time.zone.now
+     end
+
+     protected
+
+     def before_save
+       self.run_at ||= self.class.db_time_now
+     end
+
+   end
+
+   class EvaledJob
+     def initialize
+       @job = yield
+     end
+
+     def perform
+       eval(@job)
+     end
+   end
+ end