jkreeftmeijer-delayed_job 0.1.0

@@ -0,0 +1 @@
+ *.gem
@@ -0,0 +1,20 @@
+ Copyright (c) 2005 Tobias Luetke
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
@@ -0,0 +1,110 @@
+ h1. Delayed::Job
+
+ Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background. This is a fork of Zachary Belzer's fork (which added MongoMapper support) of Tobias Lütke's DelayedJob. Its purpose is to add MongoMapper support.
+
+ It is a direct extraction from Shopify, where the job table is responsible for a multitude of core tasks. Amongst those tasks are:
+
+ * sending massive newsletters
+ * image resizing
+ * http downloads
+ * updating smart collections
+ * updating solr, our search server, after product changes
+ * batch imports
+ * spam checks
+
+ h2. Setup
+
+ The library revolves around a delayed_jobs table which looks as follows:
+
+   create_table :delayed_jobs, :force => true do |table|
+     table.integer  :priority, :default => 0 # Allows some jobs to jump to the front of the queue
+     table.integer  :attempts, :default => 0 # Provides for retries, but still fail eventually.
+     table.text     :handler                 # YAML-encoded string of the object that will do work
+     table.string   :last_error              # reason for last failure (See Note below)
+     table.datetime :run_at                  # When to run. Could be Time.now for immediately, or sometime in the future.
+     table.datetime :locked_at               # Set when a client is working on this object
+     table.datetime :failed_at               # Set when all retries have failed (actually, by default, the record is deleted instead)
+     table.string   :locked_by               # Who is working on this object (if locked)
+     table.timestamps
+   end
+
+ On failure, the job is scheduled again in 5 seconds + N ** 4, where N is the number of retries.
+
+ The default MAX_ATTEMPTS is 25. After this, the job is either deleted (default) or left in the database with "failed_at" set.
+ With the default of 25 attempts, the last retry will be 20 days later, with the last interval being almost 100 hours.
+
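+ To make that schedule concrete, here is the formula worked out in plain Ruby (an illustration only, not part of the gem), counting N from 0 for the first failure as the reschedule code does:
+
+ <pre><code>
+ delays = (0..24).map { |n| 5 + n ** 4 }   # seconds until the next retry
+ delays.last / 3600.0                      # => ~92 hours for the final interval
+ delays.inject(:+) / 86400.0               # => ~20 days from first failure to last retry
+ </code></pre>
+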
+ The default MAX_RUN_TIME is 4.hours. If your job takes longer than that, another computer could pick it up. It's up to you to
+ make sure your job doesn't exceed this time. You should set this to the longest time you think the job could take.
+
+ By default, it will delete failed jobs (and it always deletes successful jobs). If you want to keep failed jobs, set
+ Delayed::Job.destroy_failed_jobs = false. The failed jobs will be marked with a non-null failed_at.
+
+ Here is an example of changing job parameters in Rails:
+
+   # config/initializers/delayed_job_config.rb
+   Delayed::Job.destroy_failed_jobs = false
+   silence_warnings do
+     Delayed::Job.const_set("MAX_ATTEMPTS", 3)
+     Delayed::Job.const_set("MAX_RUN_TIME", 5.minutes)
+   end
+
+ Note: If your error messages are long, consider changing the last_error field to a :text instead of a :string (255 character limit).
+
+
+ h2. Usage
+
+ Jobs are simple Ruby objects with a method called perform. Any object which responds to perform can be stuffed into the jobs table.
+ Job objects are serialized to YAML so that they can later be resurrected by the job runner.
+
+   class NewsletterJob < Struct.new(:text, :emails)
+     def perform
+       emails.each { |e| NewsletterMailer.deliver_text_to_email(text, e) }
+     end
+   end
+
+   Delayed::Job.enqueue NewsletterJob.new('lorem ipsum...', Customers.find(:all).collect(&:email))
+
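+ enqueue also takes an optional priority and run_at argument (see Delayed::Job.enqueue further down in this change). As a sketch, the same job could be given a higher priority (jobs are worked off in priority DESC order) and held back for a few hours like this:
+
+ <pre><code>
+ Delayed::Job.enqueue NewsletterJob.new('lorem ipsum...', emails), 10, 3.hours.from_now
+ </code></pre>
+
+ Here emails simply stands in for whatever list of addresses you collected above.
+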
+ There is also a second way to get jobs in the queue: send_later.
+
+
+   BatchImporter.new(Shop.find(1)).send_later(:import_massive_csv, massive_csv)
+
+
+ This will simply create a Delayed::PerformableMethod job in the jobs table which serializes all the parameters you pass to it. There are some special smarts for ActiveRecord objects,
+ which are stored as their text representation and loaded from the database fresh when the job is actually run later.
+
+
+ h2. Running the jobs
+
+ You can invoke @rake jobs:work@ which will start working off jobs. You can cancel the rake task with @CTRL-C@.
+
+ You can also run jobs by writing a simple @script/job_runner@ and invoking it externally:
+
+ <pre><code>
+ #!/usr/bin/env ruby
+ require File.dirname(__FILE__) + '/../config/environment'
+
+ Delayed::Worker.new.start
+ </code></pre>
+
+ Workers can be running on any computer, as long as they have access to the database and their clock is in sync. You can even
+ run multiple workers per computer, but you must give each one a unique name. (TODO: put in an example; a possible sketch follows below.)
+ Keep in mind that each worker will check the database at least every 5 seconds.
+
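+ For example, a slightly extended runner script could name each worker explicitly. This is only a sketch: Delayed::Job.worker_name is the setting the gem exposes, while the WORKER_INDEX environment variable is just one assumed way of numbering your processes.
+
+ <pre><code>
+ #!/usr/bin/env ruby
+ require File.dirname(__FILE__) + '/../config/environment'
+
+ # Give this process a name that is unique per worker and stable across restarts.
+ Delayed::Job.worker_name = "host:#{Socket.gethostname} worker:#{ENV['WORKER_INDEX']}"
+ Delayed::Worker.new.start
+ </code></pre>
+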
+ Note: The rake task will exit if the database has any network connectivity problems.
+
+ h3. Cleaning up
+
+ You can invoke @rake jobs:clear@ to delete all jobs in the queue.
+
+ h3. Changes
+
+ * 1.7.0: Added failed_at column which can optionally be set after a certain amount of failed job attempts. By default failed job attempts are destroyed after about a month.
+
+ * 1.6.0: Renamed locked_until to locked_at. We now store when we start a given job instead of how long it will be locked by the worker. This allows us to get a reading on how long a job took to execute.
+
+ * 1.5.0: Job runners can now be run in parallel. Two new database columns are needed: locked_until and locked_by. This allows us to use pessimistic locking instead of relying on row level locks. This enables us to run as many worker processes as we need to speed up queue processing.
+
+ * 1.2.0: Added #send_later to Object for simpler job creation
+
+ * 1.0.0: Initial release
@@ -0,0 +1,35 @@
+ require 'rake'
+ require 'tasks/tasks'
+ require 'spec/rake/spectask'
+
+ task :default => :spec
+
+ Spec::Rake::SpecTask.new(:spec => ['spec:active_record', 'spec:mongo'])
+
+ namespace :spec do
+   desc "Run specs for active_record adapter"
+   Spec::Rake::SpecTask.new(:active_record) do |t|
+     t.spec_files = FileList['spec/setup/active_record.rb', 'spec/*_spec.rb']
+   end
+
+   desc "Run specs for mongo_mapper adapter"
+   Spec::Rake::SpecTask.new(:mongo) do |t|
+     t.spec_files = FileList['spec/setup/mongo.rb', 'spec/*_spec.rb']
+   end
+ end
+
+ begin
+   require 'jeweler'
+   Jeweler::Tasks.new do |gemspec|
+     gemspec.name = "jkreeftmeijer-delayed_job"
+     gemspec.summary = "Database backed asynchronous priority queue for MongoMapper and ActiveRecord"
+     gemspec.description = "A fork of Zachary Belzer's fork (which added MongoMapper support) of Tobias Lütke's DelayedJob. Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background."
+     gemspec.email = "jeff@kreeftmeijer.nl"
+     gemspec.homepage = "http://github.com/jeffkreeftmeijer/delayed_job"
+     gemspec.authors = ["Tobias Lütke", "Zachary Belzer", "Jeff Kreeftmeijer"]
+   end
+ rescue LoadError
+   puts "Jeweler not available. Install it with: sudo gem install jeweler"
+ end
+
+ Jeweler::GemcutterTasks.new
data/VERSION ADDED
@@ -0,0 +1 @@
+ 0.1.0
@@ -0,0 +1,41 @@
+ #version = File.read('README.textile').scan(/^\*\s+([\d\.]+)/).flatten
+
+ Gem::Specification.new do |s|
+   s.name = "delayed_job"
+   s.version = "1.7.0"
+   s.date = "2008-11-28"
+   s.summary = "Database-backed asynchronous priority queue system -- Extracted from Shopify"
+   s.email = "tobi@leetsoft.com"
+   s.homepage = "http://github.com/tobi/delayed_job/tree/master"
+   s.description = "Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background. It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks."
+   s.authors = ["Tobias Lütke"]
+
+   # s.bindir = "bin"
+   # s.executables = ["delayed_job"]
+   # s.default_executable = "delayed_job"
+
+   s.has_rdoc = false
+   s.rdoc_options = ["--main", "README.textile"]
+   s.extra_rdoc_files = ["README.textile"]
+
+   # run git ls-files to get an updated list
+   s.files = %w[
+     MIT-LICENSE
+     README.textile
+     delayed_job.gemspec
+     init.rb
+     lib/delayed/job.rb
+     lib/delayed/message_sending.rb
+     lib/delayed/performable_method.rb
+     lib/delayed/worker.rb
+     lib/delayed_job.rb
+     tasks/jobs.rake
+     tasks/tasks.rb
+   ]
+   s.test_files = %w[
+     spec/database.rb
+     spec/delayed_method_spec.rb
+     spec/job_spec.rb
+     spec/story_spec.rb
+   ]
+ end
data/init.rb ADDED
@@ -0,0 +1 @@
+ require File.dirname(__FILE__) + '/lib/delayed_job'
@@ -0,0 +1,64 @@
+ # Generated by jeweler
+ # DO NOT EDIT THIS FILE
+ # Instead, edit Jeweler::Tasks in Rakefile, and run `rake gemspec`
+ # -*- encoding: utf-8 -*-
+
+ Gem::Specification.new do |s|
+   s.name = %q{jkreeftmeijer-delayed_job}
+   s.version = "0.1.0"
+
+   s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
+   s.authors = ["Tobias L\303\274tke", "Zachary Belzer", "Jeff Kreeftmeijer"]
+   s.date = %q{2009-12-19}
+   s.description = %q{A fork of Zachary Belzer's fork (which added MongoMapper support) of Tobias Lütke's DelayedJob. Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background.}
+   s.email = %q{jeff@kreeftmeijer.nl}
+   s.extra_rdoc_files = [
+     "README.textile"
+   ]
+   s.files = [
+     ".gitignore",
+     "MIT-LICENSE",
+     "README.textile",
+     "Rakefile",
+     "VERSION",
+     "delayed_job.gemspec",
+     "init.rb",
+     "jkreeftmeijer-delayed_job.gemspec",
+     "lib/delayed/job.rb",
+     "lib/delayed/job/active_record_job.rb",
+     "lib/delayed/job/mongo_job.rb",
+     "lib/delayed/message_sending.rb",
+     "lib/delayed/performable_method.rb",
+     "lib/delayed/worker.rb",
+     "lib/delayed_job.rb",
+     "spec/delayed_method_spec.rb",
+     "spec/job_spec.rb",
+     "spec/setup/active_record.rb",
+     "spec/setup/mongo.rb",
+     "spec/story_spec.rb",
+     "tasks/jobs.rake",
+     "tasks/tasks.rb"
+   ]
+   s.homepage = %q{http://github.com/jeffkreeftmeijer/delayed_job}
+   s.rdoc_options = ["--charset=UTF-8"]
+   s.require_paths = ["lib"]
+   s.rubygems_version = %q{1.3.5}
+   s.summary = %q{Database backed asynchronous priority queue for MongoMapper and ActiveRecord}
+   s.test_files = [
+     "spec/delayed_method_spec.rb",
+     "spec/job_spec.rb",
+     "spec/setup/active_record.rb",
+     "spec/setup/mongo.rb",
+     "spec/story_spec.rb"
+   ]
+
+   if s.respond_to? :specification_version then
+     current_version = Gem::Specification::CURRENT_SPECIFICATION_VERSION
+     s.specification_version = 3
+
+     if Gem::Version.new(Gem::RubyGemsVersion) >= Gem::Version.new('1.2.0') then
+     else
+     end
+   else
+   end
+ end
@@ -0,0 +1,146 @@
+ module Delayed
+
+   class DeserializationError < StandardError
+   end
+
+   class Job
+     MAX_ATTEMPTS = 25
+     MAX_RUN_TIME = 4.hours
+
+     # By default failed jobs are destroyed after too many attempts.
+     # If you want to keep them around (perhaps to inspect the reason
+     # for the failure), set this to false.
+     cattr_accessor :destroy_failed_jobs
+     self.destroy_failed_jobs = true
+
+     # Every worker has a unique name which by default is the pid of the process.
+     # There are some advantages to overriding this with something which survives worker restarts:
+     # Workers can safely resume working on tasks which are locked by themselves. The worker will assume that it crashed before.
+     cattr_accessor :worker_name
+     self.worker_name = "host:#{Socket.gethostname} pid:#{Process.pid}" rescue "pid:#{Process.pid}"
+
+     ParseObjectFromYaml = /\!ruby\/\w+\:([^\s]+)/
+
+     cattr_accessor :min_priority, :max_priority
+     self.min_priority = nil
+     self.max_priority = nil
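+
+     # Added illustration (not in the original file): a worker dedicated to urgent work
+     # could set, for example in its boot script,
+     #   Delayed::Job.min_priority = 10
+     # and find_available will then only return jobs whose priority is >= 10.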
+
+     # When a worker is exiting, make sure we don't have any locked jobs.
+     def self.clear_locks!
+       update_all("locked_by = null, locked_at = null", ["locked_by = ?", worker_name])
+     end
+
+     def failed?
+       failed_at
+     end
+     alias_method :failed, :failed?
+
+     def payload_object
+       @payload_object ||= deserialize(self['handler'])
+     end
+
+     def name
+       @name ||= begin
+         payload = payload_object
+         if payload.respond_to?(:display_name)
+           payload.display_name
+         else
+           payload.class.name
+         end
+       end
+     end
+
+     def payload_object=(object)
+       self['handler'] = object.to_yaml
+     end
+
+     # Run the next job we can get an exclusive lock on.
+     # If no jobs are left we return nil
+     def self.reserve_and_run_one_job(max_run_time = MAX_RUN_TIME)
+
+       # We get up to 5 jobs from the db. In case we cannot get exclusive access to a job we try the next.
+       # This leads to a more even distribution of jobs across the worker processes.
+       find_available(5, max_run_time).each do |job|
+         t = job.run_with_lock(max_run_time, worker_name)
+         return t unless t == nil # return if we did work (good or bad)
+       end
+
+       nil # we didn't do any work, all 5 were not lockable
+     end
+
+     # Unlock this job (note: not saved to DB)
+     def unlock
+       self.locked_at = nil
+       self.locked_by = nil
+     end
+
+     # This is a good hook if you need to report job processing errors in additional or different ways
+     def log_exception(error)
+       logger.error "* [JOB] #{name} failed with #{error.class.name}: #{error.message} - #{attempts} failed attempts"
+       logger.error(error)
+     end
+
+     # Do num jobs and return stats on success/failure.
+     # Exit early if interrupted.
+     def self.work_off(num = 100)
+       success, failure = 0, 0
+
+       num.times do
+         case self.reserve_and_run_one_job
+         when true
+           success += 1
+         when false
+           failure += 1
+         else
+           break # leave if no work could be done
+         end
+         break if $exit # leave if we're exiting
+       end
+
+       return [success, failure]
+     end
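+
+     # Added illustration (not in the original file): calling the method above as
+     #   success, failure = Delayed::Job.work_off(50)
+     # processes at most 50 jobs in one pass and returns the success/failure counts.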
+
+     # Moved into its own method so that new_relic can trace it.
+     def invoke_job
+       payload_object.perform
+     end
+
+     private
+
+     def deserialize(source)
+       handler = YAML.load(source) rescue nil
+
+       unless handler.respond_to?(:perform)
+         if handler.nil? && source =~ ParseObjectFromYaml
+           handler_class = $1
+         end
+         attempt_to_load(handler_class || handler.class)
+         handler = YAML.load(source)
+       end
+
+       return handler if handler.respond_to?(:perform)
+
+       raise DeserializationError,
+         'Job failed to load: Unknown handler. Try to manually require the appropriate file.'
+     rescue TypeError, LoadError, NameError => e
+       raise DeserializationError,
+         "Job failed to load: #{e.message}. Try to manually require the required file."
+     end
+
+     # Constantize the object so that ActiveSupport can attempt
+     # its auto loading magic. Will raise LoadError if not successful.
+     def attempt_to_load(klass)
+       klass.constantize
+     end
+   end
+
+   class EvaledJob
+     def initialize
+       @job = yield
+     end
+
+     def perform
+       eval(@job)
+     end
+   end
+ end
@@ -0,0 +1,151 @@
+ module Delayed
+
+   # A job object that is persisted to the database.
+   # Contains the work object as a YAML field.
+   class Job < ActiveRecord::Base
+     set_table_name :delayed_jobs
+
+     NextTaskSQL = '(run_at <= ? AND (locked_at IS NULL OR locked_at < ?) OR (locked_by = ?)) AND failed_at IS NULL'
+     NextTaskOrder = 'priority DESC, run_at ASC'
+
+     # When a worker is exiting, make sure we don't have any locked jobs.
+     def self.clear_locks!
+       update_all("locked_by = null, locked_at = null", ["locked_by = ?", worker_name])
+     end
+
+     # Reschedule the job in the future (when a job fails).
+     # Uses an exponential scale depending on the number of failed attempts.
+     def reschedule(message, backtrace = [], time = nil)
+       if self.attempts < MAX_ATTEMPTS
+         time ||= Job.db_time_now + (attempts ** 4) + 5
+
+         self.attempts += 1
+         self.run_at = time
+         self.last_error = message + "\n" + backtrace.join("\n")
+         self.unlock
+         save!
+       else
+         logger.info "* [JOB] PERMANENTLY removing #{self.name} because of #{attempts} consecutive failures."
+         destroy_failed_jobs ? destroy : update_attribute(:failed_at, Delayed::Job.db_time_now)
+       end
+     end
+
+     # Try to run one job. Returns true/false (work done/work failed) or nil if job can't be locked.
+     def run_with_lock(max_run_time, worker_name)
+       logger.info "* [JOB] acquiring lock on #{name}"
+       unless lock_exclusively!(max_run_time, worker_name)
+         # We did not get the lock, some other worker process must have
+         logger.warn "* [JOB] failed to acquire exclusive lock for #{name}"
+         return nil # no work done
+       end
+
+       begin
+         runtime = Benchmark.realtime do
+           invoke_job # TODO: raise error if takes longer than max_run_time
+           destroy
+         end
+         # TODO: warn if runtime > max_run_time ?
+         logger.info "* [JOB] #{name} completed after %.4f" % runtime
+         return true # did work
+       rescue Exception => e
+         reschedule e.message, e.backtrace
+         log_exception(e)
+         return false # work failed
+       end
+     end
+
+     # Add a job to the queue
+     def self.enqueue(*args, &block)
+       object = block_given? ? EvaledJob.new(&block) : args.shift
+
+       unless object.respond_to?(:perform) || block_given?
+         raise ArgumentError, 'Cannot enqueue items which do not respond to perform'
+       end
+
+       priority = args.first || 0
+       run_at = args[1]
+
+       Job.create(:payload_object => object, :priority => priority.to_i, :run_at => run_at)
+     end
+
+     # Find a few candidate jobs to run (in case some immediately get locked by others).
+     # Return in random order to prevent everyone trying to do the same head job at once.
+     def self.find_available(limit = 5, max_run_time = MAX_RUN_TIME)
+
+       time_now = db_time_now
+
+       sql = NextTaskSQL.dup
+
+       conditions = [time_now, time_now - max_run_time, worker_name]
+
+       if self.min_priority
+         sql << ' AND (priority >= ?)'
+         conditions << min_priority
+       end
+
+       if self.max_priority
+         sql << ' AND (priority <= ?)'
+         conditions << max_priority
+       end
+
+       conditions.unshift(sql)
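+       # Added illustration (not in the original file): with min_priority = 5 and
+       # max_priority = 10 set, conditions now looks roughly like
+       #   ["... OR (locked_by = ?)) AND failed_at IS NULL AND (priority >= ?) AND (priority <= ?)",
+       #    time_now, time_now - max_run_time, worker_name, 5, 10]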
92
+
93
+ records = ActiveRecord::Base.silence do
94
+ find(:all, :conditions => conditions, :order => NextTaskOrder, :limit => limit)
95
+ end
96
+
97
+ records.sort_by { rand() }
98
+ end
+
+     # Run the next job we can get an exclusive lock on.
+     # If no jobs are left we return nil
+     def self.reserve_and_run_one_job(max_run_time = MAX_RUN_TIME)
+
+       # We get up to 5 jobs from the db. In case we cannot get exclusive access to a job we try the next.
+       # This leads to a more even distribution of jobs across the worker processes.
+       find_available(5, max_run_time).each do |job|
+         t = job.run_with_lock(max_run_time, worker_name)
+         return t unless t == nil # return if we did work (good or bad)
+       end
+
+       nil # we didn't do any work, all 5 were not lockable
+     end
+
+     # Lock this job for this worker.
+     # Returns true if we have the lock, false otherwise.
+     def lock_exclusively!(max_run_time, worker = worker_name)
+       now = self.class.db_time_now
+       affected_rows = if locked_by != worker
+         # We don't own this job so we will update the locked_by name and the locked_at
+         self.class.update_all(["locked_at = ?, locked_by = ?", now, worker], ["id = ? and (locked_at is null or locked_at < ?)", id, (now - max_run_time.to_i)])
+       else
+         # We already own this job, this may happen if the job queue crashes.
+         # Simply resume and update the locked_at
+         self.class.update_all(["locked_at = ?", now], ["id = ? and locked_by = ?", id, worker])
+       end
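+       # Added commentary: the first branch is a single atomic UPDATE whose WHERE clause
+       # re-checks locked_at, so when several workers race for the same job only one of them
+       # sees an affected row count of 1 and wins the lock below.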
+       if affected_rows == 1
+         self.locked_at = now
+         self.locked_by = worker
+         return true
+       else
+         return false
+       end
+     end
+
+     private
+
+     # Get the current time (GMT or local depending on DB)
+     # Note: This does not ping the DB to get the time, so all your clients
+     # must have synchronized clocks.
+     def self.db_time_now
+       (ActiveRecord::Base.default_timezone == :utc) ? Time.now.utc : Time.zone.now
+     end
+
+     protected
+
+     def before_save
+       self.run_at ||= self.class.db_time_now
+     end
+
+   end
+ end