delayed_job_on_steroids 1.7.5 → 2.0.0

data/.gitignore CHANGED
@@ -1,2 +1,3 @@
1
1
  pkg/
2
2
  rdoc/
3
+ coverage/
data/README.markdown ADDED
@@ -0,0 +1,124 @@
1
+ h1. Delayed::Job (on steroids)
2
+
3
+ delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background.
4
+ Amongst those tasks are:
5
+
6
+ * sending massive newsletters
7
+ * image resizing
8
+ * http downloads
9
+ * updating smart collections
10
+ * updating solr
11
+ * batch imports
12
+ * spam checks
13
+
14
+
15
+ h2. Setup
16
+
17
+ The library revolves around a `delayed_jobs` table which can be created by running:
18
+
19
+ script/generate delayed_job_migration
20
+
21
+ The created table looks as follows:
22
+
23
+ create_table :delayed_jobs, :force => true do |table|
24
+ table.integer :priority, :null => false, :default => 0 # Allows some jobs to jump to the front of the queue.
25
+ table.integer :attempts, :null => false, :default => 0 # Provides for retries, but still fail eventually.
26
+ table.text :handler, :null => false # YAML-encoded string of the object that will do work.
27
+ table.string :job_type, :null => false # Class name of the job object, for type-specific workers.
28
+ table.string :job_tag # Helps to locate this job among others of the same type in your application.
29
+ table.string :last_error # Reason for last failure.
30
+ table.datetime :run_at, :null => false # When to run. Could be Job.db_time_now or some time in the future.
31
+ table.datetime :locked_at # Set when a client is working on this object.
32
+ table.string :locked_by # Who is working on this object (if locked).
33
+ table.datetime :failed_at # Set when all retries have failed (actually, by default, the record is deleted instead).
34
+ table.timestamps
35
+ end
36
+
37
+ On failure, the job is rescheduled to run 5 + N ** 4 seconds later, where N is the number of retries so far.
38
+
39
+ The default `MAX_ATTEMPTS` is `25`. After this, the job is either deleted (the default) or left in the database with `failed_at` set.
40
+ With the default of 25 attempts, the last retry will be 20 days later, with the last interval being almost 100 hours.
41
+
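As an illustration only (not part of the gem), the schedule implied by that formula can be printed with a few lines of Ruby:

    # Rough sketch of the default retry schedule (assumes the 5 + N**4 formula above).
    (1..25).each do |n|
      delay = n ** 4 + 5
      puts "attempt %2d: retried after %8d seconds (~%.1f hours)" % [n, delay, delay / 3600.0]
    end

The last interval (25 ** 4 seconds, roughly 108 hours) is what makes the final retry land about 20 days after the first failure.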
42
+ The default `MAX_RUN_TIME` is `4.hours`. If your job takes longer than that, another computer could pick it up. It's up to you to
43
+ make sure your job doesn't exceed this time. You should set this to the longest time you think the job could take.
44
+
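In 2.0 this limit is actually enforced during execution (see `run_with_lock` later in this diff); simplified, the worker wraps the job body in a timeout:

    # Simplified from Job#run_with_lock in this fork: the job is aborted if it runs too long.
    require 'timeout'
    Timeout.timeout(Delayed::Job::MAX_RUN_TIME.to_i) { job.invoke_job }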
45
+ By default, it will delete failed jobs (and it always deletes successful jobs). If you want to keep failed jobs, set `Delayed::Worker.destroy_failed_jobs = false`. The failed jobs will be marked with non-null `failed_at`.
46
+
47
+ Here is an example of changing job parameters in Rails:
48
+
49
+ # config/initializers/delayed_job_config.rb
50
+ Delayed::Worker.destroy_failed_jobs = false
51
+ silence_warnings do
52
+ Delayed::Job.const_set("MAX_ATTEMPTS", 3)
53
+ Delayed::Job.const_set("MAX_RUN_TIME", 5.minutes)
54
+ end
55
+
56
+
57
+ h2. Usage
58
+
59
+ Jobs are simple Ruby objects with a method called `perform`. Any object which responds to `perform` can be stuffed into the jobs table.
60
+ Job objects are serialized to YAML so that they can later be resurrected by the job runner.
61
+
62
+ class NewsletterJob < Struct.new(:text, :emails)
63
+ def perform
64
+ emails.each { |e| NewsletterMailer.deliver_text_to_email(text, e) }
65
+ end
66
+ end
67
+
68
+ Delayed::Job.enqueue(NewsletterJob.new('lorem ipsum...', Customers.find(:all).collect(&:email)))
69
+
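`enqueue` also accepts an optional priority and run-at time (see `Job.enqueue` further down in this diff); a minimal sketch, assuming `text` and `emails` are defined:

    # Lower priority values run earlier in this fork; the third argument schedules the job.
    Delayed::Job.enqueue(NewsletterJob.new(text, emails), 0, 3.hours.from_now)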
70
+ There is also a second way to get jobs in the queue: send_later.
71
+
72
+ BatchImporter.new(Shop.find(1)).send_later(:import_massive_csv, massive_csv)
73
+
74
+ This will simply create a `Delayed::PerformableMethod` job in the jobs table, which serializes all the parameters you pass to it. There are some special smarts for Active Record objects, which are stored as their text representation and loaded fresh from the database when the job is actually run later.
75
+
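For illustration, with a made-up model and method name (the 2.0 changelog below also mentions `send_at` for scheduling a call in the future):

    # Any object that responds to the target method can be deferred this way.
    device = Device.find(device_id)              # hypothetical model, for illustration only
    device.send_later(:deliver_firmware_update)  # runs as soon as a worker picks it up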
76
+
77
+ h3. Running the jobs
78
+
79
+ Run `script/generate delayed_job` to add `script/delayed_job`. This script can then be used to manage a process which will start working off jobs.
80
+
81
+ $ ruby script/delayed_job -h
82
+
83
+ Workers can run on any computer, as long as they have access to the database and their clocks are in sync. You can even
84
+ run multiple workers per computer, but you must give each one a unique name (`script/delayed_job` will do this for you).
85
+ Keep in mind that each worker will check the database at least every 5 seconds.
86
+
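Based on the options parsed by `Delayed::Command` (shown later in this diff), a worker pool might be started like this (the values are examples only; `-n` greater than 1 implies daemon mode):

    $ ruby script/delayed_job -e production -n 4 --max-priority=10 --job-types=NewsletterJob,BatchImporter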
87
+
88
+ h2. About this fork
89
+
90
+ This fork was created to introduce new features to delayed_job while remaining almost fully compatible with it.
91
+
92
+
93
+ h3. Incompatibilities with tobi's delayed_job
94
+
95
+ * Database schema:
96
+ * `last_error` column's type changed from string to text;
97
+ * some columns are NOT NULL now.
98
+ * Inverted the meaning of the `priority` field: a job with a lower priority value will be executed earlier (see the sketch after this list). See http://www.elevatedcode.com/articles/2009/11/04/speeding-up-delayed-job/ for background.
99
+
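A minimal sketch of what the inverted priority means in practice (the job objects are placeholders); workers order candidates by `priority ASC, run_at ASC`:

    Delayed::Job.enqueue(urgent_job,   0)    # lower value: picked up first
    Delayed::Job.enqueue(deferred_job, 10)   # higher value: picked up later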
100
+
101
+ h3. Changes
102
+
103
+ * 2.0:
104
+ * Added `script/delayed_job`: runs as a daemon, can spawn several workers concurrently, and supports minimum and maximum priority, job types, a custom logger, etc.
105
+ * Added rake tasks: `jobs:clear:all`, `jobs:clear:failed`, `jobs:stats`.
106
+ * Added timeout for job execution.
107
+ * Added `send_at` method for queueing jobs in the future.
108
+ * Consumes less memory with Ruby Enterprise Edition.
109
+
110
+ * 1.7.5:
111
+ * Added the ability to run only specific types of jobs.
112
+
113
+
114
+ h3. Original changelog
115
+
116
+ * 1.7.0: Added `failed_at` column which can optionally be set after a certain number of failed job attempts. By default failed job attempts are destroyed after about a month.
117
+
118
+ * 1.6.0: Renamed `locked_until` to `locked_at`. We now store when we start a given job instead of how long it will be locked by the worker. This allows us to get a reading on how long a job took to execute.
119
+
120
+ * 1.5.0: Job runners can now be run in parallel. Two new database columns are needed: `locked_until` and `locked_by`. This allows us to use pessimistic locking instead of relying on row level locks. This enables us to run as many worker processes as we need to speed up queue processing.
121
+
122
+ * 1.2.0: Added `#send_later` to Object for simpler job creation
123
+
124
+ * 1.0.0: Initial release
data/Rakefile CHANGED
@@ -1,3 +1,5 @@
1
+ # encoding: utf-8
2
+
1
3
  require 'rubygems'
2
4
  require 'rake'
3
5
 
@@ -19,15 +21,12 @@ rescue LoadError
19
21
  end
20
22
 
21
23
  require 'spec/rake/spectask'
22
- Spec::Rake::SpecTask.new(:spec) do |spec|
23
- spec.libs << 'lib' << 'spec'
24
- spec.spec_files = FileList['spec/**/*_spec.rb']
25
- end
24
+ Spec::Rake::SpecTask.new(:spec)
26
25
 
27
26
  Spec::Rake::SpecTask.new(:rcov) do |spec|
28
- spec.libs << 'lib' << 'spec'
29
- spec.pattern = 'spec/**/*_spec.rb'
30
27
  spec.rcov = true
28
+ spec.rcov_opts = ['--exclude', 'gems']
29
+ spec.verbose = true
31
30
  end
32
31
 
33
32
  task :spec => :check_dependencies
data/VERSION CHANGED
@@ -1 +1 @@
1
- 1.7.5
1
+ 2.0.0
@@ -5,37 +5,42 @@
5
5
 
6
6
  Gem::Specification.new do |s|
7
7
  s.name = %q{delayed_job_on_steroids}
8
- s.version = "1.7.5"
8
+ s.version = "2.0.0"
9
9
 
10
10
  s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
11
11
  s.authors = ["Tobias L\303\274tke", "Aleksey Palazhchenko"]
12
- s.date = %q{2010-03-15}
12
+ s.date = %q{2010-03-24}
13
13
  s.description = %q{Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background.}
14
14
  s.email = %q{aleksey.palazhchenko@gmail.com}
15
15
  s.extra_rdoc_files = [
16
- "README.textile"
16
+ "README.markdown"
17
17
  ]
18
18
  s.files = [
19
19
  ".gitignore",
20
20
  "MIT-LICENSE",
21
- "README.textile",
21
+ "README.markdown",
22
22
  "Rakefile",
23
23
  "VERSION",
24
24
  "delayed_job_on_steroids.gemspec",
25
+ "generators/delayed_job/delayed_job_generator.rb",
26
+ "generators/delayed_job/templates/script",
25
27
  "generators/delayed_job_migration/delayed_job_migration_generator.rb",
26
28
  "generators/delayed_job_migration/templates/migration.rb",
27
29
  "init.rb",
28
- "lib/delayed/job.rb",
29
- "lib/delayed/message_sending.rb",
30
- "lib/delayed/performable_method.rb",
31
- "lib/delayed/worker.rb",
32
- "lib/delayed_job.rb",
30
+ "lib/delayed_job_on_steroids.rb",
31
+ "lib/delayed_on_steroids/command.rb",
32
+ "lib/delayed_on_steroids/job.rb",
33
+ "lib/delayed_on_steroids/job_deprecations.rb",
34
+ "lib/delayed_on_steroids/message_sending.rb",
35
+ "lib/delayed_on_steroids/performable_method.rb",
36
+ "lib/delayed_on_steroids/tasks.rb",
37
+ "lib/delayed_on_steroids/worker.rb",
33
38
  "spec/database.rb",
34
39
  "spec/delayed_method_spec.rb",
35
40
  "spec/job_spec.rb",
36
41
  "spec/story_spec.rb",
37
- "tasks/jobs.rake",
38
- "tasks/tasks.rb"
42
+ "spec/worker_spec.rb",
43
+ "tasks/jobs.rake"
39
44
  ]
40
45
  s.homepage = %q{http://github.com/AlekSi/delayed_job_on_steroids}
41
46
  s.rdoc_options = ["--charset=UTF-8"]
@@ -46,7 +51,8 @@ Gem::Specification.new do |s|
46
51
  "spec/delayed_method_spec.rb",
47
52
  "spec/job_spec.rb",
48
53
  "spec/story_spec.rb",
49
- "spec/database.rb"
54
+ "spec/database.rb",
55
+ "spec/worker_spec.rb"
50
56
  ]
51
57
 
52
58
  if s.respond_to? :specification_version then
@@ -0,0 +1,9 @@
1
+ class DelayedJobGenerator < Rails::Generator::Base
2
+
3
+ def manifest
4
+ record do |m|
5
+ m.template 'script', 'script/delayed_job', :chmod => 0755
6
+ end
7
+ end
8
+
9
+ end
@@ -0,0 +1,5 @@
1
+ #!/usr/bin/env ruby
2
+
3
+ require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment'))
4
+ require 'delayed_on_steroids/command'
5
+ Delayed::Command.new.run
@@ -1,22 +1,26 @@
1
1
  class CreateDelayedJobs < ActiveRecord::Migration
2
2
  def self.up
3
- create_table :delayed_jobs, :force => true do |t|
4
- t.integer :priority, :default => 0 # Allows some jobs to jump to the front of the queue
5
- t.integer :attempts, :default => 0 # Provides for retries, but still fail eventually.
6
- t.text :handler # YAML-encoded string of the object that will do work
7
- t.string :job_type # Class name of the job object, for type-specific workers
8
- t.string :last_error # reason for last failure (See Note below)
9
- t.datetime :run_at # When to run. Could be Time.now for immediately, or sometime in the future.
10
- t.datetime :locked_at # Set when a client is working on this object
11
- t.datetime :failed_at # Set when all retries have failed (actually, by default, the record is deleted instead)
12
- t.string :locked_by # Who is working on this object (if locked)
13
-
14
- t.timestamps
3
+ create_table :delayed_jobs, :force => true do |table|
4
+ table.integer :priority, :null => false, :default => 0 # Allows some jobs to jump to the front of the queue.
5
+ table.integer :attempts, :null => false, :default => 0 # Provides for retries, but still fail eventually.
6
+ table.text :handler, :null => false # YAML-encoded string of the object that will do work.
7
+ table.string :job_type, :null => false # Class name of the job object, for type-specific workers.
8
+ table.string :job_tag # Helps to locate this job among others of the same type in your application.
9
+ table.string :last_error # Reason for last failure.
10
+ table.datetime :run_at, :null => false # When to run. Could be Job.db_time_now for immediately, or sometime in the future.
11
+ table.datetime :locked_at # Set when a client is working on this object.
12
+ table.string :locked_by # Who is working on this object (if locked).
13
+ table.datetime :failed_at # Set when all retries have failed (actually, by default, the record is deleted instead).
14
+ table.timestamps
15
15
  end
16
16
 
17
- add_index :delayed_jobs, :locked_by
17
+ add_index :delayed_jobs, [:priority, :run_at]
18
18
  add_index :delayed_jobs, :job_type
19
- add_index :delayed_jobs, :priority
19
+ add_index :delayed_jobs, :job_tag
20
+ add_index :delayed_jobs, :run_at
21
+ add_index :delayed_jobs, :locked_at
22
+ add_index :delayed_jobs, :locked_by
23
+ add_index :delayed_jobs, :failed_at
20
24
  end
21
25
 
22
26
  def self.down
data/init.rb CHANGED
@@ -1 +1 @@
1
- require File.dirname(__FILE__) + '/lib/delayed_job'
1
+ require File.dirname(__FILE__) + '/lib/delayed_job_on_steroids'
@@ -0,0 +1,10 @@
1
+ autoload :ActiveRecord, 'active_record'
2
+
3
+ require File.dirname(__FILE__) + '/delayed_on_steroids/message_sending'
4
+ require File.dirname(__FILE__) + '/delayed_on_steroids/performable_method'
5
+ require File.dirname(__FILE__) + '/delayed_on_steroids/job_deprecations'
6
+ require File.dirname(__FILE__) + '/delayed_on_steroids/job'
7
+ require File.dirname(__FILE__) + '/delayed_on_steroids/worker'
8
+
9
+ Object.send(:include, Delayed::MessageSending)
10
+ Module.send(:include, Delayed::MessageSending::ClassMethods)
@@ -0,0 +1,104 @@
1
+ require 'optparse'
2
+
3
+ module Delayed
4
+
5
+ # Used by script/delayed_job: parses options, sets logger, invokes Worker.
6
+ class Command
7
+
8
+ def initialize
9
+ @worker_count = 1
10
+ @run_as_daemon = false
11
+
12
+ ARGV.clone.options do |opts|
13
+ opts.separator "Options:"
14
+ opts.on('--worker-name=name', String, 'Worker name. Default is auto-generated.') { |n| Delayed::Worker.name = n }
15
+ opts.on('--min-priority=number', Integer, 'Minimum priority of jobs to run.') { |n| Delayed::Worker.min_priority = n }
16
+ opts.on('--max-priority=number', Integer, 'Maximum priority of jobs to run.') { |n| Delayed::Worker.max_priority = n }
17
+ opts.on('--job-types=types', String, 'Type of jobs to run.') { |t| Delayed::Worker.job_types = t.split(',') }
18
+ opts.on('--keep-failed-jobs', 'Do not remove failed jobs from database.') { Delayed::Worker.destroy_failed_jobs = false }
19
+ opts.on('--log-file=file', String, 'Use specified file to log instead of Rails default logger.') do |f|
20
+ Delayed::Worker.logger = ActiveSupport::BufferedLogger.new(f)
21
+ end
22
+ opts.on("-q", "--quiet", "Be quieter.") { @quiet = true }
23
+ opts.on("-d", "--daemon", "Make worker run as a Daemon.") { @run_as_daemon = true }
24
+ opts.on('-n', '--number-of-workers=number', Integer, "Number of unique workers to spawn. Implies -d option if number > 1.") do |n|
25
+ @worker_count = ([n, 1].max rescue 1)
26
+ @run_as_daemon ||= (@worker_count > 1)
27
+ end
28
+ opts.on("-e", "--environment=name", String,
29
+ "Specifies the environment to run this worker under (test/development/production/etc).") do |e|
30
+ ENV["RAILS_ENV"] = e
31
+ RAILS_ENV.replace(e)
32
+ end
33
+
34
+ opts.on("-h", "--help", "Show this help message.") { puts opts; exit }
35
+ opts.parse!
36
+ end
37
+ end
38
+
39
+ def spawn_workers
40
+ # fork children if needed
41
+ worker_no = nil
42
+ if @worker_count > 1
43
+ it_is_parent = true
44
+ @worker_count.times do |no|
45
+ it_is_parent = fork
46
+ worker_no = no
47
+ break unless it_is_parent
48
+ end
49
+ exit 0 if it_is_parent
50
+ end
51
+
52
+ Process.daemon if @run_as_daemon
53
+
54
+ if Delayed::Worker.name.nil?
55
+ Delayed::Worker.name = ("host:#{Socket.gethostname} " rescue "") + "pid:#{Process.pid}"
56
+ else
57
+ Delayed::Worker.name += worker_no.to_s
58
+ end
59
+ end
60
+
61
+ def write_pid
62
+ pid = "#{RAILS_ROOT}/tmp/pids/dj_#{Delayed::Worker.name.parameterize('_')}.pid"
63
+ File.open(pid, 'w') { |f| f.write(Process.pid) }
64
+ at_exit { File.delete(pid) if File.exist?(pid) }
65
+ end
66
+
67
+ def setup_logger
68
+ if Delayed::Worker.logger.respond_to?(:auto_flushing=)
69
+ Delayed::Worker.logger.auto_flushing = true
70
+ end
71
+
72
+ if @quiet and Delayed::Worker.logger.respond_to?(:level=)
73
+ if Delayed::Worker.logger.kind_of?(Logger)
74
+ Delayed::Worker.logger.level = Logger::Severity::INFO
75
+ elsif Delayed::Worker.logger.kind_of?(ActiveSupport::BufferedLogger)
76
+ Delayed::Worker.logger.level = ActiveSupport::BufferedLogger::Severity::INFO
77
+ end
78
+ end
79
+
80
+ ActiveRecord::Base.logger = Delayed::Worker.logger
81
+ end
82
+
83
+ def run
84
+ warn "Running in #{RAILS_ENV} environment!" if RAILS_ENV.include?("dev") or RAILS_ENV.include?("test")
85
+
86
+ # Saves memory with Ruby Enterprise Edition
87
+ if GC.respond_to?(:copy_on_write_friendly=)
88
+ GC.copy_on_write_friendly = true
89
+ end
90
+
91
+ spawn_workers
92
+ Dir.chdir(RAILS_ROOT)
93
+ write_pid
94
+ setup_logger
95
+ ActiveRecord::Base.connection.reconnect!
96
+
97
+ Delayed::Worker.instance.start
98
+ rescue => e
99
+ Delayed::Worker.logger.fatal(e)
100
+ STDERR.puts(e.message)
101
+ exit 1
102
+ end
103
+ end
104
+ end
@@ -1,51 +1,45 @@
1
+ require 'timeout'
2
+
1
3
  module Delayed
2
4
 
3
5
  class DeserializationError < StandardError
4
6
  end
5
7
 
6
8
  # A job object that is persisted to the database.
7
- # Contains the work object as a YAML field.
9
+ # Contains the work object as a YAML field +handler+.
8
10
  class Job < ActiveRecord::Base
9
- MAX_ATTEMPTS = 25
10
- MAX_RUN_TIME = 4.hours
11
11
  set_table_name :delayed_jobs
12
+ before_save { |job| job.run_at ||= job.class.db_time_now }
12
13
 
13
- # By default failed jobs are destroyed after too many attempts.
14
- # If you want to keep them around (perhaps to inspect the reason
15
- # for the failure), set this to false.
16
- cattr_accessor :destroy_failed_jobs
17
- self.destroy_failed_jobs = true
18
-
19
- # Every worker has a unique name which by default is the pid of the process.
20
- # There are some advantages to overriding this with something which survives worker retarts:
21
- # Workers can safely resume working on tasks which are locked by themselves. The worker will assume that it crashed before.
22
- cattr_accessor :worker_name
23
- self.worker_name = "host:#{Socket.gethostname} pid:#{Process.pid}" rescue "pid:#{Process.pid}"
14
+ extend JobDeprecations
24
15
 
25
- NextTaskSQL = '(run_at <= ? AND (locked_at IS NULL OR locked_at < ?) OR (locked_by = ?)) AND failed_at IS NULL'
26
- NextTaskOrder = 'priority DESC, run_at ASC'
16
+ MAX_ATTEMPTS = 25
17
+ MAX_RUN_TIME = 4.hours
27
18
 
28
19
  ParseObjectFromYaml = /\!ruby\/\w+\:([^\s]+)/
29
20
 
30
- cattr_accessor :min_priority, :max_priority, :job_types
31
- self.min_priority = nil
32
- self.max_priority = nil
33
- self.job_types = nil
34
-
35
- # When a worker is exiting, make sure we don't have any locked jobs.
36
- def self.clear_locks!
37
- update_all("locked_by = null, locked_at = null", ["locked_by = ?", worker_name])
38
- end
39
-
21
+ # Returns +true+ if current job failed.
40
22
  def failed?
41
- failed_at
23
+ not failed_at.nil?
42
24
  end
43
25
  alias_method :failed, :failed?
44
26
 
27
+ # Returns +true+ if current job locked.
28
+ def locked?
29
+ not locked_at.nil?
30
+ end
31
+ alias_method :locked, :locked?
32
+
45
33
  def payload_object
46
34
  @payload_object ||= deserialize(self['handler'])
47
35
  end
48
36
 
37
+ def payload_object=(object)
38
+ self['job_type'] = object.class.to_s
39
+ self['handler'] = object.to_yaml
40
+ end
41
+
42
+ # Returns job name.
49
43
  def name
50
44
  @name ||= begin
51
45
  payload = payload_object
@@ -57,62 +51,55 @@ module Delayed
57
51
  end
58
52
  end
59
53
 
60
- def payload_object=(object)
61
- self['job_type'] = object.class.to_s
62
- self['handler'] = object.to_yaml
63
- end
64
-
65
- # Reschedule the job in the future (when a job fails).
66
- # Uses an exponential scale depending on the number of failed attempts.
54
+ # Reschedule the job to run at +time+ (when a job fails).
55
+ # If +time+ is nil it uses an exponential scale depending on the number of failed attempts.
67
56
  def reschedule(message, backtrace = [], time = nil)
68
- if self.attempts < MAX_ATTEMPTS
57
+ if (self.attempts += 1) < MAX_ATTEMPTS
69
58
  time ||= Job.db_time_now + (attempts ** 4) + 5
70
59
 
71
- self.attempts += 1
72
60
  self.run_at = time
73
61
  self.last_error = message + "\n" + backtrace.join("\n")
74
- self.unlock
62
+ self.locked_at = nil
63
+ self.locked_by = nil
75
64
  save!
76
65
  else
77
- logger.info "* [JOB] PERMANENTLY removing #{self.name} because of #{attempts} consequetive failures."
78
- destroy_failed_jobs ? destroy : update_attribute(:failed_at, Delayed::Job.db_time_now)
66
+ Worker.logger.warn("* [#{Worker.name}] PERMANENTLY removing #{self.name} because of #{attempts} consecutive failures.")
67
+ Worker.destroy_failed_jobs ? destroy : update_attribute(:failed_at, self.class.db_time_now)
79
68
  end
80
69
  end
81
70
 
82
-
83
71
  # Try to run one job. Returns true/false (work done/work failed) or nil if job can't be locked.
84
- def run_with_lock(max_run_time, worker_name)
85
- logger.info "* [JOB] aquiring lock on #{name}"
72
+ def run_with_lock(max_run_time = MAX_RUN_TIME, worker_name = Worker.name)
73
+ Worker.logger.info("* [#{Worker.name}] acquiring lock on #{name}")
86
74
  unless lock_exclusively!(max_run_time, worker_name)
87
75
  # We did not get the lock, some other worker process must have
88
- logger.warn "* [JOB] failed to aquire exclusive lock for #{name}"
76
+ Worker.logger.warn("* [#{Worker.name}] failed to acquire exclusive lock for #{name}")
89
77
  return nil # no work done
90
78
  end
91
79
 
92
80
  begin
93
81
  runtime = Benchmark.realtime do
94
- invoke_job # TODO: raise error if takes longer than max_run_time
82
+ Timeout.timeout(max_run_time.to_i) { invoke_job }
95
83
  destroy
96
84
  end
97
- # TODO: warn if runtime > max_run_time ?
98
- logger.info "* [JOB] #{name} completed after %.4f" % runtime
85
+ Worker.logger.info("* [#{Worker.name}] #{name} completed after %.4f" % runtime)
99
86
  return true # did work
100
87
  rescue Exception => e
101
- reschedule e.message, e.backtrace
88
+ reschedule(e.message, e.backtrace)
102
89
  log_exception(e)
103
90
  return false # work failed
104
91
  end
105
92
  end
106
93
 
107
- # Add a job to the queue
94
+ # Add a job to the queue. Arguments: priority, run_at.
108
95
  def self.enqueue(*args, &block)
109
96
  object = block_given? ? EvaledJob.new(&block) : args.shift
110
97
 
111
98
  unless object.respond_to?(:perform) || block_given?
112
99
  raise ArgumentError, 'Cannot enqueue items which do not respond to perform'
113
100
  end
114
-
115
- priority = args.first || 0
101
+
102
+ priority = args[0] || 0
116
103
  run_at = args[1]
117
104
 
118
105
  Job.create(:payload_object => object, :priority => priority.to_i, :run_at => run_at)
@@ -120,80 +107,88 @@ module Delayed
120
107
 
121
108
  # Find a few candidate jobs to run (in case some immediately get locked by others).
122
109
  def self.find_available(limit = 5, max_run_time = MAX_RUN_TIME)
123
-
124
110
  time_now = db_time_now
111
+ sql = ''
112
+ conditions = []
113
+
114
+ # 1) not scheduled in the future
115
+ sql << '(run_at <= ?)'
116
+ conditions << time_now
117
+
118
+ # 2) and job is not failed yet
119
+ sql << ' AND (failed_at IS NULL)'
120
+
121
+ # 3a) and already locked by same worker
122
+ sql << ' AND ('
123
+ sql << '(locked_by = ?)'
124
+ conditions << Worker.name
125
125
 
126
- sql = NextTaskSQL.dup
126
+ # 3b) or not locked yet
127
+ sql << ' OR (locked_at IS NULL)'
127
128
 
128
- conditions = [time_now, time_now - max_run_time, worker_name]
129
+ # 3c) or lock expired
130
+ sql << ' OR (locked_at < ?)'
131
+ sql << ')'
132
+ conditions << time_now - max_run_time
129
133
 
130
- if self.min_priority
134
+ if Worker.min_priority
131
135
  sql << ' AND (priority >= ?)'
132
- conditions << min_priority
136
+ conditions << Worker.min_priority
133
137
  end
134
138
 
135
- if self.max_priority
139
+ if Worker.max_priority
136
140
  sql << ' AND (priority <= ?)'
137
- conditions << max_priority
141
+ conditions << Worker.max_priority
138
142
  end
139
143
 
140
- if self.job_types
144
+ if Worker.job_types
141
145
  sql << ' AND (job_type IN (?))'
142
- conditions << job_types
146
+ conditions << Worker.job_types
143
147
  end
144
148
 
145
149
  conditions.unshift(sql)
146
-
147
- ActiveRecord::Base.silence do
148
- find(:all, :conditions => conditions, :order => NextTaskOrder, :limit => limit)
149
- end
150
+ find(:all, :conditions => conditions, :order => 'priority ASC, run_at ASC', :limit => limit)
150
151
  end
151
152
 
152
153
  # Run the next job we can get an exclusive lock on.
153
154
  # If no jobs are left we return nil
154
155
  def self.reserve_and_run_one_job(max_run_time = MAX_RUN_TIME)
155
156
 
156
- # We get up to 5 jobs from the db. In case we cannot get exclusive access to a job we try the next.
157
+ # We get up to 20 jobs from the db. In case we cannot get exclusive access to a job we try the next.
157
158
  # this leads to a more even distribution of jobs across the worker processes
158
- find_available(5, max_run_time).each do |job|
159
- t = job.run_with_lock(max_run_time, worker_name)
159
+ find_available(20, max_run_time).each do |job|
160
+ t = job.run_with_lock(max_run_time, Worker.name)
160
161
  return t unless t == nil # return if we did work (good or bad)
161
162
  end
162
163
 
163
- nil # we didn't do any work, all 5 were not lockable
164
+ nil # we didn't do any work, all 20 were not lockable
164
165
  end
165
166
 
166
167
  # Lock this job for this worker.
167
168
  # Returns true if we have the lock, false otherwise.
168
- def lock_exclusively!(max_run_time, worker = worker_name)
169
+ def lock_exclusively!(max_run_time = MAX_RUN_TIME, worker_name = Worker.name)
169
170
  now = self.class.db_time_now
170
- affected_rows = if locked_by != worker
171
+ affected_rows = if locked_by != worker_name
171
172
  # We don't own this job so we will update the locked_by name and the locked_at
172
- self.class.update_all(["locked_at = ?, locked_by = ?", now, worker], ["id = ? and (locked_at is null or locked_at < ?)", id, (now - max_run_time.to_i)])
173
+ self.class.update_all(["locked_at = ?, locked_by = ?", now, worker_name], ["id = ? and (locked_at is null or locked_at < ?) and (run_at <= ?)", id, (now - max_run_time.to_i), now])
173
174
  else
174
175
  # We already own this job, this may happen if the job queue crashes.
175
176
  # Simply resume and update the locked_at
176
- self.class.update_all(["locked_at = ?", now], ["id = ? and locked_by = ?", id, worker])
177
+ self.class.update_all(["locked_at = ?", now], ["id = ? and locked_by = ?", id, worker_name])
177
178
  end
178
179
  if affected_rows == 1
179
180
  self.locked_at = now
180
- self.locked_by = worker
181
+ self.locked_by = worker_name
181
182
  return true
182
183
  else
183
184
  return false
184
185
  end
185
186
  end
186
187
 
187
- # Unlock this job (note: not saved to DB)
188
- def unlock
189
- self.locked_at = nil
190
- self.locked_by = nil
191
- end
192
-
193
188
  # This is a good hook if you need to report job processing errors in additional or different ways
194
- def log_exception(error)
195
- logger.error "* [JOB] #{name} failed with #{error.class.name}: #{error.message} - #{attempts} failed attempts"
196
- logger.error(error)
189
+ def log_exception(e)
190
+ Worker.logger.error("! [#{Worker.name}] #{name} failed with #{e.class.name}: #{e.message} - #{attempts} failed attempts")
191
+ Worker.logger.error(e)
197
192
  end
198
193
 
199
194
  # Do num jobs and return stats on success/failure.
@@ -221,7 +216,20 @@ module Delayed
221
216
  payload_object.perform
222
217
  end
223
218
 
224
- private
219
+ # Get the current time (GMT or local depending on DB)
220
+ # Note: This does not ping the DB to get the time, so all your clients
221
+ # must have synchronized clocks.
222
+ def self.db_time_now
223
+ if Time.zone
224
+ Time.zone.now
225
+ elsif ActiveRecord::Base.default_timezone == :utc
226
+ Time.now.utc
227
+ else
228
+ Time.now
229
+ end
230
+ end
231
+
232
+ private
225
233
 
226
234
  def deserialize(source)
227
235
  handler = YAML.load(source) rescue nil
@@ -237,7 +245,7 @@ module Delayed
237
245
  return handler if handler.respond_to?(:perform)
238
246
 
239
247
  raise DeserializationError,
240
- 'Job failed to load: Unknown handler. Try to manually require the appropiate file.'
248
+ 'Job failed to load: Unknown handler. Try to manually require the appropriate file.'
241
249
  rescue TypeError, LoadError, NameError => e
242
250
  raise DeserializationError,
243
251
  "Job failed to load: #{e.message}. Try to manually require the required file."
@@ -249,25 +257,6 @@ module Delayed
249
257
  klass.constantize
250
258
  end
251
259
 
252
- # Get the current time (GMT or local depending on DB)
253
- # Note: This does not ping the DB to get the time, so all your clients
254
- # must have syncronized clocks.
255
- def self.db_time_now
256
- if Time.zone
257
- Time.zone.now
258
- elsif ActiveRecord::Base.default_timezone == :utc
259
- Time.now.utc
260
- else
261
- Time.now
262
- end
263
- end
264
-
265
- protected
266
-
267
- def before_save
268
- self.run_at ||= self.class.db_time_now
269
- end
270
-
271
260
  end
272
261
 
273
262
  class EvaledJob