delayed_job 1.8.5 → 1.9.0pre

data/README.textile CHANGED
@@ -36,13 +36,24 @@ To install as a plugin:
  script/plugin install git://github.com/collectiveidea/delayed_job.git
  </pre>
 
- After delayed_job is installed, run:
+ After delayed_job is installed, you will need to set up the backend.
+
+ h2. Backends
+
+ delayed_job supports multiple backends for storing the job queue. The default is Active Record, which requires a jobs table.
 
  <pre>
  script/generate delayed_job
  rake db:migrate
  </pre>
 
+ You can change the backend in an initializer:
+
+ <pre>
+ # config/initializers/delayed_job.rb
+ Delayed::Worker.backend = :mongo_mapper
+ </pre>
+
  h2. Upgrading to 1.8
 
  If you are upgrading from a previous release, you will need to generate the new @script/delayed_job@:
@@ -51,6 +62,8 @@ If you are upgrading from a previous release, you will need to generate the new
  script/generate delayed_job --skip-migration
  </pre>
 
+ Known Issues: script/delayed_job does not work properly with anything besides the Active Record backend. That will be resolved before the next gem release.
+
  h2. Queuing Jobs
 
  Call @#send_later(method, params)@ on any object and it will be processed in the background.
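
For illustration, a minimal sketch of such a call (the @user@ object and its @activate!@ method are hypothetical, not part of delayed_job):

<pre>
# instead of running user.activate!(device) inline,
# queue it to be performed later by a worker process:
user.send_later(:activate!, device)
</pre>
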
@@ -112,39 +125,39 @@ h2. Gory Details
 
  The library revolves around a delayed_jobs table which looks as follows:
 
- create_table :delayed_jobs, :force => true do |table|
-   table.integer  :priority, :default => 0 # Allows some jobs to jump to the front of the queue
-   table.integer  :attempts, :default => 0 # Provides for retries, but still fail eventually.
-   table.text     :handler                 # YAML-encoded string of the object that will do work
-   table.text     :last_error              # reason for last failure (See Note below)
-   table.datetime :run_at                  # When to run. Could be Time.zone.now for immediately, or sometime in the future.
-   table.datetime :locked_at               # Set when a client is working on this object
-   table.datetime :failed_at               # Set when all retries have failed (actually, by default, the record is deleted instead)
-   table.string   :locked_by               # Who is working on this object (if locked)
-   table.timestamps
- end
+ <pre>
+ create_table :delayed_jobs, :force => true do |table|
+   table.integer  :priority, :default => 0 # Allows some jobs to jump to the front of the queue
+   table.integer  :attempts, :default => 0 # Provides for retries, but still fail eventually.
+   table.text     :handler                 # YAML-encoded string of the object that will do work
+   table.text     :last_error              # reason for last failure (See Note below)
+   table.datetime :run_at                  # When to run. Could be Time.zone.now for immediately, or sometime in the future.
+   table.datetime :locked_at               # Set when a client is working on this object
+   table.datetime :failed_at               # Set when all retries have failed (actually, by default, the record is deleted instead)
+   table.string   :locked_by               # Who is working on this object (if locked)
+   table.timestamps
+ end
+ </pre>
 
  On failure, the job is scheduled again in 5 seconds + N ** 4, where N is the number of retries.
 
- The default Job::max_attempts is 25. After this, the job is either deleted (default) or left in the database with "failed_at" set.
+ The default Worker.max_attempts is 25. After this, the job is either deleted (default) or left in the database with "failed_at" set.
  With the default of 25 attempts, the last retry will be 20 days later, with the last interval being almost 100 hours.
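
As a rough check of those numbers (a sketch assuming, per the formula above, that the wait after the n-th failed attempt is 5 + n ** 4 seconds):

<pre>
delays = (1..24).map { |n| 5 + n ** 4 }         # waits before attempts 2 through 25
total  = delays.inject(0) { |sum, d| sum + d }  # => 1_763_140 seconds
total / 86_400.0                                # => ~20.4 days until the final retry
delays.last / 3600.0                            # => ~92 hours for the last interval
</pre>
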
 
- The default Job::max_run_time is 4.hours. If your job takes longer than that, another computer could pick it up. It's up to you to
+ The default Worker.max_run_time is 4.hours. If your job takes longer than that, another computer could pick it up. It's up to you to
  make sure your job doesn't exceed this time. You should set this to the longest time you think the job could take.
 
  By default, it will delete failed jobs (and it always deletes successful jobs). If you want to keep failed jobs, set
- Delayed::Job.destroy_failed_jobs = false. The failed jobs will be marked with non-null failed_at.
+ Delayed::Worker.destroy_failed_jobs = false. The failed jobs will be marked with non-null failed_at.
 
  Here is an example of changing job parameters in Rails:
 
  <pre>
  # config/initializers/delayed_job_config.rb
- Delayed::Job.destroy_failed_jobs = false
- silence_warnings do
-   Delayed::Worker::sleep_delay = 60
-   Delayed::Job::max_attempts = 3
-   Delayed::Job::max_run_time = 5.minutes
- end
+ Delayed::Worker.destroy_failed_jobs = false
+ Delayed::Worker.sleep_delay = 60
+ Delayed::Worker.max_attempts = 3
+ Delayed::Worker.max_run_time = 5.minutes
  </pre>
 
  h3. Cleaning up
data/Rakefile CHANGED
@@ -11,7 +11,7 @@ Jeweler::Tasks.new do |s|
    s.summary = "Database-backed asynchronous priority queue system -- Extracted from Shopify"
    s.email = "tobi@leetsoft.com"
    s.homepage = "http://github.com/collectiveidea/delayed_job"
-   s.description = "Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background. It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks."
+   s.description = "Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background. It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks.\n\nThis gem is collectiveidea's fork (http://github.com/collectiveidea/delayed_job)."
    s.authors = ["Brandon Keepers", "Tobias Lütke"]
 
    s.has_rdoc = true
@@ -19,6 +19,10 @@ Jeweler::Tasks.new do |s|
    s.extra_rdoc_files = ["README.textile"]
 
    s.test_files = Dir['spec/**/*']
+
+   s.add_dependency "daemons"
+   s.add_development_dependency "rspec"
+   s.add_development_dependency "sqlite3-ruby"
  end
 
  require 'spec/rake/spectask'
@@ -28,7 +32,8 @@ task :default => :spec
  desc 'Run the specs'
  Spec::Rake::SpecTask.new(:spec) do |t|
    t.libs << 'lib'
-   t.pattern = 'spec/**/*_spec.rb'
+   t.pattern = 'spec/*_spec.rb'
    t.verbose = true
  end
+ task :spec => :check_dependencies
 
data/VERSION CHANGED
@@ -1 +1 @@
- 1.8.5
+ 1.9.0pre
data/delayed_job.gemspec CHANGED
@@ -5,12 +5,14 @@
 
  Gem::Specification.new do |s|
    s.name = %q{delayed_job}
-   s.version = "1.8.5"
+   s.version = "1.9.0pre"
 
    s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
    s.authors = ["Brandon Keepers", "Tobias L\303\274tke"]
-   s.date = %q{2010-03-15}
-   s.description = %q{Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background. It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks.}
+   s.date = %q{2010-03-26}
+   s.description = %q{Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background. It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks.
+
+ This gem is collectiveidea's fork (http://github.com/collectiveidea/delayed_job).}
    s.email = %q{tobi@leetsoft.com}
    s.extra_rdoc_files = [
      "README.textile"
@@ -26,20 +28,29 @@ Gem::Specification.new do |s|
      "generators/delayed_job/delayed_job_generator.rb",
      "generators/delayed_job/templates/migration.rb",
      "generators/delayed_job/templates/script",
-     "init.rb",
+     "lib/delayed/backend/active_record.rb",
+     "lib/delayed/backend/base.rb",
+     "lib/delayed/backend/mongo_mapper.rb",
      "lib/delayed/command.rb",
-     "lib/delayed/job.rb",
      "lib/delayed/message_sending.rb",
      "lib/delayed/performable_method.rb",
      "lib/delayed/recipes.rb",
      "lib/delayed/tasks.rb",
      "lib/delayed/worker.rb",
      "lib/delayed_job.rb",
+     "rails/init.rb",
      "recipes/delayed_job.rb",
-     "spec/database.rb",
+     "spec/backend/active_record_job_spec.rb",
+     "spec/backend/mongo_mapper_job_spec.rb",
+     "spec/backend/shared_backend_spec.rb",
      "spec/delayed_method_spec.rb",
-     "spec/job_spec.rb",
+     "spec/performable_method_spec.rb",
+     "spec/sample_jobs.rb",
+     "spec/setup/active_record.rb",
+     "spec/setup/mongo_mapper.rb",
+     "spec/spec_helper.rb",
      "spec/story_spec.rb",
+     "spec/worker_spec.rb",
      "tasks/jobs.rake"
    ]
    s.homepage = %q{http://github.com/collectiveidea/delayed_job}
@@ -48,10 +59,19 @@ Gem::Specification.new do |s|
    s.rubygems_version = %q{1.3.5}
    s.summary = %q{Database-backed asynchronous priority queue system -- Extracted from Shopify}
    s.test_files = [
-     "spec/database.rb",
+     "spec/backend",
+     "spec/backend/active_record_job_spec.rb",
+     "spec/backend/mongo_mapper_job_spec.rb",
+     "spec/backend/shared_backend_spec.rb",
      "spec/delayed_method_spec.rb",
-     "spec/job_spec.rb",
-     "spec/story_spec.rb"
+     "spec/performable_method_spec.rb",
+     "spec/sample_jobs.rb",
+     "spec/setup",
+     "spec/setup/active_record.rb",
+     "spec/setup/mongo_mapper.rb",
+     "spec/spec_helper.rb",
+     "spec/story_spec.rb",
+     "spec/worker_spec.rb"
    ]
 
    if s.respond_to? :specification_version then
@@ -59,9 +79,18 @@ Gem::Specification.new do |s|
      s.specification_version = 3
 
      if Gem::Version.new(Gem::RubyGemsVersion) >= Gem::Version.new('1.2.0') then
+       s.add_runtime_dependency(%q<daemons>, [">= 0"])
+       s.add_development_dependency(%q<rspec>, [">= 0"])
+       s.add_development_dependency(%q<sqlite3-ruby>, [">= 0"])
      else
+       s.add_dependency(%q<daemons>, [">= 0"])
+       s.add_dependency(%q<rspec>, [">= 0"])
+       s.add_dependency(%q<sqlite3-ruby>, [">= 0"])
      end
    else
+     s.add_dependency(%q<daemons>, [">= 0"])
+     s.add_dependency(%q<rspec>, [">= 0"])
+     s.add_dependency(%q<sqlite3-ruby>, [">= 0"])
    end
  end
 
data/generators/delayed_job/delayed_job_generator.rb CHANGED
@@ -4,7 +4,7 @@ class DelayedJobGenerator < Rails::Generator::Base
    def manifest
      record do |m|
        m.template 'script', 'script/delayed_job', :chmod => 0755
-       unless options[:skip_migration]
+       if !options[:skip_migration] && defined?(ActiveRecord)
          m.migration_template "migration.rb", 'db/migrate',
            :migration_file_name => "create_delayed_jobs"
        end
data/generators/delayed_job/templates/migration.rb CHANGED
@@ -11,7 +11,8 @@ class CreateDelayedJobs < ActiveRecord::Migration
        table.string :locked_by # Who is working on this object (if locked)
        table.timestamps
      end
-
+
+     add_index :delayed_jobs, [:priority, :run_at], :name => 'delayed_jobs_priority'
    end
 
    def self.down
data/lib/delayed/backend/active_record.rb ADDED
@@ -0,0 +1,90 @@
+ require 'active_record'
+
+ class ActiveRecord::Base
+   def self.load_for_delayed_job(id)
+     if id
+       find(id)
+     else
+       super
+     end
+   end
+
+   def dump_for_delayed_job
+     "#{self.class};#{id}"
+   end
+ end
+
+ module Delayed
+   module Backend
+     module ActiveRecord
+       # A job object that is persisted to the database.
+       # Contains the work object as a YAML field.
+       class Job < ::ActiveRecord::Base
+         include Delayed::Backend::Base
+         set_table_name :delayed_jobs
+
+         before_save :set_default_run_at
+
+         named_scope :ready_to_run, lambda {|worker_name, max_run_time|
+           {:conditions => ['(run_at <= ? AND (locked_at IS NULL OR locked_at < ?) OR locked_by = ?) AND failed_at IS NULL', db_time_now, db_time_now - max_run_time, worker_name]}
+         }
+         named_scope :by_priority, :order => 'priority ASC, run_at ASC'
+
+         def self.after_fork
+           ActiveRecord::Base.connection.reconnect!
+         end
+
+         # When a worker is exiting, make sure we don't have any locked jobs.
+         def self.clear_locks!(worker_name)
+           update_all("locked_by = null, locked_at = null", ["locked_by = ?", worker_name])
+         end
+
+         # Find a few candidate jobs to run (in case some immediately get locked by others).
+         def self.find_available(worker_name, limit = 5, max_run_time = Worker.max_run_time)
+           scope = self.ready_to_run(worker_name, max_run_time)
+           scope = scope.scoped(:conditions => ['priority >= ?', Worker.min_priority]) if Worker.min_priority
+           scope = scope.scoped(:conditions => ['priority <= ?', Worker.max_priority]) if Worker.max_priority
+
+           ::ActiveRecord::Base.silence do
+             scope.by_priority.all(:limit => limit)
+           end
+         end
+
+         # Lock this job for this worker.
+         # Returns true if we have the lock, false otherwise.
+         def lock_exclusively!(max_run_time, worker)
+           now = self.class.db_time_now
+           affected_rows = if locked_by != worker
+             # We don't own this job so we will update the locked_by name and the locked_at
+             self.class.update_all(["locked_at = ?, locked_by = ?", now, worker], ["id = ? and (locked_at is null or locked_at < ?) and (run_at <= ?)", id, (now - max_run_time.to_i), now])
+           else
+             # We already own this job, this may happen if the job queue crashes.
+             # Simply resume and update the locked_at
+             self.class.update_all(["locked_at = ?", now], ["id = ? and locked_by = ?", id, worker])
+           end
+           if affected_rows == 1
+             self.locked_at = now
+             self.locked_by = worker
+             return true
+           else
+             return false
+           end
+         end
+
+         # Get the current time (GMT or local depending on DB)
+         # Note: This does not ping the DB to get the time, so all your clients
+         # must have synchronized clocks.
+         def self.db_time_now
+           if Time.zone
+             Time.zone.now
+           elsif ::ActiveRecord::Base.default_timezone == :utc
+             Time.now.utc
+           else
+             Time.now
+           end
+         end
+
+       end
+     end
+   end
+ end
data/lib/delayed/backend/base.rb ADDED
@@ -0,0 +1,106 @@
+ module Delayed
+   module Backend
+     class DeserializationError < StandardError
+     end
+
+     module Base
+       def self.included(base)
+         base.extend ClassMethods
+       end
+
+       module ClassMethods
+         # Add a job to the queue
+         def enqueue(*args)
+           object = args.shift
+           unless object.respond_to?(:perform)
+             raise ArgumentError, 'Cannot enqueue items which do not respond to perform'
+           end
+
+           priority = args.first || 0
+           run_at = args[1]
+           self.create(:payload_object => object, :priority => priority.to_i, :run_at => run_at)
+         end
+
+         # Hook method that is called before a new worker is forked
+         def before_fork
+         end
+
+         # Hook method that is called after a new worker is forked
+         def after_fork
+         end
+       end
+
+       ParseObjectFromYaml = /\!ruby\/\w+\:([^\s]+)/
+
+       def failed?
+         failed_at
+       end
+       alias_method :failed, :failed?
+
+       def payload_object
+         @payload_object ||= deserialize(self['handler'])
+       end
+
+       def name
+         @name ||= begin
+           payload = payload_object
+           if payload.respond_to?(:display_name)
+             payload.display_name
+           else
+             payload.class.name
+           end
+         end
+       end
+
+       def payload_object=(object)
+         self['handler'] = object.to_yaml
+       end
+
+       # Moved into its own method so that new_relic can trace it.
+       def invoke_job
+         payload_object.perform
+       end
+
+       # Unlock this job (note: not saved to DB)
+       def unlock
+         self.locked_at = nil
+         self.locked_by = nil
+       end
+
+       private
+
+       def deserialize(source)
+         handler = YAML.load(source) rescue nil
+
+         unless handler.respond_to?(:perform)
+           if handler.nil? && source =~ ParseObjectFromYaml
+             handler_class = $1
+           end
+           attempt_to_load(handler_class || handler.class)
+           handler = YAML.load(source)
+         end
+
+         return handler if handler.respond_to?(:perform)
+
+         raise DeserializationError,
+           'Job failed to load: Unknown handler. Try to manually require the appropriate file.'
+       rescue TypeError, LoadError, NameError => e
+         raise DeserializationError,
+           "Job failed to load: #{e.message}. Try to manually require the required file."
+       end
+
+       # Constantize the object so that ActiveSupport can attempt
+       # its auto loading magic. Will raise LoadError if not successful.
+       def attempt_to_load(klass)
+         klass.constantize
+       end
+
+       protected
+
+       def set_default_run_at
+         self.run_at ||= self.class.db_time_now
+       end
+
+     end
+   end
+ end
data/lib/delayed/backend/mongo_mapper.rb ADDED
@@ -0,0 +1,110 @@
+ require 'mongo_mapper'
+
+ module ::MongoMapper
+   module Document
+     module ClassMethods
+       def load_for_delayed_job(id)
+         find!(id)
+       end
+     end
+
+     module InstanceMethods
+       def dump_for_delayed_job
+         "#{self.class};#{id}"
+       end
+     end
+   end
+ end
+
+ module Delayed
+   module Backend
+     module MongoMapper
+       class Job
+         include ::MongoMapper::Document
+         include Delayed::Backend::Base
+         set_collection_name 'delayed_jobs'
+
+         key :priority, Integer, :default => 0
+         key :attempts, Integer, :default => 0
+         key :handler, String
+         key :run_at, Time
+         key :locked_at, Time
+         key :locked_by, String, :index => true
+         key :failed_at, Time
+         key :last_error, String
+         timestamps!
+
+         before_save :set_default_run_at
+
+         ensure_index [[:priority, 1], [:run_at, 1]]
+
+         def self.before_fork
+           ::MongoMapper.connection.close
+         end
+
+         def self.after_fork
+           ::MongoMapper.connection.connect_to_master
+         end
+
+         def self.db_time_now
+           ::MongoMapper.time_class.now.utc
+         end
+
+         def self.find_available(worker_name, limit = 5, max_run_time = Worker.max_run_time)
+           right_now = db_time_now
+
+           conditions = {
+             :run_at => {"$lte" => right_now},
+             :limit => -limit, # In mongo, positive limits are 'soft' and negative are 'hard'
+             :failed_at => nil,
+             :sort => [['priority', 1], ['run_at', 1]]
+           }
+
+           where = "this.locked_at == null || this.locked_at < #{make_date(right_now - max_run_time)}"
+
+           (conditions[:priority] ||= {})['$gte'] = Worker.min_priority if Worker.min_priority
+           (conditions[:priority] ||= {})['$lte'] = Worker.max_priority if Worker.max_priority
+
+           results = all(conditions.merge(:locked_by => worker_name))
+           results += all(conditions.merge('$where' => where)) if results.size < limit
+           results
+         end
+
+         # When a worker is exiting, make sure we don't have any locked jobs.
+         def self.clear_locks!(worker_name)
+           collection.update({:locked_by => worker_name}, {"$set" => {:locked_at => nil, :locked_by => nil}}, :multi => true)
+         end
+
+         # Lock this job for this worker.
+         # Returns true if we have the lock, false otherwise.
+         def lock_exclusively!(max_run_time, worker = worker_name)
+           right_now = self.class.db_time_now
+           overtime = right_now - max_run_time.to_i
+
+           query = "this.locked_at == null || this.locked_at < #{make_date(overtime)} || this.locked_by == #{worker.to_json}"
+           conditions = {:_id => id, :run_at => {"$lte" => right_now}, "$where" => query}
+
+           collection.update(conditions, {"$set" => {:locked_at => right_now, :locked_by => worker}})
+           affected_rows = collection.find({:_id => id, :locked_by => worker}).count
+           if affected_rows == 1
+             self.locked_at = right_now
+             self.locked_by = worker
+             return true
+           else
+             return false
+           end
+         end
+
+         private
+
+         def self.make_date(date_or_seconds)
+           "new Date(#{date_or_seconds.to_f * 1000})"
+         end
+
+         def make_date(date)
+           self.class.make_date(date)
+         end
+       end
+     end
+   end
+ end