chore-core 1.5.10 → 1.7.2

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: b1aae723bbfe580eb5d31876ca91f32dd28baae7
- data.tar.gz: 014aae64a27e2a6eb6210b76561e6cc1765056b6
+ metadata.gz: 74242dc2e704cdf2cf5baf9dd9b25822bbe20066
+ data.tar.gz: 0082dc5ab748faff7ba73a8bb6f01773aaf6b35c
  SHA512:
- metadata.gz: 110b80593989c4ddd8bf1502705d1ce88fd26ba0e458dcffdd9072ddcbf2d3dd9b7b4760b3a7ab37bbd150f66b4acea02e280e7396c2e203330820e5dceb6939
- data.tar.gz: ef5522be2d88a900151437d3a127b5a692a7e71a3a77169a3fe12e25089157b363f7a0f890d1f48d4fd84aca305eaacfbb5f1553026a02a92ce99ef159ccf4c9
+ metadata.gz: de6ce3260f50d5a324843f6bbc9511974da8d69731e893aad11a5404ee4107f167a4cb02b9579e372f822f414551ce4ee754139a316845282a2465170f5c03a1
+ data.tar.gz: fdd9bb38cb35ff082e055bf76a897571bdde176cb1fa20cd9880ed19f57c55d908f0f3c6c830fceae62cec88f761c0228dcb1bcdb051f1e561a48ea5d3f2bc49
data/README.md CHANGED
@@ -11,7 +11,7 @@ The full docs for Chore can always be found at http://tapjoy.github.io/chore.
 
  Chore can be integrated with any Ruby-based project by following these instructions:
 
- gem 'chore-core', '~> 1.5'
+ gem 'chore-core', '~> 1.6.0'
 
  If you also plan on using SQS, you must also bring in dalli to use for memcached:
 
@@ -21,7 +21,7 @@ Create a `Chorefile` file in the root of your project directory. While you can c
 
  --require=./<FILE_TO_LOAD>
 
- Make sure that `--require` points to the main entry point for your app. If integrating with a Rails app, just point it to the directory of your application and it will handle loading the correct files on its own.
+ Make sure that `--require` points to the main entry point for your app. If integrating with a Rails app, just point it to the directory of your application and it will handle loading the correct files on its own.
 
  Other options include:
 
@@ -36,7 +36,7 @@ Other options include:
  --queue_prefix prefixy # A prefix to prepend to queue names, mainly for development and qa testing purposes
  --max-attempts 100 # The maximum number of times a job can be attempted
  --dupe-on-cache-failure # Determines the deduping behavior when a cache connection error occurs. When set to `false`, the message is assumed not to be a duplicate. Defaults to `false`.
- --queue-polling-size 10 # If your particular queueing system supports responding with messages in batches of a certain size, you can control that with this flag. SQS has a built in upper-limit of 10, but other systems will vary.
+ --queue-polling-size 10 # If your particular queueing system supports responding with messages in batches of a certain size, you can control that with this flag. SQS has a built in upper-limit of 10, but other systems will vary.
 
  If you're using SQS, you'll want to add AWS keys so that Chore can authenticate with AWS.
 
@@ -62,7 +62,7 @@ Because you are likely to use the same app as the basis for both producing and c
 
  However, like many aspects of Chore, it is ultimately up to the developer to decide which use case fits their needs best. Chore is happy to let you configure it in almost any way you want.
 
- An example of how to configure chore via and initializer:
+ An example of how to configure chore via an initializer:
 
  ```ruby
  Chore.configure do |c|
@@ -139,6 +139,28 @@ It is worth noting that any option that can be set via config file or command-li
 
  If a global publisher is set, it can be overridden on a per-job basis by specifying the publisher in `queue_options`.
 
+ ## Retry Backoff Strategy
+
+ Chore has basic support for delaying retries of a failed job using a step function. Currently the only queue that
+ supports this functionality is SQS, all others will simply ignore the delay setting.
+
+ ### Setup
+
+ The `:backoff` option for a queue expects a lambda that takes a single `UnitOfWork` argument. The return should be a
+ number of seconds to delay the next attempt.
+
+ ```ruby
+ queue_options :name => 'nameOfQueue',
+ :backoff => lambda { |work| work.current_attempt ** 2 } # Exponential backoff
+ ```
+
+ ### Using the Backoff
+
+ If there is a `:backoff` option supplied, any failures will delay the next attempt by the result of that lambda.
+
+ ### Notes on SQS and Delays
+
+ Read more details about SQS and Delays [here](docs/Delayed Jobs.md)
 
  ## Hooks
 
@@ -182,7 +204,11 @@ class TestJob
  queue_options :name => 'test_queue'
 
  def perform(args={})
- Chore.logger.debug "My first sync job"
+ # Do something cool
+ end
+
+ def before_perform_log(message)
+ Chore.logger.debug "About to do something cool with: #{message.inspect}"
  end
  end
  ```
@@ -257,5 +283,5 @@ Chore has several plugin gems available, which extend it's core functionality
 
  ## Copyright
 
- Copyright (c) 2013 - 2014 Tapjoy. See LICENSE.txt for
+ Copyright (c) 2013 - 2015 Tapjoy. See LICENSE.txt for
  further details.
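
For context, a filled-in version of the `Chore.configure` initializer shown above might look like the following sketch. The option keys here are assumptions that simply mirror the CLI flags listed earlier; they are not taken from this diff.

```ruby
# Illustrative only: keys assumed to mirror the CLI flags above.
Chore.configure do |c|
  c.max_attempts          = 100       # --max-attempts
  c.queue_prefix          = 'prefixy' # --queue_prefix
  c.dupe_on_cache_failure = false     # --dupe-on-cache-failure
  c.queue_polling_size    = 10        # --queue-polling-size
end
```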
data/bin/chore CHANGED
@@ -22,6 +22,9 @@ Chore::Signal.trap "USR1" do
  end
 
  begin
+ # Pre-load any Bundler dependencies now, so that the CLI parser has them loaded
+ # prior to interpreting the command line args for things like consumers/producers
+ Bundler.require if defined?(Bundler)
  cli = Chore::CLI.instance
  cli.run!(ARGV)
  rescue => e
@@ -30,5 +33,3 @@ rescue => e
  STDERR.puts e.backtrace.join("\n")
  exit 1
  end
-
-
data/chore-core.gemspec CHANGED
@@ -37,9 +37,9 @@ Gem::Specification.new do |s|
  s.summary = "Job processing... for the future!"
 
  s.add_runtime_dependency(%q<json>, [">= 0"])
- s.add_runtime_dependency(%q<aws-sdk>, ["~> 1.56", ">= 1.56.0"])
+ s.add_runtime_dependency(%q<aws-sdk-v1>, ["~> 1.56", ">= 1.56.0"])
  s.add_runtime_dependency(%q<thread>, ["~> 0.1.3"])
- s.add_development_dependency(%q<rspec>, ["~> 2.12.0"])
+ s.add_development_dependency(%q<rspec>, ["~> 3.3.0"])
  s.add_development_dependency(%q<rdoc>, ["~> 3.12"])
  s.add_development_dependency(%q<bundler>, [">= 0"])
  end
data/lib/chore/job.rb CHANGED
@@ -16,7 +16,7 @@ module Chore
  # Throw a RejectMessageException from your job to signal that the message should be rejected.
  # The semantics of +reject+ are queue implementation dependent.
  end
-
+
  def self.job_classes #:nodoc:
  @classes || []
  end
@@ -42,15 +42,24 @@ module Chore
 
  module ClassMethods
  DEFAULT_OPTIONS = { }
-
+
  # Pass a hash of options to queue_options the included class's use of Chore::Job
- # +opts+ has just the one required option.
+ # +opts+ has just the one required option.
  # * +:name+: which should map to the name of the queue this job should be published to.
  def queue_options(opts = {})
  @chore_options = (@chore_options || DEFAULT_OPTIONS).merge(opts_from_cli).merge(opts)
+
  required_options.each do |k|
  raise ArgumentError.new("#{self.to_s} :#{k} is a required option for Chore::Job") unless @chore_options[k]
  end
+
+ if @chore_options.key?(:backoff)
+ if !@chore_options[:backoff].is_a?(Proc)
+ raise ArgumentError, "#{self.to_s}: backoff must be a lambda or Proc"
+ elsif @chore_options[:backoff].arity != 1
+ raise ArgumentError, "#{self.to_s}: backoff must accept a single argument"
+ end
+ end
  end
 
  # This is a method so it can be overriden to create additional required
@@ -68,14 +77,14 @@ module Chore
  @from_cli ||= (Chore.config.marshal_dump.select {|k,v| required_options.include? k } || {})
  end
 
- # Execute the current job. We create an instance of the job to do the perform
+ # Execute the current job. We create an instance of the job to do the perform
  # as this allows the jobs themselves to do initialization that might require access
  # to the parameters of the job.
  def perform(*args)
  job = self.new(args)
  job.perform(*args)
  end
-
+
  # Publish a job using an instance of job. Similar to perform we do this so that a job
  # can perform initialization logic before the perform_async is begun. This, in addition, to
  # hooks allows for rather complex jobs to be written simply.
@@ -93,6 +102,12 @@ module Chore
  def prefixed_queue_name
  "#{Chore.config.queue_prefix}#{self.options[:name]}"
  end
+
+ # We require a proc for the backoff strategy, but as we check for it in `.queue_options` we can simply check for
+ # the key at this point.
+ def has_backoff?
+ self.options.key?(:backoff)
+ end
  end #ClassMethods
 
  # This is handy to override in an included job to be able to do job setup that requires
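
As a quick illustration of the validation added to `queue_options` above, a hypothetical job opting in to a backoff might look like this sketch. The class name and queue name are made up; the behaviour shown follows the checks in the diff.

```ruby
# Hypothetical example of the :backoff option and its validation.
class SquareBackoffJob
  include Chore::Job
  queue_options :name    => 'backoff_queue',
                :backoff => lambda { |work| work.current_attempt ** 2 }

  def perform(args={})
    # ...
  end
end

SquareBackoffJob.has_backoff?  # => true
# :backoff => 'nope'             would raise ArgumentError (not a Proc)
# :backoff => lambda { |a, b| }  would raise ArgumentError (arity != 1)
```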
@@ -3,10 +3,23 @@ module Chore
  module SQS
  # Helper method to create queues based on the currently known list as provided by your configured Chore::Jobs
  # This is meant to be invoked from a rake task, and not directly.
- # These queues will be created with the default settings, which may not be ideal.
+ # These queues will be created with the default settings, which may not be ideal.
  # This is meant only as a convenience helper for testing, and not as a way to create production quality queues in SQS
- def self.create_queues!
+ def self.create_queues!(halt_on_existing=false)
  raise 'You must have atleast one Chore Job configured and loaded before attempting to create queues' unless Chore.prefixed_queue_names.length > 0
+
+ if halt_on_existing
+ existing = self.existing_queues
+ if existing.size > 0
+ raise <<-ERROR.gsub(/^\s+/, '')
+ We found queues that already exist! Verify your queue names or prefix are setup correctly.
+
+ The following queue names were found:
+ #{existing.join("\n")}
+ ERROR
+ end
+ end
+
  #This will raise an exception if AWS has not been configured by the project making use of Chore
  sqs_queues = AWS::SQS.new.queues
  Chore.prefixed_queue_names.each do |queue_name|
@@ -38,6 +51,22 @@ module Chore
  end
  Chore.prefixed_queue_names
  end
+
+ # Collect a list of queues that already exist
+ def self.existing_queues
+ #This will raise an exception if AWS has not been configured by the project making use of Chore
+ sqs_queues = AWS::SQS.new.queues
+
+ Chore.prefixed_queue_names.select do |queue_name|
+ # If the NonExistentQueue exception is raised we do not care about that queue name.
+ begin
+ sqs_queues.named(queue_name)
+ true
+ rescue AWS::SQS::Errors::NonExistentQueue
+ false
+ end
+ end
+ end
  end
  end
  end
@@ -55,6 +55,14 @@ module Chore
  queue.batch_delete([id])
  end
 
+ def delay(item, backoff_calc)
+ delay = backoff_calc.call(item)
+ Chore.logger.debug "Delaying #{item.id} by #{delay} seconds"
+ queue.batch_change_visibility(delay, [item.id])
+
+ return delay
+ end
+
  private
 
  # Requests messages from SQS, and invokes the provided +&block+ over each one. Afterwards, the :on_fetch
@@ -63,15 +71,17 @@
  msg = queue.receive_messages(:limit => sqs_polling_amount, :attributes => [:receive_count])
  messages = *msg
  messages.each do |message|
- block.call(message.handle, queue_name, queue_timeout, message.body, message.receive_count - 1) unless duplicate_message?({:id=>message.id, :queue=>message.queue.url, :visibility_timeout=>message.queue.visibility_timeout})
+ unless duplicate_message?(message)
+ block.call(message.handle, queue_name, queue_timeout, message.body, message.receive_count - 1)
+ end
  Chore.run_hooks_for(:on_fetch, message.handle, message.body)
  end
  messages
  end
 
  # Checks if the given message has already been received within the timeout window for this queue
- def duplicate_message?(msg_data)
- dupe_detector.found_duplicate?(msg_data)
+ def duplicate_message?(message)
+ dupe_detector.found_duplicate?(:id=>message.id, :queue=>message.queue.url, :visibility_timeout=>message.queue.visibility_timeout)
  end
 
  # Returns the instance of the DuplicateDetector used to ensure unique messages.
@@ -1,7 +1,14 @@
  namespace :chore do
- desc "Create all defined queues"
- task :create do
- Chore::Queues::SQS.create_queues!
+ desc <<-DESC.gsub(/^\s+/, '')
+ Create all defined queues. If the halt_on_existing argument is set (defaults to off) the task will abort if a single
+ queue already exists without attempting to create any.
+
+ This flag is specifically provided for our integration testing platform to ensure we don't deploy to an incorrect environment.
+ DESC
+ task :create, :halt_on_existing do |t, args|
+ halt_on_existing = %w(1 true yes t y).include?(args[:halt_on_existing])
+
+ Chore::Queues::SQS.create_queues!(halt_on_existing)
  end
 
  desc "Remove all defined queues"
data/lib/chore/version.rb CHANGED
@@ -1,8 +1,8 @@
  module Chore
  module Version #:nodoc:
  MAJOR = 1
- MINOR = 5
- PATCH = 10
+ MINOR = 7
+ PATCH = 2
 
  STRING = [ MAJOR, MINOR, PATCH ].join('.')
  end
data/lib/chore/worker.rb CHANGED
@@ -22,7 +22,7 @@ module Chore
 
  # Create a Worker. Give it an array of work (or single item), and +opts+.
  # Currently, the only option supported by Worker is +:payload_handler+ which contains helpers
- # for decoding the item and finding the correct payload class
+ # for decoding the item and finding the correct payload class
  def initialize(work=[],opts={})
  @stopping = false
  @started_at = Time.now
@@ -79,27 +79,44 @@ module Chore
  message = options[:payload_handler].decode(item.message)
  klass = options[:payload_handler].payload_class(message)
  return unless klass.run_hooks_for(:before_perform,message)
+
  begin
  Chore.logger.info { "Running job #{klass} with params #{message}"}
  perform_job(klass,message)
  item.consumer.complete(item.id)
  Chore.logger.info { "Finished job #{klass} with params #{message}"}
- klass.run_hooks_for(:after_perform,message)
+ klass.run_hooks_for(:after_perform, message)
  rescue Job::RejectMessageException
  item.consumer.reject(item.id)
  Chore.logger.error { "Failed to run job for #{item.message} with error: Job raised a RejectMessageException" }
  klass.run_hooks_for(:on_rejected, message)
  rescue => e
- Chore.logger.error { "Failed to run job #{item.message} with error: #{e.message} at #{e.backtrace * "\n"}" }
- if item.current_attempt >= klass.options[:max_attempts]
- klass.run_hooks_for(:on_permanent_failure,item.queue_name,message,e)
- item.consumer.complete(item.id)
+ if klass.has_backoff?
+ attempt_to_delay(item, message, klass)
  else
- klass.run_hooks_for(:on_failure,message,e)
+ handle_failure(item, message, klass, e)
  end
  end
  end
 
+ def attempt_to_delay(item, message, klass)
+ delayed_for = item.consumer.delay(item, klass.options[:backoff])
+ Chore.logger.info { "Delaying retry by #{delayed_for} seconds for the job #{item.message}" }
+ klass.run_hooks_for(:on_delay, message)
+ rescue => e
+ handle_failure(item, message, klass, e)
+ end
+
+ def handle_failure(item, message, klass, e)
+ Chore.logger.error { "Failed to run job #{item.message} with error: #{e.message} at #{e.backtrace * "\n"}" }
+ if item.current_attempt >= klass.options[:max_attempts]
+ klass.run_hooks_for(:on_permanent_failure,item.queue_name,message,e)
+ item.consumer.complete(item.id)
+ else
+ klass.run_hooks_for(:on_failure, message, e)
+ end
+ end
+
  def perform_job(klass, message)
  klass.perform(*options[:payload_handler].payload(message))
  end
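
Since the worker now fires an `:on_delay` hook whenever a retry is deferred, a job can observe its own backoff. The sketch below is hypothetical: the hook method name assumes the same prefix convention used by the `before_perform_log` example in the README diff above, and the logging is illustrative.

```ruby
# Hypothetical :on_delay hook, following Chore's hook-name prefix convention.
class BackoffAwareJob
  include Chore::Job
  queue_options :name    => 'backoff_queue',
                :backoff => lambda { |work| work.current_attempt ** 2 }

  def perform(args={})
    # ...
  end

  def on_delay_log(message)
    Chore.logger.info "Retry delayed for: #{message.inspect}"
  end
end
```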
@@ -15,32 +15,32 @@ describe Chore::DuplicateDetector do
 
  describe "#found_duplicate" do
  it 'should not return true if the message has not already been seen' do
- memcache.should_receive(:add).and_return(true)
- dedupe.found_duplicate?(message_data).should_not be_true
+ expect(memcache).to receive(:add).and_return(true)
+ expect(dedupe.found_duplicate?(message_data)).to_not be true
  end
 
  it 'should return true if the message has already been seen' do
- memcache.should_receive(:add).and_return(false)
- dedupe.found_duplicate?(message_data).should be_true
+ expect(memcache).to receive(:add).and_return(false)
+ expect(dedupe.found_duplicate?(message_data)).to be true
  end
 
  it 'should return false if given an invalid message' do
- dedupe.found_duplicate?({}).should be_false
+ expect(dedupe.found_duplicate?({})).to be false
  end
 
  it "should return false when identity store errors" do
- memcache.should_receive(:add).and_raise("no")
- dedupe.found_duplicate?(message_data).should be_false
+ expect(memcache).to receive(:add).and_raise("no")
+ expect(dedupe.found_duplicate?(message_data)).to be false
  end
 
  it "should set the timeout to be the queue's " do
- memcache.should_receive(:add).with(id,"1",timeout).and_return(true)
- dedupe.found_duplicate?(message_data).should be_false
+ expect(memcache).to receive(:add).with(id,"1",timeout).and_return(true)
+ expect(dedupe.found_duplicate?(message_data)).to be false
  end
 
  it "should call #visibility_timeout once and only once" do
- queue.should_receive(:visibility_timeout).once
- memcache.should_receive(:add).at_least(3).times.and_return(true)
+ expect(queue).to receive(:visibility_timeout).once
+ expect(memcache).to receive(:add).at_least(3).times.and_return(true)
  3.times { dedupe.found_duplicate?(message_data) }
  end
 
@@ -49,8 +49,8 @@ describe Chore::DuplicateDetector
  let(:dupe_on_cache_failure) { true }
 
  it "returns true" do
- memcache.should_receive(:add).and_raise
- dedupe.found_duplicate?(message_data).should be_true
+ expect(memcache).to receive(:add).and_raise
+ expect(dedupe.found_duplicate?(message_data)).to be true
  end
  end
  end
@@ -9,7 +9,8 @@ describe Chore::Job do
  end
 
  after(:each) do
- TestJob.queue_options config
+ # Reset the config
+ TestJob.instance_variable_set(:@chore_options, nil)
  end
 
  it 'should have an perform_async method' do
@@ -38,7 +39,30 @@ describe Chore::Job
  TestJob.options[:name].should == 'test_queue'
  end
 
- describe(:perform_async) do
+ describe 'the backoff config' do
+ it 'must be a Proc instance' do
+ options = config.merge(:backoff => 'abc')
+
+ expect { TestJob.queue_options(options) }.to raise_error(ArgumentError)
+ end
+
+ it 'rejects a lambda with no arguments' do
+ options = config.merge(:backoff => lambda { })
+ expect { TestJob.queue_options(options) }.to raise_error(ArgumentError)
+ end
+
+ it 'allows a lambda with one argument' do
+ options = config.merge(:backoff => lambda { |a| })
+ expect { TestJob.queue_options(options) }.not_to raise_error
+ end
+
+ it 'rejects a lambda with two arguments' do
+ options = config.merge(:backoff => lambda { |a,b| })
+ expect { TestJob.queue_options(options) }.to raise_error(ArgumentError)
+ end
+ end
+
+ describe(:perform_async) do
  it 'should call an instance of the queue_options publisher' do
  args = [1,2,{:h => 'ash'}]
  TestJob.queue_options(:publisher => Chore::Publisher)