cloudtasker 0.12.rc6 → 0.12.rc11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 00be8d2e572e3129ef5330cdbb555aa0485edd15b191d5d8ccc5d28a98a0eab8
- data.tar.gz: 79067d7953556f03e9a81ee9d977cf2734a9843df8f9a4613dbd8d81274bc599
+ metadata.gz: 1c7677889e31eccaadb3cf019f168e2d7133f10682379d0c2ccf1987da450105
+ data.tar.gz: 66ea1098cfbfdc9e5261dba0d6d033e44c8d84ee7623c464fc3bccd3999e708e
  SHA512:
- metadata.gz: 99d9b7ba99390e6bc10e9ed917262782a7c47814df6412c847dedd06a8a06a6b9ad4eaf5ddbcde03534ec88f34fcb1cda03621a3c2a0202031d087e6cd915c5f
- data.tar.gz: e15a6a167f15dbd8c390c7d393a80297b6cd805c85d4e5ca16f961ff147c909bebbfcf9c326708e7d7508caa3ff47b726f071e618e3979a1340848420311ddbb
+ metadata.gz: b7ccd9a1e2950d57f51dad86c847373c29caf753b092711cc9be167a8574cdaaec7eeff99dfe08fc3c0e29c4af1a7a21b04e92f36fcc6769b9e3b8f1561c6231
+ data.tar.gz: 7fc85d30cab6c80e8e64afa3285a50dfafb47f66dc214d228c758b7e19481239cf5900fad3bbbd29c3220658529eeeb106f12daf9e10f52a10f4d1bb10864354
data/.rubocop.yml CHANGED
@@ -12,7 +12,7 @@ Metrics/ModuleLength:
  Max: 150

  Metrics/AbcSize:
- Max: 20
+ Max: 25
  Exclude:
  - 'spec/support/*'

data/CHANGELOG.md CHANGED
@@ -1,17 +1,26 @@
  # Changelog

- ## Latest RC [v0.12.rc6](https://github.com/keypup-io/cloudtasker/tree/v0.12.rc6) (2021-03-31)
+ ## Latest RC [v0.12.rc11](https://github.com/keypup-io/cloudtasker/tree/v0.12.rc11) (2021-06-26)

- [Full Changelog](https://github.com/keypup-io/cloudtasker/compare/v0.11.0...v0.12.rc6)
+ [Full Changelog](https://github.com/keypup-io/cloudtasker/compare/v0.11.0...v0.12.rc11)

  **Improvements:**
  - ActiveJob: do not double log errors (ActiveJob has its own error logging)
- - Error logging: Use worker logger so as to include context (job args etc.)
- - Error logging: Do not log exception and stack trace separately, combine them instead.
  - Batch callbacks: Retry jobs when completion callback fails
- - Redis: Use Redis Sets instead of key pattern matching for listing methods (Cron jobs and Local Server)
+ - Batch state: use native Redis hashes to store batch state instead of a serialized hash in a string key
  - Batch progress: restrict calculation to direct children by default. Allow depth to be specified. Calculating progress using all tree jobs created significant delays on large batches.
+ - Batch redis usage: cleanup batches as they get completed or become dead to avoid excessive redis usage with large batches.
+ - Batch expansion: Inject `parent_batch` in jobs. Can be used to expand the parent batch the job is in.
+ - Configuration: allow configuration of Cloud Tasks `dispatch deadline` at global and worker level
+ - Configuration: allow specifying global `on_error` and `on_dead` callbacks for error reporting
+ - Cron jobs: Use Redis Sets instead of key pattern matching for resource listing
+ - Error logging: Use worker logger so as to include context (job args etc.)
+ - Error logging: Do not log exception and stack trace separately, combine them instead.
+ - Local server: Use Redis Sets instead of key pattern matching for resource listing
+ - Local server: Guard against nil tasks to prevent job daemon failures
+ - Performance: remove use of redis locks and rely on atomic transactions instead for Batch and Unique Job.
  - Worker: raise DeadWorkerError instead of MissingWorkerArgumentsError when arguments are missing. This is more consistent with what middlewares expect.
+ - Worker redis usage: delete redis payload storage once the job is successful or dead instead of expiring the key.

  **Fixed bugs:**
  - Retries: Enforce job retry limit on job processing. There was an edge case where jobs could be retried indefinitely on batch callback errors.
data/README.md CHANGED
@@ -35,8 +35,10 @@ A local processing server is also available for development. This local server p
  2. [Logging context](#logging-context)
  9. [Error Handling](#error-handling)
  1. [HTTP Error codes](#http-error-codes)
- 2. [Error callbacks](#error-callbacks)
- 3. [Max retries](#max-retries)
+ 2. [Worker callbacks](#worker-callbacks)
+ 3. [Global callbacks](#global-callbacks)
+ 4. [Max retries](#max-retries)
+ 5. [Dispatch deadline](#dispatch-deadline)
  10. [Testing](#testing)
  1. [Test helper setup](#test-helper-setup)
  2. [In-memory queues](#in-memory-queues)
@@ -351,6 +353,53 @@ Cloudtasker.configure do |config|
  #
  # Store all job payloads in Redis exceeding 50 KB:
  # config.store_payloads_in_redis = 50
+
+ #
+ # Specify the dispatch deadline for jobs in Cloud Tasks, in seconds.
+ # Jobs taking longer will be retried by Cloud Tasks, even if they eventually
+ # complete on the server side.
+ #
+ # Note that this option is applied when jobs are enqueued. Changing this value
+ # will not impact already enqueued jobs.
+ #
+ # This option can also be configured on a per worker basis via
+ # the cloudtasker_options directive.
+ #
+ # Supported since: v0.12.rc8
+ #
+ # Default: 600 seconds (10 minutes)
+ #
+ # config.dispatch_deadline = 600
+
+ #
+ # Specify a proc to be invoked every time a job fails due to a runtime
+ # error.
+ #
+ # This hook is not invoked for DeadWorkerError. See on_dead instead.
+ #
+ # This is useful when you need to apply general exception handling, such
+ # as reporting errors to a third-party service like Rollbar or Bugsnag.
+ #
+ # Note: the worker argument might be nil, such as when InvalidWorkerError is raised.
+ #
+ # Supported since: v0.12.rc11
+ #
+ # Default: no operation
+ #
+ # config.on_error = ->(error, worker) { Rollbar.error(error) }
+
+ #
+ # Specify a proc to be invoked every time a job dies due to too many
+ # retries.
+ #
+ # This is useful when you need to apply general exception handling, such
+ # as logging specific messages/context when a job dies.
+ #
+ # Supported since: v0.12.rc11
+ #
+ # Default: no operation
+ #
+ # config.on_dead = ->(error, worker) { Rollbar.error(error) }
  end
  ```

@@ -631,7 +680,7 @@ Jobs failing will automatically return the following HTTP error code to Cloud Ta
  | 404 | The job has specified an incorrect worker class. |
  | 422 | An error happened during the execution of the worker (`perform` method) |

- ### Error callbacks
+ ### Worker callbacks

  Workers can implement the `on_error(error)` and `on_dead(error)` callbacks to do things when a job fails during its execution:

@@ -659,6 +708,25 @@ class HandleErrorWorker
  end
  ```

+ ### Global callbacks
+ **Supported since**: `0.12.rc11`
+
+ If you need to apply general exception handling logic to your workers you can specify `on_error` and `on_dead` hooks in the Cloudtasker configuration.
+
+ This is useful if you need to report errors to third-party services such as Rollbar or Bugsnag.
+
+ ```ruby
+ # config/initializers/cloudtasker.rb
+
+ Cloudtasker.configure do |config|
+   #
+   # Report runtime and dead worker errors to Rollbar
+   #
+   config.on_error = ->(error, _worker) { Rollbar.error(error) }
+   config.on_dead = ->(error, _worker) { Rollbar.error(error) }
+ end
+ ```
+
  ### Max retries

  By default jobs are retried 25 times - using an exponential backoff - before being declared dead. This number of retries can be customized locally on workers and/or globally via the Cloudtasker initializer.
@@ -721,6 +789,48 @@ class SomeErrorWorker
  end
  ```

+ ### Dispatch deadline
+ **Supported since**: `0.12.rc8`
+
+ By default Cloud Tasks will automatically timeout your jobs after 10 minutes, independently of your server HTTP timeout configuration.
+
+ You can modify the dispatch deadline for jobs at a global level or on a per worker basis.
+
+ E.g. Set the default dispatch deadline to 20 minutes.
+ ```ruby
+ # config/initializers/cloudtasker.rb
+
+ Cloudtasker.configure do |config|
+   #
+   # Specify the dispatch deadline for jobs in Cloud Tasks, in seconds.
+   # Jobs taking longer will be retried by Cloud Tasks, even if they eventually
+   # complete on the server side.
+   #
+   # Note that this option is applied when jobs are enqueued. Changing this value
+   # will not impact already enqueued jobs.
+   #
+   # Default: 600 (10 minutes)
+   #
+   config.dispatch_deadline = 20 * 60 # 20 minutes
+ end
+ ```
+
+ E.g. Set a dispatch deadline of 5 minutes on a specific worker
+ ```ruby
+ # app/workers/some_faster_worker.rb
+
+ class SomeFasterWorker
+   include Cloudtasker::Worker
+
+   # This will override the global setting
+   cloudtasker_options dispatch_deadline: 5 * 60
+
+   def perform
+     # ... do things ...
+   end
+ end
+ ```
+
  ## Testing
  Cloudtasker provides several options to test your workers.

data/docs/BATCH_JOBS.md CHANGED
@@ -18,7 +18,7 @@ Cloudtasker.configure do |config|
  end
  ```

- ## Example
+ ## Example: Creating a new batch

  The following example defines a worker that adds itself to the batch with different arguments then monitors the success of the batch.

@@ -47,6 +47,38 @@ class BatchWorker
  end
  ```

+ ## Example: Expanding the parent batch
+ **Note**: `parent_batch` is available since `0.12.rc10`
+
+ ```ruby
+ # All the jobs will be attached to the top parent batch.
+ class BatchWorker
+   include Cloudtasker::Worker
+
+   def perform(level, instance)
+     # Use existing parent_batch or create a new one
+     current_batch = parent_batch || batch
+
+     3.times { |n| current_batch.add(self.class, level + 1, n) } if level < 2
+   end
+
+   # Invoked when any descendant (e.g. sub-sub job) is complete
+   def on_batch_node_complete(child)
+     logger.info("Direct or Indirect child complete: #{child.job_id}")
+   end
+
+   # Invoked when a direct descendant is complete
+   def on_child_complete(child)
+     logger.info("Direct child complete: #{child.job_id}")
+   end
+
+   # Invoked when all children have finished
+   def on_batch_complete
+     Rails.logger.info("Batch complete")
+   end
+ end
+ ```
+
  ## Available callbacks

  The following callbacks are available on your workers to track the progress of the batch:
@@ -113,7 +113,7 @@ module Cloudtasker
  # @param [Hash] http_request The HTTP request content.
  # @param [Integer] schedule_time When to run the task (Unix timestamp)
  #
- def initialize(id:, http_request:, schedule_time: nil, queue: nil, job_retries: 0)
+ def initialize(id:, http_request:, schedule_time: nil, queue: nil, job_retries: 0, **_xargs)
    @id = id
    @http_request = http_request
    @schedule_time = Time.at(schedule_time || 0)
@@ -7,7 +7,7 @@ module Cloudtasker
  module Backend
  # Manage local tasks pushed to Redis
  class RedisTask
- attr_reader :id, :http_request, :schedule_time, :retries, :queue
+ attr_reader :id, :http_request, :schedule_time, :retries, :queue, :dispatch_deadline

  RETRY_INTERVAL = 20 # seconds

@@ -39,7 +39,7 @@ module Cloudtasker
  def self.all
    if redis.exists?(key)
      # Use Schedule Set if available
-     redis.smembers(key).map { |id| find(id) }
+     redis.smembers(key).map { |id| find(id) }.compact
    else
      # Fallback to redis key matching and migrate tasks
      # to use Task Set instead.
@@ -123,13 +123,15 @@ module Cloudtasker
  # @param [Hash] http_request The HTTP request content.
  # @param [Integer] schedule_time When to run the task (Unix timestamp)
  # @param [Integer] retries The number of times the job failed.
+ # @param [Integer] dispatch_deadline The dispatch_deadline in seconds.
  #
- def initialize(id:, http_request:, schedule_time: nil, retries: 0, queue: nil)
+ def initialize(id:, http_request:, schedule_time: nil, retries: 0, queue: nil, dispatch_deadline: nil)
    @id = id
    @http_request = http_request
    @schedule_time = Time.at(schedule_time || 0)
    @retries = retries || 0
-   @queue = queue || Cloudtasker::Config::DEFAULT_JOB_QUEUE
+   @queue = queue || Config::DEFAULT_JOB_QUEUE
+   @dispatch_deadline = dispatch_deadline || Config::DEFAULT_DISPATCH_DEADLINE
  end

@@ -152,7 +154,8 @@ module Cloudtasker
    http_request: http_request,
    schedule_time: schedule_time.to_i,
    retries: retries,
-   queue: queue
+   queue: queue,
+   dispatch_deadline: dispatch_deadline
  }
  end

@@ -176,7 +179,8 @@ module Cloudtasker
    retries: is_error ? retries + 1 : retries,
    http_request: http_request,
    schedule_time: (Time.now + interval).to_i,
-   queue: queue
+   queue: queue,
+   dispatch_deadline: dispatch_deadline
  )
  redis.sadd(self.class.key, id)
  end
@@ -207,6 +211,13 @@ module Cloudtasker
  end

  resp
+ rescue Net::ReadTimeout
+   retry_later(RETRY_INTERVAL)
+   Cloudtasker.logger.info(
+     format_log_message(
+       "Task deadline exceeded (#{dispatch_deadline}s) - Retry in #{RETRY_INTERVAL} seconds..."
+     )
+   )
  end

  #
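The rescue added above turns a dispatch timeout into a delayed retry instead of a hard failure. A minimal sketch of that control flow, with the HTTP call replaced by a stub that raises `Net::ReadTimeout` (`deliver_task` is an illustrative helper, not the gem's actual API):

```ruby
require 'net/http' # defines Net::ReadTimeout

RETRY_INTERVAL = 20 # seconds, mirroring RedisTask::RETRY_INTERVAL

# Run the task; on read timeout, report a retry instead of raising.
def deliver_task(task)
  task.call
rescue Net::ReadTimeout
  # The real backend calls retry_later(RETRY_INTERVAL) here
  # and logs "Task deadline exceeded".
  :"retry_in_#{RETRY_INTERVAL}s"
end

deliver_task(-> { :ok })                    # normal completion
deliver_task(-> { raise Net::ReadTimeout }) # timeout → scheduled retry
```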
@@ -242,7 +253,7 @@ module Cloudtasker
  @http_client ||=
    begin
      uri = URI(http_request[:url])
-     Net::HTTP.new(uri.host, uri.port).tap { |e| e.read_timeout = 60 * 10 }
+     Net::HTTP.new(uri.host, uri.port).tap { |e| e.read_timeout = dispatch_deadline }
    end
  end

@@ -6,7 +6,7 @@ module Cloudtasker
  # Include batch related methods onto Cloudtasker::Worker
  # See: Cloudtasker::Batch::Middleware#configure
  module Worker
- attr_accessor :batch
+ attr_accessor :batch, :parent_batch
  end
  end
  end
@@ -17,6 +17,10 @@ module Cloudtasker
  # because the jobs will be either retried or dropped
  IGNORED_ERRORED_CALLBACKS = %i[on_child_error on_child_dead].freeze

+ # The maximum number of seconds to wait for a batch state lock
+ # to be acquired.
+ BATCH_MAX_LOCK_WAIT = 60
+
  #
  # Return the cloudtasker redis client
  #
@@ -69,8 +73,12 @@ module Cloudtasker
  # Load extension if not loaded already on the worker class
  worker.class.include(Extension::Worker) unless worker.class <= Extension::Worker

- # Add batch capability
+ # Add batch and parent batch to worker
  worker.batch = new(worker)
+ worker.parent_batch = worker.batch.parent_batch
+
+ # Return the batch
+ worker.batch
  end

  #
@@ -176,7 +184,9 @@ module Cloudtasker
  # @return [Hash] The state of each child worker.
  #
  def batch_state
- redis.fetch(batch_state_gid)
+ migrate_batch_state_to_redis_hash
+
+ redis.hgetall(batch_state_gid)
  end

  #
@@ -208,6 +218,24 @@ module Cloudtasker
  )
  end

+ #
+ # This method migrates the batch state to be a Redis hash instead
+ # of a hash stored in a string key.
+ #
+ def migrate_batch_state_to_redis_hash
+   return unless redis.type(batch_state_gid) == 'string'
+
+   # Migrate batch state to Redis hash if it is still using a legacy string key
+   # We acquire a lock then check again
+   redis.with_lock(batch_state_gid, max_wait: BATCH_MAX_LOCK_WAIT) do
+     if redis.type(batch_state_gid) == 'string'
+       state = redis.fetch(batch_state_gid)
+       redis.del(batch_state_gid)
+       redis.hset(batch_state_gid, state) if state.any?
+     end
+   end
+ end
+
  #
  # Save the batch.
  #
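The migration above can be pictured with a tiny in-memory stand-in for the handful of Redis commands involved (`type`, `fetch`, `del`, `hset`, `hgetall`). `MiniRedis` is purely illustrative and elides the locking:

```ruby
# In-memory stand-in for the Redis commands used by the migration.
class MiniRedis
  def initialize
    @strings = {} # legacy string-serialized values
    @hashes = {}  # native hash values
  end

  def write(key, value)
    @strings[key] = value
  end

  def type(key)
    return 'string' if @strings.key?(key)
    return 'hash' if @hashes.key?(key)

    'none'
  end

  def fetch(key)
    @strings[key]
  end

  def del(key)
    @strings.delete(key)
    @hashes.delete(key)
  end

  def hset(key, hash)
    (@hashes[key] ||= {}).merge!(hash)
  end

  def hgetall(key)
    @hashes[key] || {}
  end
end

redis = MiniRedis.new
key = 'cloudtasker/batch-state/some-batch-id'
redis.write(key, 'job-1' => 'scheduled') # legacy string-key state

# Same logic as migrate_batch_state_to_redis_hash (minus the lock)
if redis.type(key) == 'string'
  state = redis.fetch(key)
  redis.del(key)
  redis.hset(key, state) if state.any?
end

redis.type(key)    # => "hash"
redis.hgetall(key) # => {"job-1"=>"scheduled"}
```

After migration, per-job status updates become single `HSET` calls rather than read-modify-write cycles on a serialized blob.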
@@ -218,8 +246,11 @@ module Cloudtasker
  # complete (success or failure).
  redis.write(batch_gid, worker.to_h)

+ # Stop there if no jobs to save
+ return if jobs.empty?
+
  # Save list of child workers
- redis.write(batch_state_gid, jobs.map { |e| [e.job_id, 'scheduled'] }.to_h)
+ redis.hset(batch_state_gid, jobs.map { |e| [e.job_id, 'scheduled'] }.to_h)
  end

  #
@@ -228,29 +259,23 @@ module Cloudtasker
  # @param [String] job_id The batch id.
  # @param [String] status The status of the sub-batch.
  #
- # @return [<Type>] <description>
- #
  def update_state(batch_id, status)
-   redis.with_lock(batch_state_gid) do
-     state = batch_state
-     state[batch_id.to_sym] = status.to_s if state.key?(batch_id.to_sym)
-     redis.write(batch_state_gid, state)
-   end
+   migrate_batch_state_to_redis_hash
+
+   # Update the batch state batch_id entry with the new status
+   redis.hset(batch_state_gid, batch_id, status) if redis.hexists(batch_state_gid, batch_id)
  end

  #
  # Return true if all the child workers have completed.
  #
- # @return [<Type>] <description>
+ # @return [Boolean] True if the batch is complete.
  #
  def complete?
-   redis.with_lock(batch_state_gid) do
-     state = redis.fetch(batch_state_gid)
-     return true unless state
+   migrate_batch_state_to_redis_hash

-     # Check that all children are complete
-     state.values.all? { |e| COMPLETION_STATUSES.include?(e) }
-   end
+   # Check that all child jobs have completed
+   redis.hvals(batch_state_gid).all? { |e| COMPLETION_STATUSES.include?(e) }
  end

  #
@@ -285,8 +310,8 @@ module Cloudtasker
  # Propagate event
  parent_batch&.on_child_complete(self, status)

- # The batch tree is complete. Cleanup the tree.
- cleanup unless parent_batch
+ # The batch tree is complete. Cleanup the downstream tree.
+ cleanup
  end

  #
@@ -331,11 +356,10 @@ module Cloudtasker
  # Remove all batch and sub-batch keys from Redis.
  #
  def cleanup
- # Capture batch state
- state = batch_state
+ migrate_batch_state_to_redis_hash

  # Delete child batches recursively
- state.to_h.keys.each { |id| self.class.find(id)&.cleanup }
+ redis.hkeys(batch_state_gid).each { |id| self.class.find(id)&.cleanup }

  # Delete batch redis entries
  redis.del(batch_gid)
@@ -402,8 +426,11 @@ module Cloudtasker
  # Perform job
  yield

- # Save batch (if child worker has been enqueued)
- setup
+ # Save batch if child jobs added
+ setup if jobs.any?
+
+ # Save parent batch if batch expanded
+ parent_batch&.setup if parent_batch&.jobs&.any?

  # Complete batch
  complete(:completed)
@@ -3,7 +3,7 @@
  module Cloudtasker
  # An interface class to manage tasks on the backend (Cloud Task or Redis)
  class CloudTask
- attr_accessor :id, :http_request, :schedule_time, :retries, :queue
+ attr_accessor :id, :http_request, :schedule_time, :retries, :queue, :dispatch_deadline

  #
  # The backend to use for cloud tasks.
@@ -73,12 +73,13 @@ module Cloudtasker
  # @param [Integer] retries The number of times the job failed.
  # @param [String] queue The queue the task is in.
  #
- def initialize(id:, http_request:, schedule_time: nil, retries: 0, queue: nil)
+ def initialize(id:, http_request:, schedule_time: nil, retries: 0, queue: nil, dispatch_deadline: nil)
    @id = id
    @http_request = http_request
    @schedule_time = schedule_time
    @retries = retries || 0
    @queue = queue
+   @dispatch_deadline = dispatch_deadline
  end

  #
@@ -7,7 +7,8 @@ module Cloudtasker
  class Config
  attr_accessor :redis, :store_payloads_in_redis
  attr_writer :secret, :gcp_location_id, :gcp_project_id,
- :gcp_queue_prefix, :processor_path, :logger, :mode, :max_retries
+ :gcp_queue_prefix, :processor_path, :logger, :mode, :max_retries,
+ :dispatch_deadline, :on_error, :on_dead

  # Max Cloud Task size in bytes
  MAX_TASK_SIZE = 100 * 1024 # 100 KB
@@ -46,6 +47,14 @@ module Cloudtasker
  DEFAULT_QUEUE_CONCURRENCY = 10
  DEFAULT_QUEUE_RETRIES = -1 # unlimited

+ # Job timeout configuration for Cloud Tasks
+ DEFAULT_DISPATCH_DEADLINE = 10 * 60 # 10 minutes
+ MIN_DISPATCH_DEADLINE = 15 # seconds
+ MAX_DISPATCH_DEADLINE = 30 * 60 # 30 minutes
+
+ # Default on_error Proc
+ DEFAULT_ON_ERROR = ->(error, worker) {}
+
  # The number of times jobs will be attempted before declaring them dead.
  #
  # With the default retry configuration (maxDoublings = 16 and minBackoff = 0.100s)
@@ -207,6 +216,16 @@ module Cloudtasker
  @gcp_location_id || DEFAULT_LOCATION_ID
  end

+ #
+ # Return the Dispatch deadline duration. Cloud Tasks will timeout the job after
+ # this duration is elapsed.
+ #
+ # @return [Integer] The value in seconds.
+ #
+ def dispatch_deadline
+   @dispatch_deadline || DEFAULT_DISPATCH_DEADLINE
+ end
+
  #
  # Return the secret to use to sign the verification tokens
  # attached to tasks.
@@ -214,11 +233,31 @@ module Cloudtasker
  # @return [String] The cloudtasker secret
  #
  def secret
- @secret || (
+ @secret ||= (
    defined?(Rails) && Rails.application.credentials&.dig(:secret_key_base)
  ) || raise(StandardError, SECRET_MISSING_ERROR)
  end

+ #
+ # Return a Proc invoked whenever a worker runtime error is raised.
+ # See Cloudtasker::WorkerHandler.with_worker_handling
+ #
+ # @return [Proc] A Proc handler
+ #
+ def on_error
+   @on_error || DEFAULT_ON_ERROR
+ end
+
+ #
+ # Return a Proc invoked whenever a worker DeadWorkerError is raised.
+ # See Cloudtasker::WorkerHandler.with_worker_handling
+ #
+ # @return [Proc] A Proc handler
+ #
+ def on_dead
+   @on_dead || DEFAULT_ON_ERROR
+ end
+
  #
  # Return the chain of client middlewares.
  #
@@ -84,7 +84,7 @@ module Cloudtasker
  # Deliver task
  begin
- Thread.current['task'].deliver
+ Thread.current['task']&.deliver
  rescue Errno::EBADF, Errno::ECONNREFUSED => e
    raise(e) unless Thread.current['attempts'] < 3

@@ -75,14 +75,18 @@ module Cloudtasker
  # end
  #
  # @param [String] cache_key The cache key to access.
+ # @param [Integer] max_wait The number of seconds after which the lock will be cleared anyway.
  #
- def with_lock(cache_key)
+ def with_lock(cache_key, max_wait: nil)
    return nil unless cache_key

+   # Set max wait
+   max_wait = (max_wait || LOCK_DURATION).to_i
+
    # Wait to acquire lock
    lock_key = [LOCK_KEY_PREFIX, cache_key].join('/')
    client.with do |conn|
-     sleep(LOCK_WAIT_DURATION) until conn.set(lock_key, true, nx: true, ex: LOCK_DURATION)
+     sleep(LOCK_WAIT_DURATION) until conn.set(lock_key, true, nx: true, ex: max_wait)
    end

  # yield content
@@ -149,25 +149,18 @@ module Cloudtasker
  # if taken by another job.
  #
  def lock!
-   redis.with_lock(unique_gid) do
-     locked_id = redis.get(unique_gid)
+   lock_acquired = redis.set(unique_gid, id, nx: true, ex: lock_ttl)
+   lock_already_acquired = !lock_acquired && redis.get(unique_gid) == id

-     # Abort job lock process if lock is already taken by another job
-     raise(LockError, locked_id) if locked_id && locked_id != id
-
-     # Take job lock if the lock is currently free
-     redis.set(unique_gid, id, ex: lock_ttl) unless locked_id
-   end
+   raise(LockError) unless lock_acquired || lock_already_acquired
  end

  #
  # Delete the job lock.
  #
  def unlock!
-   redis.with_lock(unique_gid) do
-     locked_id = redis.get(unique_gid)
-     redis.del(unique_gid) if locked_id == id
-   end
+   locked_id = redis.get(unique_gid)
+   redis.del(unique_gid) if locked_id == id
  end
  end
  end
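The rewrite above replaces a lock-read-write sequence with a single atomic `SET NX EX`, so no surrounding lock is needed. A sketch of the acquire-or-verify logic against an in-memory stand-in (`FakeStore` and `lock!`'s signature here are illustrative, not the gem's API):

```ruby
# In-memory stand-in for Redis SET with nx semantics (ex is accepted but ignored).
class FakeStore
  def initialize
    @data = {}
  end

  def set(key, value, nx: false, ex: nil)
    return false if nx && @data.key?(key)

    @data[key] = value
    true
  end

  def get(key)
    @data[key]
  end
end

LockError = Class.new(StandardError)

# Acquire the lock for job_id, or pass if this job already holds it.
def lock!(store, key, job_id)
  lock_acquired = store.set(key, job_id, nx: true, ex: 600)
  lock_already_acquired = !lock_acquired && store.get(key) == job_id

  raise(LockError) unless lock_acquired || lock_already_acquired
end

store = FakeStore.new
lock!(store, 'unique-job/foo', 'job-1') # first caller wins the lock
lock!(store, 'unique-job/foo', 'job-1') # same job re-entering: no error
# lock!(store, 'unique-job/foo', 'job-2') would raise LockError
```

Because `SET NX` either writes or fails in one step, two concurrent jobs can never both believe they hold the lock.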
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module Cloudtasker
- VERSION = '0.12.rc6'
+ VERSION = '0.12.rc11'
  end
@@ -167,6 +167,22 @@ module Cloudtasker
  (@job_queue ||= self.class.cloudtasker_options_hash[:queue] || Config::DEFAULT_JOB_QUEUE).to_s
  end

+ #
+ # Return the Dispatch deadline duration. Cloud Tasks will timeout the job after
+ # this duration is elapsed.
+ #
+ # @return [Integer] The value in seconds.
+ #
+ def dispatch_deadline
+   @dispatch_deadline ||= [
+     [
+       Config::MIN_DISPATCH_DEADLINE,
+       (self.class.cloudtasker_options_hash[:dispatch_deadline] || Cloudtasker.config.dispatch_deadline).to_i
+     ].max,
+     Config::MAX_DISPATCH_DEADLINE
+   ].min
+ end
+
  #
  # Return the Cloudtasker logger instance.
  #
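The worker-level getter above clamps the configured value between `MIN_DISPATCH_DEADLINE` (15 seconds) and `MAX_DISPATCH_DEADLINE` (30 minutes). The clamping arithmetic in isolation (`clamp_deadline` is an illustrative helper):

```ruby
MIN_DISPATCH_DEADLINE = 15      # seconds
MAX_DISPATCH_DEADLINE = 30 * 60 # 30 minutes

# Clamp a configured deadline into the range accepted for dispatch deadlines.
def clamp_deadline(value)
  [[MIN_DISPATCH_DEADLINE, value.to_i].max, MAX_DISPATCH_DEADLINE].min
end

clamp_deadline(5)    # => 15   (raised to the minimum)
clamp_deadline(1200) # => 1200 (within range, kept as-is)
clamp_deadline(7200) # => 1800 (capped at the maximum)
```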
@@ -14,12 +14,6 @@ module Cloudtasker
  # payloads in Redis
  REDIS_PAYLOAD_NAMESPACE = 'payload'

- # Arg payload cache keys get expired instead of deleted
- # in case jobs are re-processed due to connection interruption
- # (job is successful but Cloud Task considers it as failed due
- # to network interruption)
- ARGS_PAYLOAD_CLEANUP_TTL = 3600 # 1 hour
-
  #
  # Return a namespaced key
  #
@@ -100,20 +94,19 @@ module Cloudtasker
  # Yield worker
  resp = yield(worker)

- # Schedule args payload deletion after job has been successfully processed
- # Note: we expire the key instead of deleting it immediately in case the job
- # succeeds but is considered as failed by Cloud Task due to network interruption.
- # In such case the job is likely to be re-processed soon after.
- redis.expire(args_payload_key, ARGS_PAYLOAD_CLEANUP_TTL) if args_payload_key && !worker.job_reenqueued
+ # Delete stored args payload if job has completed
+ redis.del(args_payload_key) if args_payload_key && !worker.job_reenqueued

  resp
  rescue DeadWorkerError => e
    # Delete stored args payload if job is dead
-   redis.expire(args_payload_key, ARGS_PAYLOAD_CLEANUP_TTL) if args_payload_key
+   redis.del(args_payload_key) if args_payload_key
    log_execution_error(worker, e)
+   Cloudtasker.config.on_dead.call(e, worker)
    raise(e)
  rescue StandardError => e
    log_execution_error(worker, e)
+   Cloudtasker.config.on_error.call(e, worker)
    raise(e)
  end

@@ -165,6 +158,7 @@ module Cloudtasker
    },
    body: worker_payload.to_json
  },
+ dispatch_deadline: worker.dispatch_deadline.to_i,
  queue: worker.job_queue
  }
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: cloudtasker
  version: !ruby/object:Gem::Version
- version: 0.12.rc6
+ version: 0.12.rc11
  platform: ruby
  authors:
  - Arnaud Lachaume
- autorequire:
+ autorequire:
  bindir: exe
  cert_chain: []
- date: 2021-03-31 00:00:00.000000000 Z
+ date: 2021-06-26 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: activesupport
@@ -416,7 +416,7 @@ metadata:
  homepage_uri: https://github.com/keypup-io/cloudtasker
  source_code_uri: https://github.com/keypup-io/cloudtasker
  changelog_uri: https://github.com/keypup-io/cloudtasker/master/tree/CHANGELOG.md
- post_install_message:
+ post_install_message:
  rdoc_options: []
  require_paths:
  - lib
@@ -432,7 +432,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  version: 1.3.1
  requirements: []
  rubygems_version: 3.0.0
- signing_key:
+ signing_key:
  specification_version: 4
  summary: Background jobs for Ruby using Google Cloud Tasks (beta)
  test_files: []