cloudtasker 0.12.rc5 → 0.12.rc10

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: a1f63a9bfefe90d0cfa0d6b567098ec72efe150894fbd878daa72fe934d27a25
- data.tar.gz: 347d6358120bd83f116b569fafcd0313a96d8c34a4f6222ca20b0dedda866098
+ metadata.gz: 5763bef46a0c554549150326375c3fb49c9eb77193b6862334dc1da72a5fe34f
+ data.tar.gz: 211272f9129642cb8c7260f639a6a76714161ac37c87dc5f073633a072781b44
  SHA512:
- metadata.gz: a4ecf9ad17d612133653f70e7691fc746742c3ca62eb8f67c09bdc1992776391e755a5c674b97d0339bd74514958299062610d448b089fe5995dd6abe5756021
- data.tar.gz: 231f0ba89cf89db4bdb138a1d34d10688161d156c9898da7a067783f75f930b9318a08dfdf240188e23dfa3a0e9dd59b38b1b235b5ab5038eb6727a244877017
+ metadata.gz: ac4012191c2878256abdc446eabc4cd4cf4ff822e426fc7a89e25c77ab2724c5b1d2e508fb88092ddde86680512797c50711d4188798a281cf275b3e03304a0e
+ data.tar.gz: 7a1558eed571862501a3e5a264cfa006014e9454c0f31cb915c9d5229aed6a9c47f539287cb447a226db4302b60d761496256791d9844cc8291164910ec8d6f5
data/.rubocop.yml CHANGED
@@ -12,7 +12,7 @@ Metrics/ModuleLength:
  Max: 150
 
  Metrics/AbcSize:
- Max: 20
+ Max: 25
  Exclude:
  - 'spec/support/*'
 
data/CHANGELOG.md CHANGED
@@ -1,16 +1,25 @@
  # Changelog
 
- ## Latest RC [v0.12.rc5](https://github.com/keypup-io/cloudtasker/tree/v0.12.rc5) (2021-03-30)
+ ## Latest RC [v0.12.rc10](https://github.com/keypup-io/cloudtasker/tree/v0.12.rc10) (2021-05-31)
 
- [Full Changelog](https://github.com/keypup-io/cloudtasker/compare/v0.11.0...v0.12.rc5)
+ [Full Changelog](https://github.com/keypup-io/cloudtasker/compare/v0.11.0...v0.12.rc10)
 
  **Improvements:**
  - ActiveJob: do not double log errors (ActiveJob has its own error logging)
- - Error logging: Use worker logger so as to include context (job args etc.)
- - Error logging: Do not log exception and stack trace separately, combine them instead.
  - Batch callbacks: Retry jobs when completion callback fails
- - Redis: Use Redis Sets instead of key pattern matching for listing methods (Cron jobs and Local Server)
+ - Batch state: use native Redis hashes to store batch state instead of a serialized hash in a string key
  - Batch progress: restrict calculation to direct children by default. Allow depth to be specified. Calculating progress using all tree jobs created significant delays on large batches.
+ - Batch redis usage: cleanup batches as they get completed or become dead to avoid excessive redis usage with large batches.
+ - Batch expansion: Inject `parent_batch` in jobs. Can be used to expand the parent batch the job is in.
+ - Configuration: allow configuration of Cloud Tasks `dispatch deadline` at global and worker level
+ - Cron jobs: Use Redis Sets instead of key pattern matching for resource listing
+ - Error logging: Use worker logger so as to include context (job args etc.)
+ - Error logging: Do not log exception and stack trace separately, combine them instead.
+ - Local server: Use Redis Sets instead of key pattern matching for resource listing
+ - Local server: Guard against nil tasks to prevent job daemon failures
+ - Performance: remove use of redis locks and rely on atomic transactions instead for Batch and Unique Job.
+ - Worker: raise DeadWorkerError instead of MissingWorkerArgumentsError when arguments are missing. This is more consistent with what middlewares expect.
+ - Worker redis usage: delete redis payload storage once the job is successful or dead instead of expiring the key.
 
  **Fixed bugs:**
  - Retries: Enforce job retry limit on job processing. There was an edge case where jobs could be retried indefinitely on batch callback errors.
data/README.md CHANGED
@@ -37,6 +37,7 @@ A local processing server is also available for development. This local server p
  1. [HTTP Error codes](#http-error-codes)
  2. [Error callbacks](#error-callbacks)
  3. [Max retries](#max-retries)
+ 4. [Dispatch deadline](#dispatch-deadline)
  10. [Testing](#testing)
  1. [Test helper setup](#test-helper-setup)
  2. [In-memory queues](#in-memory-queues)
@@ -351,6 +352,23 @@ Cloudtasker.configure do |config|
  #
  # Store all job payloads in Redis exceeding 50 KB:
  # config.store_payloads_in_redis = 50
+
+ #
+ # Specify the dispatch deadline for jobs in Cloud Tasks, in seconds.
+ # Jobs taking longer will be retried by Cloud Tasks, even if they eventually
+ # complete on the server side.
+ #
+ # Note that this option is applied when jobs are enqueued. Changing this value
+ # will not impact already enqueued jobs.
+ #
+ # This option can also be configured on a per-worker basis via
+ # the cloudtasker_options directive.
+ #
+ # Supported since: v0.12.rc8
+ #
+ # Default: 600 seconds (10 minutes)
+ #
+ # config.dispatch_deadline = 600
  end
  ```
 
@@ -721,6 +739,48 @@ class SomeErrorWorker
  end
  ```
 
+ ### Dispatch deadline
+ **Supported since**: `0.12.rc8`
+
+ By default Cloud Tasks will automatically time out your jobs after 10 minutes, independently of your server's HTTP timeout configuration.
+
+ You can modify the dispatch deadline for jobs at a global level or on a per-job basis.
+
+ E.g. Set the default dispatch deadline to 20 minutes.
+ ```ruby
+ # config/initializers/cloudtasker.rb
+
+ Cloudtasker.configure do |config|
+ #
+ # Specify the dispatch deadline for jobs in Cloud Tasks, in seconds.
+ # Jobs taking longer will be retried by Cloud Tasks, even if they eventually
+ # complete on the server side.
+ #
+ # Note that this option is applied when jobs are enqueued. Changing this value
+ # will not impact already enqueued jobs.
+ #
+ # Default: 600 (10 minutes)
+ #
+ config.dispatch_deadline = 20 * 60 # 20 minutes
+ end
+ ```
+
+ E.g. Set a dispatch deadline of 5 minutes on a specific worker.
+ ```ruby
+ # app/workers/some_faster_worker.rb
+
+ class SomeFasterWorker
+ include Cloudtasker::Worker
+
+ # This will override the global setting
+ cloudtasker_options dispatch_deadline: 5 * 60
+
+ def perform
+ # ... do things ...
+ end
+ end
+ ```
+
  ## Testing
  Cloudtasker provides several options to test your workers.
 
@@ -19,7 +19,7 @@ module Cloudtasker
  # Process payload
  WorkerHandler.execute_from_payload!(payload)
  head :no_content
- rescue DeadWorkerError, MissingWorkerArgumentsError
+ rescue DeadWorkerError
  # 205: job will NOT be retried
  head :reset_content
  rescue InvalidWorkerError
data/docs/BATCH_JOBS.md CHANGED
@@ -18,7 +18,7 @@ Cloudtasker.configure do |config|
  end
  ```
 
- ## Example
+ ## Example: Creating a new batch
 
  The following example defines a worker that adds itself to the batch with different arguments, then monitors the success of the batch.
 
@@ -47,6 +47,38 @@ class BatchWorker
  end
  ```
 
+ ## Example: Expanding the parent batch
+ **Note**: `parent_batch` is available since `0.12.rc10`
+
+ ```ruby
+ # All the jobs will be attached to the top parent batch.
+ class BatchWorker
+ include Cloudtasker::Worker
+
+ def perform(level, instance)
+ # Use existing parent_batch or create a new one
+ current_batch = parent_batch || batch
+
+ 3.times { |n| current_batch.add(self.class, level + 1, n) } if level < 2
+ end
+
+ # Invoked when any descendant (e.g. sub-sub job) is complete
+ def on_batch_node_complete(child)
+ logger.info("Direct or Indirect child complete: #{child.job_id}")
+ end
+
+ # Invoked when a direct descendant is complete
+ def on_child_complete(child)
+ logger.info("Direct child complete: #{child.job_id}")
+ end
+
+ # Invoked when all children have finished
+ def on_batch_complete
+ Rails.logger.info("Batch complete")
+ end
+ end
+ ```
+
  ## Available callbacks
 
  The following callbacks are available on your workers to track the progress of the batch:
@@ -113,7 +113,7 @@ module Cloudtasker
  # @param [Hash] http_request The HTTP request content.
  # @param [Integer] schedule_time When to run the task (Unix timestamp)
  #
- def initialize(id:, http_request:, schedule_time: nil, queue: nil, job_retries: 0)
+ def initialize(id:, http_request:, schedule_time: nil, queue: nil, job_retries: 0, **_xargs)
  @id = id
  @http_request = http_request
  @schedule_time = Time.at(schedule_time || 0)
@@ -7,7 +7,7 @@ module Cloudtasker
  module Backend
  # Manage local tasks pushed to Redis
  class RedisTask
- attr_reader :id, :http_request, :schedule_time, :retries, :queue
+ attr_reader :id, :http_request, :schedule_time, :retries, :queue, :dispatch_deadline
 
  RETRY_INTERVAL = 20 # seconds
 
@@ -39,7 +39,7 @@ module Cloudtasker
  def self.all
  if redis.exists?(key)
  # Use Schedule Set if available
- redis.smembers(key).map { |id| find(id) }
+ redis.smembers(key).map { |id| find(id) }.compact
  else
  # Fallback to redis key matching and migrate tasks
  # to use Task Set instead.
@@ -123,13 +123,15 @@ module Cloudtasker
  # @param [Hash] http_request The HTTP request content.
  # @param [Integer] schedule_time When to run the task (Unix timestamp)
  # @param [Integer] retries The number of times the job failed.
+ # @param [Integer] dispatch_deadline The dispatch_deadline in seconds.
  #
- def initialize(id:, http_request:, schedule_time: nil, retries: 0, queue: nil)
+ def initialize(id:, http_request:, schedule_time: nil, retries: 0, queue: nil, dispatch_deadline: nil)
  @id = id
  @http_request = http_request
  @schedule_time = Time.at(schedule_time || 0)
  @retries = retries || 0
- @queue = queue || Cloudtasker::Config::DEFAULT_JOB_QUEUE
+ @queue = queue || Config::DEFAULT_JOB_QUEUE
+ @dispatch_deadline = dispatch_deadline || Config::DEFAULT_DISPATCH_DEADLINE
  end
 
  #
@@ -152,7 +154,8 @@ module Cloudtasker
  http_request: http_request,
  schedule_time: schedule_time.to_i,
  retries: retries,
- queue: queue
+ queue: queue,
+ dispatch_deadline: dispatch_deadline
  }
  end
 
@@ -176,7 +179,8 @@ module Cloudtasker
  retries: is_error ? retries + 1 : retries,
  http_request: http_request,
  schedule_time: (Time.now + interval).to_i,
- queue: queue
+ queue: queue,
+ dispatch_deadline: dispatch_deadline
  )
  redis.sadd(self.class.key, id)
  end
@@ -207,6 +211,13 @@ module Cloudtasker
  end
 
  resp
+ rescue Net::ReadTimeout
+ retry_later(RETRY_INTERVAL)
+ Cloudtasker.logger.info(
+ format_log_message(
+ "Task deadline exceeded (#{dispatch_deadline}s) - Retry in #{RETRY_INTERVAL} seconds..."
+ )
+ )
  end
 
  #
@@ -242,7 +253,7 @@ module Cloudtasker
  @http_client ||=
  begin
  uri = URI(http_request[:url])
- Net::HTTP.new(uri.host, uri.port).tap { |e| e.read_timeout = 60 * 10 }
+ Net::HTTP.new(uri.host, uri.port).tap { |e| e.read_timeout = dispatch_deadline }
  end
  end
 
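
The two hunks above tie the local server's HTTP read timeout to the task's dispatch deadline and turn read timeouts into scheduled retries. Below is a minimal sketch of that pattern using plain `net/http`; `deliver_with_deadline` and its parameters are illustrative stand-ins, not the gem's actual API.

```ruby
require 'net/http'
require 'uri'

# Post a job payload, treating a read timeout as "retry later" rather than a crash.
def deliver_with_deadline(url, dispatch_deadline:, retry_interval: 20)
  uri = URI(url)
  http = Net::HTTP.new(uri.host, uri.port)
  # Cap the request duration at the task's dispatch deadline
  http.read_timeout = dispatch_deadline
  http.post(uri.path.empty? ? '/' : uri.path, '{}', 'Content-Type' => 'application/json')
rescue Net::ReadTimeout
  # The worker exceeded its deadline: report it and let the task be re-run later
  puts "Task deadline exceeded (#{dispatch_deadline}s) - Retry in #{retry_interval} seconds..."
end
```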
@@ -6,7 +6,7 @@ module Cloudtasker
  # Include batch related methods onto Cloudtasker::Worker
  # See: Cloudtasker::Batch::Middleware#configure
  module Worker
- attr_accessor :batch
+ attr_accessor :batch, :parent_batch
  end
  end
  end
@@ -17,6 +17,10 @@ module Cloudtasker
  # because the jobs will be either retried or dropped
  IGNORED_ERRORED_CALLBACKS = %i[on_child_error on_child_dead].freeze
 
+ # The maximum number of seconds to wait for a batch state lock
+ # to be acquired.
+ BATCH_MAX_LOCK_WAIT = 60
+
  #
  # Return the cloudtasker redis client
  #
@@ -69,8 +73,12 @@ module Cloudtasker
  # Load extension if not loaded already on the worker class
  worker.class.include(Extension::Worker) unless worker.class <= Extension::Worker
 
- # Add batch capability
+ # Add batch and parent batch to worker
  worker.batch = new(worker)
+ worker.parent_batch = worker.batch.parent_batch
+
+ # Return the batch
+ worker.batch
  end
 
  #
@@ -176,7 +184,9 @@ module Cloudtasker
  # @return [Hash] The state of each child worker.
  #
  def batch_state
- redis.fetch(batch_state_gid)
+ migrate_batch_state_to_redis_hash
+
+ redis.hgetall(batch_state_gid)
  end
 
  #
@@ -208,6 +218,24 @@ module Cloudtasker
  )
  end
 
+ #
+ # This method migrates the batch state to be a Redis hash instead
+ # of a hash stored in a string key.
+ #
+ def migrate_batch_state_to_redis_hash
+ return unless redis.type(batch_state_gid) == 'string'
+
+ # Migrate batch state to Redis hash if it is still using a legacy string key
+ # We acquire a lock then check again
+ redis.with_lock(batch_state_gid, max_wait: BATCH_MAX_LOCK_WAIT) do
+ if redis.type(batch_state_gid) == 'string'
+ state = redis.fetch(batch_state_gid)
+ redis.del(batch_state_gid)
+ redis.hset(batch_state_gid, state) if state.any?
+ end
+ end
+ end
+
  #
  # Save the batch.
  #
@@ -218,8 +246,11 @@ module Cloudtasker
  # complete (success or failure).
  redis.write(batch_gid, worker.to_h)
 
+ # Stop there if no jobs to save
+ return if jobs.empty?
+
  # Save list of child workers
- redis.write(batch_state_gid, jobs.map { |e| [e.job_id, 'scheduled'] }.to_h)
+ redis.hset(batch_state_gid, jobs.map { |e| [e.job_id, 'scheduled'] }.to_h)
  end
 
  #
@@ -228,29 +259,23 @@ module Cloudtasker
  # @param [String] batch_id The batch id.
  # @param [String] status The status of the sub-batch.
  #
- # @return [<Type>] <description>
- #
  def update_state(batch_id, status)
- redis.with_lock(batch_state_gid) do
- state = batch_state
- state[batch_id.to_sym] = status.to_s if state.key?(batch_id.to_sym)
- redis.write(batch_state_gid, state)
- end
+ migrate_batch_state_to_redis_hash
+
+ # Update the batch state batch_id entry with the new status
+ redis.hset(batch_state_gid, batch_id, status) if redis.hexists(batch_state_gid, batch_id)
  end
 
  #
  # Return true if all the child workers have completed.
  #
- # @return [<Type>] <description>
+ # @return [Boolean] True if the batch is complete.
  #
  def complete?
- redis.with_lock(batch_state_gid) do
- state = redis.fetch(batch_state_gid)
- return true unless state
+ migrate_batch_state_to_redis_hash
 
- # Check that all children are complete
- state.values.all? { |e| COMPLETION_STATUSES.include?(e) }
- end
+ # Check that all child jobs have completed
+ redis.hvals(batch_state_gid).all? { |e| COMPLETION_STATUSES.include?(e) }
  end
 
  #
@@ -285,8 +310,8 @@ module Cloudtasker
  # Propagate event
  parent_batch&.on_child_complete(self, status)
 
- # The batch tree is complete. Cleanup the tree.
- cleanup unless parent_batch
+ # The batch tree is complete. Cleanup the downstream tree.
+ cleanup
  end
 
  #
@@ -331,11 +356,10 @@ module Cloudtasker
  # Remove all batch and sub-batch keys from Redis.
  #
  def cleanup
- # Capture batch state
- state = batch_state
+ migrate_batch_state_to_redis_hash
 
  # Delete child batches recursively
- state.to_h.keys.each { |id| self.class.find(id)&.cleanup }
+ redis.hkeys(batch_state_gid).each { |id| self.class.find(id)&.cleanup }
 
  # Delete batch redis entries
  redis.del(batch_gid)
@@ -402,8 +426,11 @@ module Cloudtasker
  # Perform job
  yield
 
- # Save batch (if child worker has been enqueued)
- setup
+ # Save batch if child jobs added
+ setup if jobs.any?
+
+ # Save parent batch if batch expanded
+ parent_batch&.setup if parent_batch&.jobs&.any?
 
  # Complete batch
  complete(:completed)
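
Taken together, the batch hunks above replace a serialized hash stored in a single string key with a native Redis hash: each child job becomes a hash field, so status updates and completion checks are single commands instead of lock-read-modify-write cycles. A minimal sketch of the resulting access pattern, using the `redis` gem directly (the key name is illustrative; Cloudtasker namespaces its own keys):

```ruby
require 'redis'

redis = Redis.new
key = 'cloudtasker/batch-state/some-batch-id' # illustrative key

# Seed the state: one field per child job (mirrors Batch#save)
redis.hset(key, 'job-1', 'scheduled')
redis.hset(key, 'job-2', 'scheduled')

# Update a single child's status atomically (mirrors Batch#update_state)
redis.hset(key, 'job-1', 'completed') if redis.hexists(key, 'job-1')

# Read back the whole state (mirrors Batch#batch_state)
redis.hgetall(key) # => {"job-1"=>"completed", "job-2"=>"scheduled"}

# Completion check over values only (mirrors Batch#complete?;
# the gem defines its own COMPLETION_STATUSES list)
completion_statuses = %w[completed dead]
redis.hvals(key).all? { |status| completion_statuses.include?(status) }
```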
@@ -3,7 +3,7 @@
  module Cloudtasker
  # An interface class to manage tasks on the backend (Cloud Task or Redis)
  class CloudTask
- attr_accessor :id, :http_request, :schedule_time, :retries, :queue
+ attr_accessor :id, :http_request, :schedule_time, :retries, :queue, :dispatch_deadline
 
  #
  # The backend to use for cloud tasks.
@@ -73,12 +73,13 @@ module Cloudtasker
  # @param [Integer] retries The number of times the job failed.
  # @param [String] queue The queue the task is in.
  #
- def initialize(id:, http_request:, schedule_time: nil, retries: 0, queue: nil)
+ def initialize(id:, http_request:, schedule_time: nil, retries: 0, queue: nil, dispatch_deadline: nil)
  @id = id
  @http_request = http_request
  @schedule_time = schedule_time
  @retries = retries || 0
  @queue = queue
+ @dispatch_deadline = dispatch_deadline
  end
 
  #
@@ -7,7 +7,7 @@ module Cloudtasker
  class Config
  attr_accessor :redis, :store_payloads_in_redis
  attr_writer :secret, :gcp_location_id, :gcp_project_id,
- :gcp_queue_prefix, :processor_path, :logger, :mode, :max_retries
+ :gcp_queue_prefix, :processor_path, :logger, :mode, :max_retries, :dispatch_deadline
 
  # Max Cloud Task size in bytes
  MAX_TASK_SIZE = 100 * 1024 # 100 KB
@@ -46,6 +46,11 @@ module Cloudtasker
  DEFAULT_QUEUE_CONCURRENCY = 10
  DEFAULT_QUEUE_RETRIES = -1 # unlimited
 
+ # Job timeout configuration for Cloud Tasks
+ DEFAULT_DISPATCH_DEADLINE = 10 * 60 # 10 minutes
+ MIN_DISPATCH_DEADLINE = 15 # seconds
+ MAX_DISPATCH_DEADLINE = 30 * 60 # 30 minutes
+
  # The number of times jobs will be attempted before declaring them dead.
  #
  # With the default retry configuration (maxDoublings = 16 and minBackoff = 0.100s)
@@ -207,6 +212,16 @@ module Cloudtasker
  @gcp_location_id || DEFAULT_LOCATION_ID
  end
 
+ #
+ # Return the dispatch deadline duration. Cloud Tasks will time out the job
+ # after this duration has elapsed.
+ #
+ # @return [Integer] The value in seconds.
+ #
+ def dispatch_deadline
+ @dispatch_deadline || DEFAULT_DISPATCH_DEADLINE
+ end
+
  #
  # Return the secret to use to sign the verification tokens
  # attached to tasks.
@@ -84,7 +84,7 @@ module Cloudtasker
 
  # Deliver task
  begin
- Thread.current['task'].deliver
+ Thread.current['task']&.deliver
  rescue Errno::EBADF, Errno::ECONNREFUSED => e
  raise(e) unless Thread.current['attempts'] < 3
 
@@ -75,14 +75,18 @@ module Cloudtasker
  # end
  #
  # @param [String] cache_key The cache key to access.
+ # @param [Integer] max_wait The number of seconds after which the lock will be cleared anyway.
  #
- def with_lock(cache_key)
+ def with_lock(cache_key, max_wait: nil)
  return nil unless cache_key
 
+ # Set max wait
+ max_wait = (max_wait || LOCK_DURATION).to_i
+
  # Wait to acquire lock
  lock_key = [LOCK_KEY_PREFIX, cache_key].join('/')
  client.with do |conn|
- sleep(LOCK_WAIT_DURATION) until conn.set(lock_key, true, nx: true, ex: LOCK_DURATION)
+ sleep(LOCK_WAIT_DURATION) until conn.set(lock_key, true, nx: true, ex: max_wait)
  end
 
  # yield content
@@ -149,25 +149,18 @@ module Cloudtasker
  # if taken by another job.
  #
  def lock!
- redis.with_lock(unique_gid) do
- locked_id = redis.get(unique_gid)
+ lock_acquired = redis.set(unique_gid, id, nx: true, ex: lock_ttl)
+ lock_already_acquired = !lock_acquired && redis.get(unique_gid) == id
 
- # Abort job lock process if lock is already taken by another job
- raise(LockError, locked_id) if locked_id && locked_id != id
-
- # Take job lock if the lock is currently free
- redis.set(unique_gid, id, ex: lock_ttl) unless locked_id
- end
+ raise(LockError) unless lock_acquired || lock_already_acquired
  end
 
  #
  # Delete the job lock.
  #
  def unlock!
- redis.with_lock(unique_gid) do
- locked_id = redis.get(unique_gid)
- redis.del(unique_gid) if locked_id == id
- end
+ locked_id = redis.get(unique_gid)
+ redis.del(unique_gid) if locked_id == id
  end
  end
  end
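
The rewritten `lock!` leans on the fact that Redis `SET` with `NX` and `EX` is a single atomic command: the key is written only if it does not exist, and its expiry is set in the same operation, so no surrounding lock is needed. A standalone sketch with the `redis` gem; the key and TTL are illustrative stand-ins for `unique_gid` and `lock_ttl`:

```ruby
require 'redis'
require 'securerandom'

redis = Redis.new
unique_gid = 'cloudtasker/unique-job/some-job-key' # illustrative key
job_id = SecureRandom.uuid
lock_ttl = 600 # seconds, illustrative

# Atomic: write the key only if absent, with an expiry (SET NX EX)
lock_acquired = redis.set(unique_gid, job_id, nx: true, ex: lock_ttl)

# A re-delivered job may find its own id already stored; that still
# counts as holding the lock
lock_already_acquired = !lock_acquired && redis.get(unique_gid) == job_id

raise 'lock held by another job' unless lock_acquired || lock_already_acquired
```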
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module Cloudtasker
- VERSION = '0.12.rc5'
+ VERSION = '0.12.rc10'
  end
@@ -167,6 +167,22 @@ module Cloudtasker
  (@job_queue ||= self.class.cloudtasker_options_hash[:queue] || Config::DEFAULT_JOB_QUEUE).to_s
  end
 
+ #
+ # Return the dispatch deadline duration. Cloud Tasks will time out the job
+ # after this duration has elapsed.
+ #
+ # @return [Integer] The value in seconds.
+ #
+ def dispatch_deadline
+ @dispatch_deadline ||= [
+ [
+ Config::MIN_DISPATCH_DEADLINE,
+ (self.class.cloudtasker_options_hash[:dispatch_deadline] || Cloudtasker.config.dispatch_deadline).to_i
+ ].max,
+ Config::MAX_DISPATCH_DEADLINE
+ ].min
+ end
+
  #
  # Return the Cloudtasker logger instance.
  #
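
The nested `min`/`max` above is a clamp: the configured value (worker option first, then global config) is raised to `MIN_DISPATCH_DEADLINE` and capped at `MAX_DISPATCH_DEADLINE`. A plain-Ruby illustration using the constants defined in `Config` (`15` and `30 * 60`):

```ruby
# Clamp a configured deadline into the range allowed by Cloud Tasks
clamp = ->(value) { [[15, value.to_i].max, 30 * 60].min }

clamp.call(5)    # => 15   (below the minimum, raised to it)
clamp.call(600)  # => 600  (within bounds, kept as-is)
clamp.call(7200) # => 1800 (above the maximum, capped)
```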
@@ -332,6 +348,22 @@ module Cloudtasker
  job_retries > job_max_retries
  end
 
+ #
+ # Return true if the job arguments are missing.
+ #
+ # This may happen if a job was successfully run but is retried
+ # because the Cloud Tasks dispatch deadline was exceeded. If the
+ # arguments were stored in Redis then they may have been flushed
+ # already after the successful completion.
+ #
+ # If job arguments are missing then the job will simply be declared dead.
+ #
+ # @return [Boolean] True if the arguments are missing.
+ #
+ def arguments_missing?
+ job_args.empty? && [0, -1].exclude?(method(:perform).arity)
+ end
+
  #
  # Return the time taken (in seconds) to perform the job. This duration
  # includes the middlewares and the actual perform method.
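
The guard treats empty `job_args` as legitimate only when `perform` declares no required parameters (`exclude?` is ActiveSupport's negation of `include?`). How `Method#arity` feeds that check, in plain Ruby:

```ruby
# Method#arity is 0 for no parameters and -1 for a bare splat: the two
# cases where empty job args are expected rather than a sign of a lost payload.
def no_args; end
def splat_args(*args); end
def required_args(a, b); end

method(:no_args).arity       # => 0  -> empty args are fine
method(:splat_args).arity    # => -1 -> empty args are fine
method(:required_args).arity # => 2  -> empty args mean the payload was lost
```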
@@ -384,14 +416,9 @@ module Cloudtasker
  Cloudtasker.config.server_middleware.invoke(self) do
  # Immediately abort the job if it is already dead
  flag_as_dead if job_dead?
+ flag_as_dead(MissingWorkerArgumentsError.new('worker arguments are missing')) if arguments_missing?
 
  begin
- # Abort if arguments are missing. This may happen with redis arguments storage
- # if Cloud Tasks times out on a job but the job still succeeds
- if job_args.empty? && [0, -1].exclude?(method(:perform).arity)
- raise(MissingWorkerArgumentsError, 'worker arguments are missing')
- end
-
  # Perform the job
  perform(*job_args)
  rescue StandardError => e
@@ -14,12 +14,6 @@ module Cloudtasker
  # payloads in Redis
  REDIS_PAYLOAD_NAMESPACE = 'payload'
 
- # Arg payload cache keys get expired instead of deleted
- # in case jobs are re-processed due to connection interruption
- # (job is successful but Cloud Task considers it as failed due
- # to network interruption)
- ARGS_PAYLOAD_CLEANUP_TTL = 3600 # 1 hour
-
  #
  # Return a namespaced key
  #
@@ -100,16 +94,13 @@ module Cloudtasker
  # Yield worker
  resp = yield(worker)
 
- # Schedule args payload deletion after job has been successfully processed
- # Note: we expire the key instead of deleting it immediately in case the job
- # succeeds but is considered as failed by Cloud Task due to network interruption.
- # In such case the job is likely to be re-processed soon after.
- redis.expire(args_payload_key, ARGS_PAYLOAD_CLEANUP_TTL) if args_payload_key && !worker.job_reenqueued
+ # Delete stored args payload if job has completed
+ redis.del(args_payload_key) if args_payload_key && !worker.job_reenqueued
 
  resp
- rescue DeadWorkerError, MissingWorkerArgumentsError => e
+ rescue DeadWorkerError => e
  # Delete stored args payload if job is dead
- redis.expire(args_payload_key, ARGS_PAYLOAD_CLEANUP_TTL) if args_payload_key
+ redis.del(args_payload_key) if args_payload_key
  log_execution_error(worker, e)
  raise(e)
  rescue StandardError => e
@@ -165,6 +156,7 @@ module Cloudtasker
  },
  body: worker_payload.to_json
  },
+ dispatch_deadline: worker.dispatch_deadline.to_i,
  queue: worker.job_queue
  }
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: cloudtasker
  version: !ruby/object:Gem::Version
- version: 0.12.rc5
+ version: 0.12.rc10
  platform: ruby
  authors:
  - Arnaud Lachaume
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2021-03-30 00:00:00.000000000 Z
+ date: 2021-05-31 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: activesupport