cloudtasker 0.12.rc1 → 0.12.rc6

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: b89246a4e0d7f658fdb3115c3a00b54df0e55b930428fe52caef471386119906
-  data.tar.gz: 4fe22aa2588e95fffcb223e32e50a2baa5df2730d1e04f8f9d0202aa4138fbd4
+  metadata.gz: 00be8d2e572e3129ef5330cdbb555aa0485edd15b191d5d8ccc5d28a98a0eab8
+  data.tar.gz: 79067d7953556f03e9a81ee9d977cf2734a9843df8f9a4613dbd8d81274bc599
 SHA512:
-  metadata.gz: 1ea18e7601bb394ad4d7f72838bf3eaa9d7ff05df0f0b4f3d08e5d7fa82cee52e51be0c4148d0ac282986f366f6159d0bcc634993f07fbbf28c63ec021d29b5e
-  data.tar.gz: 9282074934628583c6c7b5d2bc6efaf6fac87777bab718adf2204f3eb87143f8bce950711395f6d1ecc6dac6fb349fef52bc94eeb42352322d824b6b35011d5a
+  metadata.gz: 99d9b7ba99390e6bc10e9ed917262782a7c47814df6412c847dedd06a8a06a6b9ad4eaf5ddbcde03534ec88f34fcb1cda03621a3c2a0202031d087e6cd915c5f
+  data.tar.gz: e15a6a167f15dbd8c390c7d393a80297b6cd805c85d4e5ca16f961ff147c909bebbfcf9c326708e7d7508caa3ff47b726f071e618e3979a1340848420311ddbb
data/.rubocop.yml CHANGED
@@ -6,7 +6,7 @@ AllCops:
     - 'vendor/**/*'
 
 Metrics/ClassLength:
-  Max: 150
+  Max: 200
 
 Metrics/ModuleLength:
   Max: 150
data/CHANGELOG.md CHANGED
@@ -1,8 +1,8 @@
 # Changelog
 
-## Latest RC [v0.12.rc1](https://github.com/keypup-io/cloudtasker/tree/v0.12.rc1) (2021-03-11)
+## Latest RC [v0.12.rc6](https://github.com/keypup-io/cloudtasker/tree/v0.12.rc6) (2021-03-31)
 
-[Full Changelog](https://github.com/keypup-io/cloudtasker/compare/v0.11.0...v0.12.rc1)
+[Full Changelog](https://github.com/keypup-io/cloudtasker/compare/v0.11.0...v0.12.rc6)
 
 **Improvements:**
 - ActiveJob: do not double log errors (ActiveJob has its own error logging)
@@ -10,6 +10,8 @@
 - Error logging: Do not log exception and stack trace separately, combine them instead.
 - Batch callbacks: Retry jobs when completion callback fails
 - Redis: Use Redis Sets instead of key pattern matching for listing methods (Cron jobs and Local Server)
+- Batch progress: restrict calculation to direct children by default and allow a depth to be specified. Calculating progress across all jobs in the tree created significant delays on large batches.
+- Worker: raise DeadWorkerError instead of MissingWorkerArgumentsError when arguments are missing. This is more consistent with what middlewares expect.
 
 **Fixed bugs:**
 - Retries: Enforce job retry limit on job processing. There was an edge case where jobs could be retried indefinitely on batch callback errors.
data/README.md CHANGED
@@ -12,7 +12,7 @@ Cloudtasker also provides optional modules for running [cron jobs](docs/CRON_JOB
 
 A local processing server is also available for development. This local server processes jobs in lieu of Cloud Tasks and allows you to work offline.
 
-**Maturity**: This gem is production-ready. We at Keypup have already processed millions of jobs using Cloudtasker and all related extensions (cron, batch and unique jobs). I'm waiting till the end of 2020 before releasing the official `v1.0.0` in case we've missed any edge-case bug.
+**Maturity**: This gem is production-ready. We at Keypup have already processed millions of jobs using Cloudtasker and all related extensions (cron, batch and unique jobs). We plan to release the official `v1.0.0` in the course of 2021, once we are confident no edge-case bugs remain.
 
 ## Summary
 
@@ -136,7 +136,7 @@ That's it! Your job was picked up by the Cloudtasker local server and sent for p
 Now jump to the next section to configure your app to use Google Cloud Tasks as a backend.
 
 ## Get started with Rails & ActiveJob
-**Note**: ActiveJob is supported since `0.11.0`
+**Note**: ActiveJob is supported since `0.11.0`
 **Note**: Cloudtasker extensions (cron, batch and unique jobs) are not available when using cloudtasker via ActiveJob.
 
 Cloudtasker is pre-integrated with ActiveJob. Follow the steps below to get started.
@@ -19,7 +19,7 @@ module Cloudtasker
       # Process payload
       WorkerHandler.execute_from_payload!(payload)
       head :no_content
-    rescue DeadWorkerError, MissingWorkerArgumentsError
+    rescue DeadWorkerError
       # 205: job will NOT be retried
       head :reset_content
     rescue InvalidWorkerError
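
Editor's note: with this change, missing-argument jobs now surface as `DeadWorkerError` and are acknowledged with HTTP 205, so Cloud Tasks stops retrying them. The sketch below restates that mapping outside the gem; the error classes are redefined locally and the non-2xx fallbacks are assumptions, not the gem's actual responses.

```ruby
# Hypothetical, standalone sketch of the retry semantics (not the gem's code).
# Cloud Tasks retries a task whenever the handler responds with a non-2xx
# status, so permanently dead jobs are acknowledged with 205 Reset Content.
DeadWorkerError    = Class.new(StandardError)
InvalidWorkerError = Class.new(StandardError)

def http_status_for(error)
  case error
  when DeadWorkerError    then 205 # 2xx: acknowledged, job will NOT be retried
  when InvalidWorkerError then 404 # assumed mapping for unknown worker classes
  else                         422 # assumed mapping: non-2xx triggers a retry
  end
end

puts http_status_for(DeadWorkerError.new) # => 205
```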
data/docs/BATCH_JOBS.md CHANGED
@@ -84,8 +84,29 @@ You can access progression statistics in callback using `batch.progress`. See th
 E.g.
 ```ruby
 def on_batch_node_complete(_child_job)
-  logger.info("Total: #{batch.progress.total}")
-  logger.info("Completed: #{batch.progress.completed}")
-  logger.info("Progress: #{batch.progress.percent.to_i}%")
+  progress = batch.progress
+  logger.info("Total: #{progress.total}")
+  logger.info("Completed: #{progress.completed}")
+  logger.info("Progress: #{progress.percent.to_i}%")
+end
+```
+
+**Since:** `v0.12.rc5`
+By default the `progress` method only considers direct child jobs to evaluate the batch progress. You can pass `depth: somenumber` to the `progress` method to calculate the batch progress in a more granular way. Be aware, however, that this method recursively calculates progress on the sub-batches and is therefore expensive.
+
+E.g.
+```ruby
+def on_batch_node_complete(_child_job)
+  # Considers the children for batch progress calculation
+  progress_0 = batch.progress # same as batch.progress(depth: 0)
+
+  # Considers the children and grand-children for batch progress calculation
+  progress_1 = batch.progress(depth: 1)
+
+  # Considers the children, grand-children and great-grand-children for batch progress calculation
+  progress_2 = batch.progress(depth: 2)
+
+  logger.info("Progress: #{progress_1.percent.to_i}%")
+  logger.info("Progress: #{progress_2.percent.to_i}%")
 end
 ```
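
Editor's note: to make the `depth` semantics above concrete, here is a toy, plain-Ruby illustration written for this review. It is not Cloudtasker's `BatchProgress` API; the tree structure and the `count_jobs` helper are hypothetical.

```ruby
# Toy model: a batch is an array whose elements are either a status symbol
# (a direct child job) or a nested array (a sub-batch). `depth` bounds the
# recursion in the same way as the documented option.
def count_jobs(batch, depth: 0)
  total     = batch.size
  completed = batch.count { |child| child == :completed }
  return [completed, total] if depth <= 0

  batch.each do |child|
    next unless child.is_a?(Array) # only sub-batches are expanded

    sub_completed, sub_total = count_jobs(child, depth: depth - 1)
    completed += sub_completed
    total     += sub_total
  end
  [completed, total]
end

tree = [:completed, [:completed, :pending, [:pending]]]
p count_jobs(tree)           # => [1, 2] direct children only (depth: 0)
p count_jobs(tree, depth: 1) # => [2, 5] children and grand-children
p count_jobs(tree, depth: 2) # => [2, 6] one more generation down
```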
@@ -178,6 +178,7 @@ module Cloudtasker
         schedule_time: (Time.now + interval).to_i,
         queue: queue
       )
+      redis.sadd(self.class.key, id)
     end
 
     #
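
Editor's note: the added `redis.sadd` call above is the registration side of the "Redis Sets instead of key pattern matching" changelog item. Below is a hedged sketch of the idea using the `redis` gem; the key names are made up and are not the gem's actual keys.

```ruby
# Listing via a set index vs. scanning the keyspace (requires a local Redis).
require 'redis'

redis   = Redis.new
set_key = 'example:cron_schedules' # illustrative key only

%w[daily_report hourly_sync].each do |id|
  redis.set("#{set_key}/#{id}", '{}') # per-schedule payload, stored as before
  redis.sadd(set_key, id)             # new: also index the id in a set
end

# Before: enumerate schedules by scanning for a key pattern (O(keyspace))
scanned = redis.scan_each(match: "#{set_key}/*").to_a
# After: a single SMEMBERS call on the index set
members = redis.smembers(set_key)

p scanned.length # => 2
p members.sort   # => ["daily_report", "hourly_sync"]
```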
@@ -271,9 +271,8 @@ module Cloudtasker
       raise(e) unless IGNORED_ERRORED_CALLBACKS.include?(callback)
 
       # Log error instead
-      worker.logger.error(
-        ["Callback #{callback} failed to run. Skipping to preserve error flow.", e, e.backtrace].flatten.join("\n")
-      )
+      worker.logger.error(e)
+      worker.logger.error("Callback #{callback} failed to run. Skipping to preserve error flow.")
     end
 
     #
@@ -348,13 +347,20 @@ module Cloudtasker
     #
     # @return [Cloudtasker::Batch::BatchProgress] The batch progress.
     #
-    def progress
+    def progress(depth: 0)
+      depth = depth.to_i
+
       # Capture batch state
       state = batch_state
 
-      # Sum batch progress of current batch and all sub-batches
+      # Return immediately if we do not need to go down the tree
+      return BatchProgress.new(state) if depth <= 0
+
+      # Sum batch progress of current batch and sub-batches up to the specified
+      # depth
       state.to_h.reduce(BatchProgress.new(state)) do |memo, (child_id, child_status)|
-        memo + (self.class.find(child_id)&.progress || BatchProgress.new(child_id => child_status))
+        memo + (self.class.find(child_id)&.progress(depth: depth - 1) ||
+          BatchProgress.new(child_id => child_status))
       end
     end
 
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module Cloudtasker
-  VERSION = '0.12.rc1'
+  VERSION = '0.12.rc6'
 end
@@ -332,6 +332,22 @@ module Cloudtasker
       job_retries > job_max_retries
     end
 
+    #
+    # Return true if the job arguments are missing.
+    #
+    # This may happen if a job was successfully run but retried because the
+    # Cloud Task dispatch deadline was exceeded. If the arguments were stored
+    # in Redis then they may have been flushed already after the successful
+    # completion.
+    #
+    # If job arguments are missing then the job will simply be declared dead.
+    #
+    # @return [Boolean] True if the arguments are missing.
+    #
+    def arguments_missing?
+      job_args.empty? && [0, -1].exclude?(method(:perform).arity)
+    end
+
     #
     # Return the time taken (in seconds) to perform the job. This duration
     # includes the middlewares and the actual perform method.
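
Editor's note: the guard above hinges on `Method#arity`: a `perform` taking no arguments has arity `0`, one taking only a splat has arity `-1`, and anything else genuinely needs arguments, so an empty `job_args` means the payload is gone. A quick plain-Ruby reference; the worker classes are made up for illustration.

```ruby
# Arity values the arguments_missing? check relies on (hypothetical workers).
class NoArgsWorker
  def perform; end                  # arity 0  -> never considered missing
end

class SplatWorker
  def perform(*args); end           # arity -1 -> never considered missing
end

class StrictWorker
  def perform(user_id, action); end # arity 2  -> missing when job_args is empty
end

[NoArgsWorker, SplatWorker, StrictWorker].each do |klass|
  arity = klass.new.method(:perform).arity
  needs_args = ![0, -1].include?(arity) # plain-Ruby equivalent of ActiveSupport's exclude?
  puts format('%-12s arity=%2d needs_args=%s', klass, arity, needs_args)
end
```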
@@ -384,14 +400,9 @@ module Cloudtasker
       Cloudtasker.config.server_middleware.invoke(self) do
         # Immediately abort the job if it is already dead
         flag_as_dead if job_dead?
+        flag_as_dead(MissingWorkerArgumentsError.new('worker arguments are missing')) if arguments_missing?
 
         begin
-          # Abort if arguments are missing. This may happen with redis arguments storage
-          # if Cloud Tasks times out on a job but the job still succeeds
-          if job_args.empty? && [0, -1].exclude?(method(:perform).arity)
-            raise(MissingWorkerArgumentsError, 'worker arguments are missing')
-          end
-
           # Perform the job
           perform(*job_args)
         rescue StandardError => e
@@ -64,7 +64,7 @@ module Cloudtasker
       logger = worker&.logger || Cloudtasker.logger
 
       # Log error
-      logger.error([error, error.backtrace].flatten.join("\n"))
+      logger.error(error)
     end
 
     #
@@ -107,7 +107,7 @@ module Cloudtasker
       redis.expire(args_payload_key, ARGS_PAYLOAD_CLEANUP_TTL) if args_payload_key && !worker.job_reenqueued
 
       resp
-    rescue DeadWorkerError, MissingWorkerArgumentsError => e
+    rescue DeadWorkerError => e
       # Delete stored args payload if job is dead
       redis.expire(args_payload_key, ARGS_PAYLOAD_CLEANUP_TTL) if args_payload_key
       log_execution_error(worker, e)
@@ -51,6 +51,26 @@ module Cloudtasker
       Cloudtasker.logger
     end
 
+    #
+    # Format the log message as a string.
+    #
+    # @param [Object] msg The log message or object.
+    #
+    # @return [String] The formatted message.
+    #
+    def formatted_message_as_string(msg)
+      # Format message
+      msg_content = if msg.is_a?(Exception)
+                      [msg.inspect, msg.backtrace].flatten(1).join("\n")
+                    elsif msg.is_a?(String)
+                      msg
+                    else
+                      msg.inspect
+                    end
+
+      "[Cloudtasker][#{worker.class}][#{worker.job_id}] #{msg_content}"
+    end
+
     #
     # Format main log message.
     #
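
Editor's note: this helper is what implements the changelog item about combining the exception and its stack trace into a single log entry. Below is a standalone reproduction of the formatting branches, with a hard-coded prefix standing in for the real worker class and job id.

```ruby
# Standalone reproduction of the formatting branches above (hypothetical
# prefix; the real prefix comes from the worker class and job id).
def format_for_log(msg, prefix: '[Cloudtasker][DemoWorker][abc123]')
  content = if msg.is_a?(Exception)
              [msg.inspect, msg.backtrace].flatten(1).join("\n")
            elsif msg.is_a?(String)
              msg
            else
              msg.inspect
            end
  "#{prefix} #{content}"
end

begin
  raise ArgumentError, 'boom'
rescue ArgumentError => e
  puts format_for_log(e)              # exception inspect + backtrace, one entry
end
puts format_for_log({ status: 'ok' }) # non-string payloads are inspected
```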
@@ -59,7 +79,12 @@ module Cloudtasker
     # @return [String] The formatted log message
     #
     def formatted_message(msg)
-      "[Cloudtasker][#{worker.class}][#{worker.job_id}] #{msg}"
+      if msg.is_a?(String)
+        formatted_message_as_string(msg)
+      else
+        # Delegate object formatting to logger
+        msg
+      end
     end
 
     #
@@ -147,7 +172,9 @@ module Cloudtasker
       # ActiveSupport::Logger does not support passing a payload through a block on top
       # of a message.
       if defined?(ActiveSupport::Logger) && logger.is_a?(ActiveSupport::Logger)
-        logger.send(level) { "#{formatted_message(msg)} -- #{payload_block.call}" }
+        # The logger is fairly basic in terms of formatting. All inputs get converted
+        # to regular strings.
+        logger.send(level) { "#{formatted_message_as_string(msg)} -- #{payload_block.call}" }
       else
         logger.send(level, formatted_message(msg), &payload_block)
       end
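
Editor's note: some background on why the message and payload are concatenated for `ActiveSupport::Logger`; this is my reading based on the stdlib `Logger` it builds on, not a statement from the gem. A plain logger has no separate payload slot, so a block is only an alternative way of supplying the message, as the small illustration below shows.

```ruby
# With stdlib Logger (which ActiveSupport::Logger subclasses), passing both a
# positional argument and a block demotes the argument to "progname" and uses
# the block as the message; there is no structured payload channel.
require 'logger'
require 'json'

logger = Logger.new($stdout)

logger.info('TestWorker started') { { job_id: 'abc123' } }
# prints something like: INFO -- TestWorker started: {:job_id=>"abc123"}

# Hence the approach above: build one string that already contains the payload.
logger.info("TestWorker started -- #{JSON.dump({ job_id: 'abc123' })}")
```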
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: cloudtasker
 version: !ruby/object:Gem::Version
-  version: 0.12.rc1
+  version: 0.12.rc6
 platform: ruby
 authors:
 - Arnaud Lachaume
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2021-03-11 00:00:00.000000000 Z
+date: 2021-03-31 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activesupport