cloudtasker 0.10.rc5 → 0.10.rc6

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: db7ce2488df7c451c0ed08a2674c32e6e5964a926ad8bf158fee3e76ec71ee82
- data.tar.gz: 354177692074ddf3aa74fb1dba70dd56933211b460f035884fbf34bc87dc89fc
+ metadata.gz: 796e69d0470947fe0af6ea530cfd6c5b69e95b6a1be42b307e84b27dd9246198
+ data.tar.gz: 5c51dd25b3033546a7115d83d638123e80a9ee26fb74845b88a02247edb39461
  SHA512:
- metadata.gz: 627dc29f6318bc4f0f676deea5ea932f6ec6414cf7f98c3f986132de59c44d30db5d645c518a04bf98ca99925719efce7a430a48a85d83be1cd90551a5f8206a
- data.tar.gz: e9806327d16e5ac6700577732cc9347f403eb21c6debd52cbacb36157af110a3ccab917e01fab9d6cd87d98ae0cfa1558fe7ebf0d1a938d4a0b40a13a492e87d
+ metadata.gz: 134e6d344e75a9500850b135ea215c2b64d31cf366892acc0aab1b37c47e758dfa7e2bd00015ec6d8bb47556aba002bb578f2365ea34196d3343b24cee0f2156
+ data.tar.gz: 73bd3cbd91fa1938b97df2cd7f5125a3f90205fb31b5d12efe5b64c4c0aa03e74f6f83cd8c9267b95b5553be93395a45db4f365bbb49b5623a97bf982bf9d2d7
data/.rubocop.yml CHANGED
@@ -37,4 +37,7 @@ Metrics/BlockLength:
  Style/Documentation:
    Exclude:
      - 'examples/**/*'
-     - 'spec/**/*'
+     - 'spec/**/*'
+
+ Metrics/ParameterLists:
+   CountKeywordArgs: false
data/README.md CHANGED
@@ -467,7 +467,7 @@ end
 
  Will generate the following log with context `{:worker=> ..., :job_id=> ..., :job_meta=> ...}`
  ```log
- [Cloudtasker][d76040a1-367e-4e3b-854e-e05a74d5f773] Job run with foo. This is working!: {:worker=>"DummyWorker", :job_id=>"d76040a1-367e-4e3b-854e-e05a74d5f773", :job_meta=>{}}
+ [Cloudtasker][d76040a1-367e-4e3b-854e-e05a74d5f773] Job run with foo. This is working!: {:worker=>"DummyWorker", :job_id=>"d76040a1-367e-4e3b-854e-e05a74d5f773", :job_meta=>{}, :task_id=>"4e755d3f-6de0-426c-b4ac-51edd445c045"}
  ```
 
  The way contextual information is displayed depends on the logger itself. For example with [semantic_logger](http://rocketjob.github.io/semantic_logger) contextual information might not appear in the log message but show up as payload data on the log entry itself (e.g. using the fluentd adapter).
@@ -503,6 +503,21 @@ end
 
  See the [Cloudtasker::Worker class](lib/cloudtasker/worker.rb) for more information on attributes available to be logged in your `log_context_processor` proc.
 
+ ### Searching logs: Job ID vs Task ID
+ **Note**: the `task_id` field is available in logs starting with `0.10.rc6`
+
+ Job instances are assigned two different IDs for tracking and logging purposes: `job_id` and `task_id`. Both IDs appear in each log entry to facilitate searching.
+
+ | Field | Definition |
+ |------|-------------|
+ | `job_id` | This ID is generated by Cloudtasker. It identifies the job across its entire lifecycle. It is persistent across retries and reschedules. |
+ | `task_id` | This ID is generated by Google Cloud Tasks. It identifies a job instance on the Google Cloud Tasks side. It is persistent across retries but NOT across reschedules. |
+
+ The Google Cloud Tasks UI (GCP console) lists all pending/retrying tasks with their associated task ID (also called "Task name"). From there you can:
+ 1. Use a task ID to look up the logs of a specific job instance in Stackdriver Logging (or any other logging solution).
+ 2. From (1), retrieve the `job_id` attribute of the job.
+ 3. From (2), use the `job_id` to look up the job's logs across its entire lifecycle.
+
  ## Error Handling
 
  Jobs failing will automatically return an HTTP error to Cloud Task and trigger a retry at a later time. The number of retries Cloud Task performs depends on the configuration of your queue in Cloud Tasks.
@@ -51,19 +51,28 @@ module Cloudtasker
        end
 
        # Return content parsed as JSON and add job retries count
-       JSON.parse(content).merge(job_retries: job_retries)
+       JSON.parse(content).merge(job_retries: job_retries, task_id: task_id)
      end
    end
 
    #
    # Extract the number of times this task failed at runtime.
    #
-   # @return [Integer] The number of failures
+   # @return [Integer] The number of failures.
    #
    def job_retries
      request.headers[Cloudtasker::Config::RETRY_HEADER].to_i
    end
 
+   #
+   # Return the Google Cloud Task ID from headers.
+   #
+   # @return [String] The task ID.
+   #
+   def task_id
+     request.headers[Cloudtasker::Config::TASK_ID_HEADER]
+   end
+
    #
    # Authenticate incoming requests using a bearer token
    #
data/cloudtasker.gemspec CHANGED
@@ -15,8 +15,6 @@ Gem::Specification.new do |spec|
    spec.homepage = 'https://github.com/keypup-io/cloudtasker'
    spec.license = 'MIT'
 
-   # spec.metadata["allowed_push_host"] = "TODO: Set to 'http://mygemserver.com'"
-
    spec.metadata['homepage_uri'] = spec.homepage
    spec.metadata['source_code_uri'] = 'https://github.com/keypup-io/cloudtasker'
    spec.metadata['changelog_uri'] = 'https://github.com/keypup-io/cloudtasker/master/tree/CHANGELOG.md'
@@ -36,6 +34,7 @@ Gem::Specification.new do |spec|
    spec.add_dependency 'google-cloud-tasks'
    spec.add_dependency 'jwt'
    spec.add_dependency 'redis'
+   spec.add_dependency 'retriable'
 
    spec.add_development_dependency 'appraisal'
    spec.add_development_dependency 'bundler', '~> 2.0'
data/docs/UNIQUE_JOBS.md CHANGED
@@ -81,6 +81,68 @@ Below is the list of available conflict strategies can be specified through the
  | `raise` | All locks | A `Cloudtasker::UniqueJob::LockError` will be raised when a conflict occurs |
  | `reschedule` | `while_executing` | The job will be rescheduled 5 seconds later when a conflict occurs |
 
+ ## Lock Time To Live (TTL) & deadlocks
+ **Note**: Lock TTL has been introduced in `v0.10.rc6`
+
+ To make jobs unique, Cloudtasker sets a lock key - a hash of class name + job arguments - in Redis. Certain crash situations may leave lock keys behind after jobs complete - e.g. a Redis crash with rollback from the last known state on disk. Situations like these can lead to a unique job deadlock: jobs with the same class and arguments stop being processed because they are unable to acquire a lock that will never be cleaned up.
+
+ In order to prevent deadlocks, Cloudtasker configures lock keys to automatically expire in Redis after `job schedule time + lock_ttl (default: 10 minutes)`. This forced expiration ensures that deadlocks eventually get cleaned up shortly after the expected run time of a job.
+
+ The `lock_ttl (default: 10 minutes)` duration represents the expected max duration of the job. The default value of 10 minutes was chosen because it's twice the default request timeout value in Cloud Run. This usually leaves enough room for queue lag (5 minutes) + job processing (5 minutes).
+
+ Queue lag is certainly the most unpredictable factor here. Job processing time is less of a factor. Jobs running for more than 5 minutes should be split into sub-jobs to limit invocation time over HTTP anyway. Cloudtasker [batch jobs](BATCH_JOBS.md) can help split big jobs into sub-jobs in an atomic way.
+
+ The default lock key expiration of `job schedule time + 10 minutes` may look aggressive but it is a better choice than having real-time jobs stuck for X hours after a crash recovery.
+
+ We **strongly recommend** adapting the `lock_ttl` option either globally or for each worker based on expected queue lag and job duration.
+
+ **Example 1**: Global configuration
+ ```ruby
+ # config/initializers/cloudtasker.rb
+
+ # General Cloudtasker configuration
+ Cloudtasker.configure do |config|
+   # ...
+ end
+
+ # Unique job extension configuration
+ Cloudtasker::UniqueJob.configure do |config|
+   config.lock_ttl = 3 * 60 # 3 minutes
+ end
+ ```
+
+ **Example 2**: Worker-level - fast
+ ```ruby
+ # app/workers/realtime_worker_on_fast_queue.rb
+
+ class RealtimeWorkerOnFastQueue
+   include Cloudtasker::Worker
+
+   # Ensure lock is removed 30 seconds after schedule time
+   cloudtasker_options lock: :until_executing, lock_ttl: 30
+
+   def perform(arg1, arg2)
+     # ...
+   end
+ end
+ ```
+
+ **Example 3**: Worker-level - slow
+ ```ruby
+ # app/workers/non_critical_worker_on_slow_queue.rb
+
+ class NonCriticalWorkerOnSlowQueue
+   include Cloudtasker::Worker
+
+   # Ensure lock is removed 24 hours after schedule time
+   cloudtasker_options lock: :until_executing, lock_ttl: 3600 * 24
+
+   def perform(arg1, arg2)
+     # ...
+   end
+ end
+ ```
+
  ## Configuring unique arguments
 
  By default Cloudtasker considers all job arguments to evaluate the uniqueness of a job. This behaviour is configurable per worker by defining a `unique_args` method on the worker itself returning the list of args defining uniqueness.
@@ -1,5 +1,8 @@
  # frozen_string_literal: true
 
+ require 'google/cloud/tasks'
+ require 'retriable'
+
  module Cloudtasker
    module Backend
      # Manage tasks pushed to GCP Cloud Task
@@ -113,9 +116,10 @@ module Cloudtasker
      # @return [Cloudtasker::Backend::GoogleCloudTask, nil] The retrieved task.
      #
      def self.find(id)
-       resp = client.get_task(id)
+       resp = with_gax_retries { client.get_task(id) }
        resp ? new(resp) : nil
-     rescue Google::Gax::RetryError
+     rescue Google::Gax::RetryError, Google::Gax::NotFoundError, GRPC::NotFound
+       # The ID does not exist
        nil
      end
 
@@ -133,10 +137,8 @@ module Cloudtasker
      relative_queue = payload.delete(:queue)
 
      # Create task
-     resp = client.create_task(queue_path(relative_queue), payload)
+     resp = with_gax_retries { client.create_task(queue_path(relative_queue), payload) }
      resp ? new(resp) : nil
-   rescue Google::Gax::RetryError
-     nil
    end
 
    #
@@ -145,11 +147,21 @@ module Cloudtasker
      # @param [String] id The id of the task.
      #
      def self.delete(id)
-       client.delete_task(id)
-     rescue Google::Gax::NotFoundError, Google::Gax::RetryError, GRPC::NotFound, Google::Gax::PermissionDeniedError
+       with_gax_retries { client.delete_task(id) }
+     rescue Google::Gax::RetryError, Google::Gax::NotFoundError, GRPC::NotFound, Google::Gax::PermissionDeniedError
+       # The ID does not exist
        nil
      end
 
+     #
+     # Helper method encapsulating the retry strategy for GAX calls
+     #
+     def self.with_gax_retries
+       Retriable.retriable(on: [Google::Gax::UnavailableError], tries: 3) do
+         yield
+       end
+     end
+
      #
      # Build a new instance of the class.
      #
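The `with_gax_retries` helper above delegates the retry loop to the `retriable` gem (`Retriable.retriable(on: [...], tries: 3)`). As a rough sketch of the retry semantics involved - not Cloudtasker or `retriable` code - here is a hand-rolled plain-Ruby equivalent, with a hypothetical `TransientError` standing in for `Google::Gax::UnavailableError`:

```ruby
# Hypothetical transient error standing in for Google::Gax::UnavailableError
class TransientError < StandardError; end

# Run the block, retrying up to `tries` times when one of the listed
# error classes is raised; re-raise once attempts are exhausted.
def with_retries(on:, tries: 3)
  attempt = 0
  begin
    attempt += 1
    yield
  rescue *on
    retry if attempt < tries
    raise
  end
end

calls = 0
result = with_retries(on: [TransientError], tries: 3) do
  calls += 1
  raise TransientError, 'temporarily unavailable' if calls < 3

  'ok'
end

puts "succeeded after #{calls} calls: #{result}" # succeeded after 3 calls: ok
```

Note that errors outside the `on:` list propagate immediately; this is why `find` and `delete` still rescue `GRPC::NotFound` and friends separately after the retry wrapper.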
@@ -1,7 +1,5 @@
  # frozen_string_literal: true
 
- require 'cloudtasker/redis_client'
-
  module Cloudtasker
    module Backend
      # Manage local tasks pushed to memory.
@@ -153,7 +151,8 @@ module Cloudtasker
    #
    def execute
      # Execute worker
-     resp = WorkerHandler.with_worker_handling(payload, &:execute)
+     worker_payload = payload.merge(job_retries: job_retries, task_id: id)
+     resp = WorkerHandler.with_worker_handling(worker_payload, &:execute)
 
      # Delete task
      self.class.delete(id)
@@ -247,7 +247,8 @@ module Cloudtasker
    uri = URI(http_request[:url])
    req = Net::HTTP::Post.new(uri.path, http_request[:headers])
 
-   # Add retries header
+   # Add task headers
+   req[Cloudtasker::Config::TASK_ID_HEADER] = id
    req[Cloudtasker::Config::RETRY_HEADER] = retries
 
    # Set job payload
@@ -5,7 +5,7 @@ module Cloudtasker
    module Middleware
      # Server middleware, invoked when jobs are executed
      class Server
-       def call(worker)
+       def call(worker, **_kwargs)
          Job.for(worker).execute { yield }
        end
      end
@@ -25,6 +25,9 @@ module Cloudtasker
    #
    RETRY_HEADER = 'X-CloudTasks-TaskRetryCount'
 
+   # Cloud Task ID header
+   TASK_ID_HEADER = 'X-CloudTasks-TaskName'
+
    # Content-Transfer-Encoding header in Cloud Task responses
    ENCODING_HEADER = 'Content-Transfer-Encoding'
 
@@ -4,15 +4,10 @@ require 'fugit'
 
  module Cloudtasker
    module Cron
-     # TODO: handle deletion of cron jobs
-     #
      # Manage cron jobs
      class Job
        attr_reader :worker
 
-       # Key Namespace used for object saved under this class
-       SUB_NAMESPACE = 'job'
-
        #
        # Build a new instance of the class
        #
@@ -5,7 +5,7 @@ module Cloudtasker
    module Middleware
      # Server middleware, invoked when jobs are executed
      class Server
-       def call(worker)
+       def call(worker, **_kwargs)
          Job.new(worker).execute { yield }
        end
      end
@@ -9,9 +9,6 @@ module Cloudtasker
    class Schedule
      attr_accessor :id, :cron, :worker, :task_id, :job_id, :queue, :args
 
-     # Key Namespace used for object saved under this class
-     SUB_NAMESPACE = 'schedule'
-
      #
      # Return the redis client.
      #
@@ -3,3 +3,30 @@
  require_relative 'unique_job/middleware'
 
  Cloudtasker::UniqueJob::Middleware.configure
+
+ module Cloudtasker
+   # UniqueJob configurator
+   module UniqueJob
+     # The maximum duration a lock can remain in place
+     # after schedule time.
+     DEFAULT_LOCK_TTL = 10 * 60 # 10 minutes
+
+     class << self
+       attr_writer :lock_ttl
+
+       # Configure the middleware
+       def configure
+         yield(self)
+       end
+
+       #
+       # Return the max TTL for locks
+       #
+       # @return [Integer] The lock TTL.
+       #
+       def lock_ttl
+         @lock_ttl || DEFAULT_LOCK_TTL
+       end
+     end
+   end
+ end
@@ -5,21 +5,19 @@ module Cloudtasker
    # Wrapper class for Cloudtasker::Worker delegating to lock
    # and conflict strategies
    class Job
-     attr_reader :worker
+     attr_reader :worker, :call_opts
 
      # The default lock strategy to use. Defaults to "no lock".
      DEFAULT_LOCK = UniqueJob::Lock::NoOp
 
-     # Key Namespace used for object saved under this class
-     SUB_NAMESPACE = 'job'
-
      #
      # Build a new instance of the class.
      #
      # @param [Cloudtasker::Worker] worker The worker at hand
      #
-     def initialize(worker)
+     def initialize(worker, **kwargs)
        @worker = worker
+       @call_opts = kwargs
      end
 
      #
@@ -31,6 +29,43 @@ module Cloudtasker
      worker.class.cloudtasker_options_hash
    end
 
+   #
+   # Return the Time To Live (TTL) that should be set in Redis for
+   # the lock key. Having a TTL on lock keys ensures that jobs
+   # do not end up stuck due to a deadlock situation.
+   #
+   # The TTL is calculated using schedule time + expected
+   # max job duration.
+   #
+   # The expected max job duration is set to 10 minutes by default.
+   # This value was chosen because it's twice the default request timeout
+   # value in Cloud Run. This leaves enough room for queue lag (5 minutes)
+   # + job processing (5 minutes).
+   #
+   # Queue lag is certainly the most unpredictable factor here.
+   # Job processing time is less of a factor. Jobs running for more than 5 minutes
+   # should be split into sub-jobs to limit invocation time over HTTP. Cloudtasker batch
+   # jobs can help split a big job into sub-jobs in an atomic way.
+   #
+   # The default lock key expiration of "time_at + 10 minutes" may look aggressive but it
+   # is still a better choice than potentially having real-time jobs stuck for X hours.
+   #
+   # The expected max job duration can be configured via the `lock_ttl`
+   # option on the job itself.
+   #
+   # @return [Integer] The TTL in seconds
+   #
+   def lock_ttl
+     now = Time.now.to_i
+
+     # Get scheduled at and lock duration
+     scheduled_at = [call_opts[:time_at].to_i, now].compact.max
+     lock_duration = (options[:lock_ttl] || Cloudtasker::UniqueJob.lock_ttl).to_i
+
+     # Return TTL
+     scheduled_at + lock_duration - now
+   end
+
    #
    # Return the instantiated lock.
    #
@@ -121,7 +156,7 @@ module Cloudtasker
      raise(LockError, locked_id) if locked_id && locked_id != id
 
      # Take job lock if the lock is currently free
-     redis.set(unique_gid, id) unless locked_id
+     redis.set(unique_gid, id, ex: lock_ttl) unless locked_id
    end
  end
 
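The lock key is now set with a Redis `EX` expiry equal to `lock_ttl`, i.e. `max(schedule time, now) + expected max job duration - now`. A standalone sketch of that arithmetic (the `now`, `time_at` and `worker_lock_ttl` parameters are illustrative stand-ins for the values the real method reads from `call_opts` and the worker options):

```ruby
# Default expected max job duration, mirroring Cloudtasker::UniqueJob.lock_ttl
DEFAULT_LOCK_TTL = 10 * 60 # 10 minutes

# TTL (in seconds) to set on the Redis lock key:
# max(schedule time, now) + expected max job duration - now.
def lock_ttl(time_at: nil, worker_lock_ttl: nil, now: Time.now.to_i)
  scheduled_at = [time_at.to_i, now].max # nil.to_i is 0, so nil means "run now"
  lock_duration = (worker_lock_ttl || DEFAULT_LOCK_TTL).to_i

  scheduled_at + lock_duration - now
end

now = Time.now.to_i
puts lock_ttl(now: now)                                          # 600: immediate job, default TTL
puts lock_ttl(time_at: now + 120, now: now)                      # 720: job scheduled 2 minutes out
puts lock_ttl(time_at: now + 60, worker_lock_ttl: 30, now: now)  # 90: custom lock_ttl of 30s
```

Passing the TTL through `redis.set(key, value, ex: ttl)` means Redis itself reclaims stale locks, so a crashed job can never block its successors for longer than the schedule horizon plus `lock_ttl`.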
@@ -5,7 +5,7 @@ module Cloudtasker
    module Middleware
      # Client middleware, invoked when jobs are scheduled
      class Client
-       def call(worker)
+       def call(worker, **_kwargs)
          Job.new(worker).lock_instance.schedule { yield }
        end
      end
@@ -5,7 +5,7 @@ module Cloudtasker
    module Middleware
      # Server middleware, invoked when jobs are executed
      class Server
-       def call(worker)
+       def call(worker, **_kwargs)
          Job.new(worker).lock_instance.execute { yield }
        end
      end
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module Cloudtasker
-   VERSION = '0.10.rc5'
+   VERSION = '0.10.rc6'
  end
@@ -8,7 +8,7 @@ module Cloudtasker
    base.extend(ClassMethods)
    base.attr_writer :job_queue
    base.attr_accessor :job_args, :job_id, :job_meta, :job_reenqueued, :job_retries,
-                      :perform_started_at, :perform_ended_at
+                      :perform_started_at, :perform_ended_at, :task_id
  end
 
  #
@@ -47,7 +47,7 @@ module Cloudtasker
    return nil unless worker_klass.include?(self)
 
    # Return instantiated worker
-   worker_klass.new(payload.slice(:job_queue, :job_args, :job_id, :job_meta, :job_retries))
+   worker_klass.new(payload.slice(:job_queue, :job_args, :job_id, :job_meta, :job_retries, :task_id))
  rescue NameError
    nil
  end
@@ -140,12 +140,13 @@ module Cloudtasker
  # @param [Array<any>] job_args The list of perform args.
  # @param [String] job_id A unique ID identifying this job.
  #
- def initialize(job_queue: nil, job_args: nil, job_id: nil, job_meta: {}, job_retries: 0)
+ def initialize(job_queue: nil, job_args: nil, job_id: nil, job_meta: {}, job_retries: 0, task_id: nil)
    @job_args = job_args || []
    @job_id = job_id || SecureRandom.uuid
    @job_meta = MetaStore.new(job_meta)
    @job_retries = job_retries || 0
    @job_queue = job_queue
+   @task_id = task_id
  end
 
  #
@@ -197,18 +198,36 @@ module Cloudtasker
    raise(e)
  end
 
+ #
+ # Return a unix timestamp specifying when to run the task.
+ #
+ # @param [Integer, nil] interval The time to wait.
+ # @param [Integer, nil] time_at The time at which the job should run.
+ #
+ # @return [Integer, nil] The Unix timestamp.
+ #
+ def schedule_time(interval: nil, time_at: nil)
+   return nil unless interval || time_at
+
+   # Generate the complete Unix timestamp
+   (time_at || Time.now).to_i + interval.to_i
+ end
+
  #
  # Enqueue a worker, with or without delay.
  #
  # @param [Integer] interval The delay in seconds.
- #
  # @param [Time, Integer] time_at The time at which the job should run
  #
  # @return [Cloudtasker::CloudTask] The Google Task response
  #
- def schedule(interval: nil, time_at: nil)
-   Cloudtasker.config.client_middleware.invoke(self) do
-     WorkerHandler.new(self).schedule(interval: interval, time_at: time_at)
+ def schedule(**args)
+   # Evaluate when to schedule the job
+   time_at = schedule_time(args)
+
+   # Schedule job through client middlewares
+   Cloudtasker.config.client_middleware.invoke(self, time_at: time_at) do
+     WorkerHandler.new(self).schedule(time_at: time_at)
    end
  end
 
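The `schedule_time` helper moved into the worker simply collapses the optional `interval` delay and/or absolute `time_at` into a single unix timestamp (or `nil` for "run now"), so middlewares like the unique-job lock can see the resolved schedule time. A minimal standalone sketch of the same arithmetic, extracted for illustration:

```ruby
# Combine an optional delay (interval) and/or absolute time (time_at)
# into a single unix timestamp; nil means the job should run immediately.
def schedule_time(interval: nil, time_at: nil)
  return nil unless interval || time_at

  # Generate the complete unix timestamp
  (time_at || Time.now).to_i + interval.to_i
end

run_at = Time.utc(2020, 5, 8, 12, 0, 0) # unix time 1588939200

puts schedule_time.inspect                          # nil: run the job now
puts schedule_time(time_at: run_at)                 # 1588939200
puts schedule_time(time_at: run_at, interval: 300)  # 1588939500: time_at + 5 minutes
```

When only `interval` is given, `time_at` defaults to `Time.now`, so the result is "now + interval" and the two scheduling styles compose naturally.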
@@ -250,7 +269,8 @@ module Cloudtasker
    job_args: job_args,
    job_meta: job_meta.to_h,
    job_retries: job_retries,
-   job_queue: job_queue
+   job_queue: job_queue,
+   task_id: task_id
  }
  end
 
@@ -56,11 +56,6 @@ module Cloudtasker
    with_worker_handling(input_payload, &:execute)
  end
 
- # TODO: do not delete redis payload if job has been re-enqueued
- # worker.job_reenqueued
- #
- # Idea: change with_worker_handling to with_worker_handling and build the worker
- # inside the with_worker_handling block.
  #
  # Local middleware used to retrieve the job arg payload from cache
  # if an arg payload reference is present.
@@ -210,35 +205,17 @@ module Cloudtasker
    }.merge(worker_args_payload)
  end
 
- #
- # Return a protobuf timestamp specifying how to wait
- # before running a task.
- #
- # @param [Integer, nil] interval The time to wait.
- # @param [Integer, nil] time_at The time at which the job should run.
- #
- # @return [Integer, nil] The Unix timestamp.
- #
- def schedule_time(interval: nil, time_at: nil)
-   return nil unless interval || time_at
-
-   # Generate the complete Unix timestamp
-   (time_at || Time.now).to_i + interval.to_i
- end
-
  #
  # Schedule the task on GCP Cloud Task.
  #
- # @param [Integer, nil] interval How to wait before running the task.
+ # @param [Integer, nil] time_at A unix timestamp specifying when to run the job.
  #   Leave to `nil` to run now.
  #
  # @return [Cloudtasker::CloudTask] The Google Task response
  #
- def schedule(interval: nil, time_at: nil)
+ def schedule(time_at: nil)
    # Generate task payload
-   task = task_payload.merge(
-     schedule_time: schedule_time(interval: interval, time_at: time_at)
-   ).compact
+   task = task_payload.merge(schedule_time: time_at).compact
 
    # Create and return remote task
    CloudTask.create(task)
@@ -11,7 +11,7 @@ module Cloudtasker
    end
 
    # Only log the job meta information by default (exclude arguments)
-   DEFAULT_CONTEXT_PROCESSOR = ->(worker) { worker.to_h.slice(:worker, :job_id, :job_meta, :job_queue) }
+   DEFAULT_CONTEXT_PROCESSOR = ->(worker) { worker.to_h.slice(:worker, :job_id, :job_meta, :job_queue, :task_id) }
 
    #
    # Build a new instance of the class.
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: cloudtasker
  version: !ruby/object:Gem::Version
-   version: 0.10.rc5
+   version: 0.10.rc6
  platform: ruby
  authors:
  - Arnaud Lachaume
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2020-05-03 00:00:00.000000000 Z
+ date: 2020-05-08 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: activesupport
@@ -94,6 +94,20 @@ dependencies:
      - - ">="
        - !ruby/object:Gem::Version
          version: '0'
+ - !ruby/object:Gem::Dependency
+   name: retriable
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
  - !ruby/object:Gem::Dependency
    name: appraisal
    requirement: !ruby/object:Gem::Requirement