cloudtasker 0.10.rc1 → 0.10.rc6

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: d4cba7de3e429d612adf6c9c2f4424b6ef73db39d4db93b70804800300011e1b
- data.tar.gz: 3775cdf3f16430cf8decd49dfc28be9e26f0ef6a63d45224bdc5ed11b13a86fc
+ metadata.gz: 796e69d0470947fe0af6ea530cfd6c5b69e95b6a1be42b307e84b27dd9246198
+ data.tar.gz: 5c51dd25b3033546a7115d83d638123e80a9ee26fb74845b88a02247edb39461
  SHA512:
- metadata.gz: 5e2e15dc54fad72e3508763855a99804b591126968ccbcaccd8211d51b8b1e28bf6d2907c746f2b14c53c3c065ce1eb06871f30b39419df1f3d7b8a4e1b1fded
- data.tar.gz: a2808491a7251b5212587351deb84a99f688a62f7a54c3f5b9c8ebe3a3b6a1ca6adda6ffe898424069223cef8260f1a49c8a086010c0e59dedc5e50ed371e830
+ metadata.gz: 134e6d344e75a9500850b135ea215c2b64d31cf366892acc0aab1b37c47e758dfa7e2bd00015ec6d8bb47556aba002bb578f2365ea34196d3343b24cee0f2156
+ data.tar.gz: 73bd3cbd91fa1938b97df2cd7f5125a3f90205fb31b5d12efe5b64c4c0aa03e74f6f83cd8c9267b95b5553be93395a45db4f365bbb49b5623a97bf982bf9d2d7
@@ -0,0 +1,41 @@
+ name: Test
+
+ on:
+   push:
+     branches: [ master ]
+   pull_request:
+     branches: [ master ]
+
+ jobs:
+   build:
+     runs-on: ubuntu-latest
+     strategy:
+       matrix:
+         ruby:
+           - '2.5.x'
+           - '2.6.x'
+         appraisal:
+           - 'google-cloud-tasks-1.0'
+           - 'google-cloud-tasks-1.1'
+           - 'google-cloud-tasks-1.2'
+           - 'google-cloud-tasks-1.3'
+           - 'rails-5.2'
+           - 'rails-6.0'
+     steps:
+       - name: Setup System
+         run: sudo apt-get install libsqlite3-dev
+       - uses: actions/checkout@v2
+       - uses: zhulik/redis-action@1.1.0
+       - name: Set up Ruby 2.6
+         uses: actions/setup-ruby@v1
+         with:
+           ruby-version: ${{ matrix.ruby }}
+       - name: Build and test with Rake
+         env:
+           APPRAISAL_CONTEXT: ${{ matrix.appraisal }}
+         run: |
+           gem install bundler
+           bundle install --jobs 4 --retry 3
+           bundle exec rubocop
+           bundle exec appraisal ${APPRAISAL_CONTEXT} bundle
+           bundle exec appraisal ${APPRAISAL_CONTEXT} rspec
data/.rubocop.yml CHANGED
@@ -2,12 +2,15 @@ require: rubocop-rspec
 
  AllCops:
    Exclude:
-     - 'gemfiles/vendor/**/*'
+     - 'gemfiles/**/*'
      - 'vendor/**/*'
 
  Metrics/ClassLength:
    Max: 150
 
+ Metrics/ModuleLength:
+   Max: 150
+
  Metrics/AbcSize:
    Max: 20
 
@@ -34,4 +37,7 @@ Metrics/BlockLength:
  Style/Documentation:
    Exclude:
      - 'examples/**/*'
-     - 'spec/**/*'
+     - 'spec/**/*'
+
+ Metrics/ParameterLists:
+   CountKeywordArgs: false
data/README.md CHANGED
@@ -1,4 +1,4 @@
- [![Build Status](https://travis-ci.org/keypup-io/cloudtasker.svg?branch=master)](https://travis-ci.org/keypup-io/cloudtasker) [![Gem Version](https://badge.fury.io/rb/cloudtasker.svg)](https://badge.fury.io/rb/cloudtasker)
+ ![Build Status](https://github.com/keypup-io/cloudtasker/workflows/Test/badge.svg) [![Gem Version](https://badge.fury.io/rb/cloudtasker.svg)](https://badge.fury.io/rb/cloudtasker)
 
  # Cloudtasker
 
@@ -246,6 +246,8 @@ Cloudtasker.configure do |config|
  # You can set this configuration parameter to a KB value if you want to store jobs
  # args in redis only if the JSONified arguments payload exceeds that threshold.
  #
+ # Supported since: v0.10.rc1
+ #
  # Default: false
  #
  # Store all job payloads in Redis:
@@ -465,7 +467,7 @@ end
 
  Will generate the following log with context `{:worker=> ..., :job_id=> ..., :job_meta=> ...}`
  ```log
- [Cloudtasker][d76040a1-367e-4e3b-854e-e05a74d5f773] Job run with foo. This is working!: {:worker=>"DummyWorker", :job_id=>"d76040a1-367e-4e3b-854e-e05a74d5f773", :job_meta=>{}}
+ [Cloudtasker][d76040a1-367e-4e3b-854e-e05a74d5f773] Job run with foo. This is working!: {:worker=>"DummyWorker", :job_id=>"d76040a1-367e-4e3b-854e-e05a74d5f773", :job_meta=>{}, :task_id=>"4e755d3f-6de0-426c-b4ac-51edd445c045"}
  ```
 
  The way contextual information is displayed depends on the logger itself. For example with [semantic_logger](http://rocketjob.github.io/semantic_logger) contextual information might not appear in the log message but show up as payload data on the log entry itself (e.g. using the fluentd adapter).
@@ -501,9 +503,24 @@ end
 
  See the [Cloudtasker::Worker class](lib/cloudtasker/worker.rb) for more information on attributes available to be logged in your `log_context_processor` proc.
 
+ ### Searching logs: Job ID vs Task ID
+ **Note**: the `task_id` field is available in logs starting with `0.10.rc6`
+
+ Job instances are assigned two different IDs for tracking and logging purposes: `job_id` and `task_id`. These IDs are included in each log entry to facilitate search.
+
+ | Field | Definition |
+ |------|-------------|
+ | `job_id` | This ID is generated by Cloudtasker. It identifies the job along its entire lifecycle. It is persistent across retries and reschedules. |
+ | `task_id` | This ID is generated by Google Cloud Tasks. It identifies a job instance on the Google Cloud Tasks side. It is persistent across retries but NOT across reschedules. |
+
+ The Google Cloud Tasks UI (GCP console) lists all pending/retrying tasks and their associated task ID (also called "Task name"). From there you can:
+ 1. Use a task ID to look up the logs of a specific job instance in Stackdriver Logging (or any other logging solution).
+ 2. From (1) you can retrieve the `job_id` attribute of the job.
+ 3. From (2) you can use the `job_id` to look up the job logs along its entire lifecycle.
+
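The relationship between the two IDs can be sketched in plain Ruby. This is a hypothetical simulation, not the Cloudtasker API: `job_id` is minted once by Cloudtasker, while `task_id` is minted anew by Cloud Tasks whenever a task is rescheduled.

```ruby
require 'securerandom'

# Hypothetical simulation (not the Cloudtasker API): job_id is minted once
# by Cloudtasker; task_id is minted by Cloud Tasks each time a task is created.
Job = Struct.new(:job_id, :task_id) do
  # A reschedule creates a brand new Cloud Task: task_id changes, job_id does not.
  def reschedule
    Job.new(job_id, SecureRandom.uuid)
  end
end

job = Job.new(SecureRandom.uuid, SecureRandom.uuid)
rescheduled = job.reschedule
puts job.job_id == rescheduled.job_id   # true  - survives reschedules
puts job.task_id == rescheduled.task_id # false - new task, new ID
```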
  ## Error Handling
 
- Jobs failing will automatically return an HTTP error to Cloud Task and trigger a retry at a later time. The number of retries Cloud Task will do depends on the configuration of your queue in Cloud Tasks.
+ Jobs failing will automatically return an HTTP error to Cloud Task and trigger a retry at a later time. The number of retries Cloud Task will perform depends on the configuration of your queue in Cloud Tasks.
 
  ### HTTP Error codes
 
@@ -549,6 +566,8 @@ By default jobs are retried 25 times - using an exponential backoff - before bei
 
  Note that the number of retries set on your Cloud Task queue should be many times higher than the number of retries configured in Cloudtasker because Cloud Task also includes failures to connect to your application. Ideally set the number of retries to `unlimited` in Cloud Tasks.
 
+ **Note**: The `X-CloudTasks-TaskExecutionCount` header sent by Google Cloud Tasks, which reports the number of retries excluding `HTTP 503` errors (instance not reachable), is currently bugged and remains at `0` at all times. Starting with `0.10.rc3` Cloudtasker uses the `X-CloudTasks-TaskRetryCount` header to detect the number of retries. This header includes `HTTP 503` errors, which means that if your application is down at some point, jobs will fail and these failures will be counted toward the maximum number of retries. A [bug report](https://issuetracker.google.com/issues/154532072) has been raised with GCP to address this issue. Once fixed we will revert to using `X-CloudTasks-TaskExecutionCount` to avoid counting `HTTP 503` errors as job failures.
+
  E.g. Set max number of retries globally via the cloudtasker initializer.
  ```ruby
  # config/initializers/cloudtasker.rb
@@ -583,7 +602,6 @@ end
  ```
 
 
- 
  ## Best practices building workers
 
  Below are recommendations and notes about creating workers.
@@ -658,6 +676,8 @@ Google Cloud Tasks enforces a limit of 100 KB for job payloads. Taking into acco
  Any excessive job payload (> 100 KB) will raise a `Cloudtasker::MaxTaskSizeExceededError`, both in production and development mode.
 
  #### Option 1: Use Cloudtasker optional support for payload storage in Redis
+ **Supported since**: `0.10.rc1`
+
  Cloudtasker provides optional support for storing argument payloads in Redis instead of sending them to Google Cloud Tasks.
 
  To enable it simply put the following in your Cloudtasker initializer:
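As a sketch of that initializer (the option name `store_payloads_in_redis` is assumed from the configuration comments earlier in this README; check the configuration section for the authoritative form):

```ruby
# config/initializers/cloudtasker.rb
# Sketch only - see the README configuration section for authoritative options.
Cloudtasker.configure do |config|
  # Store all job payloads in Redis:
  config.store_payloads_in_redis = true

  # ...or only store payloads whose JSONified args exceed 50 KB:
  # config.store_payloads_in_redis = 50
end
```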
@@ -51,19 +51,28 @@ module Cloudtasker
      end
 
      # Return content parsed as JSON and add job retries count
-     JSON.parse(content).merge(job_retries: job_retries)
+     JSON.parse(content).merge(job_retries: job_retries, task_id: task_id)
    end
  end
 
  #
  # Extract the number of times this task failed at runtime.
  #
- # @return [Integer] The number of failures
+ # @return [Integer] The number of failures.
  #
  def job_retries
    request.headers[Cloudtasker::Config::RETRY_HEADER].to_i
  end
 
+ #
+ # Return the Google Cloud Task ID from headers.
+ #
+ # @return [String] The task ID.
+ #
+ def task_id
+   request.headers[Cloudtasker::Config::TASK_ID_HEADER]
+ end
+
  #
  # Authenticate incoming requests using a bearer token
  #
data/cloudtasker.gemspec CHANGED
@@ -15,8 +15,6 @@ Gem::Specification.new do |spec|
  spec.homepage = 'https://github.com/keypup-io/cloudtasker'
  spec.license = 'MIT'
 
- # spec.metadata["allowed_push_host"] = "TODO: Set to 'http://mygemserver.com'"
-
  spec.metadata['homepage_uri'] = spec.homepage
  spec.metadata['source_code_uri'] = 'https://github.com/keypup-io/cloudtasker'
  spec.metadata['changelog_uri'] = 'https://github.com/keypup-io/cloudtasker/master/tree/CHANGELOG.md'
@@ -31,10 +29,12 @@ Gem::Specification.new do |spec|
  spec.require_paths = ['lib']
 
  spec.add_dependency 'activesupport'
+ spec.add_dependency 'connection_pool'
  spec.add_dependency 'fugit'
  spec.add_dependency 'google-cloud-tasks'
  spec.add_dependency 'jwt'
  spec.add_dependency 'redis'
+ spec.add_dependency 'retriable'
 
  spec.add_development_dependency 'appraisal'
  spec.add_development_dependency 'bundler', '~> 2.0'
data/docs/UNIQUE_JOBS.md CHANGED
@@ -81,6 +81,68 @@ Below is the list of available conflict strategies that can be specified through the
  | `raise` | All locks | A `Cloudtasker::UniqueJob::LockError` will be raised when a conflict occurs |
  | `reschedule` | `while_executing` | The job will be rescheduled 5 seconds later when a conflict occurs |
 
+ ## Lock Time To Live (TTL) & deadlocks
+ **Note**: Lock TTL was introduced in `v0.10.rc6`
+
+ To make jobs unique, Cloudtasker sets a lock key - a hash of class name + job arguments - in Redis. Certain crash situations may lead to lock keys not being cleaned up when jobs complete - e.g. a Redis crash with rollback from the last known state on disk. Situations like these may lead to a unique job deadlock: jobs with the same class and arguments would stop being processed because they are unable to acquire a lock that will never be cleaned up.
+
+ In order to prevent deadlocks, Cloudtasker configures lock keys to automatically expire in Redis after `job schedule time + lock_ttl (default: 10 minutes)`. This forced expiration ensures that deadlocks eventually get cleaned up shortly after the expected run time of a job.
+
+ The `lock_ttl (default: 10 minutes)` duration represents the expected max duration of the job. The default 10 minutes value was chosen because it is twice the default request timeout value in Cloud Run. This usually leaves enough room for queue lag (5 minutes) + job processing (5 minutes).
+
+ Queue lag is certainly the most unpredictable factor here. Job processing time is less of a factor. Jobs running for more than 5 minutes should be split into sub-jobs to limit invocation time over HTTP anyway. Cloudtasker [batch jobs](BATCH_JOBS.md) can help split big jobs into sub-jobs in an atomic way.
+
+ The default lock key expiration of `job schedule time + 10 minutes` may look aggressive but it is a better choice than having real-time jobs stuck for X hours after a crash recovery.
+
+ We **strongly recommend** adapting the `lock_ttl` option, either globally or for each worker, based on expected queue lag and job duration.
+
+ **Example 1**: Global configuration
+ ```ruby
+ # config/initializers/cloudtasker.rb
+
+ # General Cloudtasker configuration
+ Cloudtasker.configure do |config|
+   # ...
+ end
+
+ # Unique job extension configuration
+ Cloudtasker::UniqueJob.configure do |config|
+   config.lock_ttl = 3 * 60 # 3 minutes
+ end
+ ```
+
+ **Example 2**: Worker-level - fast
+ ```ruby
+ # app/workers/realtime_worker_on_fast_queue.rb
+
+ class RealtimeWorkerOnFastQueue
+   include Cloudtasker::Worker
+
+   # Ensure the lock is removed 30 seconds after schedule time
+   cloudtasker_options lock: :until_executing, lock_ttl: 30
+
+   def perform(arg1, arg2)
+     # ...
+   end
+ end
+ ```
+
+ **Example 3**: Worker-level - slow
+ ```ruby
+ # app/workers/non_critical_worker_on_slow_queue.rb
+
+ class NonCriticalWorkerOnSlowQueue
+   include Cloudtasker::Worker
+
+   # Ensure the lock is removed 24 hours after schedule time
+   cloudtasker_options lock: :until_executing, lock_ttl: 3600 * 24
+
+   def perform(arg1, arg2)
+     # ...
+   end
+ end
+ ```
+
  ## Configuring unique arguments
 
  By default Cloudtasker considers all job arguments to evaluate the uniqueness of a job. This behaviour is configurable per worker by defining a `unique_args` method on the worker itself returning the list of args defining uniqueness.
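The "hash of class name + job arguments" idea behind the lock key can be illustrated in plain Ruby. This is an illustrative sketch, not Cloudtasker's internal implementation:

```ruby
require 'digest'
require 'json'

# Illustrative sketch (not Cloudtasker's internals): a lock key derived
# from the worker class name plus the args selected for uniqueness.
def lock_key(worker_class, unique_args)
  Digest::SHA256.hexdigest([worker_class, unique_args].to_json)
end

# Considering only the first argument (e.g. a user ID) for uniqueness:
a = lock_key('UserWorker', ['user-1'])
b = lock_key('UserWorker', ['user-1'])
c = lock_key('UserWorker', ['user-2'])
puts a == b # true  - same class + args, conflicting job
puts a == c # false - different args, independent job
```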
@@ -1,9 +1,7 @@
- # frozen_string_literal: true
-
  # This file was generated by Appraisal
 
- source 'https://rubygems.org'
+ source "https://rubygems.org"
 
- gem 'google-cloud-tasks', '1.0'
+ gem "google-cloud-tasks", "1.0"
 
- gemspec path: '../'
+ gemspec path: "../"
@@ -1,9 +1,7 @@
- # frozen_string_literal: true
-
  # This file was generated by Appraisal
 
- source 'https://rubygems.org'
+ source "https://rubygems.org"
 
- gem 'google-cloud-tasks', '1.1'
+ gem "google-cloud-tasks", "1.1"
 
- gemspec path: '../'
+ gemspec path: "../"
@@ -1,9 +1,7 @@
- # frozen_string_literal: true
-
  # This file was generated by Appraisal
 
- source 'https://rubygems.org'
+ source "https://rubygems.org"
 
- gem 'google-cloud-tasks', '1.2'
+ gem "google-cloud-tasks", "1.2"
 
- gemspec path: '../'
+ gemspec path: "../"
@@ -1,9 +1,7 @@
- # frozen_string_literal: true
-
  # This file was generated by Appraisal
 
- source 'https://rubygems.org'
+ source "https://rubygems.org"
 
- gem 'google-cloud-tasks', '1.3'
+ gem "google-cloud-tasks", "1.3"
 
- gemspec path: '../'
+ gemspec path: "../"
@@ -1,9 +1,7 @@
- # frozen_string_literal: true
-
  # This file was generated by Appraisal
 
- source 'https://rubygems.org'
+ source "https://rubygems.org"
 
- gem 'rails', '5.2'
+ gem "rails", "5.2"
 
- gemspec path: '../'
+ gemspec path: "../"
@@ -1,9 +1,7 @@
- # frozen_string_literal: true
-
  # This file was generated by Appraisal
 
- source 'https://rubygems.org'
+ source "https://rubygems.org"
 
- gem 'rails', '6.0'
+ gem "rails", "6.0"
 
- gemspec path: '../'
+ gemspec path: "../"
@@ -1,5 +1,8 @@
  # frozen_string_literal: true
 
+ require 'google/cloud/tasks'
+ require 'retriable'
+
  module Cloudtasker
    module Backend
      # Manage tasks pushed to GCP Cloud Task
@@ -113,9 +116,10 @@ module Cloudtasker
  # @return [Cloudtasker::Backend::GoogleCloudTask, nil] The retrieved task.
  #
  def self.find(id)
-   resp = client.get_task(id)
+   resp = with_gax_retries { client.get_task(id) }
    resp ? new(resp) : nil
- rescue Google::Gax::RetryError
+ rescue Google::Gax::RetryError, Google::Gax::NotFoundError, GRPC::NotFound
+   # The ID does not exist
    nil
  end
 
@@ -133,10 +137,8 @@ module Cloudtasker
  relative_queue = payload.delete(:queue)
 
  # Create task
- resp = client.create_task(queue_path(relative_queue), payload)
+ resp = with_gax_retries { client.create_task(queue_path(relative_queue), payload) }
  resp ? new(resp) : nil
- rescue Google::Gax::RetryError
-   nil
  end
 
  #
@@ -145,11 +147,21 @@ module Cloudtasker
  # @param [String] id The id of the task.
  #
  def self.delete(id)
-   client.delete_task(id)
- rescue Google::Gax::RetryError, GRPC::NotFound, Google::Gax::PermissionDeniedError
+   with_gax_retries { client.delete_task(id) }
+ rescue Google::Gax::RetryError, Google::Gax::NotFoundError, GRPC::NotFound, Google::Gax::PermissionDeniedError
+   # The ID does not exist
    nil
  end
 
+ #
+ # Helper method encapsulating the retry strategy for GAX calls
+ #
+ def self.with_gax_retries
+   Retriable.retriable(on: [Google::Gax::UnavailableError], tries: 3) do
+     yield
+   end
+ end
+
  #
  # Build a new instance of the class.
  #
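The retry strategy wrapped by `with_gax_retries` can be sketched in plain Ruby, with a stand-in error class instead of `Google::Gax::UnavailableError` and no dependency on the `retriable` gem:

```ruby
# Plain-Ruby sketch of the "retry up to N times on a transient error" pattern.
class UnavailableError < StandardError; end

def with_retries(on:, tries: 3)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue *on
    retry if attempts < tries
    raise
  end
end

calls = 0
result = with_retries(on: [UnavailableError]) do
  calls += 1
  raise UnavailableError, 'transient' if calls < 3
  :ok
end
puts calls  # 3
puts result # ok
```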
@@ -1,7 +1,5 @@
  # frozen_string_literal: true
 
- require 'cloudtasker/redis_client'
-
  module Cloudtasker
    module Backend
      # Manage local tasks pushed to memory.
@@ -153,7 +151,8 @@ module Cloudtasker
  #
  def execute
    # Execute worker
-   resp = WorkerHandler.with_worker_handling(payload, &:execute)
+   worker_payload = payload.merge(job_retries: job_retries, task_id: id)
+   resp = WorkerHandler.with_worker_handling(worker_payload, &:execute)
 
    # Delete task
    self.class.delete(id)
@@ -247,8 +247,9 @@ module Cloudtasker
  uri = URI(http_request[:url])
  req = Net::HTTP::Post.new(uri.path, http_request[:headers])
 
- # Add retries header
- req['X-CloudTasks-TaskExecutionCount'] = retries
+ # Add task headers
+ req[Cloudtasker::Config::TASK_ID_HEADER] = id
+ req[Cloudtasker::Config::RETRY_HEADER] = retries
 
  # Set job payload
  req.body = http_request[:body]
@@ -5,7 +5,7 @@ module Cloudtasker
  module Middleware
    # Server middleware, invoked when jobs are executed
    class Server
-     def call(worker)
+     def call(worker, **_kwargs)
        Job.for(worker).execute { yield }
      end
    end
@@ -13,7 +13,20 @@ module Cloudtasker
  MAX_TASK_SIZE = 100 * 1024 # 100 KB
 
  # Retry header in Cloud Task responses
- RETRY_HEADER = 'X-CloudTasks-TaskExecutionCount'
+ #
+ # TODO: use 'X-CloudTasks-TaskExecutionCount' instead of 'X-CloudTasks-TaskRetryCount'
+ # 'X-CloudTasks-TaskExecutionCount' is currently bugged and remains at 0 even on retries.
+ #
+ # See bug: https://issuetracker.google.com/issues/154532072
+ #
+ # Definitions:
+ # X-CloudTasks-TaskRetryCount: total number of retries (including HTTP 503 "instance unreachable" errors)
+ # X-CloudTasks-TaskExecutionCount: number of non-503 retries (= actual number of job failures)
+ #
+ RETRY_HEADER = 'X-CloudTasks-TaskRetryCount'
+
+ # Cloud Task ID header
+ TASK_ID_HEADER = 'X-CloudTasks-TaskName'
 
  # Content-Transfer-Encoding header in Cloud Task responses
  ENCODING_HEADER = 'Content-Transfer-Encoding'
@@ -33,7 +46,15 @@ module Cloudtasker
  DEFAULT_QUEUE_CONCURRENCY = 10
  DEFAULT_QUEUE_RETRIES = -1 # unlimited
 
- # The number of times jobs will be attempted before declaring them dead
+ # The number of times jobs will be attempted before declaring them dead.
+ #
+ # With the default retry configuration (maxDoublings = 16 and minBackoff = 0.100s)
+ # it means that jobs will be declared dead after roughly 20h of consecutive failures.
+ #
+ # Note that this configuration parameter is internal to Cloudtasker and does not
+ # affect the Cloud Task queue configuration. The number of retries configured
+ # on the Cloud Task queue should be higher than the number below to also cover
+ # failures due to the instance being unreachable.
  DEFAULT_MAX_RETRY_ATTEMPTS = 25
 
  PROCESSOR_HOST_MISSING = <<~DOC
@@ -4,15 +4,10 @@ require 'fugit'
 
  module Cloudtasker
    module Cron
-     # TODO: handle deletion of cron jobs
-     #
      # Manage cron jobs
      class Job
        attr_reader :worker
 
-       # Key Namespace used for object saved under this class
-       SUB_NAMESPACE = 'job'
-
        #
        # Build a new instance of the class
        #
@@ -5,7 +5,7 @@ module Cloudtasker
  module Middleware
    # Server middleware, invoked when jobs are executed
    class Server
-     def call(worker)
+     def call(worker, **_kwargs)
        Job.new(worker).execute { yield }
      end
    end
@@ -9,9 +9,6 @@ module Cloudtasker
  class Schedule
    attr_accessor :id, :cron, :worker, :task_id, :job_id, :queue, :args
 
-   # Key Namespace used for object saved under this class
-   SUB_NAMESPACE = 'schedule'
-
    #
    # Return the redis client.
    #
@@ -12,6 +12,9 @@ module Cloudtasker
  # Default number of threads to allocate to process a specific queue
  QUEUE_CONCURRENCY = 1
 
+ # Job Polling. How frequently to poll jobs in redis.
+ JOB_POLLING_FREQUENCY = 0.5 # seconds
+
  #
  # Stop the local server.
  #
@@ -46,7 +49,7 @@ module Cloudtasker
  @start ||= Thread.new do
    until @done
      queues.each { |(n, c)| process_jobs(n, c) }
-     sleep 1
+     sleep JOB_POLLING_FREQUENCY
    end
    Cloudtasker.logger.info('[Cloudtasker/Server] Local server exiting...')
  end
@@ -82,7 +85,7 @@ module Cloudtasker
  # Deliver task
  begin
    Thread.current['task'].deliver
- rescue Errno::ECONNREFUSED => e
+ rescue Errno::EBADF, Errno::ECONNREFUSED => e
    raise(e) unless Thread.current['attempts'] < 3
 
    # Retry on connection error, in case the web server is not
@@ -1,15 +1,28 @@
  # frozen_string_literal: true
 
  require 'redis'
+ require 'connection_pool'
 
  module Cloudtasker
    # A wrapper with helper methods for redis
    class RedisClient
      # Suffix added to cache keys when locking them
      LOCK_KEY_PREFIX = 'cloudtasker/lock'
+     LOCK_DURATION = 2 # seconds
+     LOCK_WAIT_DURATION = 0.03 # seconds
+
+     # Default pool size used for Redis
+     DEFAULT_POOL_SIZE = ENV.fetch('RAILS_MAX_THREADS') { 25 }
+     DEFAULT_POOL_TIMEOUT = 5
 
      def self.client
-       @client ||= Redis.new(Cloudtasker.config.redis || {})
+       @client ||= begin
+         pool_size = Cloudtasker.config.redis&.dig(:pool_size) || DEFAULT_POOL_SIZE
+         pool_timeout = Cloudtasker.config.redis&.dig(:pool_timeout) || DEFAULT_POOL_TIMEOUT
+         ConnectionPool.new(size: pool_size, timeout: pool_timeout) do
+           Redis.new(Cloudtasker.config.redis || {})
+         end
+       end
      end
 
      #
@@ -29,7 +42,7 @@ module Cloudtasker
  # @return [Hash, Array] The content of the cache key, parsed as JSON.
  #
  def fetch(key)
-   return nil unless (val = client.get(key.to_s))
+   return nil unless (val = get(key.to_s))
 
    JSON.parse(val, symbolize_names: true)
  rescue JSON::ParserError
@@ -45,12 +58,15 @@ module Cloudtasker
  # @return [String] Redis response code.
  #
  def write(key, content)
-   client.set(key.to_s, content.to_json)
+   set(key.to_s, content.to_json)
  end
 
  #
  # Acquire a lock on a cache entry.
  #
+ # Locks are enforced to be short-lived (2s).
+ # The yielded block should limit its logic to short operations (e.g. redis get/set).
+ #
  # @example
  #   redis = RedisClient.new
  #   redis.with_lock('foo')
@@ -65,12 +81,14 @@ module Cloudtasker
 
  # Wait to acquire lock
  lock_key = [LOCK_KEY_PREFIX, cache_key].join('/')
- true until client.setnx(lock_key, true)
+ client.with do |conn|
+   sleep(LOCK_WAIT_DURATION) until conn.set(lock_key, true, nx: true, ex: LOCK_DURATION)
+ end
 
  # yield content
  yield
 ensure
- client.del(lock_key)
+ del(lock_key)
  end
 
  #
@@ -99,10 +117,12 @@ module Cloudtasker
  list = []
 
  # Scan and capture matching keys
- while cursor != 0
-   scan = client.scan(cursor || 0, match: pattern)
-   list += scan[1]
-   cursor = scan[0].to_i
+ client.with do |conn|
+   while cursor != 0
+     scan = conn.scan(cursor || 0, match: pattern)
+     list += scan[1]
+     cursor = scan[0].to_i
+   end
  end
 
  list
@@ -118,8 +138,8 @@ module Cloudtasker
  # @return [Any] The method return value
  #
  def method_missing(name, *args, &block)
-   if client.respond_to?(name)
-     client.send(name, *args, &block)
+   if Redis.method_defined?(name)
+     client.with { |c| c.send(name, *args, &block) }
    else
      super
    end
@@ -134,7 +154,7 @@ module Cloudtasker
  # @return [Boolean] Return true if the class respond to this method.
  #
  def respond_to_missing?(name, include_private = false)
-   client.respond_to?(name) || super
+   Redis.method_defined?(name) || super
  end
  end
  end
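The `SET NX EX` lock acquisition above can be sketched with an in-memory store instead of Redis. TTL expiry is omitted here; only the acquire/release semantics are shown:

```ruby
# In-memory sketch of the SET NX locking pattern (no Redis required).
class FakeStore
  def initialize
    @data = {}
  end

  # SET key value NX: only succeeds when the key is absent
  def set_nx(key, value)
    return false if @data.key?(key)

    @data[key] = value
    true
  end

  def del(key)
    @data.delete(key)
  end
end

store = FakeStore.new
first  = store.set_nx('cloudtasker/lock/foo', true) # acquired
second = store.set_nx('cloudtasker/lock/foo', true) # contender would spin/wait
store.del('cloudtasker/lock/foo')                   # release (ensure block)
third  = store.set_nx('cloudtasker/lock/foo', true) # acquired again
puts [first, second, third].inspect # [true, false, true]
```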
@@ -3,3 +3,30 @@
  require_relative 'unique_job/middleware'
 
  Cloudtasker::UniqueJob::Middleware.configure
+
+ module Cloudtasker
+   # UniqueJob configurator
+   module UniqueJob
+     # The maximum duration a lock can remain in place
+     # after schedule time.
+     DEFAULT_LOCK_TTL = 10 * 60 # 10 minutes
+
+     class << self
+       attr_writer :lock_ttl
+
+       # Configure the middleware
+       def configure
+         yield(self)
+       end
+
+       #
+       # Return the max TTL for locks
+       #
+       # @return [Integer] The lock TTL.
+       #
+       def lock_ttl
+         @lock_ttl || DEFAULT_LOCK_TTL
+       end
+     end
+   end
+ end
@@ -5,21 +5,19 @@ module Cloudtasker
  # Wrapper class for Cloudtasker::Worker delegating to lock
  # and conflict strategies
  class Job
-   attr_reader :worker
+   attr_reader :worker, :call_opts
 
    # The default lock strategy to use. Defaults to "no lock".
    DEFAULT_LOCK = UniqueJob::Lock::NoOp
 
-   # Key Namespace used for object saved under this class
-   SUB_NAMESPACE = 'job'
-
    #
    # Build a new instance of the class.
    #
    # @param [Cloudtasker::Worker] worker The worker at hand
    #
-   def initialize(worker)
+   def initialize(worker, **kwargs)
      @worker = worker
+     @call_opts = kwargs
    end
 
    #
@@ -31,6 +29,43 @@ module Cloudtasker
  worker.class.cloudtasker_options_hash
  end
 
+ #
+ # Return the Time To Live (TTL) that should be set in Redis for
+ # the lock key. Having a TTL on lock keys ensures that jobs
+ # do not end up stuck due to a deadlock situation.
+ #
+ # The TTL is calculated using schedule time + expected
+ # max job duration.
+ #
+ # The expected max job duration is set to 10 minutes by default.
+ # This value was chosen because it's twice the default request timeout
+ # value in Cloud Run. This leaves enough room for queue lag (5 minutes)
+ # + job processing (5 minutes).
+ #
+ # Queue lag is certainly the most unpredictable factor here.
+ # Job processing time is less of a factor. Jobs running for more than 5 minutes
+ # should be split into sub-jobs to limit invocation time over HTTP. Cloudtasker batch
+ # jobs can help achieve that if you need to make one big job split into sub-jobs "atomic".
+ #
+ # The default lock key expiration of "time_at + 10 minutes" may look aggressive but it
+ # is still a better choice than potentially having real-time jobs stuck for X hours.
+ #
+ # The expected max job duration can be configured via the `lock_ttl`
+ # option on the job itself.
+ #
+ # @return [Integer] The TTL in seconds
+ #
+ def lock_ttl
+   now = Time.now.to_i
+
+   # Get scheduled at and lock duration
+   scheduled_at = [call_opts[:time_at].to_i, now].compact.max
+   lock_duration = (options[:lock_ttl] || Cloudtasker::UniqueJob.lock_ttl).to_i
+
+   # Return TTL
+   scheduled_at + lock_duration - now
+ end
+
  #
  # Return the instantiated lock.
  #
@@ -121,7 +156,7 @@ module Cloudtasker
  raise(LockError, locked_id) if locked_id && locked_id != id
 
  # Take job lock if the lock is currently free
- redis.set(unique_gid, id) unless locked_id
+ redis.set(unique_gid, id, ex: lock_ttl) unless locked_id
  end
  end
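The TTL formula (`scheduled_at + lock_duration - now`) can be checked with a plain-Ruby copy of the calculation:

```ruby
# Plain-Ruby sketch of the lock TTL formula: ttl = scheduled_at + lock_duration - now.
def lock_ttl(now:, time_at: nil, lock_duration: 10 * 60)
  scheduled_at = [time_at.to_i, now].max
  scheduled_at + lock_duration - now
end

now = Time.now.to_i
puts lock_ttl(now: now)                    # 600 - job scheduled immediately
puts lock_ttl(now: now, time_at: now + 60) # 660 - job scheduled in 60 seconds
```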
 
@@ -5,7 +5,7 @@ module Cloudtasker
  module Middleware
    # Client middleware, invoked when jobs are scheduled
    class Client
-     def call(worker)
+     def call(worker, **_kwargs)
        Job.new(worker).lock_instance.schedule { yield }
      end
    end
@@ -5,7 +5,7 @@ module Cloudtasker
  module Middleware
    # Server middleware, invoked when jobs are executed
    class Server
-     def call(worker)
+     def call(worker, **_kwargs)
        Job.new(worker).lock_instance.execute { yield }
      end
    end
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module Cloudtasker
-   VERSION = '0.10.rc1'
+   VERSION = '0.10.rc6'
  end
@@ -7,7 +7,8 @@ module Cloudtasker
7
7
  def self.included(base)
8
8
  base.extend(ClassMethods)
9
9
  base.attr_writer :job_queue
10
- base.attr_accessor :job_args, :job_id, :job_meta, :job_reenqueued, :job_retries
10
+ base.attr_accessor :job_args, :job_id, :job_meta, :job_reenqueued, :job_retries,
11
+ :perform_started_at, :perform_ended_at, :task_id
11
12
  end
12
13
 
13
14
  #
@@ -46,7 +47,7 @@ module Cloudtasker
46
47
  return nil unless worker_klass.include?(self)
47
48
 
48
49
  # Return instantiated worker
49
- worker_klass.new(payload.slice(:job_queue, :job_args, :job_id, :job_meta, :job_retries))
50
+ worker_klass.new(payload.slice(:job_queue, :job_args, :job_id, :job_meta, :job_retries, :task_id))
50
51
  rescue NameError
51
52
  nil
52
53
  end
@@ -139,12 +140,13 @@ module Cloudtasker
139
140
  # @param [Array<any>] job_args The list of perform args.
140
141
  # @param [String] job_id A unique ID identifying this job.
141
142
  #
142
- def initialize(job_queue: nil, job_args: nil, job_id: nil, job_meta: {}, job_retries: 0)
143
+ def initialize(job_queue: nil, job_args: nil, job_id: nil, job_meta: {}, job_retries: 0, task_id: nil)
143
144
  @job_args = job_args || []
144
145
  @job_id = job_id || SecureRandom.uuid
145
146
  @job_meta = MetaStore.new(job_meta)
146
147
  @job_retries = job_retries || 0
147
148
  @job_queue = job_queue
149
+ @task_id = task_id
148
150
  end
149
151
 
150
152
  #
@@ -181,35 +183,51 @@ module Cloudtasker
  #
  def execute
  logger.info('Starting job...')
- resp = Cloudtasker.config.server_middleware.invoke(self) do
- begin
- perform(*job_args)
- rescue StandardError => e
- try(:on_error, e)
- return raise(e) unless job_dead?
-
- # Flag job as dead
- logger.info('Job dead')
- try(:on_dead, e)
- raise(DeadWorkerError, e)
- end
- end
- logger.info('Job done')
+ # Perform job logic
+ resp = execute_middleware_chain
+
+ # Log job completion and return result
+ logger.info("Job done after #{job_duration}s") { { duration: job_duration } }
  resp
+ rescue DeadWorkerError => e
+ logger.info("Job dead after #{job_duration}s and #{job_retries} retries") { { duration: job_duration } }
+ raise(e)
+ rescue StandardError => e
+ logger.info("Job failed after #{job_duration}s") { { duration: job_duration } }
+ raise(e)
+ end
+
+ #
+ # Return a unix timestamp specifying when to run the task.
+ #
+ # @param [Integer, nil] interval The time to wait.
+ # @param [Integer, nil] time_at The time at which the job should run.
+ #
+ # @return [Integer, nil] The Unix timestamp.
+ #
+ def schedule_time(interval: nil, time_at: nil)
+ return nil unless interval || time_at
+
+ # Generate the complete Unix timestamp
+ (time_at || Time.now).to_i + interval.to_i
  end
 
  #
  # Enqueue a worker, with or without delay.
  #
  # @param [Integer] interval The delay in seconds.
- #
  # @param [Time, Integer] time_at The time at which the job should run
  #
  # @return [Cloudtasker::CloudTask] The Google Task response
  #
- def schedule(interval: nil, time_at: nil)
- Cloudtasker.config.client_middleware.invoke(self) do
- WorkerHandler.new(self).schedule(interval: interval, time_at: time_at)
+ def schedule(**args)
+ # Evaluate when to schedule the job
+ time_at = schedule_time(**args)
+
+ # Schedule job through client middlewares
+ Cloudtasker.config.client_middleware.invoke(self, time_at: time_at) do
+ WorkerHandler.new(self).schedule(time_at: time_at)
  end
  end
 
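The relocated `schedule_time` helper reduces to plain timestamp arithmetic. A standalone sketch of its behaviour (the method body is copied from the diff; the surrounding scaffolding is illustrative):

```ruby
# Standalone copy of the schedule_time logic introduced above.
def schedule_time(interval: nil, time_at: nil)
  return nil unless interval || time_at

  # Base time is the requested time (or now), shifted by the interval.
  # nil.to_i is 0, so a missing interval adds nothing.
  (time_at || Time.now).to_i + interval.to_i
end

schedule_time                                                # => nil (run now)
schedule_time(time_at: Time.at(1_589_000_000))               # => 1589000000
schedule_time(time_at: Time.at(1_589_000_000), interval: 30) # => 1589000030
```

Returning `nil` when neither argument is given lets the handler's `task_payload.merge(schedule_time: nil).compact` drop the key entirely, so the task runs immediately.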
@@ -251,7 +269,8 @@ module Cloudtasker
  job_args: job_args,
  job_meta: job_meta.to_h,
  job_retries: job_retries,
- job_queue: job_queue
+ job_queue: job_queue,
+ task_id: task_id
  }
  end
 
@@ -286,5 +305,46 @@ module Cloudtasker
  def job_dead?
  job_retries >= Cloudtasker.config.max_retries
  end
+
+ #
+ # Return the time taken (in seconds) to perform the job. This duration
+ # includes the middlewares and the actual perform method.
+ #
+ # @return [Float] The time taken in seconds as a floating point number.
+ #
+ def job_duration
+ return 0.0 unless perform_ended_at && perform_started_at
+
+ (perform_ended_at - perform_started_at).ceil(3)
+ end
+
+ #=============================
+ # Private
+ #=============================
+ private
+
+ #
+ # Execute the worker perform method through the middleware chain.
+ #
+ # @return [Any] The result of the perform method.
+ #
+ def execute_middleware_chain
+ self.perform_started_at = Time.now
+
+ Cloudtasker.config.server_middleware.invoke(self) do
+ begin
+ perform(*job_args)
+ rescue StandardError => e
+ try(:on_error, e)
+ return raise(e) unless job_dead?
+
+ # Flag job as dead
+ try(:on_dead, e)
+ raise(DeadWorkerError, e)
+ end
+ end
+ ensure
+ self.perform_ended_at = Time.now
+ end
  end
  end
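The duration tracking above can be exercised in isolation. In this sketch, `FakeJob` is a hypothetical stand-in for the worker; only `job_duration` and the start/end bookkeeping mirror the diff:

```ruby
class FakeJob
  attr_accessor :perform_started_at, :perform_ended_at

  # Mirrors Worker#job_duration: 0.0 until both timestamps are set,
  # otherwise the elapsed seconds rounded up to 3 decimal places.
  def job_duration
    return 0.0 unless perform_ended_at && perform_started_at

    (perform_ended_at - perform_started_at).ceil(3)
  end

  # Mirrors the timing wrapper in execute_middleware_chain: the ensure
  # clause records the end time even when the block raises.
  def run
    self.perform_started_at = Time.now
    yield
  ensure
    self.perform_ended_at = Time.now
  end
end

job = FakeJob.new
job.job_duration        # => 0.0 (never ran)
job.run { sleep(0.05) }
job.job_duration        # => roughly 0.05, rounded up to 3 decimals
```

Recording `perform_ended_at` in an `ensure` clause is what allows the `rescue` branches of `execute` to log a meaningful duration even for failed or dead jobs.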
@@ -56,11 +56,6 @@ module Cloudtasker
  with_worker_handling(input_payload, &:execute)
  end
 
- # TODO: do not delete redis payload if job has been re-enqueued
- # worker.job_reenqueued
- #
- # Idea: change with_worker_handling to with_worker_handling and build the worker
- # inside the with_worker_handling block.
  #
  # Local middleware used to retrieve the job arg payload from cache
  # if an arg payload reference is present.
@@ -210,35 +205,17 @@ module Cloudtasker
  }.merge(worker_args_payload)
  end
 
- #
- # Return a protobuf timestamp specifying how to wait
- # before running a task.
- #
- # @param [Integer, nil] interval The time to wait.
- # @param [Integer, nil] time_at The time at which the job should run.
- #
- # @return [Integer, nil] The Unix timestamp.
- #
- def schedule_time(interval: nil, time_at: nil)
- return nil unless interval || time_at
-
- # Generate the complete Unix timestamp
- (time_at || Time.now).to_i + interval.to_i
- end
-
  #
  # Schedule the task on GCP Cloud Task.
  #
- # @param [Integer, nil] interval How to wait before running the task.
+ # @param [Integer, nil] time_at A unix timestamp specifying when to run the job.
  # Leave to `nil` to run now.
  #
  # @return [Cloudtasker::CloudTask] The Google Task response
  #
- def schedule(interval: nil, time_at: nil)
+ def schedule(time_at: nil)
  # Generate task payload
- task = task_payload.merge(
- schedule_time: schedule_time(interval: interval, time_at: time_at)
- ).compact
+ task = task_payload.merge(schedule_time: time_at).compact
 
  # Create and return remote task
  CloudTask.create(task)
@@ -11,7 +11,7 @@ module Cloudtasker
  end
 
  # Only log the job meta information by default (exclude arguments)
- DEFAULT_CONTEXT_PROCESSOR = ->(worker) { worker.to_h.slice(:worker, :job_id, :job_meta, :job_queue) }
+ DEFAULT_CONTEXT_PROCESSOR = ->(worker) { worker.to_h.slice(:worker, :job_id, :job_meta, :job_queue, :task_id) }
 
  #
  # Build a new instance of the class.
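The widened `DEFAULT_CONTEXT_PROCESSOR` still relies on `Hash#slice`, so job arguments stay out of the logs while `task_id` is now included. A sketch with a hypothetical worker hash (all values illustrative):

```ruby
# Hypothetical output of worker.to_h
worker_hash = {
  worker: 'MyWorker',
  job_id: 'd0d8b574',
  job_args: ['user@example.com'], # potentially sensitive, excluded below
  job_meta: {},
  job_retries: 0,
  job_queue: 'default',
  task_id: 'projects/my-proj/locations/us/queues/default/tasks/123'
}

context = worker_hash.slice(:worker, :job_id, :job_meta, :job_queue, :task_id)
context.key?(:job_args) # => false
```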
@@ -59,7 +59,7 @@ module Cloudtasker
  # @return [String] The formatted log message
  #
  def formatted_message(msg)
- "[Cloudtasker][#{worker.job_id}] #{msg}"
+ "[Cloudtasker][#{worker.class}][#{worker.job_id}] #{msg}"
  end
 
  #
@@ -141,7 +141,8 @@ module Cloudtasker
  # @param [Proc] &block Optional context block.
  #
  def log_message(level, msg, &block)
- payload_block = block || log_block
+ # Merge log-specific context into worker-specific context
+ payload_block = -> { log_block.call.merge(block&.call || {}) }
 
  # ActiveSupport::Logger does not support passing a payload through a block on top
  # of a message.
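The new `payload_block` composes two lazy contexts into one lambda. Assuming `log_block` returns the worker context and `block` is the optional per-call context, the merge behaves as follows (standalone sketch, not the gem's API):

```ruby
# Worker-level context, as would come from log_block
log_block = -> { { job_id: 'abc123', job_queue: 'default' } }

# Per-call context, as passed via logger.info('msg') { { duration: 0.45 } }
block = -> { { duration: 0.45 } }

# Same construct as in the diff: per-call keys are merged on top of
# (and may override) the worker-level context. block&.call handles
# the case where no block was given.
payload_block = -> { log_block.call.merge(block&.call || {}) }
payload_block.call # => { job_id: 'abc123', job_queue: 'default', duration: 0.45 }

# With no per-call block, only the worker context remains
block = nil
payload_block = -> { log_block.call.merge(block&.call || {}) }
payload_block.call # => { job_id: 'abc123', job_queue: 'default' }
```

Previously, a per-call block replaced the worker context outright (`block || log_block`); the new form keeps both, which is what lets the duration logging in `execute` ride alongside the default context.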
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: cloudtasker
  version: !ruby/object:Gem::Version
- version: 0.10.rc1
+ version: 0.10.rc6
  platform: ruby
  authors:
  - Arnaud Lachaume
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2020-03-09 00:00:00.000000000 Z
+ date: 2020-05-08 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: activesupport
@@ -24,6 +24,20 @@ dependencies:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
+ - !ruby/object:Gem::Dependency
+ name: connection_pool
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
+ type: :runtime
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
  - !ruby/object:Gem::Dependency
  name: fugit
  requirement: !ruby/object:Gem::Requirement
@@ -80,6 +94,20 @@ dependencies:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
+ - !ruby/object:Gem::Dependency
+ name: retriable
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
+ type: :runtime
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
  - !ruby/object:Gem::Dependency
  name: appraisal
  requirement: !ruby/object:Gem::Requirement
@@ -256,10 +284,10 @@ executables:
  extensions: []
  extra_rdoc_files: []
  files:
+ - ".github/workflows/test.yml"
  - ".gitignore"
  - ".rspec"
  - ".rubocop.yml"
- - ".travis.yml"
  - Appraisals
  - CHANGELOG.md
  - CODE_OF_CONDUCT.md
@@ -368,8 +396,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: 1.3.1
  requirements: []
- rubyforge_project:
- rubygems_version: 2.7.6.2
+ rubygems_version: 3.0.0
  signing_key:
  specification_version: 4
  summary: Background jobs for Ruby using Google Cloud Tasks (beta)
data/.travis.yml DELETED
@@ -1,16 +0,0 @@
- ---
- language: ruby
- cache: bundler
- rvm:
- - 2.5.5
- services:
- - redis-server
- before_install: gem install bundler -v 2.0.2
- before_script: bundle exec rubocop
- gemfile:
- - gemfiles/google_cloud_tasks_1.0.gemfile
- - gemfiles/google_cloud_tasks_1.1.gemfile
- - gemfiles/google_cloud_tasks_1.2.gemfile
- - gemfiles/google_cloud_tasks_1.3.gemfile
- - gemfiles/rails_5.2.gemfile
- - gemfiles/rails_6.0.gemfile