cloudtasker 0.10.rc4 → 0.10.0
- checksums.yaml +4 -4
- data/.github/workflows/test.yml +3 -3
- data/.rubocop.yml +7 -1
- data/CHANGELOG.md +25 -0
- data/README.md +124 -13
- data/app/controllers/cloudtasker/worker_controller.rb +11 -2
- data/cloudtasker.gemspec +3 -3
- data/docs/UNIQUE_JOBS.md +62 -0
- data/lib/cloudtasker/backend/google_cloud_task.rb +19 -7
- data/lib/cloudtasker/backend/memory_task.rb +14 -5
- data/lib/cloudtasker/backend/redis_task.rb +2 -1
- data/lib/cloudtasker/batch/middleware/server.rb +1 -1
- data/lib/cloudtasker/config.rb +3 -0
- data/lib/cloudtasker/cron/job.rb +0 -5
- data/lib/cloudtasker/cron/middleware/server.rb +1 -1
- data/lib/cloudtasker/cron/schedule.rb +0 -3
- data/lib/cloudtasker/redis_client.rb +27 -12
- data/lib/cloudtasker/unique_job.rb +27 -0
- data/lib/cloudtasker/unique_job/job.rb +41 -6
- data/lib/cloudtasker/unique_job/middleware/client.rb +1 -1
- data/lib/cloudtasker/unique_job/middleware/server.rb +1 -1
- data/lib/cloudtasker/version.rb +1 -1
- data/lib/cloudtasker/worker.rb +28 -8
- data/lib/cloudtasker/worker_handler.rb +3 -26
- data/lib/cloudtasker/worker_logger.rb +1 -1
- metadata +34 -6
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 0a7bb0597bbd5656c6c6c273b4a65097c0b9c44be4b5944a5c1fbdf2d03f9c7c
+  data.tar.gz: c1e1d33203c8dfa5a090427c91f7b94e6046fbc0037fbb74285e61851cab3f93
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: a03d48359589520a040e8214ed40d778aece0e667de81bfb15b447c4f4f93f296955b08624c9fa1a5e8ca5e5cb6773389b490566f076f0e1f50ebbf0c46d376a
+  data.tar.gz: 5f64a1e30faa954e2e046f787c216b9c02a992493561abe5dbbef9a7678833ef1644947b4e6f3743ebe43c79c62d90c48b46c91a6002088f88e94e7689b91a6d
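Aside: the checksums above are plain hex digests of the packaged archives, so a downloaded `.gem` can be verified locally with Ruby's stdlib `Digest`. A minimal sketch (the sample string stands in for the downloaded file bytes):

```ruby
require 'digest'

# checksums.yaml stores hex digests of the raw archive bytes.
# A fixed string stands in here for the downloaded .gem file contents.
data = 'sample gem bytes'
sha256 = Digest::SHA256.hexdigest(data)
sha512 = Digest::SHA512.hexdigest(data)

# SHA256 hex digests are 64 characters long and SHA512 digests 128,
# matching the value lengths in the diff above.
puts sha256.length
puts sha512.length
```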
data/.github/workflows/test.yml
CHANGED
@@ -2,9 +2,9 @@ name: Test
 
 on:
   push:
-    branches: [ master ]
+    branches: [ master, 0.9-stable ]
   pull_request:
-    branches: [ master ]
+    branches: [ master, 0.9-stable ]
 
 jobs:
   build:
@@ -38,4 +38,4 @@ jobs:
           bundle install --jobs 4 --retry 3
           bundle exec rubocop
           bundle exec appraisal ${APPRAISAL_CONTEXT} bundle
-          bundle exec appraisal ${APPRAISAL_CONTEXT} rspec
+          bundle exec appraisal ${APPRAISAL_CONTEXT} rspec
data/.rubocop.yml
CHANGED
data/CHANGELOG.md
CHANGED
@@ -1,5 +1,30 @@
 # Changelog
 
+## [v0.10.0](https://github.com/keypup-io/cloudtasker/tree/v0.10.0) (2020-09-02)
+
+[Full Changelog](https://github.com/keypup-io/cloudtasker/compare/v0.9.3...v0.10.0)
+
+**Improvements:**
+- Logging: Add worker name in log messages
+- Logging: Add job duration in log messages
+- Logging: Add Cloud Task ID in log messages
+- Unique Job: Support TTL for lock keys. This feature prevents queues from being dead-locked when a critical crash occurs while processing a unique job.
+- Worker: support payload storage in Redis instead of sending the payload to Google Cloud Tasks. This is useful when job arguments are expected to exceed 100 KB, which is the limit set by Google Cloud Tasks.
+
+**Fixed bugs:**
+- Local processing error: improve error handling and retries around network interruptions
+- Redis client: prevent deadlocks in high concurrency scenarios by slowing down poll time and enforcing lock expiration
+- Redis client: use a connection pool with Redis to prevent race conditions
+- Google API: improve error handling on job creation
+- Google API: use the `X-CloudTasks-TaskRetryCount` header instead of `X-CloudTasks-TaskExecutionCount` to detect how many retries Google Cloud Tasks has performed. Using `X-CloudTasks-TaskRetryCount` is theoretically less accurate than using `X-CloudTasks-TaskExecutionCount` because it includes the number of "app unreachable" retries, but `X-CloudTasks-TaskExecutionCount` is currently bugged and remains at zero all the time. See [this issue](https://github.com/keypup-io/cloudtasker/issues/6)
+
+## [v0.9.3](https://github.com/keypup-io/cloudtasker/tree/v0.9.3) (2020-06-25)
+
+[Full Changelog](https://github.com/keypup-io/cloudtasker/compare/v0.9.2...v0.9.3)
+
+**Fixed bugs:**
+- Google Cloud Tasks: lock version to `~> 1.0` (Google recently released a v2 which changes its bindings completely). An [issue](https://github.com/keypup-io/cloudtasker/issues/11) has been raised to upgrade Cloudtasker to `google-cloud-tasks` `v2`.
+
 ## [v0.9.2](https://github.com/keypup-io/cloudtasker/tree/v0.9.2) (2020-03-04)
 
 [Full Changelog](https://github.com/keypup-io/cloudtasker/compare/v0.9.1...v0.9.2)
data/README.md
CHANGED
@@ -6,11 +6,11 @@ Background jobs for Ruby using Google Cloud Tasks.
 
 Cloudtasker provides an easy to manage interface to Google Cloud Tasks for background job processing. Workers can be defined programmatically using the Cloudtasker DSL and enqueued for processing using a simple to use API.
 
-Cloudtasker is particularly suited for serverless applications only responding to HTTP requests and where running a dedicated job processing is not an option (e.g. deploy via [Cloud Run](https://cloud.google.com/run)). All jobs enqueued in Cloud Tasks via Cloudtasker eventually get processed by your application via HTTP requests.
+Cloudtasker is particularly suited for serverless applications only responding to HTTP requests and where running a dedicated job processing server is not an option (e.g. deploy via [Cloud Run](https://cloud.google.com/run)). All jobs enqueued in Cloud Tasks via Cloudtasker eventually get processed by your application via HTTP requests.
 
 Cloudtasker also provides optional modules for running [cron jobs](docs/CRON_JOBS.md), [batch jobs](docs/BATCH_JOBS.md) and [unique jobs](docs/UNIQUE_JOBS.md).
 
-A local processing server is also available
+A local processing server is also available for development. This local server processes jobs in lieu of Cloud Tasks and allows you to work offline.
 
 ## Summary
 
@@ -34,7 +34,11 @@ A local processing server is also available in development. This local server pr
 
     1. [HTTP Error codes](#http-error-codes)
     2. [Error callbacks](#error-callbacks)
     3. [Max retries](#max-retries)
-10. [
+10. [Testing](#testing)
+    1. [Test helper setup](#test-helper-setup)
+    2. [In-memory queues](#in-memory-queues)
+    3. [Unit tests](#unit-tests)
+11. [Best practices building workers](#best-practices-building-workers)
 
 ## Installation
 
@@ -48,7 +52,7 @@ And then execute:
 
     $ bundle
 
-Or install it yourself
+Or install it yourself with:
 
     $ gem install cloudtasker
 
@@ -218,7 +222,7 @@ Cloudtasker.configure do |config|
 
   #
   # Specify how many retries are allowed on jobs. This number of retries excludes any
-  # connectivity error
+  # connectivity error due to the application being down or unreachable.
   #
   # Default: 25
   #
@@ -246,7 +250,7 @@ Cloudtasker.configure do |config|
   # You can set this configuration parameter to a KB value if you want to store jobs
   # args in redis only if the JSONified arguments payload exceeds that threshold.
   #
-  # Supported since: v0.10.
+  # Supported since: v0.10.0
   #
   # Default: false
   #
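Aside: the hunk above documents storing job args in Redis only once the JSONified payload crosses a KB threshold. The size check itself is plain Ruby; a hypothetical helper (illustrative only, not the gem's internal code):

```ruby
require 'json'

# Hypothetical helper mirroring the documented behaviour: store args in
# Redis only when their JSON representation exceeds the KB threshold.
def store_in_redis?(args, threshold_kb)
  args.to_json.bytesize > threshold_kb * 1024
end

small = [1, 2, 'abc']
large = ['x' * 5000]

puts store_in_redis?(small, 3) # small payload stays on the task itself
puts store_in_redis?(large, 3) # a 5000-byte string exceeds the 3 KB threshold
```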
@@ -289,7 +293,7 @@ MyWorker.schedule(args: [arg1, arg2], time_at: Time.parse('2025-01-01 00:50:00Z'
 MyWorker.schedule(args: [arg1, arg2], time_in: 5 * 60, queue: 'critical')
 ```
 
-Cloudtasker also provides a helper for re-enqueuing jobs. Re-enqueued jobs keep the same
+Cloudtasker also provides a helper for re-enqueuing jobs. Re-enqueued jobs keep the same job id. Some middlewares may rely on this to track the fact that a job didn't actually complete (e.g. Cloudtasker batch). This is optional and you can always fall back to using exception management (raise an error) to retry/re-enqueue jobs.
 
 E.g.
 ```ruby
@@ -467,14 +471,14 @@ end
 
 Will generate the following log with context `{:worker=> ..., :job_id=> ..., :job_meta=> ...}`
 ```log
-[Cloudtasker][d76040a1-367e-4e3b-854e-e05a74d5f773] Job run with foo. This is working!: {:worker=>"DummyWorker", :job_id=>"d76040a1-367e-4e3b-854e-e05a74d5f773", :job_meta=>{}}
+[Cloudtasker][d76040a1-367e-4e3b-854e-e05a74d5f773] Job run with foo. This is working!: {:worker=>"DummyWorker", :job_id=>"d76040a1-367e-4e3b-854e-e05a74d5f773", :job_meta=>{}, :task_id => "4e755d3f-6de0-426c-b4ac-51edd445c045"}
 ```
 
 The way contextual information is displayed depends on the logger itself. For example with [semantic_logger](http://rocketjob.github.io/semantic_logger) contextual information might not appear in the log message but show up as payload data on the log entry itself (e.g. using the fluentd adapter).
 
 Contextual information can be customised globally and locally using a log context_processor. By default the `Cloudtasker::WorkerLogger` is configured the following way:
 ```ruby
-Cloudtasker::WorkerLogger.log_context_processor = ->(worker) { worker.to_h.slice(:worker, :job_id, :job_meta) }
+Cloudtasker::WorkerLogger.log_context_processor = ->(worker) { worker.to_h.slice(:worker, :job_id, :job_meta, :job_queue, :task_id) }
 ```
 
 You can decide to add a global identifier for your worker logs using the following:
@@ -482,7 +486,7 @@ You can decide to add a global identifier for your worker logs using the followi
 # config/initializers/cloudtasker.rb
 
 Cloudtasker::WorkerLogger.log_context_processor = lambda { |worker|
-  worker.to_h.slice(:worker, :job_id, :job_meta).merge(app: 'my-app')
+  worker.to_h.slice(:worker, :job_id, :job_meta, :job_queue, :task_id).merge(app: 'my-app')
 }
 ```
 
@@ -503,9 +507,24 @@ end
 
 See the [Cloudtasker::Worker class](lib/cloudtasker/worker.rb) for more information on attributes available to be logged in your `log_context_processor` proc.
 
+### Searching logs: Job ID vs Task ID
+**Note**: the `task_id` field is available in logs starting with `0.10.0`
+
+Job instances are assigned two different IDs for tracking and logging purposes: `job_id` and `task_id`. These IDs are found in each log entry to facilitate search.
+
+| Field | Definition |
+|------|-------------|
+| `job_id` | This ID is generated by Cloudtasker. It identifies the job along its entire lifecycle. It is persistent across retries and reschedules. |
+| `task_id` | This ID is generated by Google Cloud Tasks. It identifies a job instance on the Google Cloud Tasks side. It is persistent across retries but NOT across reschedules. |
+
+The Google Cloud Tasks UI (GCP console) lists all the tasks pending/retrying and their associated task ID (also called "Task name"). From there you can:
+1. Use a task ID to look up the logs of a specific job instance in Stackdriver Logging (or any other logging solution).
+2. From (1) you can retrieve the `job_id` attribute of the job.
+3. From (2) you can use the `job_id` to look up the job logs along its entire lifecycle.
+
 ## Error Handling
 
-Jobs
+Job failures will return an HTTP error to Cloud Tasks and trigger a retry at a later time. The number of Cloud Task retries depends on the configuration of your queue in Cloud Tasks.
 
 ### HTTP Error codes
 
@@ -513,6 +532,7 @@ Jobs failing will automatically return the following HTTP error code to Cloud Ta
 
 | Code | Description |
 |------|-------------|
+| 204 | The job was processed successfully |
 | 205 | The job is dead and has been removed from the queue |
 | 404 | The job has specified an incorrect worker class. |
 | 422 | An error happened during the execution of the worker (`perform` method) |
@@ -551,7 +571,7 @@ By default jobs are retried 25 times - using an exponential backoff - before bei
 
 Note that the number of retries set on your Cloud Task queue should be many times higher than the number of retries configured in Cloudtasker because Cloud Task also includes failures to connect to your application. Ideally set the number of retries to `unlimited` in Cloud Tasks.
 
-**Note**: The `X-CloudTasks-TaskExecutionCount` header sent by Google Cloud Tasks and providing the number of retries outside of `HTTP 503` (instance not reachable) is currently bugged and remains at `0` all the time. Starting with `
+**Note**: The `X-CloudTasks-TaskExecutionCount` header sent by Google Cloud Tasks and providing the number of retries outside of `HTTP 503` (instance not reachable) is currently bugged and remains at `0` all the time. Starting with `v0.10.0` Cloudtasker uses the `X-CloudTasks-TaskRetryCount` header to detect the number of retries. This header includes `HTTP 503` errors which means that if your application is down at some point, jobs will fail and these failures will be counted toward the maximum number of retries. A [bug report](https://issuetracker.google.com/issues/154532072) has been raised with GCP to address this issue. Once fixed we will revert to using `X-CloudTasks-TaskExecutionCount` to avoid counting `HTTP 503` as job failures.
 
 E.g. Set max number of retries globally via the cloudtasker initializer.
 ```ruby
@@ -586,6 +606,97 @@ class SomeErrorWorker
 end
 ```
 
+## Testing
+Cloudtasker provides several options to test your workers.
+
+### Test helper setup
+Require `cloudtasker/testing` in your `rails_helper.rb` (RSpec Rails) or `spec_helper.rb` (RSpec) or test unit helper file then enable one of the three modes:
+
+```ruby
+require 'cloudtasker/testing'
+
+# Mode 1 (default): Push jobs to Google Cloud Tasks (env != development) or Redis (env == development)
+Cloudtasker::Testing.enable!
+
+# Mode 2: Push jobs to an in-memory queue. Jobs will not be processed until you call
+# Cloudtasker::Worker.drain_all (process all jobs) or MyWorker.drain (process jobs for specific worker)
+Cloudtasker::Testing.fake!
+
+# Mode 3: Push jobs to an in-memory queue. Jobs will be processed immediately.
+Cloudtasker::Testing.inline!
+```
+
+You can query the current testing mode with:
+```ruby
+Cloudtasker::Testing.enabled?
+Cloudtasker::Testing.fake?
+Cloudtasker::Testing.inline?
+```
+
+Each testing mode accepts a block argument to temporarily switch to it:
+```ruby
+# Enable fake mode for all tests
+Cloudtasker::Testing.fake!
+
+# Enable inline! mode temporarily for a given test
+Cloudtasker::Testing.inline! do
+  MyWorker.perform_async(1,2)
+end
+```
+
+Note that extension middlewares - e.g. unique job, batch job etc. - run in test mode. You can disable middlewares in your tests by adding the following to your test helper:
+```ruby
+# Remove all middlewares
+Cloudtasker.configure do |c|
+  c.client_middleware.clear
+  c.server_middleware.clear
+end
+
+# Remove all unique job middlewares
+Cloudtasker.configure do |c|
+  c.client_middleware.remove(Cloudtasker::UniqueJob::Middleware::Client)
+  c.server_middleware.remove(Cloudtasker::UniqueJob::Middleware::Server)
+end
+```
+
+### In-memory queues
+The `fake!` or `inline!` modes use in-memory queues, which can be queried and controlled using the following methods:
+
+```ruby
+# Perform all jobs in queue
+Cloudtasker::Worker.drain_all
+
+# Remove all jobs in queue
+Cloudtasker::Worker.clear_all
+
+# Perform all jobs in queue for a specific worker type
+MyWorker.drain
+
+# Return the list of jobs in queue for a specific worker type
+MyWorker.jobs
+```
+
+### Unit tests
+Below are examples of RSpec tests. It is assumed that `Cloudtasker::Testing.fake!` has been set in the test helper.
+
+**Example 1**: Testing that a job is scheduled
+```ruby
+describe 'worker scheduling' do
+  subject(:enqueue_job) { MyWorker.perform_async(1,2) }
+
+  it { expect { enqueue_job }.to change(MyWorker.jobs, :size).by(1) }
+end
+```
+
+**Example 2**: Testing job execution logic
+```ruby
+describe 'worker calls api' do
+  subject { Cloudtasker::Testing.inline! { MyApiWorker.perform_async(1,2) } }
+
+  before { expect(MyApi).to receive(:fetch).and_return([]) }
+  it { is_expected.to be_truthy }
+end
+```
 
 ## Best practices building workers
 
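Aside: the `fake!` semantics documented above (enqueue without processing, then drain) boil down to a simple in-memory queue. A self-contained stand-in in plain Ruby, purely to illustrate the mechanics (this is not Cloudtasker's implementation):

```ruby
# Minimal in-memory job queue mimicking fake! mode: pushing only
# enqueues a job; drain executes every job and empties the queue.
class FakeQueue
  def initialize
    @jobs = []
  end

  attr_reader :jobs

  # Execute all enqueued jobs (like Cloudtasker::Worker.drain_all)
  # and return their results, leaving the queue empty.
  def drain
    results = @jobs.map(&:call)
    @jobs.clear
    results
  end
end

queue = FakeQueue.new
queue.jobs << -> { 1 + 1 }
queue.jobs << -> { 'done' }

puts queue.jobs.size # two jobs enqueued, none processed yet
p queue.drain        # jobs run only when drained
puts queue.jobs.size # queue is empty afterwards
```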
@@ -661,7 +772,7 @@ Google Cloud Tasks enforces a limit of 100 KB for job payloads. Taking into acco
 
 Any excessive job payload (> 100 KB) will raise a `Cloudtasker::MaxTaskSizeExceededError`, both in production and development mode.
 
 #### Option 1: Use Cloudtasker optional support for payload storage in Redis
-**Supported since**: `0.10.
+**Supported since**: `0.10.0`
 
 Cloudtasker provides optional support for storing argument payloads in Redis instead of sending them to Google Cloud Tasks.
 
data/app/controllers/cloudtasker/worker_controller.rb
CHANGED
@@ -51,19 +51,28 @@ module Cloudtasker
       end
 
       # Return content parsed as JSON and add job retries count
-      JSON.parse(content).merge(job_retries: job_retries)
+      JSON.parse(content).merge(job_retries: job_retries, task_id: task_id)
     end
   end
 
   #
   # Extract the number of times this task failed at runtime.
   #
-  # @return [Integer] The number of failures
+  # @return [Integer] The number of failures.
   #
   def job_retries
     request.headers[Cloudtasker::Config::RETRY_HEADER].to_i
   end
 
+  #
+  # Return the Google Cloud Task ID from headers.
+  #
+  # @return [String] The task ID.
+  #
+  def task_id
+    request.headers[Cloudtasker::Config::TASK_ID_HEADER]
+  end
+
   #
   # Authenticate incoming requests using a bearer token
   #
data/cloudtasker.gemspec
CHANGED
@@ -15,8 +15,6 @@ Gem::Specification.new do |spec|
   spec.homepage = 'https://github.com/keypup-io/cloudtasker'
   spec.license = 'MIT'
 
-  # spec.metadata["allowed_push_host"] = "TODO: Set to 'http://mygemserver.com'"
-
   spec.metadata['homepage_uri'] = spec.homepage
   spec.metadata['source_code_uri'] = 'https://github.com/keypup-io/cloudtasker'
   spec.metadata['changelog_uri'] = 'https://github.com/keypup-io/cloudtasker/master/tree/CHANGELOG.md'
@@ -31,10 +29,12 @@ Gem::Specification.new do |spec|
   spec.require_paths = ['lib']
 
   spec.add_dependency 'activesupport'
+  spec.add_dependency 'connection_pool'
   spec.add_dependency 'fugit'
-  spec.add_dependency 'google-cloud-tasks'
+  spec.add_dependency 'google-cloud-tasks', '~> 1.0'
   spec.add_dependency 'jwt'
   spec.add_dependency 'redis'
+  spec.add_dependency 'retriable'
 
   spec.add_development_dependency 'appraisal'
   spec.add_development_dependency 'bundler', '~> 2.0'
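Aside: the gemspec change pins `google-cloud-tasks` with a pessimistic constraint. What `~> 1.0` actually accepts can be checked with RubyGems' own `Gem::Requirement`:

```ruby
require 'rubygems'

req = Gem::Requirement.new('~> 1.0')

# '~> 1.0' means '>= 1.0' and '< 2.0': any 1.x release is allowed,
# but the incompatible v2 rewrite mentioned in the changelog is not.
puts req.satisfied_by?(Gem::Version.new('1.0.0'))
puts req.satisfied_by?(Gem::Version.new('1.5.2'))
puts req.satisfied_by?(Gem::Version.new('2.0.0'))
```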
data/docs/UNIQUE_JOBS.md
CHANGED
@@ -81,6 +81,68 @@ Below is the list of available conflict strategies can be specified through the
 | `raise` | All locks | A `Cloudtasker::UniqueJob::LockError` will be raised when a conflict occurs |
 | `reschedule` | `while_executing` | The job will be rescheduled 5 seconds later when a conflict occurs |
 
+## Lock Time To Live (TTL) & deadlocks
+**Note**: Lock TTL has been introduced in `v0.10.rc6`
+
+To make jobs unique Cloudtasker sets a lock key - a hash of class name + job arguments - in Redis. Unique crash situations may lead to lock keys not being cleaned up when jobs complete - e.g. Redis crash with rollback from last known state on disk. Situations like these may lead to having a unique job deadlock: jobs with the same class and arguments would stop being processed because they're unable to acquire a lock that will never be cleaned up.
+
+In order to prevent deadlocks Cloudtasker configures lock keys to automatically expire in Redis after `job schedule time + lock_ttl (default: 10 minutes)`. This forced expiration ensures that deadlocks eventually get cleaned up shortly after the expected run time of a job.
+
+The `lock_ttl (default: 10 minutes)` duration represents the expected max duration of the job. The default 10 minutes value was chosen because it's twice the default request timeout value in Cloud Run. This usually leaves enough room for queue lag (5 minutes) + job processing (5 minutes).
+
+Queue lag is certainly the most unpredictable factor here. Job processing time is less of a factor. Jobs running for more than 5 minutes should be split into sub-jobs to limit invocation time over HTTP anyway. Cloudtasker [batch jobs](BATCH_JOBS.md) can help split big jobs into sub-jobs in an atomic way.
+
+The default lock key expiration of `job schedule time + 10 minutes` may look aggressive but it is a better choice than having real-time jobs stuck for X hours after a crash recovery.
+
+We **strongly recommend** adapting the `lock_ttl` option either globally or for each worker based on expected queue lag and job duration.
+
+**Example 1**: Global configuration
+```ruby
+# config/initializers/cloudtasker.rb
+
+# General Cloudtasker configuration
+Cloudtasker.configure do |config|
+  # ...
+end
+
+# Unique job extension configuration
+Cloudtasker::UniqueJob.configure do |config|
+  config.lock_ttl = 3 * 60 # 3 minutes
+end
+```
+
+**Example 2**: Worker-level - fast
+```ruby
+# app/workers/realtime_worker_on_fast_queue.rb
+
+class RealtimeWorkerOnFastQueue
+  include Cloudtasker::Worker
+
+  # Ensure lock is removed 30 seconds after schedule time
+  cloudtasker_options lock: :until_executing, lock_ttl: 30
+
+  def perform(arg1, arg2)
+    # ...
+  end
+end
+```
+
+**Example 3**: Worker-level - slow
+```ruby
+# app/workers/non_critical_worker_on_slow_queue.rb
+
+class NonCriticalWorkerOnSlowQueue
+  include Cloudtasker::Worker
+
+  # Ensure lock is removed 24 hours after schedule time
+  cloudtasker_options lock: :until_executing, lock_ttl: 3600 * 24
+
+  def perform(arg1, arg2)
+    # ...
+  end
+end
+```
+
 ## Configuring unique arguments
 
 By default Cloudtasker considers all job arguments to evaluate the uniqueness of a job. This behaviour is configurable per worker by defining a `unique_args` method on the worker itself returning the list of args defining uniqueness.
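Aside: the section above expires locks at `job schedule time + lock_ttl`, which in Redis terms means computing an `EX` value in seconds from now. The arithmetic can be sketched as (a hypothetical helper, not the gem's code):

```ruby
require 'time'

# Hypothetical helper: seconds until a lock should expire, given a job
# scheduled at some future time and a lock_ttl (default 10 minutes).
def lock_expiry_in(schedule_time, lock_ttl, now = Time.now)
  (schedule_time - now).to_i + lock_ttl
end

now = Time.now
puts lock_expiry_in(now, 10 * 60, now)       # job due now: just the TTL (600s)
puts lock_expiry_in(now + 300, 10 * 60, now) # due in 5 min: 5 min lag + 10 min TTL
```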
data/lib/cloudtasker/backend/google_cloud_task.rb
CHANGED
@@ -1,5 +1,8 @@
 # frozen_string_literal: true
 
+require 'google/cloud/tasks'
+require 'retriable'
+
 module Cloudtasker
   module Backend
     # Manage tasks pushed to GCP Cloud Task
@@ -113,9 +116,10 @@ module Cloudtasker
       # @return [Cloudtasker::Backend::GoogleCloudTask, nil] The retrieved task.
       #
       def self.find(id)
-        resp = client.get_task(id)
+        resp = with_gax_retries { client.get_task(id) }
         resp ? new(resp) : nil
-      rescue Google::Gax::RetryError
+      rescue Google::Gax::RetryError, Google::Gax::NotFoundError, GRPC::NotFound
+        # The ID does not exist
         nil
       end
@@ -133,10 +137,8 @@ module Cloudtasker
         relative_queue = payload.delete(:queue)
 
         # Create task
-        resp = client.create_task(queue_path(relative_queue), payload)
+        resp = with_gax_retries { client.create_task(queue_path(relative_queue), payload) }
         resp ? new(resp) : nil
-      rescue Google::Gax::RetryError
-        nil
       end
 
       #
@@ -145,11 +147,21 @@ module Cloudtasker
       # @param [String] id The id of the task.
      #
      def self.delete(id)
-        client.delete_task(id)
-      rescue Google::Gax::
+        with_gax_retries { client.delete_task(id) }
+      rescue Google::Gax::RetryError, Google::Gax::NotFoundError, GRPC::NotFound, Google::Gax::PermissionDeniedError
+        # The ID does not exist
        nil
      end
 
+      #
+      # Helper method encapsulating the retry strategy for GAX calls
+      #
+      def self.with_gax_retries
+        Retriable.retriable(on: [Google::Gax::UnavailableError], tries: 3) do
+          yield
+        end
+      end
+
      #
      # Build a new instance of the class.
      #
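Aside: `with_gax_retries` above delegates to the `retriable` gem (`Retriable.retriable(on:, tries:)`). Its behaviour - re-run the block on the listed error classes, up to a fixed number of attempts - can be approximated in plain Ruby for illustration (the error class here is a made-up stand-in):

```ruby
# Simplified stand-in for Retriable.retriable(on: [...], tries: 3):
# re-run the block when one of the listed errors is raised, giving up
# and re-raising after `tries` total attempts.
def with_retries(on:, tries:)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue *on
    retry if attempts < tries
    raise
  end
end

# Stand-in for Google::Gax::UnavailableError.
class FakeUnavailableError < StandardError; end

calls = 0
result = with_retries(on: [FakeUnavailableError], tries: 3) do
  calls += 1
  raise FakeUnavailableError if calls < 3 # fail twice, succeed on attempt 3
  :ok
end

puts calls # 3 attempts were made
p result
```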
data/lib/cloudtasker/backend/memory_task.rb
CHANGED
@@ -1,7 +1,5 @@
 # frozen_string_literal: true
 
-require 'cloudtasker/redis_client'
-
 module Cloudtasker
   module Backend
     # Manage local tasks pushed to memory.
@@ -10,6 +8,15 @@ module Cloudtasker
       attr_accessor :job_retries
       attr_reader :id, :http_request, :schedule_time, :queue
 
+      #
+      # Return true if we are in test inline execution mode.
+      #
+      # @return [Boolean] True if inline mode enabled.
+      #
+      def self.inline_mode?
+        defined?(Cloudtasker::Testing) && Cloudtasker::Testing.inline?
+      end
+
       #
       # Return the task queue. A worker class name
       #
@@ -59,7 +66,7 @@ module Cloudtasker
        queue << task
 
        # Execute task immediately if in testing and inline mode enabled
-        task.execute if
+        task.execute if inline_mode?
 
        task
      end
@@ -153,13 +160,15 @@ module Cloudtasker
      #
      def execute
        # Execute worker
-
+        worker_payload = payload.merge(job_retries: job_retries, task_id: id)
+        resp = WorkerHandler.with_worker_handling(worker_payload, &:execute)
 
        # Delete task
        self.class.delete(id)
        resp
-      rescue StandardError
+      rescue StandardError => e
        self.job_retries += 1
+        raise(e) if self.class.inline_mode?
      end
 
      #
data/lib/cloudtasker/backend/redis_task.rb
CHANGED
@@ -247,7 +247,8 @@ module Cloudtasker
       uri = URI(http_request[:url])
       req = Net::HTTP::Post.new(uri.path, http_request[:headers])
 
-      # Add
+      # Add task headers
+      req[Cloudtasker::Config::TASK_ID_HEADER] = id
       req[Cloudtasker::Config::RETRY_HEADER] = retries
 
       # Set job payload
data/lib/cloudtasker/config.rb
CHANGED
@@ -25,6 +25,9 @@ module Cloudtasker
   #
   RETRY_HEADER = 'X-CloudTasks-TaskRetryCount'
 
+  # Cloud Task ID header
+  TASK_ID_HEADER = 'X-CloudTasks-TaskName'
+
   # Content-Transfer-Encoding header in Cloud Task responses
   ENCODING_HEADER = 'Content-Transfer-Encoding'
 
data/lib/cloudtasker/cron/job.rb
CHANGED
@@ -4,15 +4,10 @@ require 'fugit'
 
 module Cloudtasker
   module Cron
-    # TODO: handle deletion of cron jobs
-    #
     # Manage cron jobs
     class Job
      attr_reader :worker
 
-      # Key Namespace used for object saved under this class
-      SUB_NAMESPACE = 'job'
-
      #
      # Build a new instance of the class
      #
@@ -1,6 +1,7 @@
|
|
1
1
|
# frozen_string_literal: true
|
2
2
|
|
3
3
|
require 'redis'
|
4
|
+
require 'connection_pool'
|
4
5
|
|
5
6
|
module Cloudtasker
|
6
7
|
# A wrapper with helper methods for redis
|
@@ -10,8 +11,18 @@ module Cloudtasker
|
|
10
11
|
LOCK_DURATION = 2 # seconds
|
11
12
|
LOCK_WAIT_DURATION = 0.03 # seconds
|
12
13
|
|
14
|
+
# Default pool size used for Redis
|
15
|
+
DEFAULT_POOL_SIZE = ENV.fetch('RAILS_MAX_THREADS') { 25 }
|
16
|
+
DEFAULT_POOL_TIMEOUT = 5
|
17
|
+
|
13
18
|
def self.client
|
14
|
-
@client ||=
|
19
|
+
@client ||= begin
|
20
|
+
pool_size = Cloudtasker.config.redis&.dig(:pool_size) || DEFAULT_POOL_SIZE
|
21
|
+
pool_timeout = Cloudtasker.config.redis&.dig(:pool_timeout) || DEFAULT_POOL_TIMEOUT
|
22
|
+
ConnectionPool.new(size: pool_size, timeout: pool_timeout) do
|
23
|
+
Redis.new(Cloudtasker.config.redis || {})
|
24
|
+
end
|
25
|
+
end
|
15
26
|
end
|
16
27
|
|
17
28
|
#
|
@@ -31,7 +42,7 @@ module Cloudtasker
|
|
31
42
|
# @return [Hash, Array] The content of the cache key, parsed as JSON.
|
32
43
|
#
|
33
44
|
def fetch(key)
|
34
|
-
return nil unless (val =
|
45
|
+
return nil unless (val = get(key.to_s))
|
35
46
|
|
36
47
|
JSON.parse(val, symbolize_names: true)
|
37
48
|
rescue JSON::ParserError
|
@@ -47,7 +58,7 @@ module Cloudtasker
|
|
47
58
|
# @return [String] Redis response code.
|
48
59
|
#
|
49
60
|
def write(key, content)
|
50
|
-
|
61
|
+
set(key.to_s, content.to_json)
|
51
62
|
end
|
52
63
|
|
53
64
|
#
|
@@ -70,12 +81,14 @@ module Cloudtasker
|
|
70
81
|
|
71
82
|
# Wait to acquire lock
|
72
83
|
lock_key = [LOCK_KEY_PREFIX, cache_key].join('/')
|
73
|
-
|
84
|
+
client.with do |conn|
|
85
|
+
sleep(LOCK_WAIT_DURATION) until conn.set(lock_key, true, nx: true, ex: LOCK_DURATION)
|
86
|
+
end
|
74
87
|
|
75
88
|
# yield content
|
76
89
|
yield
|
77
90
|
ensure
|
78
|
-
|
91
|
+
del(lock_key)
|
79
92
|
end
|
80
93
|
|
81
94
|
#
|
@@ -104,10 +117,12 @@ module Cloudtasker
|
|
104
117
|
list = []
|
105
118
|
|
106
119
|
# Scan and capture matching keys
|
107
|
-
|
108
|
-
|
109
|
-
|
110
|
-
|
120
|
+
client.with do |conn|
|
121
|
+
while cursor != 0
|
122
|
+
scan = conn.scan(cursor || 0, match: pattern)
|
123
|
+
list += scan[1]
|
124
|
+
cursor = scan[0].to_i
|
125
|
+
end
|
111
126
|
end
|
112
127
|
|
113
128
|
list
|
@@ -123,8 +138,8 @@ module Cloudtasker
     # @return [Any] The method return value
     #
     def method_missing(name, *args, &block)
-      if
-        client.send(name, *args, &block)
+      if Redis.method_defined?(name)
+        client.with { |c| c.send(name, *args, &block) }
       else
         super
       end
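`method_missing` now forwards only methods that `Redis` itself defines, and checks a connection out of the pool per call via `client.with`. The delegation pattern in isolation, with `Array` as a stand-in target and a trivial one-connection "pool":

```ruby
# Minimal delegator sketch: forward only methods the target class defines,
# routing each call through a with-style checkout (a real ConnectionPool
# would check a connection out and back in here).
class PooledDelegator
  def initialize(target)
    @target = target
  end

  def with
    yield(@target)
  end

  def method_missing(name, *args, &block)
    if @target.class.method_defined?(name)
      with { |c| c.send(name, *args, &block) }
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    @target.class.method_defined?(name) || super
  end
end
```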
@@ -139,7 +154,7 @@ module Cloudtasker
     # @return [Boolean] Return true if the class respond to this method.
     #
     def respond_to_missing?(name, include_private = false)
-
+      Redis.method_defined?(name) || super
     end
   end
 end
data/lib/cloudtasker/unique_job.rb CHANGED
@@ -3,3 +3,30 @@
 require_relative 'unique_job/middleware'

 Cloudtasker::UniqueJob::Middleware.configure
+
+module Cloudtasker
+  # UniqueJob configurator
+  module UniqueJob
+    # The maximum duration a lock can remain in place
+    # after schedule time.
+    DEFAULT_LOCK_TTL = 10 * 60 # 10 minutes
+
+    class << self
+      attr_writer :lock_ttl
+
+      # Configure the middleware
+      def configure
+        yield(self)
+      end
+
+      #
+      # Return the max TTL for locks
+      #
+      # @return [Integer] The lock TTL.
+      #
+      def lock_ttl
+        @lock_ttl || DEFAULT_LOCK_TTL
+      end
+    end
+  end
+end
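The configurator exposes a writable `lock_ttl` that falls back to `DEFAULT_LOCK_TTL`. A self-contained replica of the module showing the `configure` block usage (mirroring how `Cloudtasker::UniqueJob.configure` would be called in an initializer):

```ruby
# Standalone replica of the UniqueJob configurator from the diff.
module UniqueJob
  # Maximum duration a lock can remain in place after schedule time.
  DEFAULT_LOCK_TTL = 10 * 60 # 10 minutes

  class << self
    attr_writer :lock_ttl

    # Yield the module itself so callers can set options in a block.
    def configure
      yield(self)
    end

    # Configured TTL, falling back to the default.
    def lock_ttl
      @lock_ttl || DEFAULT_LOCK_TTL
    end
  end
end
```

Usage: `UniqueJob.configure { |config| config.lock_ttl = 3 * 60 }` overrides the global default; per-job overrides go through the worker's `lock_ttl` option instead.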
data/lib/cloudtasker/unique_job/job.rb CHANGED
@@ -5,21 +5,19 @@ module Cloudtasker
     # Wrapper class for Cloudtasker::Worker delegating to lock
     # and conflict strategies
     class Job
-      attr_reader :worker
+      attr_reader :worker, :call_opts

       # The default lock strategy to use. Defaults to "no lock".
       DEFAULT_LOCK = UniqueJob::Lock::NoOp

-      # Key Namespace used for object saved under this class
-      SUB_NAMESPACE = 'job'
-
       #
       # Build a new instance of the class.
       #
       # @param [Cloudtasker::Worker] worker The worker at hand
       #
-      def initialize(worker)
+      def initialize(worker, **kwargs)
        @worker = worker
+        @call_opts = kwargs
       end

       #
@@ -31,6 +29,43 @@ module Cloudtasker
         worker.class.cloudtasker_options_hash
       end

+      #
+      # Return the Time To Live (TTL) that should be set in Redis for
+      # the lock key. Having a TTL on lock keys ensures that jobs
+      # do not end up stuck due to a dead lock situation.
+      #
+      # The TTL is calculated using schedule time + expected
+      # max job duration.
+      #
+      # The expected max job duration is set to 10 minutes by default.
+      # This value was chosen because it's twice the default request timeout
+      # value in Cloud Run. This leaves enough room for queue lag (5 minutes)
+      # + job processing (5 minutes).
+      #
+      # Queue lag is certainly the most unpredictable factor here.
+      # Job processing time is less of a factor. Jobs running for more than 5 minutes
+      # should be split into sub-jobs to limit invocation time over HTTP. Cloudtasker batch
+      # jobs can help achieve that if you need to make one big job split into sub-jobs "atomic".
+      #
+      # The default lock key expiration of "time_at + 10 minutes" may look aggressive but it
+      # is still a better choice than potentially having real-time jobs stuck for X hours.
+      #
+      # The expected max job duration can be configured via the `lock_ttl`
+      # option on the job itself.
+      #
+      # @return [Integer] The TTL in seconds
+      #
+      def lock_ttl
+        now = Time.now.to_i
+
+        # Get scheduled at and lock duration
+        scheduled_at = [call_opts[:time_at].to_i, now].compact.max
+        lock_duration = (options[:lock_ttl] || Cloudtasker::UniqueJob.lock_ttl).to_i
+
+        # Return TTL
+        scheduled_at + lock_duration - now
+      end
+
       #
       # Return the instantiated lock.
       #
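The TTL arithmetic above reduces to `max(time_at, now) + lock_duration - now`: a job scheduled in the future keeps its lock until its scheduled time plus the expected max duration. A standalone version of the calculation (`now` is passed explicitly here for testability; the diff reads `Time.now`, `call_opts` and `options` instead):

```ruby
DEFAULT_LOCK_TTL = 10 * 60 # 10 minutes, as in the diff

# TTL covering the scheduled time plus the expected max job duration.
# time_at: optional Unix timestamp; job_lock_ttl: optional per-job override.
def lock_ttl(now:, time_at: nil, job_lock_ttl: nil)
  # nil.to_i is 0, so a missing time_at falls back to now via max
  scheduled_at = [time_at.to_i, now].max
  lock_duration = (job_lock_ttl || DEFAULT_LOCK_TTL).to_i

  scheduled_at + lock_duration - now
end
```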
@@ -121,7 +156,7 @@ module Cloudtasker
           raise(LockError, locked_id) if locked_id && locked_id != id

           # Take job lock if the lock is currently free
-          redis.set(unique_gid, id) unless locked_id
+          redis.set(unique_gid, id, ex: lock_ttl) unless locked_id
         end
       end
data/lib/cloudtasker/version.rb CHANGED
data/lib/cloudtasker/worker.rb CHANGED
@@ -8,7 +8,7 @@ module Cloudtasker
       base.extend(ClassMethods)
       base.attr_writer :job_queue
       base.attr_accessor :job_args, :job_id, :job_meta, :job_reenqueued, :job_retries,
-                         :perform_started_at, :perform_ended_at
+                         :perform_started_at, :perform_ended_at, :task_id
     end

     #
@@ -47,7 +47,7 @@ module Cloudtasker
       return nil unless worker_klass.include?(self)

       # Return instantiated worker
-      worker_klass.new(payload.slice(:job_queue, :job_args, :job_id, :job_meta, :job_retries))
+      worker_klass.new(payload.slice(:job_queue, :job_args, :job_id, :job_meta, :job_retries, :task_id))
     rescue NameError
       nil
     end
@@ -140,12 +140,13 @@ module Cloudtasker
     # @param [Array<any>] job_args The list of perform args.
     # @param [String] job_id A unique ID identifying this job.
     #
-    def initialize(job_queue: nil, job_args: nil, job_id: nil, job_meta: {}, job_retries: 0)
+    def initialize(job_queue: nil, job_args: nil, job_id: nil, job_meta: {}, job_retries: 0, task_id: nil)
       @job_args = job_args || []
       @job_id = job_id || SecureRandom.uuid
       @job_meta = MetaStore.new(job_meta)
       @job_retries = job_retries || 0
       @job_queue = job_queue
+      @task_id = task_id
     end

     #
@@ -197,18 +198,36 @@ module Cloudtasker
       raise(e)
     end

+    #
+    # Return a unix timestamp specifying when to run the task.
+    #
+    # @param [Integer, nil] interval The time to wait.
+    # @param [Integer, nil] time_at The time at which the job should run.
+    #
+    # @return [Integer, nil] The Unix timestamp.
+    #
+    def schedule_time(interval: nil, time_at: nil)
+      return nil unless interval || time_at
+
+      # Generate the complete Unix timestamp
+      (time_at || Time.now).to_i + interval.to_i
+    end
+
     #
     # Enqueue a worker, with or without delay.
     #
     # @param [Integer] interval The delay in seconds.
-    #
     # @param [Time, Integer] interval The time at which the job should run
     #
     # @return [Cloudtasker::CloudTask] The Google Task response
     #
-    def schedule(
-
-
+    def schedule(**args)
+      # Evaluate when to schedule the job
+      time_at = schedule_time(args)
+
+      # Schedule job through client middlewares
+      Cloudtasker.config.client_middleware.invoke(self, time_at: time_at) do
+        WorkerHandler.new(self).schedule(time_at: time_at)
       end
     end
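`schedule_time` (moved here from `WorkerHandler` in this release) collapses `interval` and `time_at` into a single optional Unix timestamp: `nil` means run immediately, otherwise `time_at` (or now) plus the interval. A standalone version (with `now` injectable for testing, unlike the diff which calls `Time.now` directly):

```ruby
# nil => run immediately; otherwise a Unix timestamp built from
# time_at (or now) plus an optional interval in seconds.
def schedule_time(interval: nil, time_at: nil, now: Time.now)
  return nil unless interval || time_at

  (time_at || now).to_i + interval.to_i
end
```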
@@ -250,7 +269,8 @@ module Cloudtasker
         job_args: job_args,
         job_meta: job_meta.to_h,
         job_retries: job_retries,
-        job_queue: job_queue
+        job_queue: job_queue,
+        task_id: task_id
       }
     end
data/lib/cloudtasker/worker_handler.rb CHANGED
@@ -56,11 +56,6 @@ module Cloudtasker
       with_worker_handling(input_payload, &:execute)
     end

-    # TODO: do not delete redis payload if job has been re-enqueued
-    # worker.job_reenqueued
-    #
-    # Idea: change with_worker_handling to with_worker_handling and build the worker
-    # inside the with_worker_handling block.
     #
     # Local middleware used to retrieve the job arg payload from cache
     # if a arg payload reference is present.
@@ -210,35 +205,17 @@ module Cloudtasker
       }.merge(worker_args_payload)
     end

-    #
-    # Return a protobuf timestamp specifying how to wait
-    # before running a task.
-    #
-    # @param [Integer, nil] interval The time to wait.
-    # @param [Integer, nil] time_at The time at which the job should run.
-    #
-    # @return [Integer, nil] The Unix timestamp.
-    #
-    def schedule_time(interval: nil, time_at: nil)
-      return nil unless interval || time_at
-
-      # Generate the complete Unix timestamp
-      (time_at || Time.now).to_i + interval.to_i
-    end
-
     #
     # Schedule the task on GCP Cloud Task.
     #
-    # @param [Integer, nil]
+    # @param [Integer, nil] time_at A unix timestamp specifying when to run the job.
     # Leave to `nil` to run now.
     #
     # @return [Cloudtasker::CloudTask] The Google Task response
     #
-    def schedule(
+    def schedule(time_at: nil)
       # Generate task payload
-      task = task_payload.merge(
-        schedule_time: schedule_time(interval: interval, time_at: time_at)
-      ).compact
+      task = task_payload.merge(schedule_time: time_at).compact

       # Create and return remote task
       CloudTask.create(task)
data/lib/cloudtasker/worker_logger.rb CHANGED
@@ -11,7 +11,7 @@ module Cloudtasker
     end

     # Only log the job meta information by default (exclude arguments)
-    DEFAULT_CONTEXT_PROCESSOR = ->(worker) { worker.to_h.slice(:worker, :job_id, :job_meta, :job_queue) }
+    DEFAULT_CONTEXT_PROCESSOR = ->(worker) { worker.to_h.slice(:worker, :job_id, :job_meta, :job_queue, :task_id) }

     #
     # Build a new instance of the class.
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: cloudtasker
 version: !ruby/object:Gem::Version
-  version: 0.10.
+  version: 0.10.0
 platform: ruby
 authors:
 - Arnaud Lachaume
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2020-
+date: 2020-09-02 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activesupport
@@ -25,7 +25,7 @@ dependencies:
   - !ruby/object:Gem::Version
     version: '0'
 - !ruby/object:Gem::Dependency
-  name:
+  name: connection_pool
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
@@ -39,7 +39,7 @@ dependencies:
   - !ruby/object:Gem::Version
     version: '0'
 - !ruby/object:Gem::Dependency
-  name:
+  name: fugit
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
@@ -52,6 +52,20 @@ dependencies:
     - - ">="
     - !ruby/object:Gem::Version
       version: '0'
+- !ruby/object:Gem::Dependency
+  name: google-cloud-tasks
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+    - !ruby/object:Gem::Version
+      version: '1.0'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+    - !ruby/object:Gem::Version
+      version: '1.0'
 - !ruby/object:Gem::Dependency
   name: jwt
   requirement: !ruby/object:Gem::Requirement
|
|
80
94
|
- - ">="
|
81
95
|
- !ruby/object:Gem::Version
|
82
96
|
version: '0'
|
97
|
+
- !ruby/object:Gem::Dependency
|
98
|
+
name: retriable
|
99
|
+
requirement: !ruby/object:Gem::Requirement
|
100
|
+
requirements:
|
101
|
+
- - ">="
|
102
|
+
- !ruby/object:Gem::Version
|
103
|
+
version: '0'
|
104
|
+
type: :runtime
|
105
|
+
prerelease: false
|
106
|
+
version_requirements: !ruby/object:Gem::Requirement
|
107
|
+
requirements:
|
108
|
+
- - ">="
|
109
|
+
- !ruby/object:Gem::Version
|
110
|
+
version: '0'
|
83
111
|
- !ruby/object:Gem::Dependency
|
84
112
|
name: appraisal
|
85
113
|
requirement: !ruby/object:Gem::Requirement
|
@@ -364,9 +392,9 @@ required_ruby_version: !ruby/object:Gem::Requirement
     version: '0'
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
-  - - "
+  - - ">="
   - !ruby/object:Gem::Version
-    version:
+    version: '0'
 requirements: []
 rubygems_version: 3.0.0
 signing_key: