lowkiq 1.0.6 → 1.2.1
- checksums.yaml +4 -4
- data/CHANGELOG.md +8 -0
- data/Gemfile.lock +12 -2
- data/README.md +127 -63
- data/README.ru.md +2 -2
- data/assets/app.js +1 -1
- data/lib/lowkiq/queue/fetch.rb +2 -2
- data/lib/lowkiq/queue/queue.rb +3 -3
- data/lib/lowkiq/queue/queue_metrics.rb +3 -5
- data/lib/lowkiq/queue/shard_metrics.rb +3 -4
- data/lib/lowkiq/schedulers/lag.rb +1 -1
- data/lib/lowkiq/shard_handler.rb +2 -1
- data/lib/lowkiq/utils.rb +1 -1
- data/lib/lowkiq/version.rb +1 -1
- data/lib/lowkiq/web/api.rb +1 -1
- data/lib/lowkiq/worker.rb +4 -2
- data/lib/lowkiq.rb +7 -6
- data/lowkiq.gemspec +1 -0
- metadata +21 -7
- data/lib/lowkiq/extend_tracker.rb +0 -13
checksums.yaml CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 17cdb6368062bd5a178c3152fcacb7d8d0e390206c220f76b6940ddd79f0ac31
+  data.tar.gz: 46b6ac23e95ce8054799b4bae22c07b0375895274f69bdd494a9da882fc9e450
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: a1d9f295b8b58a8412bba69dd4f162bbed17ce3a64d3543992af2397213db878a4f62953dfa1b9b888d2ffca41f7f56745015da6d78f791c553205d91f004377
+  data.tar.gz: c834e85f721561e632cea9923000128b02ccc6d7a3a9e4c29e99301577a1ff45950760df4bf621c68c92550ddc59763d86ae109851b7d298e499d3ee736cd961
data/CHANGELOG.md ADDED
data/Gemfile.lock CHANGED

@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    lowkiq (1.0
+    lowkiq (1.1.0)
     connection_pool (~> 2.2, >= 2.2.2)
     rack (>= 1.5.0)
     redis (>= 4.0.1, < 5)
@@ -9,13 +9,22 @@ PATH
 GEM
   remote: https://rubygems.org/
   specs:
+    byebug (11.1.3)
+    coderay (1.1.3)
     connection_pool (2.2.3)
     diff-lcs (1.3)
+    method_source (1.0.0)
+    pry (0.13.1)
+      coderay (~> 1.1)
+      method_source (~> 1.0)
+    pry-byebug (3.9.0)
+      byebug (~> 11.0)
+      pry (~> 0.13.0)
     rack (2.2.2)
     rack-test (1.1.0)
       rack (>= 1.0, < 3)
     rake (12.3.3)
-    redis (4.2.
+    redis (4.2.5)
     rspec (3.9.0)
       rspec-core (~> 3.9.0)
       rspec-expectations (~> 3.9.0)
@@ -36,6 +45,7 @@ PLATFORMS
 DEPENDENCIES
   bundler (~> 2.1.0)
   lowkiq!
+  pry-byebug (~> 3.9.0)
   rack-test (~> 1.1)
   rake (~> 12.3.0)
   rspec (~> 3.0)
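For context, the lock changes above (lowkiq bumped to 1.1.0, redis resolved to 4.2.5, a new pry-byebug development dependency) correspond roughly to a consumer `Gemfile` like the following sketch. The constraints are assumptions read off the resolved lock, not requirements stated by the gem itself.

```
# Gemfile (illustrative sketch, matching the updated lock above)
source 'https://rubygems.org'

gem 'lowkiq', '~> 1.1'
gem 'redis', '>= 4.0.1', '< 5'

group :development do
  gem 'pry-byebug', '~> 3.9.0'
end
```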
data/README.md CHANGED

@@ -28,13 +28,15 @@ Ordered background jobs processing
 * [Recommendations on configuration](#recommendations-on-configuration)
   + [`SomeWorker.shards_count`](#someworkershards_count)
   + [`SomeWorker.max_retry_count`](#someworkermax_retry_count)
+* [Changing of worker's shards amount](#changing-of-workers-shards-amount)
+* [Extended error info](#extended-error-info)
 
 ## Rationale
 
 We've faced some problems using Sidekiq while processing messages from a side system.
-For instance, the message is
-The side system will send
-Orders are frequently updated and a queue
+For instance, the message is the data of an order at a particular time.
+The side system will send new data of an order on every change.
+Orders are frequently updated and a queue contains some closely located messages of the same order.
 
 Sidekiq doesn't guarantee a strict message order, because a queue is processed by multiple threads.
 For example, we've received 2 messages: M1 and M2.
@@ -43,8 +45,8 @@ so M2 can be processed before M1.
 
 Parallel processing of such kind of messages can result in:
 
-+
-+ overwriting new data with old one
++ deadlocks
++ overwriting new data with an old one
 
 Lowkiq has been created to eliminate such problems by avoiding parallel task processing within one entity.
 
@@ -52,17 +54,17 @@ Lowkiq has been created to eliminate such problems by avoiding parallel task pro
 
 Lowkiq's queues are reliable i.e.,
 Lowkiq saves information about a job being processed
-and returns
+and returns uncompleted jobs to the queue on startup.
 
 Jobs in queues are ordered by preassigned execution time, so they are not FIFO queues.
 
-Every job has
+Every job has its identifier. Lowkiq guarantees that jobs with equal IDs are processed by the same thread.
 
 Every queue is divided into a permanent set of shards.
-A job is placed into particular shard based on an id of the job.
+A job is placed into a particular shard based on an id of the job.
 So jobs with the same id are always placed into the same shard.
 All jobs of the shard are always processed with the same thread.
-This guarantees the
+This guarantees the sequential processing of jobs with the same ids and excludes the possibility of locks.
 
 Besides the id, every job has a payload.
 Payloads are accumulated for jobs with the same id.
@@ -71,8 +73,8 @@ It's useful when you need to process only the last message and drop all previous
 
 A worker corresponds to a queue and contains a job processing logic.
 
-
-Adding or removing queues or
+The fixed number of threads is used to process all jobs of all queues.
+Adding or removing queues or their shards won't affect the number of threads.
 
 ## Sidekiq comparison
 
@@ -83,16 +85,16 @@ But if you use plugins like
 [sidekiq-merger](https://github.com/dtaniwaki/sidekiq-merger)
 or implement your own lock system, you should look at Lowkiq.
 
-For example, sidekiq-grouping accumulates a batch of jobs
-With this approach queue can
+For example, sidekiq-grouping accumulates a batch of jobs then enqueues it and accumulates the next batch.
+With this approach, a queue can contain two batches with data of the same order.
 These batches are parallel processed with different threads, so we come back to the initial problem.
 
-Lowkiq was designed to avoid any
+Lowkiq was designed to avoid any type of locking.
 
 Furthermore, Lowkiq's queues are reliable. Only Sidekiq Pro or plugins can add such functionality.
 
-This [benchmark](examples/benchmark) shows overhead on
-
+This [benchmark](examples/benchmark) shows overhead on Redis usage.
+These are the results for 5 threads, 100,000 blank jobs:
 
 + lowkiq: 155 sec or 1.55 ms per job
 + lowkiq +hiredis: 80 sec or 0.80 ms per job
@@ -100,29 +102,29 @@ This is the results for 5 threads, 100,000 blank jobs:
 
 This difference is related to different queues structure.
 Sidekiq uses one list for all workers and fetches the job entirely for O(1).
-Lowkiq uses several data structures, including sorted sets for
+Lowkiq uses several data structures, including sorted sets for keeping ids of jobs.
 So fetching only an id of a job takes O(log(N)).
 
 ## Queue
 
 Please, look at [the presentation](https://docs.google.com/presentation/d/e/2PACX-1vRdwA2Ck22r26KV1DbY__XcYpj2FdlnR-2G05w1YULErnJLB_JL1itYbBC6_JbLSPOHwJ0nwvnIHH2A/pub?start=false&loop=false&delayms=3000).
 
-Every job has following attributes:
+Every job has the following attributes:
 
 + `id` is a job identifier with string type.
-+ `payloads` is a sorted set of payloads ordered by
-+ `perform_in` is planned execution time. It's
++ `payloads` is a sorted set of payloads ordered by its score. A payload is an object. A score is a real number.
++ `perform_in` is planned execution time. It's a Unix timestamp with a real number type.
 + `retry_count` is amount of retries. It's a real number.
 
-For example, `id` can be an identifier of replicated entity.
-`payloads` is a sorted set ordered by score of payload and resulted by grouping a payload of job by
-`payload` can be a ruby object
-`score` can be `payload`'s creation date (
-By default `score` and `perform_in` are current
+For example, `id` can be an identifier of a replicated entity.
+`payloads` is a sorted set ordered by a score of payload and resulted by grouping a payload of the job by its `id`.
+`payload` can be a ruby object because it is serialized by `Marshal.dump`.
+`score` can be `payload`'s creation date (Unix timestamp) or it's an incremental version number.
+By default, `score` and `perform_in` are current Unix timestamp.
 `retry_count` for new unprocessed job equals to `-1`,
 for one-time failed is `0`, so the planned retries are counted, not the performed ones.
 
-
+Job execution can be unsuccessful. In this case, its `retry_count` is incremented, the new `perform_in` is calculated with determined formula, and it moves back to a queue.
 
 In case of `retry_count` is getting `>=` `max_retry_count` an element of `payloads` with less (oldest) score is moved to a morgue,
 rest elements are moved back to the queue, wherein `retry_count` and `perform_in` are reset to `-1` and `now()` respectively.
@@ -146,14 +148,14 @@ If `max_retry_count = 1`, retries stop.
 
 They are applied when:
 
-+ a job
-+ a job
-+ a job from morgue
++ a job has been in a queue and a new one with the same id is added
++ a job is failed, but a new one with the same id has been added
++ a job from a morgue is moved back to a queue, but the queue has had a job with the same id
 
 Algorithm:
 
-+ payloads
-+ if a new job and queued job is merged, `perform_in` and `retry_count` is taken from the
++ payloads are merged, the minimal score is chosen for equal payloads
++ if a new job and queued job is merged, `perform_in` and `retry_count` is taken from the job from the queue
 + if a failed job and queued job is merged, `perform_in` and `retry_count` is taken from the failed one
 + if morgue job and queued job is merged, `perform_in = now()`, `retry_count = -1`
 
@@ -172,8 +174,8 @@ Example:
 { id: "1", payloads: #{"v1": 1, "v2": 3, "v3": 4}, retry_count: 0, perform_in: 1536323288 }
 ```
 
-
-A job in morgue has following attributes:
+A morgue is a part of a queue. Jobs in a morgue are not processed.
+A job in a morgue has the following attributes:
 
 + id is the job identifier
 + payloads
@@ -204,6 +206,12 @@ module ATestWorker
     10 * (count + 1) # (i.e. 10, 20, 30, 40, 50)
   end
 
+  def self.retries_exhausted(batch)
+    batch.each do |job|
+      Rails.logger.info "retries exhausted for #{name} with error #{job[:error]}"
+    end
+  end
+
   def self.perform(payloads_by_id)
     # payloads_by_id is a hash map
     payloads_by_id.each do |id, payloads|
@@ -216,6 +224,12 @@ module ATestWorker
 end
 ```
 
+And then you have to add it to Lowkiq in your initializer file due to problems with autoloading:
+
+```ruby
+Lowkiq.workers = [ ATestWorker ]
+```
+
 Default values:
 
 ```ruby
@@ -234,10 +248,10 @@ end
 ATestWorker.perform_async [
   { id: 0 },
   { id: 1, payload: { attr: 'v1' } },
-  { id: 2, payload: { attr: 'v1' }, score: Time.now.
+  { id: 2, payload: { attr: 'v1' }, score: Time.now.to_f, perform_in: Time.now.to_f },
 ]
 # payload by default equals to ""
-# score and perform_in by default equals to Time.now.
+# score and perform_in by default equals to Time.now.to_f
 ```
 
 It is possible to redefine `perform_async` and calculate `id`, `score` и `perform_in` in a worker code:
@@ -272,8 +286,9 @@ ATestWorker.perform_async 1000.times.map { |id| { payload: {id: id} } }
 
 Options and their default values are:
 
++ `Lowkiq.workers = []` - list of workers to use. Since 1.1.0.
 + `Lowkiq.poll_interval = 1` - delay in seconds between queue polling for new jobs.
-  Used only if
+  Used only if a queue was empty in a previous cycle or an error occurred.
 + `Lowkiq.threads_per_node = 5` - threads per node.
 + `Lowkiq.redis = ->() { Redis.new url: ENV.fetch('REDIS_URL') }` - redis connection options
 + `Lowkiq.client_pool_size = 5` - redis pool size for queueing jobs
@@ -286,6 +301,10 @@ Options and their default values are:
 + `Lowkiq.dump_payload = Marshal.method :dump`
 + `Lowkiq.load_payload = Marshal.method :load`
 
++ `Lowkiq.format_error = -> (error) { error.message }` can be used to add error backtrace. Please see [Extended error info](#extended-error-info)
++ `Lowkiq.dump_error = -> (msg) { msg }` can be used to implement a custom compression logic for errors. Recommended when using `Lowkiq.format_error`.
++ `Lowkiq.load_error = -> (msg) { msg }` can be used to implement a custom decompression logic for errors.
+
 ```ruby
 $logger = Logger.new(STDOUT)
 
@@ -327,7 +346,7 @@ Lowkiq.redis = ->() { Redis.new url: ENV.fetch('REDIS_URL'), driver: :hiredis }
 
 `path_to_app.rb` must load app. [Example](examples/dummy/lib/app.rb).
 
-
+The lazy loading of worker modules is unacceptable.
 For preliminarily loading modules use
 `require`
 or [`require_dependency`](https://api.rubyonrails.org/classes/ActiveSupport/Dependencies/Loadable.html#method-i-require_dependency)
@@ -335,15 +354,15 @@ for Ruby on Rails.
 
 ## Shutdown
 
-Send TERM or INT signal to process (Ctrl-C).
-
+Send TERM or INT signal to the process (Ctrl-C).
+The process will wait for executed jobs to finish.
 
-Note that if queue is empty, process sleeps `poll_interval` seconds,
+Note that if a queue is empty, the process sleeps `poll_interval` seconds,
 therefore, the process will not stop until the `poll_interval` seconds have passed.
 
 ## Debug
 
-To get trace of all threads of app:
+To get trace of all threads of an app:
 
 ```
 kill -TTIN <pid>
@@ -357,11 +376,22 @@ docker-compose run --rm --service-port app bash
 bundle
 rspec
 cd examples/dummy ; bundle exec ../../exe/lowkiq -r ./lib/app.rb
+
+# open localhost:8080
+```
+
+```
+docker-compose run --rm --service-port frontend bash
+npm run dumb
+# open localhost:8081
+
+# npm run build
+# npm run web-api
 ```
 
 ## Exceptions
 
-`StandardError` thrown by worker are handled with middleware. Such exceptions
+`StandardError` thrown by a worker are handled with middleware. Such exceptions don't lead to process stops.
 
 All other exceptions cause the process to stop.
 Lowkiq will wait for job execution by other threads.
@@ -384,15 +414,18 @@ end
 ```ruby
 # config/initializers/lowkiq.rb
 
-# loading all lowkiq workers
-Dir["#{Rails.root}/app/lowkiq_workers/**/*.rb"].each { |file| require_dependency file }
-
 # configuration:
 # Lowkiq.redis = -> { Redis.new url: ENV.fetch('LOWKIQ_REDIS_URL') }
 # Lowkiq.threads_per_node = ENV.fetch('LOWKIQ_THREADS_PER_NODE').to_i
 # Lowkiq.client_pool_size = ENV.fetch('LOWKIQ_CLIENT_POOL_SIZE').to_i
 # ...
 
+# since 1.1.0
+Lowkiq.workers = [
+  ATestWorker,
+  OtherCoolWorker
+]
+
 Lowkiq.server_middlewares << -> (worker, batch, &block) do
   logger = Rails.logger
   tag = "#{worker}-#{Thread.current.object_id}"
@@ -465,6 +498,17 @@ Lowkiq.on_server_init = ->() do
 end
 ```
 
+Note: In Rails 7, the worker files wouldn't be loaded by default in the initializers since they are managed by the `main` autoloader. To solve this, we can wrap setting the workers around the `to_prepare` configuration.
+
+```ruby
+Rails.application.config.to_prepare do
+  Lowkiq.workers = [
+    ATestWorker,
+    OtherCoolWorker
+  ]
+end
+```
+
 Execution: `bundle exec lowkiq -r ./config/environment.rb`
 
 
@@ -480,10 +524,10 @@ worker C: 0
 worker D: 0, 1
 ```
 
-Lowkiq uses fixed
+Lowkiq uses a fixed number of threads for job processing, therefore it is necessary to distribute shards between threads.
 Splitter does it.
 
-To define a set of shards, which is being processed by thread,
+To define a set of shards, which is being processed by a thread, let's move them to one list:
 
 ```
 A0, A1, A2, B0, B1, B2, B3, C0, D0, D1
@@ -500,7 +544,7 @@ t1: A1, B1, C0
 t2: A2, B2, D0
 ```
 
-Besides Default Lowkiq has ByNode splitter. It allows
+Besides Default Lowkiq has the ByNode splitter. It allows dividing the load by several processes (nodes).
 
 ```
 Lowkiq.build_splitter = -> () do
@@ -511,7 +555,7 @@ Lowkiq.build_splitter = -> () do
 end
 ```
 
-So, instead of single process you need to execute multiple ones and to set environment variables up:
+So, instead of a single process, you need to execute multiple ones and to set environment variables up:
 
 ```
 # process 0
@@ -523,18 +567,18 @@ LOWKIQ_NUMBER_OF_NODES=2 LOWKIQ_NODE_NUMBER=1 bundle exec lowkiq -r ./lib/app.rb
 
 Summary amount of threads are equal product of `ENV.fetch('LOWKIQ_NUMBER_OF_NODES')` and `Lowkiq.threads_per_node`.
 
-You can also write your own splitter if your app needs extra distribution of shards between threads or nodes.
+You can also write your own splitter if your app needs an extra distribution of shards between threads or nodes.
 
 ## Scheduler
 
-Every thread processes a set of shards.
-Every thread has
+Every thread processes a set of shards. The scheduler selects shard for processing.
+Every thread has its own instance of the scheduler.
 
 Lowkiq has 2 schedulers for your choice.
-`Seq`
+`Seq` sequentially looks over shards.
 `Lag` chooses shard with the oldest job minimizing the lag. It's used by default.
 
-
+The scheduler can be set up through settings:
 
 ```
 Lowkiq.build_scheduler = ->() { Lowkiq.build_seq_scheduler }
@@ -547,13 +591,13 @@ Lowkiq.build_scheduler = ->() { Lowkiq.build_lag_scheduler }
 ### `SomeWorker.shards_count`
 
 Sum of `shards_count` of all workers shouldn't be less than `Lowkiq.threads_per_node`
-otherwise threads will stay idle.
+otherwise, threads will stay idle.
 
 Sum of `shards_count` of all workers can be equal to `Lowkiq.threads_per_node`.
-In this case thread processes a single shard. This makes sense only with uniform queue load.
+In this case, a thread processes a single shard. This makes sense only with a uniform queue load.
 
 Sum of `shards_count` of all workers can be more than `Lowkiq.threads_per_node`.
-In this case `shards_count` can be counted as a priority.
+In this case, `shards_count` can be counted as a priority.
 The larger it is, the more often the tasks of this queue will be processed.
 
 There is no reason to set `shards_count` of one worker more than `Lowkiq.threads_per_node`,
@@ -561,8 +605,8 @@ because every thread will handle more than one shard from this queue, so it incr
 
 ### `SomeWorker.max_retry_count`
 
-From `retry_in` and `max_retry_count`, you can calculate approximate time that payload of job will be in a queue.
-After `max_retry_count` is reached
+From `retry_in` and `max_retry_count`, you can calculate the approximate time that a payload of a job will be in a queue.
+After `max_retry_count` is reached a payload with a minimal score will be moved to a morgue.
 
 For default `retry_in` we receive the following table.
 
@@ -590,9 +634,9 @@ end
 
 ## Changing of worker's shards amount
 
-Try to count
+Try to count the number of shards right away and don't change it in the future.
 
-If you can disable adding of new jobs, wait for queues to get empty and deploy the new version of code with changed amount of shards.
+If you can disable adding of new jobs, wait for queues to get empty, and deploy the new version of code with a changed amount of shards.
 
 If you can't do it, follow the next steps:
 
@@ -610,7 +654,7 @@ module ATestWorker
 end
 ```
 
-Set the number of shards and new queue name:
+Set the number of shards and the new queue name:
 
 ```ruby
 module ATestWorker
@@ -625,7 +669,7 @@ module ATestWorker
 end
 ```
 
-Add a worker moving jobs from the old queue to
+Add a worker moving jobs from the old queue to the new one:
 
 ```ruby
 module ATestMigrationWorker
@@ -645,3 +689,23 @@ module ATestMigrationWorker
   end
 end
 ```
+
+## Extended error info
+For failed jobs, lowkiq only stores `error.message` by default. This can be configured by using `Lowkiq.format_error` setting.
+`Lowkiq.dump` and `Lowkiq.load_error` can be used to compress and decompress the error messages respectively.
+Example:
+```ruby
+Lowkiq.format_error = -> (error) { error.full_message(highlight: false) }
+
+Lowkiq.dump_error = Proc.new do |msg|
+  compressed = Zlib::Deflate.deflate(msg.to_s)
+  Base64.encode64(compressed)
+end
+
+Lowkiq.load_error = Proc.new do |input|
+  decoded = Base64.decode64(input)
+  Zlib::Inflate.inflate(decoded)
+rescue
+  input
+end
+```
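The merge rules the README diff documents ("payloads are merged, the minimal score is chosen for equal payloads; for a new job, `perform_in` and `retry_count` are taken from the job already in the queue") can be sketched in plain Ruby. This is an illustration of the documented behavior, not Lowkiq's internal code, and `merge_jobs` is a hypothetical helper name:

```ruby
# Illustrative sketch (not Lowkiq's implementation) of merging a newly
# added job into a job with the same id that is already queued.
def merge_jobs(queued, fresh)
  payloads = queued[:payloads].merge(fresh[:payloads]) do |_payload, old_score, new_score|
    # the minimal score is chosen for equal payloads
    [old_score, new_score].min
  end
  {
    id: queued[:id],
    payloads: payloads,
    # for a new job merged with a queued one, perform_in and
    # retry_count are taken from the job already in the queue
    perform_in: queued[:perform_in],
    retry_count: queued[:retry_count],
  }
end

queued = { id: "1", payloads: { "v1" => 1, "v2" => 3 }, retry_count: 0, perform_in: 1536323288 }
fresh  = { id: "1", payloads: { "v2" => 5, "v3" => 4 }, retry_count: -1, perform_in: 1536323290 }

merged = merge_jobs(queued, fresh)
# merged[:payloads] => { "v1" => 1, "v2" => 3, "v3" => 4 },
# mirroring the example job shown in the README diff
```

The `Hash#merge` block fires only for payloads present in both jobs, which is exactly the "equal payloads" case in the documented algorithm.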
data/README.ru.md CHANGED

@@ -247,10 +247,10 @@ end
 ATestWorker.perform_async [
   { id: 0 },
   { id: 1, payload: { attr: 'v1' } },
-  { id: 2, payload: { attr: 'v1' }, score: Time.now.
+  { id: 2, payload: { attr: 'v1' }, score: Time.now.to_f, perform_in: Time.now.to_f },
 ]
 # payload по умолчанию равен ""
-# score и perform_in по умолчанию равны Time.now.
+# score и perform_in по умолчанию равны Time.now.to_f
 ```
 
 Вы можете переопределить `perform_async` и вычислять `id`, `score` и `perform_in` в воркере: