lowkiq 1.0.6 → 1.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +8 -0
- data/Gemfile.lock +12 -2
- data/README.md +84 -63
- data/README.ru.md +2 -2
- data/lib/lowkiq.rb +3 -6
- data/lib/lowkiq/queue/queue_metrics.rb +3 -5
- data/lib/lowkiq/queue/shard_metrics.rb +3 -4
- data/lib/lowkiq/schedulers/lag.rb +1 -1
- data/lib/lowkiq/utils.rb +1 -1
- data/lib/lowkiq/version.rb +1 -1
- data/lib/lowkiq/web/api.rb +1 -1
- data/lib/lowkiq/worker.rb +0 -2
- data/lowkiq.gemspec +1 -0
- metadata +17 -3
- data/lib/lowkiq/extend_tracker.rb +0 -13
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 1737d4cbb8058600c3c3b162c6d48b8749d5f1750524538f73300fbcd0090269
+  data.tar.gz: 6014eb81fa0550a93efa7e04d2785bc4616f8f004563c906fd9f3ee930ba3183
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 16bb89de9f1f25a745602e70ae9e172fb85cbbad9b9ed50061fd6c3c74dd0122c204c7b6c973bbb02e2c891bc45449882da479a3b59b5108b65073b9d000e8a2
+  data.tar.gz: 0d9c5316a08272e539c34a1cda2e688348b93f3c255ec9c2faef8f3db7363e1e8e87402a4cda880cce95249e2120686f86d05bdba5bb117c3cefb079885bc52f
data/CHANGELOG.md
ADDED
data/Gemfile.lock
CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    lowkiq (1.0
+    lowkiq (1.1.0)
       connection_pool (~> 2.2, >= 2.2.2)
       rack (>= 1.5.0)
       redis (>= 4.0.1, < 5)
@@ -9,13 +9,22 @@ PATH
 GEM
   remote: https://rubygems.org/
   specs:
+    byebug (11.1.3)
+    coderay (1.1.3)
     connection_pool (2.2.3)
     diff-lcs (1.3)
+    method_source (1.0.0)
+    pry (0.13.1)
+      coderay (~> 1.1)
+      method_source (~> 1.0)
+    pry-byebug (3.9.0)
+      byebug (~> 11.0)
+      pry (~> 0.13.0)
     rack (2.2.2)
     rack-test (1.1.0)
       rack (>= 1.0, < 3)
     rake (12.3.3)
-    redis (4.2.
+    redis (4.2.5)
     rspec (3.9.0)
       rspec-core (~> 3.9.0)
       rspec-expectations (~> 3.9.0)
@@ -36,6 +45,7 @@ PLATFORMS
 DEPENDENCIES
   bundler (~> 2.1.0)
   lowkiq!
+  pry-byebug (~> 3.9.0)
   rack-test (~> 1.1)
   rake (~> 12.3.0)
   rspec (~> 3.0)
data/README.md
CHANGED
@@ -32,9 +32,9 @@ Ordered background jobs processing
 ## Rationale
 
 We've faced some problems using Sidekiq while processing messages from a side system.
-For instance, the message is
-The side system will send
-Orders are frequently updated and a queue
+For instance, the message is the data of an order at a particular time.
+The side system will send new data of an order on every change.
+Orders are frequently updated and a queue contains some closely located messages of the same order.
 
 Sidekiq doesn't guarantee a strict message order, because a queue is processed by multiple threads.
 For example, we've received 2 messages: M1 and M2.
@@ -43,8 +43,8 @@ so M2 can be processed before M1.
 
 Parallel processing of such kind of messages can result in:
 
-+
-+ overwriting new data with old one
++ deadlocks
++ overwriting new data with an old one
 
 Lowkiq has been created to eliminate such problems by avoiding parallel task processing within one entity.
 
@@ -52,17 +52,17 @@ Lowkiq has been created to eliminate such problems by avoiding parallel task pro
 
 Lowkiq's queues are reliable i.e.,
 Lowkiq saves information about a job being processed
-and returns
+and returns uncompleted jobs to the queue on startup.
 
 Jobs in queues are ordered by preassigned execution time, so they are not FIFO queues.
 
-Every job has
+Every job has its identifier. Lowkiq guarantees that jobs with equal IDs are processed by the same thread.
 
 Every queue is divided into a permanent set of shards.
-A job is placed into particular shard based on an id of the job.
+A job is placed into a particular shard based on an id of the job.
 So jobs with the same id are always placed into the same shard.
 All jobs of the shard are always processed with the same thread.
-This guarantees the
+This guarantees the sequential processing of jobs with the same ids and excludes the possibility of locks.
 
 Besides the id, every job has a payload.
 Payloads are accumulated for jobs with the same id.
@@ -71,8 +71,8 @@ It's useful when you need to process only the last message and drop all previous
 
 A worker corresponds to a queue and contains a job processing logic.
 
-
-Adding or removing queues or
+The fixed number of threads is used to process all jobs of all queues.
+Adding or removing queues or their shards won't affect the number of threads.
 
 ## Sidekiq comparison
 
@@ -83,16 +83,16 @@ But if you use plugins like
 [sidekiq-merger](https://github.com/dtaniwaki/sidekiq-merger)
 or implement your own lock system, you should look at Lowkiq.
 
-For example, sidekiq-grouping accumulates a batch of jobs
-With this approach queue can
+For example, sidekiq-grouping accumulates a batch of jobs then enqueues it and accumulates the next batch.
+With this approach, a queue can contain two batches with data of the same order.
 These batches are parallel processed with different threads, so we come back to the initial problem.
 
-Lowkiq was designed to avoid any
+Lowkiq was designed to avoid any type of locking.
 
 Furthermore, Lowkiq's queues are reliable. Only Sidekiq Pro or plugins can add such functionality.
 
-This [benchmark](examples/benchmark) shows overhead on
-
+This [benchmark](examples/benchmark) shows overhead on Redis usage.
+These are the results for 5 threads, 100,000 blank jobs:
 
 + lowkiq: 155 sec or 1.55 ms per job
 + lowkiq +hiredis: 80 sec or 0.80 ms per job
@@ -100,29 +100,29 @@ This is the results for 5 threads, 100,000 blank jobs:
 
 This difference is related to different queues structure.
 Sidekiq uses one list for all workers and fetches the job entirely for O(1).
-Lowkiq uses several data structures, including sorted sets for
+Lowkiq uses several data structures, including sorted sets for keeping ids of jobs.
 So fetching only an id of a job takes O(log(N)).
 
 ## Queue
 
 Please, look at [the presentation](https://docs.google.com/presentation/d/e/2PACX-1vRdwA2Ck22r26KV1DbY__XcYpj2FdlnR-2G05w1YULErnJLB_JL1itYbBC6_JbLSPOHwJ0nwvnIHH2A/pub?start=false&loop=false&delayms=3000).
 
-Every job has following attributes:
+Every job has the following attributes:
 
 + `id` is a job identifier with string type.
-+ `payloads` is a sorted set of payloads ordered by
-+ `perform_in` is planned execution time. It's
++ `payloads` is a sorted set of payloads ordered by its score. A payload is an object. A score is a real number.
++ `perform_in` is planned execution time. It's a Unix timestamp with a real number type.
 + `retry_count` is amount of retries. It's a real number.
 
-For example, `id` can be an identifier of replicated entity.
-`payloads` is a sorted set ordered by score of payload and resulted by grouping a payload of job by
-`payload` can be a ruby object
-`score` can be `payload`'s creation date (
-By default `score` and `perform_in` are current
+For example, `id` can be an identifier of a replicated entity.
+`payloads` is a sorted set ordered by a score of payload and resulted by grouping a payload of the job by its `id`.
+`payload` can be a ruby object because it is serialized by `Marshal.dump`.
+`score` can be `payload`'s creation date (Unix timestamp) or it's an incremental version number.
+By default, `score` and `perform_in` are current Unix timestamp.
 `retry_count` for new unprocessed job equals to `-1`,
 for one-time failed is `0`, so the planned retries are counted, not the performed ones.
 
-
+Job execution can be unsuccessful. In this case, its `retry_count` is incremented, the new `perform_in` is calculated with determined formula, and it moves back to a queue.
 
 In case of `retry_count` is getting `>=` `max_retry_count` an element of `payloads` with less (oldest) score is moved to a morgue,
 rest elements are moved back to the queue, wherein `retry_count` and `perform_in` are reset to `-1` and `now()` respectively.
@@ -146,14 +146,14 @@ If `max_retry_count = 1`, retries stop.
 
 They are applied when:
 
-+ a job
-+ a job
-+ a job from morgue
++ a job has been in a queue and a new one with the same id is added
++ a job is failed, but a new one with the same id has been added
++ a job from a morgue is moved back to a queue, but the queue has had a job with the same id
 
 Algorithm:
 
-+ payloads
-+ if a new job and queued job is merged, `perform_in` and `retry_count` is taken from the
++ payloads are merged, the minimal score is chosen for equal payloads
++ if a new job and queued job is merged, `perform_in` and `retry_count` is taken from the job from the queue
 + if a failed job and queued job is merged, `perform_in` and `retry_count` is taken from the failed one
 + if morgue job and queued job is merged, `perform_in = now()`, `retry_count = -1`
 
@@ -172,8 +172,8 @@ Example:
 { id: "1", payloads: #{"v1": 1, "v2": 3, "v3": 4}, retry_count: 0, perform_in: 1536323288 }
 ```
 
-
-A job in morgue has following attributes:
+A morgue is a part of a queue. Jobs in a morgue are not processed.
+A job in a morgue has the following attributes:
 
 + id is the job identifier
 + payloads
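The merge rule described in the README hunks above (payloads are merged, the minimal score wins for equal payloads) can be modeled with plain hashes. This is a self-contained sketch of the rule, not Lowkiq's implementation; the payload strings and scores mirror the `{"v1": 1, "v2": 3, "v3": 4}` example in the diff.

```ruby
# Model a job's payloads as a Hash of payload => score.
# On merge, equal payloads keep the minimal (oldest) score.
def merge_payloads(queued, incoming)
  queued.merge(incoming) { |_payload, old_score, new_score| [old_score, new_score].min }
end

queued   = { "v1" => 1.0, "v2" => 3.0 }
incoming = { "v2" => 2.0, "v3" => 4.0 }

puts merge_payloads(queued, incoming).inspect
# "v2" appears in both jobs, so it keeps the smaller score 2.0
```

`Hash#merge` with a block resolves only colliding keys, which is exactly the "equal payloads" case of the rule.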
@@ -216,6 +216,12 @@ module ATestWorker
 end
 ```
 
+And then you have to add it to Lowkiq in your initializer file due to problems with autoloading:
+
+```ruby
+Lowkiq.workers = [ ATestWorker ]
+```
+
 Default values:
 
 ```ruby
@@ -234,10 +240,10 @@ end
 ATestWorker.perform_async [
   { id: 0 },
   { id: 1, payload: { attr: 'v1' } },
-  { id: 2, payload: { attr: 'v1' }, score: Time.now.
+  { id: 2, payload: { attr: 'v1' }, score: Time.now.to_f, perform_in: Time.now.to_f },
 ]
 # payload by default equals to ""
-# score and perform_in by default equals to Time.now.
+# score and perform_in by default equals to Time.now.to_f
 ```
 
 It is possible to redefine `perform_async` and calculate `id`, `score` and `perform_in` in a worker code:
@@ -272,8 +278,9 @@ ATestWorker.perform_async 1000.times.map { |id| { payload: {id: id} } }
 
 Options and their default values are:
 
++ `Lowkiq.workers = []` - list of workers to use. Since 1.1.0.
 + `Lowkiq.poll_interval = 1` - delay in seconds between queue polling for new jobs.
-  Used only if
+  Used only if a queue was empty in a previous cycle or an error occurred.
 + `Lowkiq.threads_per_node = 5` - threads per node.
 + `Lowkiq.redis = ->() { Redis.new url: ENV.fetch('REDIS_URL') }` - redis connection options
 + `Lowkiq.client_pool_size = 5` - redis pool size for queueing jobs
@@ -327,7 +334,7 @@ Lowkiq.redis = ->() { Redis.new url: ENV.fetch('REDIS_URL'), driver: :hiredis }
 
 `path_to_app.rb` must load app. [Example](examples/dummy/lib/app.rb).
 
-
+The lazy loading of worker modules is unacceptable.
 For preliminarily loading modules use
 `require`
 or [`require_dependency`](https://api.rubyonrails.org/classes/ActiveSupport/Dependencies/Loadable.html#method-i-require_dependency)
@@ -335,15 +342,15 @@ for Ruby on Rails.
 
 ## Shutdown
 
-Send TERM or INT signal to process (Ctrl-C).
-
+Send TERM or INT signal to the process (Ctrl-C).
+The process will wait for executed jobs to finish.
 
-Note that if queue is empty, process sleeps `poll_interval` seconds,
+Note that if a queue is empty, the process sleeps `poll_interval` seconds,
 therefore, the process will not stop until the `poll_interval` seconds have passed.
 
 ## Debug
 
-To get trace of all threads of app:
+To get trace of all threads of an app:
 
 ```
 kill -TTIN <pid>
@@ -357,11 +364,22 @@ docker-compose run --rm --service-port app bash
 bundle
 rspec
 cd examples/dummy ; bundle exec ../../exe/lowkiq -r ./lib/app.rb
+
+# open localhost:8080
+```
+
+```
+docker-compose run --rm --service-port frontend bash
+npm run dumb
+# open localhost:8081
+
+# npm run build
+# npm run web-api
 ```
 
 ## Exceptions
 
-`StandardError` thrown by worker are handled with middleware. Such exceptions
+`StandardError` thrown by a worker are handled with middleware. Such exceptions don't lead to process stops.
 
 All other exceptions cause the process to stop.
 Lowkiq will wait for job execution by other threads.
@@ -384,15 +402,18 @@ end
 ```ruby
 # config/initializers/lowkiq.rb
 
-# loading all lowkiq workers
-Dir["#{Rails.root}/app/lowkiq_workers/**/*.rb"].each { |file| require_dependency file }
-
 # configuration:
 # Lowkiq.redis = -> { Redis.new url: ENV.fetch('LOWKIQ_REDIS_URL') }
 # Lowkiq.threads_per_node = ENV.fetch('LOWKIQ_THREADS_PER_NODE').to_i
 # Lowkiq.client_pool_size = ENV.fetch('LOWKIQ_CLIENT_POOL_SIZE').to_i
 # ...
 
+# since 1.1.0
+Lowkiq.workers = [
+  ATestWorker,
+  OtherCoolWorker
+]
+
 Lowkiq.server_middlewares << -> (worker, batch, &block) do
   logger = Rails.logger
   tag = "#{worker}-#{Thread.current.object_id}"
@@ -480,10 +501,10 @@ worker C: 0
 worker D: 0, 1
 ```
 
-Lowkiq uses fixed
+Lowkiq uses a fixed number of threads for job processing, therefore it is necessary to distribute shards between threads.
 Splitter does it.
 
-To define a set of shards, which is being processed by thread,
+To define a set of shards, which is being processed by a thread, let's move them to one list:
 
 ```
 A0, A1, A2, B0, B1, B2, B3, C0, D0, D1
@@ -500,7 +521,7 @@ t1: A1, B1, C0
 t2: A2, B2, D0
 ```
 
-Besides Default Lowkiq has ByNode splitter. It allows
+Besides Default, Lowkiq has the ByNode splitter. It allows dividing the load by several processes (nodes).
 
 ```
 Lowkiq.build_splitter = -> () do
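The Default splitter's distribution shown in the README hunks above (flatten all shards into one list, then give thread `i` every n-th entry) can be sketched in a few lines. This is a model of the idea, assuming nothing beyond the t1/t2 rows printed in the diff; it is not Lowkiq's actual splitter code.

```ruby
# Distribute a flat list of shards across `threads` threads in round-robin
# order: slice the list into groups of `threads` and give thread i the i-th
# element of each group.
def split(shards, threads)
  (0...threads).map do |i|
    shards.each_slice(threads).filter_map { |slice| slice[i] }
  end
end

shards = %w[A0 A1 A2 B0 B1 B2 B3 C0 D0 D1]
split(shards, 3).each_with_index do |set, i|
  puts "t#{i}: #{set.join(', ')}"
end
# t1 gets A1, B1, C0 and t2 gets A2, B2, D0, matching the listing above
```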
@@ -511,7 +532,7 @@ Lowkiq.build_splitter = -> () do
 end
 ```
 
-So, instead of single process you need to execute multiple ones and to set environment variables up:
+So, instead of a single process, you need to execute multiple ones and to set environment variables up:
 
 ```
 # process 0
@@ -523,18 +544,18 @@ LOWKIQ_NUMBER_OF_NODES=2 LOWKIQ_NODE_NUMBER=1 bundle exec lowkiq -r ./lib/app.rb
 
 Summary amount of threads are equal product of `ENV.fetch('LOWKIQ_NUMBER_OF_NODES')` and `Lowkiq.threads_per_node`.
 
-You can also write your own splitter if your app needs extra distribution of shards between threads or nodes.
+You can also write your own splitter if your app needs an extra distribution of shards between threads or nodes.
 
 ## Scheduler
 
-Every thread processes a set of shards.
-Every thread has
+Every thread processes a set of shards. The scheduler selects a shard for processing.
+Every thread has its own instance of the scheduler.
 
 Lowkiq has 2 schedulers for your choice.
-`Seq`
+`Seq` sequentially looks over shards.
 `Lag` chooses shard with the oldest job minimizing the lag. It's used by default.
 
-
+The scheduler can be set up through settings:
 
 ```
 Lowkiq.build_scheduler = ->() { Lowkiq.build_seq_scheduler }
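The Lag scheduler's selection rule described above (pick the shard whose oldest job has the largest lag) can be sketched with a plain hash. The data shape here is an assumption made for illustration; Lowkiq stores shard state in Redis, not in Ruby hashes.

```ruby
# `shards` maps a shard name to the timestamp of its oldest job.
# The shard with the biggest (now - oldest) lag is picked next.
def pick_shard(shards, now)
  shards.max_by { |_name, oldest| now - oldest }&.first
end

# "b" holds the oldest job, so it has the biggest lag and is picked
puts pick_shard({ "a" => 95.0, "b" => 90.0, "c" => 99.0 }, 100.0)
```

The `&.first` keeps the sketch total: with no shards, `max_by` yields `nil` and the scheduler simply has nothing to pick.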
@@ -547,13 +568,13 @@ Lowkiq.build_scheduler = ->() { Lowkiq.build_lag_scheduler }
 ### `SomeWorker.shards_count`
 
 Sum of `shards_count` of all workers shouldn't be less than `Lowkiq.threads_per_node`
-otherwise threads will stay idle.
+otherwise, threads will stay idle.
 
 Sum of `shards_count` of all workers can be equal to `Lowkiq.threads_per_node`.
-In this case thread processes a single shard. This makes sense only with uniform queue load.
+In this case, a thread processes a single shard. This makes sense only with a uniform queue load.
 
 Sum of `shards_count` of all workers can be more than `Lowkiq.threads_per_node`.
-In this case `shards_count` can be counted as a priority.
+In this case, `shards_count` can be counted as a priority.
 The larger it is, the more often the tasks of this queue will be processed.
 
 There is no reason to set `shards_count` of one worker more than `Lowkiq.threads_per_node`,
@@ -561,8 +582,8 @@ because every thread will handle more than one shard from this queue, so it incr
 
 ### `SomeWorker.max_retry_count`
 
-From `retry_in` and `max_retry_count`, you can calculate approximate time that payload of job will be in a queue.
-After `max_retry_count` is reached
+From `retry_in` and `max_retry_count`, you can calculate the approximate time that a payload of a job will be in a queue.
+After `max_retry_count` is reached a payload with a minimal score will be moved to a morgue.
 
 For default `retry_in` we receive the following table.
 
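The estimate mentioned in this hunk amounts to summing `retry_in` over the planned retry counts. A minimal sketch of that summation, assuming a made-up linear `retry_in` schedule (the gem's default formula is not shown in this diff):

```ruby
# Hypothetical retry schedule: 60s after the first failure, 120s after the
# second, and so on. Only the summation pattern is the point here.
retry_in = ->(retry_count) { (retry_count + 1) * 60.0 }

# Approximate total time a payload spends in the queue before reaching
# max_retry_count and being moved to the morgue.
def time_in_queue(max_retry_count, retry_in)
  (0...max_retry_count).sum { |count| retry_in.call(count) }
end

puts time_in_queue(3, retry_in)
# 60 + 120 + 180 seconds with this illustrative schedule
```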
@@ -590,9 +611,9 @@ end
 
 ## Changing of worker's shards amount
 
-Try to count
+Try to count the number of shards right away and don't change it in the future.
 
-If you can disable adding of new jobs, wait for queues to get empty and deploy the new version of code with changed amount of shards.
+If you can disable adding of new jobs, wait for queues to get empty, and deploy the new version of code with a changed amount of shards.
 
 If you can't do it, follow the next steps:
 
@@ -610,7 +631,7 @@ module ATestWorker
 end
 ```
 
-Set the number of shards and new queue name:
+Set the number of shards and the new queue name:
 
 ```ruby
 module ATestWorker
@@ -625,7 +646,7 @@ module ATestWorker
 end
 ```
 
-Add a worker moving jobs from the old queue to
+Add a worker moving jobs from the old queue to the new one:
 
 ```ruby
 module ATestMigrationWorker
data/README.ru.md
CHANGED
@@ -247,10 +247,10 @@ end
 ATestWorker.perform_async [
   { id: 0 },
   { id: 1, payload: { attr: 'v1' } },
-  { id: 2, payload: { attr: 'v1' }, score: Time.now.
+  { id: 2, payload: { attr: 'v1' }, score: Time.now.to_f, perform_in: Time.now.to_f },
 ]
 # payload by default equals to ""
-# score and perform_in by default equals to Time.now.
+# score and perform_in by default equals to Time.now.to_f
 ```
 
 You can redefine `perform_async` and calculate `id`, `score` and `perform_in` in the worker:
data/lib/lowkiq.rb
CHANGED
@@ -10,7 +10,6 @@ require "lowkiq/version"
 require "lowkiq/utils"
 require "lowkiq/script"
 
-require "lowkiq/extend_tracker"
 require "lowkiq/option_parser"
 
 require "lowkiq/splitters/default"
@@ -42,7 +41,8 @@ module Lowkiq
     :server_middlewares, :on_server_init,
     :build_scheduler, :build_splitter,
     :last_words,
-    :dump_payload, :load_payload
+    :dump_payload, :load_payload,
+    :workers
 
   def server_redis_pool
     @server_redis_pool ||= ConnectionPool.new(size: threads_per_node, timeout: pool_timeout, &redis)
@@ -63,10 +63,6 @@ module Lowkiq
     end
   end
 
-  def workers
-    Worker.extended_modules
-  end
-
  def shard_handlers
    self.workers.flat_map do |w|
      ShardHandler.build_many w, self.server_wrapper
@@ -112,4 +108,5 @@ module Lowkiq
   self.last_words = ->(ex) {}
   self.dump_payload = ::Marshal.method :dump
   self.load_payload = ::Marshal.method :load
+  self.workers = []
 end
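The hunks above replace the implicit `Worker.extended_modules` registry (the deleted `extend_tracker`) with an explicit `Lowkiq.workers` accessor defaulting to `[]`. A self-contained sketch of why extend-tracking is fragile under lazy loading; `Registry` is an illustrative stand-in, not the removed Lowkiq code:

```ruby
# A registry that records every module extending it, via the `extended` hook.
module Registry
  def self.extended_modules
    @extended_modules ||= []
  end

  # Ruby invokes this callback when some module does `extend Registry`.
  def self.extended(mod)
    extended_modules << mod
  end
end

# Implicit style (pre-1.1.0 idea): registration is a load-time side effect.
# If an autoloader never loads the worker file, the registry silently misses it.
module ImplicitWorker
  extend Registry
end

# Explicit style (1.1.0): the application lists its workers up front, so the
# worker set no longer depends on which files happened to be required.
ExplicitWorkers = [ImplicitWorker]

puts Registry.extended_modules.inspect
puts ExplicitWorkers.inspect
```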
data/lib/lowkiq/queue/queue_metrics.rb
CHANGED
@@ -64,11 +64,9 @@ module Lowkiq
 
   def coerce_lag(res)
     _id, score = res.first
-
-
-    return
-    lag = @timestamp.call - score.to_i
-    return 0 if lag < 0
+    return 0.0 if score.nil?
+    lag = @timestamp.call - score
+    return 0.0 if lag < 0.0
     lag
   end
 
data/lib/lowkiq/queue/shard_metrics.rb
CHANGED
@@ -41,10 +41,9 @@ module Lowkiq
 
   def coerce_lag(res)
     _id, score = res.first
-
-
-
-    return 0 if lag < 0
+    return 0.0 if score.nil?
+    lag = @timestamp.call - score
+    return 0.0 if lag < 0.0
     lag
   end
 end
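Both `coerce_lag` hunks make the same fix: return a float zero when the shard is empty (score is `nil`) and keep the score fractional instead of truncating with `to_i`. A standalone sketch of the corrected behavior; the `@timestamp` dependency is modeled here as an explicit `timestamp` lambda, and `res` models the Redis reply as an array of `[id, score]` pairs:

```ruby
# Lag of a shard: how far behind "now" the oldest job's score is, in
# fractional seconds. Empty shards and future-scheduled jobs yield 0.0.
def coerce_lag(res, timestamp)
  _id, score = res.first
  return 0.0 if score.nil?      # empty shard: zero lag, not an error
  lag = timestamp.call - score  # float subtraction keeps sub-second lag
  return 0.0 if lag < 0.0       # job scheduled in the future: zero lag
  lag
end

now = -> { 100.5 }
puts coerce_lag([], now)                 # empty shard
puts coerce_lag([["job-1", 98.0]], now)  # 2.5 seconds behind
puts coerce_lag([["job-1", 200.0]], now) # scheduled in the future
```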
data/lib/lowkiq/utils.rb
CHANGED
data/lib/lowkiq/version.rb
CHANGED
data/lib/lowkiq/web/api.rb
CHANGED
data/lib/lowkiq/worker.rb
CHANGED
data/lowkiq.gemspec
CHANGED
@@ -34,4 +34,5 @@ Gem::Specification.new do |spec|
   spec.add_development_dependency "rspec", "~> 3.0"
   spec.add_development_dependency "rspec-mocks", "~> 3.8"
   spec.add_development_dependency "rack-test", "~> 1.1"
+  spec.add_development_dependency "pry-byebug", "~> 3.9.0"
 end
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: lowkiq
 version: !ruby/object:Gem::Version
-  version: 1.0
+  version: 1.1.0
 platform: ruby
 authors:
 - Mikhail Kuzmin
 autorequire:
 bindir: exe
 cert_chain: []
-date:
+date: 2021-01-25 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: redis
@@ -134,6 +134,20 @@ dependencies:
     - - "~>"
     - !ruby/object:Gem::Version
       version: '1.1'
+- !ruby/object:Gem::Dependency
+  name: pry-byebug
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+    - !ruby/object:Gem::Version
+      version: 3.9.0
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+    - !ruby/object:Gem::Version
+      version: 3.9.0
 description: Lowkiq
 email:
 - Mihail.Kuzmin@bia-tech.ru
@@ -144,6 +158,7 @@ extra_rdoc_files: []
 files:
 - ".gitignore"
 - ".rspec"
+- CHANGELOG.md
 - Gemfile
 - Gemfile.lock
 - LICENSE.md
@@ -157,7 +172,6 @@ files:
 - docker-compose.yml
 - exe/lowkiq
 - lib/lowkiq.rb
-- lib/lowkiq/extend_tracker.rb
 - lib/lowkiq/option_parser.rb
 - lib/lowkiq/queue/actions.rb
 - lib/lowkiq/queue/fetch.rb