lowkiq 1.0.2 → 1.1.0
- checksums.yaml +4 -4
- data/CHANGELOG.md +8 -0
- data/Gemfile.lock +14 -4
- data/README.md +107 -66
- data/README.ru.md +2 -2
- data/docker-compose.yml +1 -1
- data/lib/lowkiq.rb +8 -7
- data/lib/lowkiq/queue/fetch.rb +2 -2
- data/lib/lowkiq/queue/keys.rb +16 -4
- data/lib/lowkiq/queue/queue.rb +103 -55
- data/lib/lowkiq/queue/queue_metrics.rb +3 -5
- data/lib/lowkiq/queue/shard_metrics.rb +3 -4
- data/lib/lowkiq/schedulers/lag.rb +1 -1
- data/lib/lowkiq/script.rb +42 -0
- data/lib/lowkiq/server.rb +4 -0
- data/lib/lowkiq/shard_handler.rb +3 -3
- data/lib/lowkiq/utils.rb +1 -1
- data/lib/lowkiq/version.rb +1 -1
- data/lib/lowkiq/web.rb +2 -0
- data/lib/lowkiq/web/api.rb +1 -1
- data/lib/lowkiq/worker.rb +0 -2
- data/lowkiq.gemspec +1 -0
- metadata +19 -5
- data/lib/lowkiq/extend_tracker.rb +0 -13
- data/lib/lowkiq/queue/marshal.rb +0 -23
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 1737d4cbb8058600c3c3b162c6d48b8749d5f1750524538f73300fbcd0090269
+  data.tar.gz: 6014eb81fa0550a93efa7e04d2785bc4616f8f004563c906fd9f3ee930ba3183
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 16bb89de9f1f25a745602e70ae9e172fb85cbbad9b9ed50061fd6c3c74dd0122c204c7b6c973bbb02e2c891bc45449882da479a3b59b5108b65073b9d000e8a2
+  data.tar.gz: 0d9c5316a08272e539c34a1cda2e688348b93f3c255ec9c2faef8f3db7363e1e8e87402a4cda880cce95249e2120686f86d05bdba5bb117c3cefb079885bc52f
data/CHANGELOG.md
ADDED
data/Gemfile.lock
CHANGED
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    lowkiq (1.
+    lowkiq (1.1.0)
       connection_pool (~> 2.2, >= 2.2.2)
       rack (>= 1.5.0)
       redis (>= 4.0.1, < 5)
@@ -9,13 +9,22 @@ PATH
 GEM
   remote: https://rubygems.org/
   specs:
-
+    byebug (11.1.3)
+    coderay (1.1.3)
+    connection_pool (2.2.3)
     diff-lcs (1.3)
+    method_source (1.0.0)
+    pry (0.13.1)
+      coderay (~> 1.1)
+      method_source (~> 1.0)
+    pry-byebug (3.9.0)
+      byebug (~> 11.0)
+      pry (~> 0.13.0)
     rack (2.2.2)
     rack-test (1.1.0)
       rack (>= 1.0, < 3)
     rake (12.3.3)
-    redis (4.
+    redis (4.2.5)
     rspec (3.9.0)
       rspec-core (~> 3.9.0)
       rspec-expectations (~> 3.9.0)
@@ -36,10 +45,11 @@ PLATFORMS
 DEPENDENCIES
   bundler (~> 2.1.0)
   lowkiq!
+  pry-byebug (~> 3.9.0)
   rack-test (~> 1.1)
   rake (~> 12.3.0)
   rspec (~> 3.0)
   rspec-mocks (~> 3.8)
 
 BUNDLED WITH
-   2.1.
+   2.1.4
data/README.md
CHANGED
@@ -16,6 +16,7 @@ Ordered background jobs processing
 * [Api](#api)
 * [Ring app](#ring-app)
 * [Configuration](#configuration)
+* [Performance](#performance)
 * [Execution](#execution)
 * [Shutdown](#shutdown)
 * [Debug](#debug)
@@ -31,9 +32,9 @@ Ordered background jobs processing
 ## Rationale
 
 We've faced some problems using Sidekiq while processing messages from a side system.
-For instance, the message is
-The side system will send
-Orders are frequently updated and a queue
+For instance, the message is the data of an order at a particular time.
+The side system will send new data of an order on every change.
+Orders are frequently updated and a queue contains some closely located messages of the same order.
 
 Sidekiq doesn't guarantee a strict message order, because a queue is processed by multiple threads.
 For example, we've received 2 messages: M1 and M2.
@@ -42,8 +43,8 @@ so M2 can be processed before M1.
 
 Parallel processing of such kind of messages can result in:
 
-+
-+ overwriting new data with old one
++ deadlocks
++ overwriting new data with an old one
 
 Lowkiq has been created to eliminate such problems by avoiding parallel task processing within one entity.
@@ -51,17 +52,17 @@ Lowkiq has been created to eliminate such problems by avoiding parallel task pro
 
 Lowkiq's queues are reliable i.e.,
 Lowkiq saves information about a job being processed
-and returns
+and returns uncompleted jobs to the queue on startup.
 
 Jobs in queues are ordered by preassigned execution time, so they are not FIFO queues.
 
-Every job has
+Every job has its identifier. Lowkiq guarantees that jobs with equal IDs are processed by the same thread.
 
 Every queue is divided into a permanent set of shards.
-A job is placed into particular shard based on an id of the job.
+A job is placed into a particular shard based on an id of the job.
 So jobs with the same id are always placed into the same shard.
 All jobs of the shard are always processed with the same thread.
-This guarantees the
+This guarantees the sequential processing of jobs with the same ids and excludes the possibility of locks.
 
 Besides the id, every job has a payload.
 Payloads are accumulated for jobs with the same id.
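The id-to-shard mapping described in this hunk can be sketched as follows. This is a minimal illustration; `shards_count = 5` is an arbitrary example value, but the CRC32-modulo formula matches `id_to_shard` in `lib/lowkiq/queue/queue.rb` elsewhere in this diff.

```ruby
require "zlib"

shards_count = 5  # illustrative; each worker defines its own shards_count
id_to_shard = ->(id) { Zlib.crc32(id.to_s) % shards_count }

# The same id always maps to the same shard, so jobs with equal ids
# land in the same shard and are processed by the same thread.
id_to_shard.call("order-42") == id_to_shard.call("order-42")  # => true
```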
@@ -70,8 +71,8 @@ It's useful when you need to process only the last message and drop all previous
 
 A worker corresponds to a queue and contains a job processing logic.
 
-
-Adding or removing queues or
+The fixed number of threads is used to process all jobs of all queues.
+Adding or removing queues or their shards won't affect the number of threads.
 
 ## Sidekiq comparison
@@ -82,45 +83,46 @@ But if you use plugins like
 [sidekiq-merger](https://github.com/dtaniwaki/sidekiq-merger)
 or implement your own lock system, you should look at Lowkiq.
 
-For example, sidekiq-grouping accumulates a batch of jobs
-With this approach queue can
+For example, sidekiq-grouping accumulates a batch of jobs then enqueues it and accumulates the next batch.
+With this approach, a queue can contain two batches with data of the same order.
 These batches are parallel processed with different threads, so we come back to the initial problem.
 
-Lowkiq was designed to avoid any
+Lowkiq was designed to avoid any type of locking.
 
 Furthermore, Lowkiq's queues are reliable. Only Sidekiq Pro or plugins can add such functionality.
 
-This [benchmark](examples/benchmark) shows overhead on
-
+This [benchmark](examples/benchmark) shows overhead on Redis usage.
+These are the results for 5 threads, 100,000 blank jobs:
 
-+ lowkiq:
-+
++ lowkiq: 155 sec or 1.55 ms per job
++ lowkiq +hiredis: 80 sec or 0.80 ms per job
++ sidekiq: 15 sec or 0.15 ms per job
 
 This difference is related to different queues structure.
 Sidekiq uses one list for all workers and fetches the job entirely for O(1).
-Lowkiq uses several data structures, including sorted sets for
+Lowkiq uses several data structures, including sorted sets for keeping ids of jobs.
 So fetching only an id of a job takes O(log(N)).
 
 ## Queue
 
 Please, look at [the presentation](https://docs.google.com/presentation/d/e/2PACX-1vRdwA2Ck22r26KV1DbY__XcYpj2FdlnR-2G05w1YULErnJLB_JL1itYbBC6_JbLSPOHwJ0nwvnIHH2A/pub?start=false&loop=false&delayms=3000).
 
-Every job has following attributes:
+Every job has the following attributes:
 
 + `id` is a job identifier with string type.
-+ `payloads` is a sorted set of payloads ordered by
-+ `perform_in` is planned execution time. It's
++ `payloads` is a sorted set of payloads ordered by its score. A payload is an object. A score is a real number.
++ `perform_in` is planned execution time. It's a Unix timestamp with a real number type.
 + `retry_count` is amount of retries. It's a real number.
 
-For example, `id` can be an identifier of replicated entity.
-`payloads` is a sorted set ordered by score of payload and resulted by grouping a payload of job by
-`payload` can be a ruby object
-`score` can be `payload`'s creation date (
-By default `score` and `perform_in` are current
+For example, `id` can be an identifier of a replicated entity.
+`payloads` is a sorted set ordered by a score of payload and resulted by grouping a payload of the job by its `id`.
+`payload` can be a ruby object because it is serialized by `Marshal.dump`.
+`score` can be `payload`'s creation date (Unix timestamp) or it's an incremental version number.
+By default, `score` and `perform_in` are current Unix timestamp.
 `retry_count` for new unprocessed job equals to `-1`,
 for one-time failed is `0`, so the planned retries are counted, not the performed ones.
 
-
+Job execution can be unsuccessful. In this case, its `retry_count` is incremented, the new `perform_in` is calculated with determined formula, and it moves back to a queue.
 
 In case of `retry_count` is getting `>=` `max_retry_count` an element of `payloads` with less (oldest) score is moved to a morgue,
 rest elements are moved back to the queue, wherein `retry_count` and `perform_in` are reset to `-1` and `now()` respectively.
@@ -144,14 +146,14 @@ If `max_retry_count = 1`, retries stop.
 
 They are applied when:
 
-+ a job
-+ a job
-+ a job from morgue
++ a job has been in a queue and a new one with the same id is added
++ a job is failed, but a new one with the same id has been added
++ a job from a morgue is moved back to a queue, but the queue has had a job with the same id
 
 Algorithm:
 
-+ payloads
-+ if a new job and queued job is merged, `perform_in` and `retry_count` is taken from the
++ payloads are merged, the minimal score is chosen for equal payloads
++ if a new job and queued job is merged, `perform_in` and `retry_count` is taken from the job from the queue
 + if a failed job and queued job is merged, `perform_in` and `retry_count` is taken from the failed one
 + if morgue job and queued job is merged, `perform_in = now()`, `retry_count = -1`
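The first merge rule above can be sketched as follows. This is an illustration of the described behavior, not the library's code; payloads are modeled as a payload-to-score hash.

```ruby
# Merge two payload sets; for equal payloads the minimal score wins.
def merge_payloads(queued, fresh)
  queued.merge(fresh) { |_payload, s1, s2| [s1, s2].min }
end

queued = { "v1" => 1, "v2" => 3 }
fresh  = { "v2" => 2, "v3" => 4 }
merged = merge_payloads(queued, fresh)
# => { "v1" => 1, "v2" => 2, "v3" => 4 }
```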
@@ -170,8 +172,8 @@ Example:
 { id: "1", payloads: #{"v1": 1, "v2": 3, "v3": 4}, retry_count: 0, perform_in: 1536323288 }
 ```
 
-
-A job in morgue has following attributes:
+A morgue is a part of a queue. Jobs in a morgue are not processed.
+A job in a morgue has the following attributes:
 
 + id is the job identifier
 + payloads
@@ -214,6 +216,12 @@ module ATestWorker
 end
 ```
 
+And then you have to add it to Lowkiq in your initializer file due to problems with autoloading:
+
+```ruby
+Lowkiq.workers = [ ATestWorker ]
+```
+
 Default values:
 
 ```ruby
@@ -232,10 +240,10 @@ end
 ATestWorker.perform_async [
   { id: 0 },
   { id: 1, payload: { attr: 'v1' } },
-  { id: 2, payload: { attr: 'v1' }, score: Time.now.
+  { id: 2, payload: { attr: 'v1' }, score: Time.now.to_f, perform_in: Time.now.to_f },
 ]
 # payload by default equals to ""
-# score and perform_in by default equals to Time.now.
+# score and perform_in by default equals to Time.now.to_f
 ```
 
 It is possible to redefine `perform_async` and calculate `id`, `score` и `perform_in` in a worker code:
@@ -268,10 +276,11 @@ ATestWorker.perform_async 1000.times.map { |id| { payload: {id: id} } }
 
 ## Configuration
 
-
+Options and their default values are:
 
++ `Lowkiq.workers = []` - list of workers to use. Since 1.1.0.
 + `Lowkiq.poll_interval = 1` - delay in seconds between queue polling for new jobs.
-  Used only if
+  Used only if a queue was empty in a previous cycle or an error occurred.
 + `Lowkiq.threads_per_node = 5` - threads per node.
 + `Lowkiq.redis = ->() { Redis.new url: ENV.fetch('REDIS_URL') }` - redis connection options
 + `Lowkiq.client_pool_size = 5` - redis pool size for queueing jobs
@@ -281,6 +290,8 @@ Default options and values are:
 + `Lowkiq.build_scheduler = ->() { Lowkiq.build_lag_scheduler }` is a scheduler
 + `Lowkiq.build_splitter = ->() { Lowkiq.build_default_splitter }` is a splitter
 + `Lowkiq.last_words = ->(ex) {}` is an exception handler of descendants of `StandardError` caused the process stop
++ `Lowkiq.dump_payload = Marshal.method :dump`
++ `Lowkiq.load_payload = Marshal.method :load`
 
 ```ruby
 $logger = Logger.new(STDOUT)
@@ -301,13 +312,29 @@ Lowkiq.server_middlewares << -> (worker, batch, &block) do
 end
 ```
 
+## Performance
+
+Use [hiredis](https://github.com/redis/hiredis-rb) for better performance.
+
+```ruby
+# Gemfile
+
+gem "hiredis"
+```
+
+```ruby
+# config
+
+Lowkiq.redis = ->() { Redis.new url: ENV.fetch('REDIS_URL'), driver: :hiredis }
+```
+
 ## Execution
 
 `lowkiq -r ./path_to_app`
 
 `path_to_app.rb` must load app. [Example](examples/dummy/lib/app.rb).
 
-
+The lazy loading of worker modules is unacceptable.
 For preliminarily loading modules use
 `require`
 or [`require_dependency`](https://api.rubyonrails.org/classes/ActiveSupport/Dependencies/Loadable.html#method-i-require_dependency)
@@ -315,15 +342,15 @@ for Ruby on Rails.
 
 ## Shutdown
 
-Send TERM or INT signal to process (Ctrl-C).
-
+Send TERM or INT signal to the process (Ctrl-C).
+The process will wait for executed jobs to finish.
 
-Note that if queue is empty, process sleeps `poll_interval` seconds,
+Note that if a queue is empty, the process sleeps `poll_interval` seconds,
 therefore, the process will not stop until the `poll_interval` seconds have passed.
 
 ## Debug
 
-To get trace of all threads of app:
+To get trace of all threads of an app:
 
 ```
 kill -TTIN <pid>
@@ -337,11 +364,22 @@ docker-compose run --rm --service-port app bash
 bundle
 rspec
 cd examples/dummy ; bundle exec ../../exe/lowkiq -r ./lib/app.rb
+
+# open localhost:8080
+```
+
+```
+docker-compose run --rm --service-port frontend bash
+npm run dumb
+# open localhost:8081
+
+# npm run build
+# npm run web-api
 ```
 
 ## Exceptions
 
-`StandardError` thrown by worker are handled with middleware. Such exceptions
+`StandardError` thrown by a worker are handled with middleware. Such exceptions don't lead to process stops.
 
 All other exceptions cause the process to stop.
 Lowkiq will wait for job execution by other threads.
@@ -364,15 +402,18 @@ end
 ```ruby
 # config/initializers/lowkiq.rb
 
-# loading all lowkiq workers
-Dir["#{Rails.root}/app/lowkiq_workers/**/*.rb"].each { |file| require_dependency file }
-
 # configuration:
 # Lowkiq.redis = -> { Redis.new url: ENV.fetch('LOWKIQ_REDIS_URL') }
 # Lowkiq.threads_per_node = ENV.fetch('LOWKIQ_THREADS_PER_NODE').to_i
 # Lowkiq.client_pool_size = ENV.fetch('LOWKIQ_CLIENT_POOL_SIZE').to_i
 # ...
 
+# since 1.1.0
+Lowkiq.workers = [
+  ATestWorker,
+  OtherCoolWorker
+]
+
 Lowkiq.server_middlewares << -> (worker, batch, &block) do
   logger = Rails.logger
   tag = "#{worker}-#{Thread.current.object_id}"
@@ -460,10 +501,10 @@ worker C: 0
 worker D: 0, 1
 ```
 
-Lowkiq uses fixed
+Lowkiq uses a fixed number of threads for job processing, therefore it is necessary to distribute shards between threads.
 Splitter does it.
 
-To define a set of shards, which is being processed by thread,
+To define a set of shards, which is being processed by a thread, let's move them to one list:
 
 ```
 A0, A1, A2, B0, B1, B2, B3, C0, D0, D1
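The Default splitter's effect described here, distributing the flat shard list round-robin over a fixed number of threads, can be sketched as follows. This is illustrative, not the library's implementation.

```ruby
shards  = %w[A0 A1 A2 B0 B1 B2 B3 C0 D0 D1]
threads = 3

# shard at index i is handled by thread i % threads
by_thread = shards.each_with_index
                  .group_by { |(_, i)| i % threads }
                  .transform_values { |pairs| pairs.map(&:first) }

by_thread[1]  # => ["A1", "B1", "C0"]
by_thread[2]  # => ["A2", "B2", "D0"]
```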
@@ -480,7 +521,7 @@ t1: A1, B1, C0
 t2: A2, B2, D0
 ```
 
-Besides Default Lowkiq has ByNode splitter. It allows
+Besides Default Lowkiq has the ByNode splitter. It allows dividing the load by several processes (nodes).
 
 ```
 Lowkiq.build_splitter = -> () do
@@ -491,7 +532,7 @@ Lowkiq.build_splitter = -> () do
 end
 ```
 
-So, instead of single process you need to execute multiple ones and to set environment variables up:
+So, instead of a single process, you need to execute multiple ones and to set environment variables up:
 
 ```
 # process 0
@@ -503,18 +544,18 @@ LOWKIQ_NUMBER_OF_NODES=2 LOWKIQ_NODE_NUMBER=1 bundle exec lowkiq -r ./lib/app.rb
 
 Summary amount of threads are equal product of `ENV.fetch('LOWKIQ_NUMBER_OF_NODES')` and `Lowkiq.threads_per_node`.
 
-You can also write your own splitter if your app needs extra distribution of shards between threads or nodes.
+You can also write your own splitter if your app needs an extra distribution of shards between threads or nodes.
 
 ## Scheduler
 
-Every thread processes a set of shards.
-Every thread has
+Every thread processes a set of shards. The scheduler selects shard for processing.
+Every thread has its own instance of the scheduler.
 
 Lowkiq has 2 schedulers for your choice.
-`Seq`
+`Seq` sequentially looks over shards.
 `Lag` chooses shard with the oldest job minimizing the lag. It's used by default.
 
-
+The scheduler can be set up through settings:
 
 ```
 Lowkiq.build_scheduler = ->() { Lowkiq.build_seq_scheduler }
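The `Lag` strategy mentioned in this hunk can be sketched as picking the shard whose oldest ready job is the oldest overall. The timestamps are illustrative; this is not the library's code.

```ruby
# timestamp of the oldest ready job per shard (illustrative numbers)
oldest_job = { "a0" => 9.0, "a1" => 3.0, "b0" => 7.0 }

# the shard with the smallest timestamp has the largest lag, so process it first
shard = oldest_job.min_by { |_, ts| ts }.first
# => "a1"
```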
@@ -527,13 +568,13 @@ Lowkiq.build_scheduler = ->() { Lowkiq.build_lag_scheduler }
 ### `SomeWorker.shards_count`
 
 Sum of `shards_count` of all workers shouldn't be less than `Lowkiq.threads_per_node`
-otherwise threads will stay idle.
+otherwise, threads will stay idle.
 
 Sum of `shards_count` of all workers can be equal to `Lowkiq.threads_per_node`.
-In this case thread processes a single shard. This makes sense only with uniform queue load.
+In this case, a thread processes a single shard. This makes sense only with a uniform queue load.
 
 Sum of `shards_count` of all workers can be more than `Lowkiq.threads_per_node`.
-In this case `shards_count` can be counted as a priority.
+In this case, `shards_count` can be counted as a priority.
 The larger it is, the more often the tasks of this queue will be processed.
 
 There is no reason to set `shards_count` of one worker more than `Lowkiq.threads_per_node`,
@@ -541,8 +582,8 @@ because every thread will handle more than one shard from this queue, so it incr
 
 ### `SomeWorker.max_retry_count`
 
-From `retry_in` and `max_retry_count`, you can calculate approximate time that payload of job will be in a queue.
-After `max_retry_count` is reached
+From `retry_in` and `max_retry_count`, you can calculate the approximate time that a payload of a job will be in a queue.
+After `max_retry_count` is reached a payload with a minimal score will be moved to a morgue.
 
 For default `retry_in` we receive the following table.
@@ -570,9 +611,9 @@ end
 
 ## Changing of worker's shards amount
 
-Try to count
+Try to count the number of shards right away and don't change it in the future.
 
-If you can disable adding of new jobs, wait for queues to get empty and deploy the new version of code with changed amount of shards.
+If you can disable adding of new jobs, wait for queues to get empty, and deploy the new version of code with a changed amount of shards.
 
 If you can't do it, follow the next steps:
@@ -590,7 +631,7 @@ module ATestWorker
 end
 ```
 
-Set the number of shards and new queue name:
+Set the number of shards and the new queue name:
 
 ```ruby
 module ATestWorker
@@ -605,7 +646,7 @@ module ATestWorker
 end
 ```
 
-Add a worker moving jobs from the old queue to
+Add a worker moving jobs from the old queue to the new one:
 
 ```ruby
 module ATestMigrationWorker
data/README.ru.md
CHANGED
@@ -247,10 +247,10 @@ end
 ATestWorker.perform_async [
   { id: 0 },
   { id: 1, payload: { attr: 'v1' } },
-  { id: 2, payload: { attr: 'v1' }, score: Time.now.
+  { id: 2, payload: { attr: 'v1' }, score: Time.now.to_f, perform_in: Time.now.to_f },
 ]
 # payload по умолчанию равен ""
-# score и perform_in по умолчанию равны Time.now.
+# score и perform_in по умолчанию равны Time.now.to_f
 ```
 
 Вы можете переопределить `perform_async` и вычислять `id`, `score` и `perform_in` в воркере:
data/docker-compose.yml
CHANGED
data/lib/lowkiq.rb
CHANGED
@@ -4,11 +4,12 @@ require "zlib"
 require "json"
 require "ostruct"
 require "optparse"
+require "digest"
 
 require "lowkiq/version"
 require "lowkiq/utils"
+require "lowkiq/script"
 
-require "lowkiq/extend_tracker"
 require "lowkiq/option_parser"
 
 require "lowkiq/splitters/default"
@@ -19,7 +20,6 @@ require "lowkiq/schedulers/seq"
 
 require "lowkiq/server"
 
-require "lowkiq/queue/marshal"
 require "lowkiq/queue/keys"
 require "lowkiq/queue/fetch"
 require "lowkiq/queue/queue"
@@ -40,7 +40,9 @@ module Lowkiq
       :redis, :client_pool_size, :pool_timeout,
       :server_middlewares, :on_server_init,
       :build_scheduler, :build_splitter,
-      :last_words
+      :last_words,
+      :dump_payload, :load_payload,
+      :workers
 
     def server_redis_pool
       @server_redis_pool ||= ConnectionPool.new(size: threads_per_node, timeout: pool_timeout, &redis)
@@ -61,10 +63,6 @@ module Lowkiq
     end
   end
 
-  def workers
-    Worker.extended_modules
-  end
-
   def shard_handlers
     self.workers.flat_map do |w|
       ShardHandler.build_many w, self.server_wrapper
@@ -108,4 +106,7 @@ module Lowkiq
   self.build_scheduler = ->() { Lowkiq.build_lag_scheduler }
   self.build_splitter = ->() { Lowkiq.build_default_splitter }
   self.last_words = ->(ex) {}
+  self.dump_payload = ::Marshal.method :dump
+  self.load_payload = ::Marshal.method :load
+  self.workers = []
 end
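The new `dump_payload`/`load_payload` defaults added in this file are plain `Method` objects wrapping `Marshal`, so the serialization round trip is easy to check in isolation (a minimal sketch):

```ruby
dump = ::Marshal.method :dump
load = ::Marshal.method :load

payload  = { attr: "v1" }
restored = load.call(dump.call(payload))
# => { attr: "v1" }
```

Any pair of functions with the same contract could presumably be assigned to `Lowkiq.dump_payload` / `Lowkiq.load_payload` instead, e.g. JSON-based ones for JSON-compatible payloads.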
data/lib/lowkiq/queue/fetch.rb
CHANGED
@@ -21,7 +21,7 @@ module Lowkiq
           id: x[0],
           perform_in: x[1][0],
           retry_count: x[1][1],
-          payloads: x[1][2].map { |(payload, score)| [
+          payloads: x[1][2].map { |(payload, score)| [Lowkiq.load_payload.call(payload), score] },
           error: x[1][3],
         }.compact
       end.compact
@@ -41,7 +41,7 @@ module Lowkiq
         {
           id: x[0],
           updated_at: x[1][0],
-          payloads: x[1][1].map { |(payload, score)| [
+          payloads: x[1][1].map { |(payload, score)| [Lowkiq.load_payload.call(payload), score] },
           error: x[1][2],
         }.compact
       end.compact
data/lib/lowkiq/queue/keys.rb
CHANGED
@@ -39,14 +39,26 @@ module Lowkiq
        [@prefix, :errors].join(':')
       end
 
-      def processing_key(shard)
-        [@prefix, :processing, shard].join(':')
-      end
-
       def processing_length_by_shard_hash
         [@prefix, :processing_length_by_shard].join(':')
       end
 
+      def processing_ids_with_perform_in_hash(shard)
+        [@prefix, :processing, :ids_with_perform_in, shard].join(':')
+      end
+
+      def processing_ids_with_retry_count_hash(shard)
+        [@prefix, :processing, :ids_with_retry_count, shard].join(':')
+      end
+
+      def processing_payloads_zset(id)
+        [@prefix, :processing, :payloads, id].join(':')
+      end
+
+      def processing_errors_hash(shard)
+        [@prefix, :processing, :errors, shard].join(':')
+      end
+
       def morgue_all_ids_lex_zset
         [@prefix, :morgue, :all_ids_lex].join(':')
       end
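The new key helpers all follow the same join-with-colons scheme; for example (the prefix value here is hypothetical, only the joining scheme is taken from the code above):

```ruby
prefix = "lowkiq:a_test_worker"  # hypothetical queue prefix
key = [prefix, :processing, :payloads, "42"].join(':')
# => "lowkiq:a_test_worker:processing:payloads:42"
```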
data/lib/lowkiq/queue/queue.rb
CHANGED
@@ -29,7 +29,7 @@ module Lowkiq
|
|
29
29
|
redis.zadd @keys.all_ids_scored_by_retry_count_zset, retry_count, id, nx: true
|
30
30
|
|
31
31
|
redis.zadd @keys.ids_scored_by_perform_in_zset(shard), perform_in, id, nx: true
|
32
|
-
redis.zadd @keys.payloads_zset(id), score,
|
32
|
+
redis.zadd @keys.payloads_zset(id), score, Lowkiq.dump_payload.call(payload), nx: true
|
33
33
|
end
|
34
34
|
end
|
35
35
|
end
|
@@ -37,44 +37,56 @@ module Lowkiq
|
|
37
37
|
|
38
38
|
def pop(shard, limit:)
|
39
39
|
@pool.with do |redis|
|
40
|
-
|
41
|
-
|
42
|
-
|
43
|
-
|
44
|
-
limit: [0, limit]
|
45
|
-
|
46
|
-
if ids.empty?
|
47
|
-
redis.unwatch
|
48
|
-
return []
|
49
|
-
end
|
40
|
+
ids = redis.zrangebyscore @keys.ids_scored_by_perform_in_zset(shard),
|
41
|
+
0, @timestamp.call,
|
42
|
+
limit: [0, limit]
|
43
|
+
return [] if ids.empty?
|
50
44
|
|
51
|
-
|
45
|
+
res = redis.multi do |redis|
|
46
|
+
redis.hset @keys.processing_length_by_shard_hash, shard, ids.length
|
52
47
|
|
53
|
-
|
54
|
-
|
55
|
-
redis.
|
56
|
-
|
48
|
+
ids.each do |id|
|
49
|
+
redis.zrem @keys.all_ids_lex_zset, id
|
50
|
+
redis.zrem @keys.ids_scored_by_perform_in_zset(shard), id
|
51
|
+
|
52
|
+
Script.zremhset redis,
|
53
|
+
@keys.all_ids_scored_by_perform_in_zset,
|
54
|
+
@keys.processing_ids_with_perform_in_hash(shard),
|
55
|
+
id
|
56
|
+
Script.zremhset redis,
|
57
|
+
@keys.all_ids_scored_by_retry_count_zset,
|
58
|
+
@keys.processing_ids_with_retry_count_hash(shard),
|
59
|
+
id
|
60
|
+
redis.rename @keys.payloads_zset(id),
|
61
|
+
@keys.processing_payloads_zset(id)
|
62
|
+
Script.hmove redis,
|
63
|
+
@keys.errors_hash,
|
64
|
+
@keys.processing_errors_hash(shard),
|
65
|
+
id
|
57
66
|
end
|
58
|
-
|
67
|
+
processing_data_pipeline(redis, shard, ids)
|
68
|
+
end
|
59
69
|
|
60
|
-
|
70
|
+
res.shift 1 + ids.length * 6
|
71
|
+
processing_data_build res, ids
|
61
72
|
end
|
62
73
|
end
|
63
74
|
|
64
75
|
def push_back(batch)
|
65
76
|
@pool.with do |redis|
|
66
|
-
|
67
|
-
|
68
|
-
|
69
|
-
|
70
|
-
|
71
|
-
|
72
|
-
|
73
|
-
|
77
|
+
timestamp = @timestamp.call
|
78
|
+
redis.multi do |redis|
|
79
|
+
batch.each do |job|
|
80
|
+
id = job.fetch(:id)
|
81
|
+
perform_in = job.fetch(:perform_in, timestamp)
|
82
|
+
retry_count = job.fetch(:retry_count, -1)
|
83
|
+
payloads = job.fetch(:payloads).map do |(payload, score)|
|
84
|
+
[score, Lowkiq.dump_payload.call(payload)]
|
85
|
+
end
|
86
|
+
error = job.fetch(:error, nil)
|
74
87
|
|
75
|
-
|
88
|
+
shard = id_to_shard id
|
76
89
|
|
77
|
-
redis.multi do
|
78
90
|
redis.zadd @keys.all_ids_lex_zset, 0, id
|
79
91
|
redis.zadd @keys.all_ids_scored_by_perform_in_zset, perform_in, id
|
80
92
|
redis.zadd @keys.all_ids_scored_by_retry_count_zset, retry_count, id
|
@@ -88,40 +100,52 @@ module Lowkiq
|
|
88
100
|
end
|
89
101
|
end
|
90
102
|
|
91
|
-
def ack(shard, result = nil)
|
103
|
+
def ack(shard, data, result = nil)
|
104
|
+
ids = data.map { |job| job[:id] }
|
105
|
+
length = ids.length
|
106
|
+
|
92
107
|
@pool.with do |redis|
|
93
|
-
length = redis.hget(@keys.processing_length_by_shard_hash, shard).to_i
|
94
108
|
redis.multi do
|
95
|
-
redis.del
|
109
|
+
redis.del @keys.processing_ids_with_perform_in_hash(shard)
|
110
|
+
+          redis.del @keys.processing_ids_with_retry_count_hash(shard)
+          redis.del @keys.processing_errors_hash(shard)
+          ids.each do |id|
+            redis.del @keys.processing_payloads_zset(id)
+          end
           redis.hdel @keys.processing_length_by_shard_hash, shard
-
           redis.incrby @keys.processed_key, length if result == :success
-          redis.incrby @keys.failed_key,
+          redis.incrby @keys.failed_key, length if result == :fail
         end
       end
     end

     def processing_data(shard)
-
-      redis.
-
-      return [] if data.nil?
+      @pool.with do |redis|
+        ids = redis.hkeys @keys.processing_ids_with_perform_in_hash(shard)
+        return [] if ids.empty?

-
+        res = redis.multi do |redis|
+          processing_data_pipeline redis, shard, ids
+        end
+
+        processing_data_build res, ids
+      end
     end

     def push_to_morgue(batch)
       @pool.with do |redis|
-
-
-
-
-
-
+        timestamp = @timestamp.call
+        redis.multi do
+          batch.each do |job|
+            id = job.fetch(:id)
+            payloads = job.fetch(:payloads).map do |(payload, score)|
+              [score, Lowkiq.dump_payload.call(payload)]
+            end
+            error = job.fetch(:error, nil)
+

-        redis.multi do
             redis.zadd @keys.morgue_all_ids_lex_zset, 0, id
-            redis.zadd @keys.morgue_all_ids_scored_by_updated_at_zset,
+            redis.zadd @keys.morgue_all_ids_scored_by_updated_at_zset, timestamp, id
             redis.zadd @keys.morgue_payloads_zset(id), payloads, nx: true

             redis.hset @keys.morgue_errors_hash, id, error unless error.nil?
@@ -146,7 +170,15 @@ module Lowkiq
     def delete(ids)
       @pool.with do |redis|
         redis.multi do
-
+          ids.each do |id|
+            shard = id_to_shard id
+            redis.zrem @keys.all_ids_lex_zset, id
+            redis.zrem @keys.all_ids_scored_by_perform_in_zset, id
+            redis.zrem @keys.all_ids_scored_by_retry_count_zset, id
+            redis.zrem @keys.ids_scored_by_perform_in_zset(shard), id
+            redis.del @keys.payloads_zset(id)
+            redis.hdel @keys.errors_hash, id
+          end
         end
       end
     end
@@ -161,17 +193,33 @@ module Lowkiq
       Zlib.crc32(id.to_s) % @shards_count
     end

-    def
+    def processing_data_pipeline(redis, shard, ids)
+      redis.hgetall @keys.processing_ids_with_perform_in_hash(shard)
+      redis.hgetall @keys.processing_ids_with_retry_count_hash(shard)
+      redis.hgetall @keys.processing_errors_hash(shard)
+
       ids.each do |id|
-
-        redis.zrem @keys.all_ids_lex_zset, id
-        redis.zrem @keys.all_ids_scored_by_perform_in_zset, id
-        redis.zrem @keys.all_ids_scored_by_retry_count_zset, id
-        redis.zrem @keys.ids_scored_by_perform_in_zset(shard), id
-        redis.del @keys.payloads_zset(id)
-        redis.hdel @keys.errors_hash, id
+        redis.zrange @keys.processing_payloads_zset(id), 0, -1, with_scores: true
       end
     end
+
+    def processing_data_build(arr, ids)
+      ids_with_perform_in = arr.shift
+      ids_with_retry_count = arr.shift
+      errors = arr.shift
+      payloads = arr
+
+      ids.zip(payloads).map do |(id, payloads)|
+        next if payloads.empty?
+        {
+          id: id,
+          perform_in: ids_with_perform_in[id].to_f,
+          retry_count: ids_with_retry_count[id].to_f,
+          payloads: payloads.map { |(payload, score)| [Lowkiq.load_payload.call(payload), score] },
+          error: errors[id]
+        }.compact
+      end.compact
+    end
   end
 end
end
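The new `processing_data` splits into a pipeline step and a build step: the MULTI reply array carries three hashes (perform_in, retry_count, errors) followed by one payload list per id. The build step can be sketched stand-alone against stub reply data; here `load_payload` is stubbed as the identity function, since the real codec is configurable:

```ruby
# Stand-alone sketch of the processing_data_build logic from the diff.
# Assumed reply layout: three hashes, then one [[payload, score], ...] per id.
def processing_data_build(arr, ids, load_payload: ->(p) { p })
  ids_with_perform_in  = arr.shift # id => perform_in timestamp
  ids_with_retry_count = arr.shift # id => retry count
  errors               = arr.shift # id => last error message
  payloads             = arr       # remaining entries: payloads per id

  ids.zip(payloads).map do |(id, pls)|
    next if pls.empty?
    {
      id: id,
      perform_in: ids_with_perform_in[id].to_f,
      retry_count: ids_with_retry_count[id].to_f,
      payloads: pls.map { |(payload, score)| [load_payload.call(payload), score] },
      error: errors[id],          # nil when no error; .compact drops it
    }.compact
  end.compact
end

# Hypothetical MULTI reply for a single id "a" with no stored error:
reply = [
  { "a" => "1.0" },     # perform_in hash
  { "a" => "2" },       # retry count hash
  {},                   # errors hash
  [["payload-a", 1.0]], # payloads zset for id "a"
]
jobs = processing_data_build(reply, ["a"])
```

Note how `Hash#compact` makes the `:error` key disappear entirely for jobs without an error, rather than carrying `error: nil`.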
data/lib/lowkiq/queue/queue_metrics.rb
CHANGED
@@ -64,11 +64,9 @@ module Lowkiq

     def coerce_lag(res)
       _id, score = res.first
-
-
-      return
-      lag = @timestamp.call - score.to_i
-      return 0 if lag < 0
+      return 0.0 if score.nil?
+      lag = @timestamp.call - score
+      return 0.0 if lag < 0.0
       lag
     end

data/lib/lowkiq/queue/shard_metrics.rb
CHANGED
@@ -41,10 +41,9 @@ module Lowkiq

     def coerce_lag(res)
       _id, score = res.first
-
-
-
-      return 0 if lag < 0
+      return 0.0 if score.nil?
+      lag = @timestamp.call - score
+      return 0.0 if lag < 0.0
       lag
     end
   end
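Both `coerce_lag` changes add the same two guards: return `0.0` when the zset is empty (no score) and clamp negative lag, so callers always receive a non-negative Float. A minimal sketch of the fixed logic, with the current timestamp passed in explicitly rather than via `@timestamp` (the parameter name is illustrative):

```ruby
# Sketch of the corrected coerce_lag: 0.0 for a missing score,
# negative lag clamped, otherwise the raw difference.
def coerce_lag(res, now)
  _id, score = res.first        # res.first is nil for an empty queue
  return 0.0 if score.nil?
  lag = now - score
  return 0.0 if lag < 0.0
  lag
end

a = coerce_lag([], 100.0)              # empty queue
b = coerce_lag([["id", 90.0]], 100.0)  # oldest job scored 10s in the past
c = coerce_lag([["id", 110.0]], 100.0) # job scheduled in the future
```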
data/lib/lowkiq/script.rb
ADDED
@@ -0,0 +1,42 @@
+module Lowkiq
+  module Script
+    module_function
+
+    ALL = {
+      hmove: <<-LUA,
+        local source = KEYS[1]
+        local destination = KEYS[2]
+        local key = ARGV[1]
+        local value = redis.call('hget', source, key)
+        if value then
+          redis.call('hdel', source, key)
+          redis.call('hset', destination, key, value)
+        end
+      LUA
+      zremhset: <<-LUA
+        local source = KEYS[1]
+        local destination = KEYS[2]
+        local member = ARGV[1]
+        local score = redis.call('zscore', source, member)
+        if score then
+          redis.call('zrem', source, member)
+          redis.call('hset', destination, member, score)
+        end
+      LUA
+    }.transform_values { |v| { sha: Digest::SHA1.hexdigest(v), source: v } }.freeze
+
+    def load!(redis)
+      ALL.each do |_, item|
+        redis.script(:load, item[:source])
+      end
+    end
+
+    def hmove(redis, source, destination, key)
+      redis.evalsha ALL[:hmove][:sha], keys: [source, destination], argv: [key]
+    end
+
+    def zremhset(redis, source, destination, member)
+      redis.evalsha ALL[:zremhset][:sha], keys: [source, destination], argv: [member]
+    end
+  end
+end
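The new `hmove` script atomically moves one field between two Redis hashes server-side, and the module pre-computes each script's SHA1 so `evalsha` can be called directly after `load!`. The script's effect can be modeled in plain Ruby on two `Hash` objects (semantics only; the real version runs as Lua under EVALSHA, so no other command can interleave):

```ruby
require 'digest'

# Pure-Ruby model of the hmove Lua script's effect on two hashes.
def hmove(source, destination, key)
  value = source.delete(key)        # hget + hdel in one step
  destination[key] = value if value # hset only when the field existed
end

source = { "job-1" => "payload" }
dest   = {}
hmove(source, dest, "job-1")

# The diff's transform_values step pre-computes each script's SHA1,
# which is what evalsha addresses. Abbreviated source for illustration:
lua = "local source = KEYS[1] ..."
sha = Digest::SHA1.hexdigest(lua)
```

Moving a field without a script would take HGET, HDEL, and HSET as three separate round trips, with a window where another client could observe the field in neither hash; the Lua version closes that window.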
data/lib/lowkiq/server.rb
CHANGED
data/lib/lowkiq/shard_handler.rb
CHANGED
@@ -31,7 +31,7 @@ module Lowkiq
         @worker.perform batch
       end

-      @queue.ack @shard_index, :success
+      @queue.ack @shard_index, data, :success
       true
     rescue => ex
       fail! data, ex
@@ -39,7 +39,7 @@ module Lowkiq

       @queue.push_back back
       @queue.push_to_morgue morgue
-      @queue.ack @shard_index, :fail
+      @queue.ack @shard_index, data, :fail
       false
     end
   end
@@ -48,7 +48,7 @@ module Lowkiq
     data = @queue.processing_data @shard_index
     return if data.nil?
     @queue.push_back data
-    @queue.ack @shard_index
+    @queue.ack @shard_index, data
   end

   private
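All three call sites now pass the fetched `data` into `ack`, which lets the queue delete the per-id processing keys added in the `ack` hunk above. A stub sketch of the widened call shape — `StubQueue` is hypothetical and only records calls, it is not part of Lowkiq:

```ruby
# Hypothetical stub illustrating the ack(shard_index, data, result) signature.
class StubQueue
  attr_reader :acks

  def initialize
    @acks = []
  end

  # New shape: the batch data travels with the ack, so per-id
  # processing keys (e.g. payload zsets) can be cleaned up.
  def ack(shard_index, data, result = nil)
    @acks << [shard_index, data.map { |job| job[:id] }, result]
  end
end

queue = StubQueue.new
data  = [{ id: "a" }, { id: "b" }]
queue.ack 0, data, :success   # was: queue.ack 0, :success
queue.ack 1, data             # the resume path passes no result
```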
data/lib/lowkiq/utils.rb
CHANGED
data/lib/lowkiq/version.rb
CHANGED
data/lib/lowkiq/web.rb
CHANGED
@@ -28,6 +28,7 @@ module Lowkiq

     APP = Rack::Builder.new do
       map "/api" do
+        use Rack::ContentType, "application/json"
         run Api
       end

@@ -35,6 +36,7 @@ module Lowkiq
         run Rack::File.new ASSETS, { 'Cache-Control' => 'public, max-age=86400' }
       end

+      use Rack::ContentType, "text/html"
       run HTML
     end

data/lib/lowkiq/web/api.rb
CHANGED
data/lib/lowkiq/worker.rb
CHANGED
data/lowkiq.gemspec
CHANGED
@@ -34,4 +34,5 @@ Gem::Specification.new do |spec|
   spec.add_development_dependency "rspec", "~> 3.0"
   spec.add_development_dependency "rspec-mocks", "~> 3.8"
   spec.add_development_dependency "rack-test", "~> 1.1"
+  spec.add_development_dependency "pry-byebug", "~> 3.9.0"
 end
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: lowkiq
 version: !ruby/object:Gem::Version
-  version: 1.0.2
+  version: 1.1.0
 platform: ruby
 authors:
 - Mikhail Kuzmin
 autorequire:
 bindir: exe
 cert_chain: []
-date:
+date: 2021-01-25 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: redis
@@ -134,6 +134,20 @@ dependencies:
     - - "~>"
     - !ruby/object:Gem::Version
       version: '1.1'
+- !ruby/object:Gem::Dependency
+  name: pry-byebug
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+    - !ruby/object:Gem::Version
+      version: 3.9.0
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+    - !ruby/object:Gem::Version
+      version: 3.9.0
 description: Lowkiq
 email:
 - Mihail.Kuzmin@bia-tech.ru
@@ -144,6 +158,7 @@ extra_rdoc_files: []
 files:
 - ".gitignore"
 - ".rspec"
+- CHANGELOG.md
 - Gemfile
 - Gemfile.lock
 - LICENSE.md
@@ -157,12 +172,10 @@ files:
 - docker-compose.yml
 - exe/lowkiq
 - lib/lowkiq.rb
-- lib/lowkiq/extend_tracker.rb
 - lib/lowkiq/option_parser.rb
 - lib/lowkiq/queue/actions.rb
 - lib/lowkiq/queue/fetch.rb
 - lib/lowkiq/queue/keys.rb
-- lib/lowkiq/queue/marshal.rb
 - lib/lowkiq/queue/queries.rb
 - lib/lowkiq/queue/queue.rb
 - lib/lowkiq/queue/queue_metrics.rb
@@ -170,6 +183,7 @@ files:
 - lib/lowkiq/redis_info.rb
 - lib/lowkiq/schedulers/lag.rb
 - lib/lowkiq/schedulers/seq.rb
+- lib/lowkiq/script.rb
 - lib/lowkiq/server.rb
 - lib/lowkiq/shard_handler.rb
 - lib/lowkiq/splitters/by_node.rb
@@ -201,7 +215,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubygems_version: 3.
+rubygems_version: 3.1.4
 signing_key:
 specification_version: 4
 summary: Lowkiq
data/lib/lowkiq/queue/marshal.rb
DELETED
@@ -1,23 +0,0 @@
-module Lowkiq
-  module Queue
-    module Marshal
-      class << self
-        def dump_payload(data)
-          ::Marshal.dump data
-        end
-
-        def load_payload(str)
-          ::Marshal.load str
-        end
-
-        def dump_data(data)
-          ::Marshal.dump data
-        end
-
-        def load_data(str)
-          ::Marshal.load str
-        end
-      end
-    end
-  end
-end
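With `queue/marshal.rb` deleted, serialization appears to go through the `Lowkiq.dump_payload` / `Lowkiq.load_payload` callables that the queue diff above invokes. A round-trip sketch of Marshal-backed defaults — the module name and lambda wiring here are assumptions for illustration, not the gem's exact code:

```ruby
# Assumed shape: module-level callables replace the deleted Marshal module,
# so applications can swap in their own codec (e.g. JSON).
module LowkiqSketch
  class << self
    attr_accessor :dump_payload, :load_payload
  end

  # Marshal-backed defaults, mirroring the deleted module's behavior.
  self.dump_payload = ->(payload) { ::Marshal.dump(payload) }
  self.load_payload = ->(str)     { ::Marshal.load(str) }
end

payload  = { id: 42, action: "sync" }
restored = LowkiqSketch.load_payload.call(
  LowkiqSketch.dump_payload.call(payload)
)
```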