lowkiq 1.0.6 → 1.1.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 765d6d3c54595e43fe428927bab38b8e797e3727bd4bc229009133e3a3df2356
- data.tar.gz: d7dcf125b3e9734f749f1c0ae385f2b48e4fb1b75f2574082bfecc95b6122666
+ metadata.gz: 1737d4cbb8058600c3c3b162c6d48b8749d5f1750524538f73300fbcd0090269
+ data.tar.gz: 6014eb81fa0550a93efa7e04d2785bc4616f8f004563c906fd9f3ee930ba3183
  SHA512:
- metadata.gz: 987fe866751990b60dbdac866a2789c85f8a3ef5743170d77a7b03a88c368a2f94ce051d9def1e037f36aa42b3b404eb3fd5e0acce00fa760f54223b387a7451
- data.tar.gz: c39de8f08d3b98a4eb4ffa586435d1935c3bf803b936591106ff97eb38486323c34a7e5c25f24f122592a08b2661df2f201802508f4c6a3b61ecdda2a04acfe3
+ metadata.gz: 16bb89de9f1f25a745602e70ae9e172fb85cbbad9b9ed50061fd6c3c74dd0122c204c7b6c973bbb02e2c891bc45449882da479a3b59b5108b65073b9d000e8a2
+ data.tar.gz: 0d9c5316a08272e539c34a1cda2e688348b93f3c255ec9c2faef8f3db7363e1e8e87402a4cda880cce95249e2120686f86d05bdba5bb117c3cefb079885bc52f
CHANGELOG.md ADDED
@@ -0,0 +1,8 @@
+ # 1.1.0
+
+ * Timestamps are float rather than int. #23
+ * Due to problems with autoloading, you now need to manually assign a list of workers. #22
+
+ ```ruby
+ Lowkiq.workers = [ ATestWorker, ATest2Worker ]
+ ```
Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
  PATH
  remote: .
  specs:
- lowkiq (1.0.5)
+ lowkiq (1.1.0)
  connection_pool (~> 2.2, >= 2.2.2)
  rack (>= 1.5.0)
  redis (>= 4.0.1, < 5)
@@ -9,13 +9,22 @@ PATH
  GEM
  remote: https://rubygems.org/
  specs:
+ byebug (11.1.3)
+ coderay (1.1.3)
  connection_pool (2.2.3)
  diff-lcs (1.3)
+ method_source (1.0.0)
+ pry (0.13.1)
+ coderay (~> 1.1)
+ method_source (~> 1.0)
+ pry-byebug (3.9.0)
+ byebug (~> 11.0)
+ pry (~> 0.13.0)
  rack (2.2.2)
  rack-test (1.1.0)
  rack (>= 1.0, < 3)
  rake (12.3.3)
- redis (4.2.2)
+ redis (4.2.5)
  rspec (3.9.0)
  rspec-core (~> 3.9.0)
  rspec-expectations (~> 3.9.0)
@@ -36,6 +45,7 @@ PLATFORMS
  DEPENDENCIES
  bundler (~> 2.1.0)
  lowkiq!
+ pry-byebug (~> 3.9.0)
  rack-test (~> 1.1)
  rake (~> 12.3.0)
  rspec (~> 3.0)
data/README.md CHANGED
@@ -32,9 +32,9 @@ Ordered background jobs processing
  ## Rationale
 
  We've faced some problems using Sidekiq while processing messages from a side system.
- For instance, the message is a data of an order in particular time.
- The side system will send a new data of an order on an every change.
- Orders are frequently updated and a queue containts some closely located messages of the same order.
+ For instance, the message is the data of an order at a particular time.
+ The side system will send new data of an order on every change.
+ Orders are frequently updated and a queue contains some closely located messages of the same order.
 
  Sidekiq doesn't guarantee a strict message order, because a queue is processed by multiple threads.
  For example, we've received 2 messages: M1 and M2.
@@ -43,8 +43,8 @@ so M2 can be processed before M1.
 
  Parallel processing of such kind of messages can result in:
 
- + dead locks
- + overwriting new data with old one
+ + deadlocks
+ + overwriting new data with an old one
 
  Lowkiq has been created to eliminate such problems by avoiding parallel task processing within one entity.
 
@@ -52,17 +52,17 @@ Lowkiq has been created to eliminate such problems by avoiding parallel task pro
  Lowkiq's queues are reliable i.e.,
  Lowkiq saves information about a job being processed
- and returns incompleted jobs back to the queue on startup.
+ and returns uncompleted jobs to the queue on startup.
 
  Jobs in queues are ordered by preassigned execution time, so they are not FIFO queues.
 
- Every job has it's own identifier. Lowkiq guarantees that jobs with equal id are processed by the same thread.
+ Every job has its identifier. Lowkiq guarantees that jobs with equal IDs are processed by the same thread.
 
  Every queue is divided into a permanent set of shards.
- A job is placed into particular shard based on an id of the job.
+ A job is placed into a particular shard based on an id of the job.
  So jobs with the same id are always placed into the same shard.
  All jobs of the shard are always processed with the same thread.
- This guarantees the sequently processing of jobs with the same ids and excludes the possibility of locks.
+ This guarantees the sequential processing of jobs with the same ids and excludes the possibility of locks.
 
  Besides the id, every job has a payload.
  Payloads are accumulated for jobs with the same id.
@@ -71,8 +71,8 @@ It's useful when you need to process only the last message and drop all previous
 
  A worker corresponds to a queue and contains a job processing logic.
 
- Fixed amount of threads is used to process all job of all queues.
- Adding or removing queues or it's shards won't affect the amount of threads.
+ The fixed number of threads is used to process all jobs of all queues.
+ Adding or removing queues or their shards won't affect the number of threads.
 
  ## Sidekiq comparison
 
@@ -83,16 +83,16 @@ But if you use plugins like
  [sidekiq-merger](https://github.com/dtaniwaki/sidekiq-merger)
  or implement your own lock system, you should look at Lowkiq.
 
- For example, sidekiq-grouping accumulates a batch of jobs than enqueues it and accumulates a next batch.
- With this approach queue can contains two batches with a data of the same order.
+ For example, sidekiq-grouping accumulates a batch of jobs then enqueues it and accumulates the next batch.
+ With this approach, a queue can contain two batches with data of the same order.
  These batches are parallel processed with different threads, so we come back to the initial problem.
 
- Lowkiq was designed to avoid any types of locking.
+ Lowkiq was designed to avoid any type of locking.
 
  Furthermore, Lowkiq's queues are reliable. Only Sidekiq Pro or plugins can add such functionality.
 
- This [benchmark](examples/benchmark) shows overhead on redis usage.
- This is the results for 5 threads, 100,000 blank jobs:
+ This [benchmark](examples/benchmark) shows overhead on Redis usage.
+ These are the results for 5 threads, 100,000 blank jobs:
 
  + lowkiq: 155 sec or 1.55 ms per job
  + lowkiq +hiredis: 80 sec or 0.80 ms per job
@@ -100,29 +100,29 @@ This is the results for 5 threads, 100,000 blank jobs:
 
  This difference is related to different queues structure.
  Sidekiq uses one list for all workers and fetches the job entirely for O(1).
- Lowkiq uses several data structures, including sorted sets for storing ids of jobs.
+ Lowkiq uses several data structures, including sorted sets for keeping ids of jobs.
  So fetching only an id of a job takes O(log(N)).
 
  ## Queue
 
  Please, look at [the presentation](https://docs.google.com/presentation/d/e/2PACX-1vRdwA2Ck22r26KV1DbY__XcYpj2FdlnR-2G05w1YULErnJLB_JL1itYbBC6_JbLSPOHwJ0nwvnIHH2A/pub?start=false&loop=false&delayms=3000).
 
- Every job has following attributes:
+ Every job has the following attributes:
 
  + `id` is a job identifier with string type.
- + `payloads` is a sorted set of payloads ordered by it's score. Payload is an object. Score is a real number.
- + `perform_in` is planned execution time. It's unix timestamp with real number type.
+ + `payloads` is a sorted set of payloads ordered by its score. A payload is an object. A score is a real number.
+ + `perform_in` is planned execution time. It's a Unix timestamp with a real number type.
  + `retry_count` is amount of retries. It's a real number.
 
- For example, `id` can be an identifier of replicated entity.
- `payloads` is a sorted set ordered by score of payload and resulted by grouping a payload of job by it's `id`.
- `payload` can be a ruby object, because it is serialized by `Marshal.dump`.
- `score` can be `payload`'s creation date (unix timestamp) or it's incremental version number.
- By default `score` and `perform_in` are current unix timestamp.
+ For example, `id` can be an identifier of a replicated entity.
+ `payloads` is a sorted set ordered by a score of payload and resulted by grouping a payload of the job by its `id`.
+ `payload` can be a ruby object because it is serialized by `Marshal.dump`.
+ `score` can be `payload`'s creation date (Unix timestamp) or it's an incremental version number.
+ By default, `score` and `perform_in` are current Unix timestamp.
  `retry_count` for new unprocessed job equals to `-1`,
  for one-time failed is `0`, so the planned retries are counted, not the performed ones.
 
- A job execution can be unsuccessful. In this case, its `retry_count` is incremented, new `perform_in` is calculated with determined formula and it moves back to a queue.
+ Job execution can be unsuccessful. In this case, its `retry_count` is incremented, the new `perform_in` is calculated with determined formula, and it moves back to a queue.
 
  In case of `retry_count` is getting `>=` `max_retry_count` an element of `payloads` with less (oldest) score is moved to a morgue,
  rest elements are moved back to the queue, wherein `retry_count` and `perform_in` are reset to `-1` and `now()` respectively.
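The retry bookkeeping in this hunk can be modeled in a few lines of plain Ruby. This is an illustrative sketch, not Lowkiq's internal code; `MAX_RETRY_COUNT` and the backoff formula `RETRY_IN` are made-up stand-ins for per-worker settings:

```ruby
# Illustrative model of the retry flow above (not Lowkiq's internal code).
Job = Struct.new(:id, :payloads, :retry_count, :perform_in, keyword_init: true)

MAX_RETRY_COUNT = 2                          # hypothetical worker setting
RETRY_IN = ->(count) { 10.0 * (count + 1) }  # hypothetical backoff formula

def handle_failure(job, morgue, now)
  job.retry_count += 1
  if job.retry_count >= MAX_RETRY_COUNT
    # the payload with the smallest (oldest) score goes to the morgue...
    oldest_payload, oldest_score = job.payloads.min_by { |_p, score| score }
    morgue << [job.id, oldest_payload, oldest_score]
    job.payloads.delete(oldest_payload)
    # ...and the rest goes back to the queue with counters reset
    job.retry_count = -1
    job.perform_in = now
  else
    job.perform_in = now + RETRY_IN.call(job.retry_count)
  end
end

morgue = []
job = Job.new(id: "1", payloads: { "v1" => 1.0, "v2" => 2.0 },
              retry_count: -1, perform_in: 0.0)

handle_failure(job, morgue, 100.0)  # retry_count -1 -> 0: first planned retry
handle_failure(job, morgue, 200.0)  # retry_count 0 -> 1: second planned retry
handle_failure(job, morgue, 300.0)  # reaches max: "v1" (oldest score) moves to the morgue
```

After the third failure the job is back in the queue with `retry_count = -1`, `perform_in = now()`, and only `"v2"` left, exactly as the paragraph above describes.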
@@ -146,14 +146,14 @@ If `max_retry_count = 1`, retries stop.
 
  They are applied when:
 
- + a job had been in a queue and a new one with the same id was added
- + a job was failed, but a new one with the same id had been added
- + a job from morgue was moved back to queue, but queue had had a job with the same id
+ + a job has been in a queue and a new one with the same id is added
+ + a job is failed, but a new one with the same id has been added
+ + a job from a morgue is moved back to a queue, but the queue has had a job with the same id
 
  Algorithm:
 
- + payloads is merged, minimal score is chosen for equal payloads
- + if a new job and queued job is merged, `perform_in` and `retry_count` is taken from the the job from the queue
+ + payloads are merged, the minimal score is chosen for equal payloads
+ + if a new job and queued job is merged, `perform_in` and `retry_count` is taken from the job from the queue
  + if a failed job and queued job is merged, `perform_in` and `retry_count` is taken from the failed one
  + if morgue job and queued job is merged, `perform_in = now()`, `retry_count = -1`
 
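A compact way to read these merge rules is as a pure function over job hashes. The sketch below is illustrative only (the real merging happens inside Lowkiq's queue operations); the `kind:` keyword and hash shapes are assumptions of this model:

```ruby
# Illustrative model of the merge rules above, not Lowkiq's actual implementation.
# payloads is a Hash of payload => score.
def merge_payloads(a, b)
  a.merge(b) { |_payload, s1, s2| [s1, s2].min } # the minimal score wins for equal payloads
end

def merge_jobs(queued, incoming, kind:, now: Time.now.to_f)
  payloads = merge_payloads(queued[:payloads], incoming[:payloads])
  case kind
  when :new    then queued.merge(payloads: payloads)   # keep perform_in/retry_count of the queued job
  when :failed then incoming.merge(payloads: payloads) # keep perform_in/retry_count of the failed job
  when :morgue then { payloads: payloads, perform_in: now, retry_count: -1 }
  end
end

queued   = { payloads: { "v1" => 1.0, "v2" => 3.0 }, perform_in: 100.0, retry_count: 0 }
incoming = { payloads: { "v2" => 2.0, "v3" => 4.0 }, perform_in: 200.0, retry_count: -1 }

merged = merge_jobs(queued, incoming, kind: :new)
# payloads { "v1" => 1.0, "v2" => 2.0, "v3" => 4.0 }, perform_in 100.0, retry_count 0
```

Note how `"v2"` keeps the smaller score `2.0`, matching the "minimal score is chosen for equal payloads" rule.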
@@ -172,8 +172,8 @@ Example:
  { id: "1", payloads: #{"v1": 1, "v2": 3, "v3": 4}, retry_count: 0, perform_in: 1536323288 }
  ```
 
- Morgue is a part of the queue. Jobs in morgue are not processed.
- A job in morgue has following attributes:
+ A morgue is a part of a queue. Jobs in a morgue are not processed.
+ A job in a morgue has the following attributes:
 
  + id is the job identifier
  + payloads
@@ -216,6 +216,12 @@ module ATestWorker
  end
  ```
 
+ And then you have to add it to Lowkiq in your initializer file due to problems with autoloading:
+
+ ```ruby
+ Lowkiq.workers = [ ATestWorker ]
+ ```
+
  Default values:
 
  ```ruby
@@ -234,10 +240,10 @@ end
  ATestWorker.perform_async [
  { id: 0 },
  { id: 1, payload: { attr: 'v1' } },
- { id: 2, payload: { attr: 'v1' }, score: Time.now.to_i, perform_in: Time.now.to_i },
+ { id: 2, payload: { attr: 'v1' }, score: Time.now.to_f, perform_in: Time.now.to_f },
  ]
  # payload by default equals to ""
- # score and perform_in by default equals to Time.now.to_i
+ # score and perform_in by default equals to Time.now.to_f
  ```
 
  It is possible to redefine `perform_async` and calculate `id`, `score` and `perform_in` in the worker code:
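The `to_i` → `to_f` change in this hunk is what changelog item #23 is about: with integer timestamps, two payloads of the same id created within one second get equal scores, and their relative order is lost. A standalone illustration:

```ruby
# Two versions of the same entity created 1 ms apart.
older = 1536323288.001
newer = 1536323288.002

older.to_i == newer.to_i  # integer scores cannot tell the versions apart
older < newer             # float scores preserve the order, so the
                          # "minimal score wins" merge rule keeps working
```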
@@ -272,8 +278,9 @@ ATestWorker.perform_async 1000.times.map { |id| { payload: {id: id} } }
 
  Options and their default values are:
 
+ + `Lowkiq.workers = []` - list of workers to use. Since 1.1.0.
  + `Lowkiq.poll_interval = 1` - delay in seconds between queue polling for new jobs.
- Used only if the queue was empty at previous cycle or error was occured.
+ Used only if a queue was empty in a previous cycle or an error occurred.
  + `Lowkiq.threads_per_node = 5` - threads per node.
  + `Lowkiq.redis = ->() { Redis.new url: ENV.fetch('REDIS_URL') }` - redis connection options
  + `Lowkiq.client_pool_size = 5` - redis pool size for queueing jobs
@@ -327,7 +334,7 @@ Lowkiq.redis = ->() { Redis.new url: ENV.fetch('REDIS_URL'), driver: :hiredis }
 
  `path_to_app.rb` must load app. [Example](examples/dummy/lib/app.rb).
 
- Lazy loading of workers modules is unacceptable.
+ The lazy loading of worker modules is unacceptable.
  For preliminarily loading modules use
  `require`
  or [`require_dependency`](https://api.rubyonrails.org/classes/ActiveSupport/Dependencies/Loadable.html#method-i-require_dependency)
@@ -335,15 +342,15 @@ for Ruby on Rails.
 
  ## Shutdown
 
- Send TERM or INT signal to process (Ctrl-C).
- Process will wait for executed jobs to finish.
+ Send TERM or INT signal to the process (Ctrl-C).
+ The process will wait for executed jobs to finish.
 
- Note that if queue is empty, process sleeps `poll_interval` seconds,
+ Note that if a queue is empty, the process sleeps `poll_interval` seconds,
  therefore, the process will not stop until the `poll_interval` seconds have passed.
 
  ## Debug
 
- To get trace of all threads of app:
+ To get trace of all threads of an app:
 
  ```
  kill -TTIN <pid>
@@ -357,11 +364,22 @@ docker-compose run --rm --service-port app bash
  bundle
  rspec
  cd examples/dummy ; bundle exec ../../exe/lowkiq -r ./lib/app.rb
+
+ # open localhost:8080
+ ```
+
+ ```
+ docker-compose run --rm --service-port frontend bash
+ npm run dumb
+ # open localhost:8081
+
+ # npm run build
+ # npm run web-api
  ```
 
  ## Exceptions
 
- `StandardError` thrown by worker are handled with middleware. Such exceptions doesn't lead to process stop.
+ `StandardError` thrown by a worker are handled with middleware. Such exceptions don't lead to process stops.
 
  All other exceptions cause the process to stop.
  Lowkiq will wait for job execution by other threads.
@@ -384,15 +402,18 @@ end
  ```ruby
  # config/initializers/lowkiq.rb
 
- # loading all lowkiq workers
- Dir["#{Rails.root}/app/lowkiq_workers/**/*.rb"].each { |file| require_dependency file }
-
  # configuration:
  # Lowkiq.redis = -> { Redis.new url: ENV.fetch('LOWKIQ_REDIS_URL') }
  # Lowkiq.threads_per_node = ENV.fetch('LOWKIQ_THREADS_PER_NODE').to_i
  # Lowkiq.client_pool_size = ENV.fetch('LOWKIQ_CLIENT_POOL_SIZE').to_i
  # ...
 
+ # since 1.1.0
+ Lowkiq.workers = [
+ ATestWorker,
+ OtherCoolWorker
+ ]
+
  Lowkiq.server_middlewares << -> (worker, batch, &block) do
  logger = Rails.logger
  tag = "#{worker}-#{Thread.current.object_id}"
@@ -480,10 +501,10 @@ worker C: 0
  worker D: 0, 1
  ```
 
- Lowkiq uses fixed amount of threads for job processing, therefore it is necessary to distribute shards between threads.
+ Lowkiq uses a fixed number of threads for job processing, therefore it is necessary to distribute shards between threads.
  Splitter does it.
 
- To define a set of shards, which is being processed by thread, lets move them to one list:
+ To define a set of shards, which is being processed by a thread, let's move them to one list:
 
  ```
  A0, A1, A2, B0, B1, B2, B3, C0, D0, D1
@@ -500,7 +521,7 @@ t1: A1, B1, C0
  t2: A2, B2, D0
  ```
 
- Besides Default Lowkiq has ByNode splitter. It allows to divide the load by several processes (nodes).
+ Besides Default Lowkiq has the ByNode splitter. It allows dividing the load by several processes (nodes).
 
  ```
  Lowkiq.build_splitter = -> () do
@@ -511,7 +532,7 @@ Lowkiq.build_splitter = -> () do
  end
  ```
 
- So, instead of single process you need to execute multiple ones and to set environment variables up:
+ So, instead of a single process, you need to execute multiple ones and to set environment variables up:
 
  ```
  # process 0
@@ -523,18 +544,18 @@ LOWKIQ_NUMBER_OF_NODES=2 LOWKIQ_NODE_NUMBER=1 bundle exec lowkiq -r ./lib/app.rb
 
  Summary amount of threads are equal product of `ENV.fetch('LOWKIQ_NUMBER_OF_NODES')` and `Lowkiq.threads_per_node`.
 
- You can also write your own splitter if your app needs extra distribution of shards between threads or nodes.
+ You can also write your own splitter if your app needs an extra distribution of shards between threads or nodes.
 
  ## Scheduler
 
- Every thread processes a set of shards. Scheduler select shard for processing.
- Every thread has it's own instance of scheduler.
+ Every thread processes a set of shards. The scheduler selects shard for processing.
+ Every thread has its own instance of the scheduler.
 
  Lowkiq has 2 schedulers for your choice.
- `Seq` sequentally looks over shards.
+ `Seq` sequentially looks over shards.
  `Lag` chooses shard with the oldest job minimizing the lag. It's used by default.
 
- Scheduler can be set up through settings:
+ The scheduler can be set up through settings:
 
  ```
  Lowkiq.build_scheduler = ->() { Lowkiq.build_seq_scheduler }
@@ -547,13 +568,13 @@ Lowkiq.build_scheduler = ->() { Lowkiq.build_lag_scheduler }
  ### `SomeWorker.shards_count`
 
  Sum of `shards_count` of all workers shouldn't be less than `Lowkiq.threads_per_node`
- otherwise threads will stay idle.
+ otherwise, threads will stay idle.
 
  Sum of `shards_count` of all workers can be equal to `Lowkiq.threads_per_node`.
- In this case thread processes a single shard. This makes sense only with uniform queue load.
+ In this case, a thread processes a single shard. This makes sense only with a uniform queue load.
 
  Sum of `shards_count` of all workers can be more than `Lowkiq.threads_per_node`.
- In this case `shards_count` can be counted as a priority.
+ In this case, `shards_count` can be counted as a priority.
  The larger it is, the more often the tasks of this queue will be processed.
 
  There is no reason to set `shards_count` of one worker more than `Lowkiq.threads_per_node`,
@@ -561,8 +582,8 @@ because every thread will handle more than one shard from this queue, so it incr
 
  ### `SomeWorker.max_retry_count`
 
- From `retry_in` and `max_retry_count`, you can calculate approximate time that payload of job will be in a queue.
- After `max_retry_count` is reached the payload with a minimal score will be moved to a morgue.
+ From `retry_in` and `max_retry_count`, you can calculate the approximate time that a payload of a job will be in a queue.
+ After `max_retry_count` is reached a payload with a minimal score will be moved to a morgue.
 
  For default `retry_in` we receive the following table.
 
@@ -590,9 +611,9 @@ end
 
  ## Changing of worker's shards amount
 
- Try to count amount of shards right away and don't change it in future.
+ Try to count the number of shards right away and don't change it in the future.
 
- If you can disable adding of new jobs, wait for queues to get empty and deploy the new version of code with changed amount of shards.
+ If you can disable adding of new jobs, wait for queues to get empty, and deploy the new version of code with a changed amount of shards.
 
  If you can't do it, follow the next steps:
 
@@ -610,7 +631,7 @@ module ATestWorker
  end
  ```
 
- Set the number of shards and new queue name:
+ Set the number of shards and the new queue name:
 
  ```ruby
  module ATestWorker
@@ -625,7 +646,7 @@ module ATestWorker
  end
  ```
 
- Add a worker moving jobs from the old queue to a new one:
+ Add a worker moving jobs from the old queue to the new one:
 
  ```ruby
  module ATestMigrationWorker
@@ -247,10 +247,10 @@ end
  ATestWorker.perform_async [
  { id: 0 },
  { id: 1, payload: { attr: 'v1' } },
- { id: 2, payload: { attr: 'v1' }, score: Time.now.to_i, perform_in: Time.now.to_i },
+ { id: 2, payload: { attr: 'v1' }, score: Time.now.to_f, perform_in: Time.now.to_f },
  ]
  # payload by default equals ""
- # score and perform_in by default equal Time.now.to_i
+ # score and perform_in by default equal Time.now.to_f
  ```
 
  You can redefine `perform_async` and compute `id`, `score` and `perform_in` in the worker:
@@ -10,7 +10,6 @@ require "lowkiq/version"
  require "lowkiq/utils"
  require "lowkiq/script"
 
- require "lowkiq/extend_tracker"
  require "lowkiq/option_parser"
 
  require "lowkiq/splitters/default"
@@ -42,7 +41,8 @@ module Lowkiq
  :server_middlewares, :on_server_init,
  :build_scheduler, :build_splitter,
  :last_words,
- :dump_payload, :load_payload
+ :dump_payload, :load_payload,
+ :workers
 
  def server_redis_pool
  @server_redis_pool ||= ConnectionPool.new(size: threads_per_node, timeout: pool_timeout, &redis)
@@ -63,10 +63,6 @@ module Lowkiq
  end
  end
 
- def workers
- Worker.extended_modules
- end
-
  def shard_handlers
  self.workers.flat_map do |w|
  ShardHandler.build_many w, self.server_wrapper
@@ -112,4 +108,5 @@ module Lowkiq
  self.last_words = ->(ex) {}
  self.dump_payload = ::Marshal.method :dump
  self.load_payload = ::Marshal.method :load
+ self.workers = []
  end
@@ -64,11 +64,9 @@ module Lowkiq
 
  def coerce_lag(res)
  _id, score = res.first
-
- return 0 if score.nil?
- return 1 if score == 0 # in case of Actions#perform_all_jobs_now
- lag = @timestamp.call - score.to_i
- return 0 if lag < 0
+ return 0.0 if score.nil?
+ lag = @timestamp.call - score
+ return 0.0 if lag < 0.0
  lag
  end
 
@@ -41,10 +41,9 @@ module Lowkiq
 
  def coerce_lag(res)
  _id, score = res.first
-
- return 0 if score.nil?
- lag = @timestamp.call - score.to_i
- return 0 if lag < 0
+ return 0.0 if score.nil?
+ lag = @timestamp.call - score
+ return 0.0 if lag < 0.0
  lag
  end
  end
@@ -12,7 +12,7 @@ module Lowkiq
  metrics = @metrics.call identifiers
  shard_handler, _lag =
  shard_handlers.zip(metrics.map(&:lag))
- .select { |(_, lag)| lag > 0 }
+ .select { |(_, lag)| lag > 0.0 }
  .max_by { |(_, lag)| lag }
 
  if shard_handler
@@ -29,7 +29,7 @@ module Lowkiq
 
  module Timestamp
  def self.now
- Time.now.to_i
+ Time.now.to_f
  end
  end
  end
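The scheduler hunks above share one idea: with float timestamps the lag is computed directly as a float difference, and the old `score.to_i` coercion and `score == 0` special case disappear. A standalone sketch of the resulting logic (simplified; `res` mimics the `[[id, score]]` shape the queue yields):

```ruby
# Simplified stand-in for the coerce_lag logic in the hunks above.
def coerce_lag(res, now)
  _id, score = res.first || []
  return 0.0 if score.nil?  # empty shard: no lag
  lag = now - score         # float seconds, sub-second precision preserved
  lag < 0.0 ? 0.0 : lag     # jobs scheduled in the future contribute no lag
end

coerce_lag([], 100.0)                 # empty shard
coerce_lag([["job-1", 98.5]], 100.0)  # oldest job is 1.5 s behind
coerce_lag([["job-2", 101.0]], 100.0) # scheduled in the future
```

The `Lag` scheduler then simply picks the handler with the largest positive lag, as the `select`/`max_by` hunk shows.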
@@ -1,3 +1,3 @@
  module Lowkiq
- VERSION = "1.0.6"
+ VERSION = "1.1.0"
  end
@@ -18,7 +18,7 @@ module Lowkiq
  total = {
  length: metrics.map(&:length).reduce(&:+).to_i,
  morgue_length: metrics.map(&:morgue_length).reduce(&:+).to_i,
- lag: metrics.map(&:lag).max.to_i,
+ lag: metrics.map(&:lag).max.to_f,
  }
  {
  total: total,
@@ -1,7 +1,5 @@
  module Lowkiq
  module Worker
- extend ExtendTracker
-
  attr_accessor :shards_count,
  :batch_size,
  :max_retry_count,
@@ -34,4 +34,5 @@ Gem::Specification.new do |spec|
  spec.add_development_dependency "rspec", "~> 3.0"
  spec.add_development_dependency "rspec-mocks", "~> 3.8"
  spec.add_development_dependency "rack-test", "~> 1.1"
+ spec.add_development_dependency "pry-byebug", "~> 3.9.0"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: lowkiq
  version: !ruby/object:Gem::Version
- version: 1.0.6
+ version: 1.1.0
  platform: ruby
  authors:
  - Mikhail Kuzmin
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2020-10-08 00:00:00.000000000 Z
+ date: 2021-01-25 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: redis
@@ -134,6 +134,20 @@ dependencies:
  - - "~>"
  - !ruby/object:Gem::Version
  version: '1.1'
+ - !ruby/object:Gem::Dependency
+ name: pry-byebug
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: 3.9.0
+ type: :development
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: 3.9.0
  description: Lowkiq
  email:
  - Mihail.Kuzmin@bia-tech.ru
@@ -144,6 +158,7 @@ extra_rdoc_files: []
  files:
  - ".gitignore"
  - ".rspec"
+ - CHANGELOG.md
  - Gemfile
  - Gemfile.lock
  - LICENSE.md
@@ -157,7 +172,6 @@ files:
  - docker-compose.yml
  - exe/lowkiq
  - lib/lowkiq.rb
- - lib/lowkiq/extend_tracker.rb
  - lib/lowkiq/option_parser.rb
  - lib/lowkiq/queue/actions.rb
  - lib/lowkiq/queue/fetch.rb
lib/lowkiq/extend_tracker.rb DELETED
@@ -1,13 +0,0 @@
- module Lowkiq
- module ExtendTracker
- def extended(mod)
- @extended_modules ||= []
- @extended_modules << mod
- @extended_modules.sort_by!(&:name).uniq!
- end
-
- def extended_modules
- @extended_modules
- end
- end
- end
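For context on why this file could be deleted: `ExtendTracker` recorded every module that extended `Lowkiq::Worker`, so the worker list was discovered as a side effect of loading worker files. Under lazy autoloading, a worker whose file has not been required yet never runs `extend`, so it silently stays off the list (issue #22). A minimal standalone illustration of that registration pattern, with hypothetical names:

```ruby
# Minimal illustration of the removed self-registration pattern (hypothetical names).
module Registry
  def self.extended(mod)   # fired each time some module does `extend Registry`
    (@modules ||= []) << mod
  end

  def self.modules
    @modules || []
  end
end

module EagerWorker
  extend Registry          # loaded eagerly, so it registers itself
end

Registry.modules           # contains EagerWorker
# A worker whose file autoloading has not required yet simply does not exist,
# so it was never registered. Explicit `Lowkiq.workers = [...]` removes that
# dependency on load order.
```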