lowkiq 1.0.6 → 1.2.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 765d6d3c54595e43fe428927bab38b8e797e3727bd4bc229009133e3a3df2356
- data.tar.gz: d7dcf125b3e9734f749f1c0ae385f2b48e4fb1b75f2574082bfecc95b6122666
+ metadata.gz: 17cdb6368062bd5a178c3152fcacb7d8d0e390206c220f76b6940ddd79f0ac31
+ data.tar.gz: 46b6ac23e95ce8054799b4bae22c07b0375895274f69bdd494a9da882fc9e450
  SHA512:
- metadata.gz: 987fe866751990b60dbdac866a2789c85f8a3ef5743170d77a7b03a88c368a2f94ce051d9def1e037f36aa42b3b404eb3fd5e0acce00fa760f54223b387a7451
- data.tar.gz: c39de8f08d3b98a4eb4ffa586435d1935c3bf803b936591106ff97eb38486323c34a7e5c25f24f122592a08b2661df2f201802508f4c6a3b61ecdda2a04acfe3
+ metadata.gz: a1d9f295b8b58a8412bba69dd4f162bbed17ce3a64d3543992af2397213db878a4f62953dfa1b9b888d2ffca41f7f56745015da6d78f791c553205d91f004377
+ data.tar.gz: c834e85f721561e632cea9923000128b02ccc6d7a3a9e4c29e99301577a1ff45950760df4bf621c68c92550ddc59763d86ae109851b7d298e499d3ee736cd961
data/CHANGELOG.md ADDED
@@ -0,0 +1,8 @@
+ # 1.1.0
+
+ * Timestamps are float rather than int. #23
+ * Due to problems with autoloading, you now need to manually assign a list of workers. #22
+
+ ```ruby
+ Lowkiq.workers = [ ATestWorker, ATest2Worker ]
+ ```
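For context, a minimal sketch of what these two changelog entries mean for an application upgrading to 1.1.0 (`OrderWorker` and the file placement are illustrative assumptions, not taken from the changelog):

```ruby
# Hypothetical app code; OrderWorker stands in for any Lowkiq worker module.

# Since 1.1.0 the worker list is assigned explicitly instead of being
# discovered through autoloading (#22).
Lowkiq.workers = [ OrderWorker ]

# Timestamps are floats rather than ints (#23), so fractional seconds in
# `score` and `perform_in` are preserved when enqueueing.
OrderWorker.perform_async [
  { id: "42", payload: { attr: "v1" }, score: Time.now.to_f, perform_in: Time.now.to_f }
]
```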
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
  PATH
  remote: .
  specs:
- lowkiq (1.0.5)
+ lowkiq (1.1.0)
  connection_pool (~> 2.2, >= 2.2.2)
  rack (>= 1.5.0)
  redis (>= 4.0.1, < 5)
@@ -9,13 +9,22 @@ PATH
  GEM
  remote: https://rubygems.org/
  specs:
+ byebug (11.1.3)
+ coderay (1.1.3)
  connection_pool (2.2.3)
  diff-lcs (1.3)
+ method_source (1.0.0)
+ pry (0.13.1)
+ coderay (~> 1.1)
+ method_source (~> 1.0)
+ pry-byebug (3.9.0)
+ byebug (~> 11.0)
+ pry (~> 0.13.0)
  rack (2.2.2)
  rack-test (1.1.0)
  rack (>= 1.0, < 3)
  rake (12.3.3)
- redis (4.2.2)
+ redis (4.2.5)
  rspec (3.9.0)
  rspec-core (~> 3.9.0)
  rspec-expectations (~> 3.9.0)
@@ -36,6 +45,7 @@ PLATFORMS
  DEPENDENCIES
  bundler (~> 2.1.0)
  lowkiq!
+ pry-byebug (~> 3.9.0)
  rack-test (~> 1.1)
  rake (~> 12.3.0)
  rspec (~> 3.0)
data/README.md CHANGED
@@ -28,13 +28,15 @@ Ordered background jobs processing
  * [Recommendations on configuration](#recommendations-on-configuration)
  + [`SomeWorker.shards_count`](#someworkershards_count)
  + [`SomeWorker.max_retry_count`](#someworkermax_retry_count)
+ * [Changing of worker's shards amount](#changing-of-workers-shards-amount)
+ * [Extended error info](#extended-error-info)

  ## Rationale

  We've faced some problems using Sidekiq while processing messages from a side system.
- For instance, the message is a data of an order in particular time.
- The side system will send a new data of an order on an every change.
- Orders are frequently updated and a queue containts some closely located messages of the same order.
+ For instance, the message is the data of an order at a particular time.
+ The side system will send new data of an order on every change.
+ Orders are frequently updated and a queue contains some closely located messages of the same order.

  Sidekiq doesn't guarantee a strict message order, because a queue is processed by multiple threads.
  For example, we've received 2 messages: M1 and M2.
@@ -43,8 +45,8 @@ so M2 can be processed before M1.

  Parallel processing of such kind of messages can result in:

- + dead locks
- + overwriting new data with old one
+ + deadlocks
+ + overwriting new data with an old one

  Lowkiq has been created to eliminate such problems by avoiding parallel task processing within one entity.

@@ -52,17 +54,17 @@ Lowkiq has been created to eliminate such problems by avoiding parallel task pro

  Lowkiq's queues are reliable i.e.,
  Lowkiq saves information about a job being processed
- and returns incompleted jobs back to the queue on startup.
+ and returns uncompleted jobs to the queue on startup.

  Jobs in queues are ordered by preassigned execution time, so they are not FIFO queues.

- Every job has it's own identifier. Lowkiq guarantees that jobs with equal id are processed by the same thread.
+ Every job has its identifier. Lowkiq guarantees that jobs with equal IDs are processed by the same thread.

  Every queue is divided into a permanent set of shards.
- A job is placed into particular shard based on an id of the job.
+ A job is placed into a particular shard based on an id of the job.
  So jobs with the same id are always placed into the same shard.
  All jobs of the shard are always processed with the same thread.
- This guarantees the sequently processing of jobs with the same ids and excludes the possibility of locks.
+ This guarantees the sequential processing of jobs with the same ids and excludes the possibility of locks.

  Besides the id, every job has a payload.
  Payloads are accumulated for jobs with the same id.
@@ -71,8 +73,8 @@ It's useful when you need to process only the last message and drop all previous

  A worker corresponds to a queue and contains a job processing logic.

- Fixed amount of threads is used to process all job of all queues.
- Adding or removing queues or it's shards won't affect the amount of threads.
+ The fixed number of threads is used to process all jobs of all queues.
+ Adding or removing queues or their shards won't affect the number of threads.

  ## Sidekiq comparison

@@ -83,16 +85,16 @@ But if you use plugins like
  [sidekiq-merger](https://github.com/dtaniwaki/sidekiq-merger)
  or implement your own lock system, you should look at Lowkiq.

- For example, sidekiq-grouping accumulates a batch of jobs than enqueues it and accumulates a next batch.
- With this approach queue can contains two batches with a data of the same order.
+ For example, sidekiq-grouping accumulates a batch of jobs then enqueues it and accumulates the next batch.
+ With this approach, a queue can contain two batches with data of the same order.
  These batches are parallel processed with different threads, so we come back to the initial problem.

- Lowkiq was designed to avoid any types of locking.
+ Lowkiq was designed to avoid any type of locking.

  Furthermore, Lowkiq's queues are reliable. Only Sidekiq Pro or plugins can add such functionality.

- This [benchmark](examples/benchmark) shows overhead on redis usage.
- This is the results for 5 threads, 100,000 blank jobs:
+ This [benchmark](examples/benchmark) shows overhead on Redis usage.
+ These are the results for 5 threads, 100,000 blank jobs:

  + lowkiq: 155 sec or 1.55 ms per job
  + lowkiq +hiredis: 80 sec or 0.80 ms per job
@@ -100,29 +102,29 @@ This is the results for 5 threads, 100,000 blank jobs:

  This difference is related to different queues structure.
  Sidekiq uses one list for all workers and fetches the job entirely for O(1).
- Lowkiq uses several data structures, including sorted sets for storing ids of jobs.
+ Lowkiq uses several data structures, including sorted sets for keeping ids of jobs.
  So fetching only an id of a job takes O(log(N)).

  ## Queue

  Please, look at [the presentation](https://docs.google.com/presentation/d/e/2PACX-1vRdwA2Ck22r26KV1DbY__XcYpj2FdlnR-2G05w1YULErnJLB_JL1itYbBC6_JbLSPOHwJ0nwvnIHH2A/pub?start=false&loop=false&delayms=3000).

- Every job has following attributes:
+ Every job has the following attributes:

  + `id` is a job identifier with string type.
- + `payloads` is a sorted set of payloads ordered by it's score. Payload is an object. Score is a real number.
- + `perform_in` is planned execution time. It's unix timestamp with real number type.
+ + `payloads` is a sorted set of payloads ordered by its score. A payload is an object. A score is a real number.
+ + `perform_in` is planned execution time. It's a Unix timestamp with a real number type.
  + `retry_count` is amount of retries. It's a real number.

- For example, `id` can be an identifier of replicated entity.
- `payloads` is a sorted set ordered by score of payload and resulted by grouping a payload of job by it's `id`.
- `payload` can be a ruby object, because it is serialized by `Marshal.dump`.
- `score` can be `payload`'s creation date (unix timestamp) or it's incremental version number.
- By default `score` and `perform_in` are current unix timestamp.
+ For example, `id` can be an identifier of a replicated entity.
+ `payloads` is a sorted set ordered by a score of payload and resulted by grouping a payload of the job by its `id`.
+ `payload` can be a ruby object because it is serialized by `Marshal.dump`.
+ `score` can be `payload`'s creation date (Unix timestamp) or it's an incremental version number.
+ By default, `score` and `perform_in` are current Unix timestamp.
  `retry_count` for new unprocessed job equals to `-1`,
  for one-time failed is `0`, so the planned retries are counted, not the performed ones.

- A job execution can be unsuccessful. In this case, its `retry_count` is incremented, new `perform_in` is calculated with determined formula and it moves back to a queue.
+ Job execution can be unsuccessful. In this case, its `retry_count` is incremented, the new `perform_in` is calculated with determined formula, and it moves back to a queue.

  In case of `retry_count` is getting `>=` `max_retry_count` an element of `payloads` with less (oldest) score is moved to a morgue,
  rest elements are moved back to the queue, wherein `retry_count` and `perform_in` are reset to `-1` and `now()` respectively.
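To make the `retry_count` bookkeeping described in this hunk concrete, here is a small illustrative sketch; the backoff formula is the worker default (`10 * (count + 1)`) documented later in this README, and the variable names are only for illustration:

```ruby
# Illustrative only: how retry_count evolves for one job that keeps failing.
retry_in = ->(count) { 10 * (count + 1) } # default backoff: 10, 20, 30, ... seconds

retry_count = -1                   # new, never-processed job
retry_count += 1                   # first failure  => 0 (planned retries are counted)
delay = retry_in.call(retry_count) # => 10 seconds until the next attempt
retry_count += 1                   # second failure => 1
delay = retry_in.call(retry_count) # => 20 seconds
# Once retry_count >= max_retry_count, the payload with the lowest (oldest) score
# moves to the morgue; the rest go back with retry_count = -1 and perform_in = now().
```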
@@ -146,14 +148,14 @@ If `max_retry_count = 1`, retries stop.

  They are applied when:

- + a job had been in a queue and a new one with the same id was added
- + a job was failed, but a new one with the same id had been added
- + a job from morgue was moved back to queue, but queue had had a job with the same id
+ + a job has been in a queue and a new one with the same id is added
+ + a job is failed, but a new one with the same id has been added
+ + a job from a morgue is moved back to a queue, but the queue has had a job with the same id

  Algorithm:

- + payloads is merged, minimal score is chosen for equal payloads
- + if a new job and queued job is merged, `perform_in` and `retry_count` is taken from the the job from the queue
+ + payloads are merged, the minimal score is chosen for equal payloads
+ + if a new job and queued job is merged, `perform_in` and `retry_count` is taken from the job from the queue
  + if a failed job and queued job is merged, `perform_in` and `retry_count` is taken from the failed one
  + if morgue job and queued job is merged, `perform_in = now()`, `retry_count = -1`

@@ -172,8 +174,8 @@ Example:
  { id: "1", payloads: #{"v1": 1, "v2": 3, "v3": 4}, retry_count: 0, perform_in: 1536323288 }
  ```

- Morgue is a part of the queue. Jobs in morgue are not processed.
- A job in morgue has following attributes:
+ A morgue is a part of a queue. Jobs in a morgue are not processed.
+ A job in a morgue has the following attributes:

  + id is the job identifier
  + payloads
@@ -204,6 +206,12 @@ module ATestWorker
  10 * (count + 1) # (i.e. 10, 20, 30, 40, 50)
  end

+ def self.retries_exhausted(batch)
+ batch.each do |job|
+ Rails.logger.info "retries exhausted for #{name} with error #{job[:error]}"
+ end
+ end
+
  def self.perform(payloads_by_id)
  # payloads_by_id is a hash map
  payloads_by_id.each do |id, payloads|
@@ -216,6 +224,12 @@ module ATestWorker
  end
  ```

+ And then you have to add it to Lowkiq in your initializer file due to problems with autoloading:
+
+ ```ruby
+ Lowkiq.workers = [ ATestWorker ]
+ ```
+
  Default values:

  ```ruby
@@ -234,10 +248,10 @@ end
  ATestWorker.perform_async [
  { id: 0 },
  { id: 1, payload: { attr: 'v1' } },
- { id: 2, payload: { attr: 'v1' }, score: Time.now.to_i, perform_in: Time.now.to_i },
+ { id: 2, payload: { attr: 'v1' }, score: Time.now.to_f, perform_in: Time.now.to_f },
  ]
  # payload by default equals to ""
- # score and perform_in by default equals to Time.now.to_i
+ # score and perform_in by default equals to Time.now.to_f
  ```

  It is possible to redefine `perform_async` and calculate `id`, `score` и `perform_in` in a worker code:
@@ -272,8 +286,9 @@ ATestWorker.perform_async 1000.times.map { |id| { payload: {id: id} } }

  Options and their default values are:

+ + `Lowkiq.workers = []`- list of workers to use. Since 1.1.0.
  + `Lowkiq.poll_interval = 1` - delay in seconds between queue polling for new jobs.
- Used only if the queue was empty at previous cycle or error was occured.
+ Used only if a queue was empty in a previous cycle or an error occurred.
  + `Lowkiq.threads_per_node = 5` - threads per node.
  + `Lowkiq.redis = ->() { Redis.new url: ENV.fetch('REDIS_URL') }` - redis connection options
  + `Lowkiq.client_pool_size = 5` - redis pool size for queueing jobs
@@ -286,6 +301,10 @@ Options and their default values are:
  + `Lowkiq.dump_payload = Marshal.method :dump`
  + `Lowkiq.load_payload = Marshal.method :load`

+ + `Lowkiq.format_error = -> (error) { error.message }` can be used to add error backtrace. Please see [Extended error info](#extended-error-info)
+ + `Lowkiq.dump_error = -> (msg) { msg }` can be used to implement a custom compression logic for errors. Recommended when using `Lowkiq.format_error`.
+ + `Lowkiq.load_error = -> (msg) { msg }` can be used to implement a custom decompression logic for errors.
+
  ```ruby
  $logger = Logger.new(STDOUT)

@@ -327,7 +346,7 @@ Lowkiq.redis = ->() { Redis.new url: ENV.fetch('REDIS_URL'), driver: :hiredis }

  `path_to_app.rb` must load app. [Example](examples/dummy/lib/app.rb).

- Lazy loading of workers modules is unacceptable.
+ The lazy loading of worker modules is unacceptable.
  For preliminarily loading modules use
  `require`
  or [`require_dependency`](https://api.rubyonrails.org/classes/ActiveSupport/Dependencies/Loadable.html#method-i-require_dependency)
@@ -335,15 +354,15 @@ for Ruby on Rails.

  ## Shutdown

- Send TERM or INT signal to process (Ctrl-C).
- Process will wait for executed jobs to finish.
+ Send TERM or INT signal to the process (Ctrl-C).
+ The process will wait for executed jobs to finish.

- Note that if queue is empty, process sleeps `poll_interval` seconds,
+ Note that if a queue is empty, the process sleeps `poll_interval` seconds,
  therefore, the process will not stop until the `poll_interval` seconds have passed.

  ## Debug

- To get trace of all threads of app:
+ To get trace of all threads of an app:

  ```
  kill -TTIN <pid>
@@ -357,11 +376,22 @@ docker-compose run --rm --service-port app bash
  bundle
  rspec
  cd examples/dummy ; bundle exec ../../exe/lowkiq -r ./lib/app.rb
+
+ # open localhost:8080
+ ```
+
+ ```
+ docker-compose run --rm --service-port frontend bash
+ npm run dumb
+ # open localhost:8081
+
+ # npm run build
+ # npm run web-api
  ```

  ## Exceptions

- `StandardError` thrown by worker are handled with middleware. Such exceptions doesn't lead to process stop.
+ `StandardError` thrown by a worker are handled with middleware. Such exceptions don't lead to process stops.

  All other exceptions cause the process to stop.
  Lowkiq will wait for job execution by other threads.
@@ -384,15 +414,18 @@ end
  ```ruby
  # config/initializers/lowkiq.rb

- # loading all lowkiq workers
- Dir["#{Rails.root}/app/lowkiq_workers/**/*.rb"].each { |file| require_dependency file }
-
  # configuration:
  # Lowkiq.redis = -> { Redis.new url: ENV.fetch('LOWKIQ_REDIS_URL') }
  # Lowkiq.threads_per_node = ENV.fetch('LOWKIQ_THREADS_PER_NODE').to_i
  # Lowkiq.client_pool_size = ENV.fetch('LOWKIQ_CLIENT_POOL_SIZE').to_i
  # ...

+ # since 1.1.0
+ Lowkiq.workers = [
+ ATestWorker,
+ OtherCoolWorker
+ ]
+
  Lowkiq.server_middlewares << -> (worker, batch, &block) do
  logger = Rails.logger
  tag = "#{worker}-#{Thread.current.object_id}"
@@ -465,6 +498,17 @@ Lowkiq.on_server_init = ->() do
  end
  ```

+ Note: In Rails 7, the worker files wouldn't be loaded by default in the initializers since they are managed by the `main` autoloader. To solve this, we can wrap setting the workers around the `to_prepare` configuration.
+
+ ```ruby
+ Rails.application.config.to_prepare do
+ Lowkiq.workers = [
+ ATestWorker,
+ OtherCoolWorker
+ ]
+ end
+ ```
+
  Execution: `bundle exec lowkiq -r ./config/environment.rb`

 
@@ -480,10 +524,10 @@ worker C: 0
480
524
  worker D: 0, 1
481
525
  ```
482
526
 
483
- Lowkiq uses fixed amount of threads for job processing, therefore it is necessary to distribute shards between threads.
527
+ Lowkiq uses a fixed number of threads for job processing, therefore it is necessary to distribute shards between threads.
484
528
  Splitter does it.
485
529
 
486
- To define a set of shards, which is being processed by thread, lets move them to one list:
530
+ To define a set of shards, which is being processed by a thread, let's move them to one list:
487
531
 
488
532
  ```
489
533
  A0, A1, A2, B0, B1, B2, B3, C0, D0, D1
@@ -500,7 +544,7 @@ t1: A1, B1, C0
  t2: A2, B2, D0
  ```

- Besides Default Lowkiq has ByNode splitter. It allows to divide the load by several processes (nodes).
+ Besides Default Lowkiq has the ByNode splitter. It allows dividing the load by several processes (nodes).

  ```
  Lowkiq.build_splitter = -> () do
@@ -511,7 +555,7 @@ Lowkiq.build_splitter = -> () do
  end
  ```

- So, instead of single process you need to execute multiple ones and to set environment variables up:
+ So, instead of a single process, you need to execute multiple ones and to set environment variables up:

  ```
  # process 0
@@ -523,18 +567,18 @@ LOWKIQ_NUMBER_OF_NODES=2 LOWKIQ_NODE_NUMBER=1 bundle exec lowkiq -r ./lib/app.rb

  Summary amount of threads are equal product of `ENV.fetch('LOWKIQ_NUMBER_OF_NODES')` and `Lowkiq.threads_per_node`.

- You can also write your own splitter if your app needs extra distribution of shards between threads or nodes.
+ You can also write your own splitter if your app needs an extra distribution of shards between threads or nodes.

  ## Scheduler

- Every thread processes a set of shards. Scheduler select shard for processing.
- Every thread has it's own instance of scheduler.
+ Every thread processes a set of shards. The scheduler selects shard for processing.
+ Every thread has its own instance of the scheduler.

  Lowkiq has 2 schedulers for your choice.
- `Seq` sequentally looks over shards.
+ `Seq` sequentially looks over shards.
  `Lag` chooses shard with the oldest job minimizing the lag. It's used by default.

- Scheduler can be set up through settings:
+ The scheduler can be set up through settings:

  ```
  Lowkiq.build_scheduler = ->() { Lowkiq.build_seq_scheduler }
@@ -547,13 +591,13 @@ Lowkiq.build_scheduler = ->() { Lowkiq.build_lag_scheduler }
  ### `SomeWorker.shards_count`

  Sum of `shards_count` of all workers shouldn't be less than `Lowkiq.threads_per_node`
- otherwise threads will stay idle.
+ otherwise, threads will stay idle.

  Sum of `shards_count` of all workers can be equal to `Lowkiq.threads_per_node`.
- In this case thread processes a single shard. This makes sense only with uniform queue load.
+ In this case, a thread processes a single shard. This makes sense only with a uniform queue load.

  Sum of `shards_count` of all workers can be more than `Lowkiq.threads_per_node`.
- In this case `shards_count` can be counted as a priority.
+ In this case, `shards_count` can be counted as a priority.
  The larger it is, the more often the tasks of this queue will be processed.

  There is no reason to set `shards_count` of one worker more than `Lowkiq.threads_per_node`,
@@ -561,8 +605,8 @@ because every thread will handle more than one shard from this queue, so it incr

  ### `SomeWorker.max_retry_count`

- From `retry_in` and `max_retry_count`, you can calculate approximate time that payload of job will be in a queue.
- After `max_retry_count` is reached the payload with a minimal score will be moved to a morgue.
+ From `retry_in` and `max_retry_count`, you can calculate the approximate time that a payload of a job will be in a queue.
+ After `max_retry_count` is reached a payload with a minimal score will be moved to a morgue.

  For default `retry_in` we receive the following table.

@@ -590,9 +634,9 @@ end

  ## Changing of worker's shards amount

- Try to count amount of shards right away and don't change it in future.
+ Try to count the number of shards right away and don't change it in the future.

- If you can disable adding of new jobs, wait for queues to get empty and deploy the new version of code with changed amount of shards.
+ If you can disable adding of new jobs, wait for queues to get empty, and deploy the new version of code with a changed amount of shards.

  If you can't do it, follow the next steps:

@@ -610,7 +654,7 @@ module ATestWorker
  end
  ```

- Set the number of shards and new queue name:
+ Set the number of shards and the new queue name:

  ```ruby
  module ATestWorker
@@ -625,7 +669,7 @@ module ATestWorker
  end
  ```

- Add a worker moving jobs from the old queue to a new one:
+ Add a worker moving jobs from the old queue to the new one:

  ```ruby
  module ATestMigrationWorker
@@ -645,3 +689,23 @@ module ATestMigrationWorker
  end
  end
  ```
+
+ ## Extended error info
+ For failed jobs, lowkiq only stores `error.message` by default. This can be configured by using `Lowkiq.format_error` setting.
+ `Lowkiq.dump_error` and `Lowkiq.load_error` can be used to compress and decompress the error messages respectively.
+ Example:
+ ```ruby
+ Lowkiq.format_error = -> (error) { error.full_message(highlight: false) }
+
+ Lowkiq.dump_error = Proc.new do |msg|
+ compressed = Zlib::Deflate.deflate(msg.to_s)
+ Base64.encode64(compressed)
+ end
+
+ Lowkiq.load_error = Proc.new do |input|
+ decoded = Base64.decode64(input)
+ Zlib::Inflate.inflate(decoded)
+ rescue
+ input
+ end
+ ```
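A usage note on the example above: it relies on `Zlib` and `Base64` from the standard library, so a plain (non-Rails) setup would need the requires shown below; the round-trip check is only an illustrative sketch using local procs, not Lowkiq API calls:

```ruby
require 'zlib'
require 'base64' # a bundled gem since Ruby 3.4; add it to the Gemfile there

# Hypothetical round-trip check mirroring the dump/load pair above.
dump_error = proc { |msg| Base64.encode64(Zlib::Deflate.deflate(msg.to_s)) }
load_error = proc do |input|
  Zlib::Inflate.inflate(Base64.decode64(input))
rescue Zlib::Error, ArgumentError
  input # fall back to the raw value, as the README example does
end

original = "retries exhausted: something went wrong"
raise "round-trip failed" unless load_error.call(dump_error.call(original)) == original
```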
data/README.ru.md CHANGED
@@ -247,10 +247,10 @@ end
  ATestWorker.perform_async [
  { id: 0 },
  { id: 1, payload: { attr: 'v1' } },
- { id: 2, payload: { attr: 'v1' }, score: Time.now.to_i, perform_in: Time.now.to_i },
+ { id: 2, payload: { attr: 'v1' }, score: Time.now.to_f, perform_in: Time.now.to_f },
  ]
  # payload по умолчанию равен ""
- # score и perform_in по умолчанию равны Time.now.to_i
+ # score и perform_in по умолчанию равны Time.now.to_f
  ```

  Вы можете переопределить `perform_async` и вычислять `id`, `score` и `perform_in` в воркере: