sidekiq_bulk_job 0.1.4 → 0.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
- SHA256:
- metadata.gz: 795d7a4f3d359d67ee4cf6894df0fc14193da298e778e779da208f38168f6a96
- data.tar.gz: baf26c1a2d731da06faed59c9824298e4c22c7ae3cf8a67e4fb7c29fa2d4860e
+ SHA1:
+ metadata.gz: ca8157b27f077c0db69ac593ad1fcbd8c14e0eab
+ data.tar.gz: 6c87da78b2105230eb522ff3dc6c09f4c73c3540
  SHA512:
- metadata.gz: ac4f23ee0e7a81ad91ee343c3c5df63baf656d48a3eebe2925f18cd6e87589f519c477f3f2c49c4cf0fa923586506591ee81014fce20a8ff848ab63a8df0b195
- data.tar.gz: b770c30598a61234cbcd1534c84ed9f7f0782b47f3315c51e1aa2368b2a82eabdcc1cdbc8b062bbce491e7b726ef662e74448dd6474abd9a89be5c9d6ad7cace
+ metadata.gz: 5ba3674183b7f3fa818d4c2bec3f8b6d3f8a632afcd98d38a3beff78e165ad8fe2078d0149dc7601e6e78ec80914f10cba35f819707f42f45cef756fe39cf546
+ data.tar.gz: afe1b3f3787cfe828da123984323cb0b45d4877388238b7b080e8b5ea33d1e718e5225ac6f32e8467a2f77dc7340fd791ad17a3e22e8166d3c9a0c03bf8ca3b7
data/README.md CHANGED
@@ -1,8 +1,6 @@
  # SidekiqBulkJob
 
- Welcome to your new gem! In this directory, you'll find the files you need to be able to package up your Ruby library into a gem. Put your Ruby code in the file `lib/sidekiq_bulk_job`. To experiment with that code, run `bin/console` for an interactive prompt.
-
- TODO: Delete this and the text above, and describe your gem
+ A tool that collects jobs of the same class and runs them together in batches.
 
  ## Installation
 
@@ -22,26 +20,226 @@ Or install it yourself as:
 
  ## Usage
 
- ###
+ ### Initialization
+
+ ##### Parameters:
+
+ * redis: Redis client.
+ * logger: log object, default Logger.new(STDOUT).
+ * process_fail: a callback invoked when a job fails.
+ * async_delay: how long to wait before flushing collected jobs, default 60 seconds.
+ * scheduled_delay: delay for scheduled jobs, default 10 seconds.
+ * queue: default Sidekiq queue. A batch job runs on the queue its worker defines; this queue is used when the worker does not set one.
+ * batch_size: maximum number of jobs batched into one batch job, default 3000.
+ * prefix: Redis key prefix, default SidekiqBulkJob.
+
+ ```ruby
+ process_fail = lambda do |job_class_name, args, exception|
+ # do something
+ # send email
+ end
+ SidekiqBulkJob.config({
+ redis: Redis.new,
+ logger: Logger.new(STDOUT),
+ process_fail: process_fail,
+ async_delay: ASYNC_DELAY,
+ scheduled_delay: SCHEDULED_DELAY,
+ queue: :test,
+ batch_size: BATCH_SIZE,
+ prefix: "SidekiqBulkJob"
+ })
+ # push a job
+ SidekiqBulkJob.perform_async(TestJob, 10)
+ ```
+
+ ### Usage
+
+ First, define a TestJob as an example:
  ```ruby
- process_fail = lambda do |job_class_name, args, exception|
- # do somethine
- # send email
+ # create a sidekiq worker, use default queue
+ class TestJob
+ include Sidekiq::Worker
+ sidekiq_options queue: :default
+
+ def perform(*args)
+ puts args
+ end
  end
- client = Redis.new
- logger = Logger.new(STDOUT)
- logger.level = Logger::WARN
- SidekiqBulkJob.config redis: client, logger: logger, process_fail: process_fail, queue: :default, batch_size: 3000, prefix: "SidekiqBulkJob"
+ ```
+
+ ##### Use SidekiqBulkJob async
 
- // push a job
+ SidekiqBulkJob collects jobs of the same class into a list. If the list exceeds `batch_size` within `async_delay`, one batch job is created immediately to run everything collected, and the list is cleared; the cleared list keeps collecting jobs pushed afterwards. If `batch_size` is not reached, a batch job is created `async_delay` after the last push to run all collected jobs.
+
+ ```ruby
+ # create a sidekiq worker, use default queue
+ class TestJob
+ include Sidekiq::Worker
+ sidekiq_options queue: :default
+
+ def perform(*args)
+ puts args
+ end
+ end
+
+ # simple use
  SidekiqBulkJob.perform_async(TestJob, 10)
+
+ # this will not create 1001 jobs in Sidekiq
+ # only two jobs are created: one batch collecting BATCH_SIZE TestJobs, the other holding the remaining job
+ (BATCH_SIZE + 1).times do |i|
+ SidekiqBulkJob.perform_async(TestJob, i)
+ end
  ```
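The collect-and-flush behaviour described above can be sketched in plain Ruby. This is a simplified model, not the gem's implementation; the timer-driven `async_delay` flush is simulated with an explicit call:

```ruby
# Simplified model of SidekiqBulkJob's batching: job arguments buffer up
# in a list and are flushed as one batch when the buffer reaches batch_size.
class Collector
  attr_reader :batches

  def initialize(batch_size)
    @batch_size = batch_size
    @buffer = []
    @batches = []
  end

  def push(args)
    @buffer << args
    flush if @buffer.size >= @batch_size  # beyond batch_size: flush immediately
  end

  # in the gem this would be triggered by the async_delay timer
  def flush
    return if @buffer.empty?
    @batches << @buffer                   # one batch job for all buffered args
    @buffer = []
  end
end

collector = Collector.new(1000)
1001.times { |i| collector.push([i]) }
collector.flush                           # simulate async_delay expiring
puts collector.batches.map(&:size).inspect  # => [1000, 1]
```

As in the README example, 1001 pushes yield only two batches: one full batch and one holding the leftover job.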
 
- ## Development
+ ##### Use the Sidekiq worker's batch_perform_async to run an async task
+
+ ```ruby
+ # same as SidekiqBulkJob.perform_async(TestJob, 10)
+ TestJob.batch_perform_async(10)
+ ```
+
+ ##### Use SidekiqBulkJob perform_at/perform_in to set a scheduled task
+
+ ```ruby
+ # run a single job 1 minute from now
+ SidekiqBulkJob.perform_at(1.minutes.after, TestJob, 10)
+ # equivalent to the line above
+ SidekiqBulkJob.perform_in(1 * 60, TestJob, 10)
+ ```
+
+ ##### Use the Sidekiq worker's batch_perform_at/batch_perform_in to set a scheduled task
+
+ ```ruby
+ # same as SidekiqBulkJob.perform_at(1.minutes.after, TestJob, 10)
+ TestJob.batch_perform_at(1.minutes.after, 10)
+ # same as SidekiqBulkJob.perform_in(1 * 60, TestJob, 10)
+ TestJob.batch_perform_in(1.minute, 10)
+ ```
+
+ ##### Use the setter to configure a task
+
+ ```ruby
+ # set queue to test and run async
+ TestJob.set(queue: :test).batch_perform_async(10)
+ # set queue to test and run after 90 seconds
+ TestJob.set(queue: :test, in: 90).batch_perform_async(10)
+
+ # the 'in'/'at' option given to the setter overrides batch_perform_in's first (interval) argument
+ # run after 90 seconds instead of 10 seconds
+ TestJob.set(queue: :test, in: 90).batch_perform_in(10, 10)
+ ```
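The precedence rule in the last snippet can be stated as: timing options captured by the setter win over the interval argument. A hypothetical helper makes this concrete (`resolve_interval` is not part of the gem, and the `:at`-before-`:in` ordering is an assumption for illustration):

```ruby
# Hypothetical helper illustrating the precedence described above:
# an :in/:at option from the setter overrides the interval argument.
def resolve_interval(setter_opts, interval)
  setter_opts[:at] || setter_opts[:in] || interval
end

puts resolve_interval({ queue: :test, in: 90 }, 10)  # => 90
puts resolve_interval({ queue: :test }, 10)          # => 10
```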
+
 
  ## Contributing
 
@@ -29,7 +29,7 @@ module SidekiqBulkJob
  job_class_name: job_class.to_s,
  perfrom_args: args,
  queue: options[:queue] || SidekiqBulkJob.queue
- }.compact
+ }.select { |_, value| !value.nil? }
  SidekiqBulkJob.process payload
  else
  perform_in(options[:at] || options[:in], job_class, *args)
@@ -51,7 +51,7 @@ module SidekiqBulkJob
  at: ts,
  perfrom_args: args,
  queue: options[:queue] || SidekiqBulkJob.queue
- }.compact
+ }.select { |_, value| !value.nil? }
  SidekiqBulkJob.process payload
  else
  perform_async(job_class, *args)
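The `.compact` → `.select` change above keeps the same behaviour, dropping nil-valued keys from the payload; `Hash#compact` only exists on Ruby 2.4+, so the `select` form presumably widens Ruby compatibility. The two forms are equivalent:

```ruby
payload = {
  job_class_name: "TestJob",
  perfrom_args: [1, 2],  # spelling as in the gem's source
  queue: nil             # no queue given
}

with_compact = payload.compact                           # Ruby 2.4+ only
with_select  = payload.select { |_, value| !value.nil? } # works on older Rubies too

puts with_compact == with_select  # => true
puts with_select.key?(:queue)     # => false
```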
@@ -0,0 +1,59 @@
+ module SidekiqBulkJob
+ class BulkErrorHandler
+
+ ErrorCollection = Struct.new(:args, :exception) do
+ def message
+ exception.message
+ end
+
+ def backtrace
+ exception.backtrace
+ end
+ end
+
+ attr_accessor :job_class_name, :errors, :jid
+
+ def initialize(job_class_name, jid)
+ @jid = jid
+ @job_class_name = job_class_name
+ @errors = []
+ end
+
+ def add(job_args, exception)
+ errors << ErrorCollection.new(job_args, exception)
+ end
+
+ def backtrace
+ errors.map(&:backtrace).flatten
+ end
+
+ def args
+ errors.map(&:args)
+ end
+
+ def failed?
+ !errors.empty?
+ end
+
+ def raise_error
+ error = BulkError.new(errors.map(&:message).join('; '))
+ error.set_backtrace self.backtrace
+ error
+ end
+
+ def retry_count
+ SidekiqBulkJob.redis.incr jid
+ end
+
+ def clear
+ SidekiqBulkJob.redis.del jid
+ end
+
+ class BulkError < StandardError
+ def initialize(message)
+ super(message)
+ end
+ end
+
+ end
+ end
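The new handler collects one `ErrorCollection` per failed job and later joins them into a single `BulkError`. Setting the Redis-backed retry counter aside, the aggregation can be exercised standalone (this re-creates just the Struct and the join logic, not the full class):

```ruby
# Standalone sketch of BulkErrorHandler's aggregation logic.
ErrorCollection = Struct.new(:args, :exception) do
  def message
    exception.message
  end
end

errors = []
[["job-a"], ["job-b"]].each_with_index do |args, i|
  begin
    raise ArgumentError, "failed ##{i}"
  rescue ArgumentError => e
    errors << ErrorCollection.new(args, e)  # what BulkErrorHandler#add does
  end
end

# what raise_error does before wrapping the string in a BulkError
combined = errors.map(&:message).join('; ')
puts combined  # => failed #0; failed #1
```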
@@ -1,16 +1,18 @@
  require "sidekiq"
 
  require "sidekiq_bulk_job/job_retry"
+ require "sidekiq_bulk_job/bulk_error_handler"
  require "sidekiq_bulk_job/utils"
 
  module SidekiqBulkJob
  class BulkJob
  include Sidekiq::Worker
- sidekiq_options queue: :default, retry: false
+ sidekiq_options queue: :default, retry: true
 
  def perform(job_class_name, args_array)
  target_name, method_name = SidekiqBulkJob::Utils.split_class_name_with_method job_class_name
  job = SidekiqBulkJob::Utils.constantize(target_name)
+ error_handle = BulkErrorHandler.new(job_class_name, self.jid)
  args_array.each do |_args|
  begin
  args = SidekiqBulkJob::Utils.load _args
@@ -20,11 +22,16 @@ module SidekiqBulkJob
  job.new.send(method_name, *args)
  end
  rescue Exception => e
- SidekiqBulkJob.logger.error("#{job_class_name} Args: #{args}, Error: #{e.full_message}")
+ error_handle.add _args, e
+ SidekiqBulkJob.logger.error("#{job_class_name} Args: #{args}, Error: #{e.respond_to?(:full_message) ? e.full_message : e.message}")
  SidekiqBulkJob.fail_callback(job_class_name: job_class_name, args: args, exception: e)
- SidekiqBulkJob::JobRetry.new(job, args, e).push
  end
  end
+ if error_handle.failed?
+ SidekiqBulkJob::JobRetry.new(job, error_handle).push
+ else
+ error_handle.clear
+ end
  end
  end
  end
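The reworked `perform` rescues each job individually, records its failure, and makes one retry decision at the end, so a single bad job no longer triggers a retry push on every iteration. The control flow in miniature (a hand-rolled sketch, not the gem's code):

```ruby
# Each arg is tried independently; failures are aggregated and a single
# retry decision is made at the end, mirroring the new BulkJob#perform.
failures = []
[1, 2, 3, 4].each do |arg|
  begin
    raise "boom #{arg}" if arg.even?  # stand-in for job.new.perform(*args)
  rescue => e
    failures << [arg, e.message]      # stand-in for error_handle.add
  end
end

if failures.empty?
  puts "clear retry counter"                              # error_handle.clear
else
  puts "retry #{failures.size} failed jobs in one batch"  # JobRetry#push
end
```

Only the failing arguments (here 2 and 4) are carried into the single retry, while the successful ones are not re-run.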
@@ -1,24 +1,26 @@
  require "sidekiq"
+ require "sidekiq/job_retry"
 
  require "sidekiq_bulk_job/utils"
- require 'sidekiq/job_retry'
+ require "sidekiq_bulk_job/bulk_error_handler"
 
  module SidekiqBulkJob
  class JobRetry
 
- def initialize(klass, args, exception, options={})
+ def initialize(klass, error_handle, options={})
  @handler = Sidekiq::JobRetry.new(options)
  @klass = klass
- @args = args
- @exception = exception
+ @error_handle = error_handle
+ @retry_count = 0
  end
 
  def push(options={})
+ @retry_count = SidekiqBulkJob.redis.incr @error_handle.jid
  opts = job_options(options)
  queue_as = queue(@klass) || :default
  begin
  @handler.local(SidekiqBulkJob::BulkJob, opts, queue_as) do
- raise @exception
+ raise @error_handle.raise_error
  end
  rescue Exception => e
  end
@@ -28,7 +30,12 @@ module SidekiqBulkJob
 
  def job_options(options={})
  # 0 retry: no retry and dead queue
- opts = { 'class' => @klass.to_s, 'args' => @args, 'retry' => 0 }.merge(options)
+ opts = {
+ 'class' => SidekiqBulkJob::BulkJob.to_s,
+ 'args' => @error_handle.args,
+ 'retry' => true,
+ 'retry_count' => @retry_count.to_i
+ }.merge(options)
  if Sidekiq::VERSION >= "6.0.2"
  Sidekiq.dump_json(opts)
  else
@@ -21,7 +21,7 @@
  job.new.send(method_name, *args)
  end
  rescue Exception => e
- SidekiqBulkJob.logger.error("#{job_class_name} Args: #{args}, Error: #{e.full_message}")
+ SidekiqBulkJob.logger.error("#{job_class_name} Args: #{args}, Error: #{e.respond_to?(:full_message) ? e.full_message : e.message}")
  SidekiqBulkJob.fail_callback(job_class_name: job_class_name, args: args, exception: e)
  SidekiqBulkJob::JobRetry.new(job, args, e).push
  end
@@ -1,3 +1,3 @@
  module SidekiqBulkJob
- VERSION = "0.1.4"
+ VERSION = "0.1.5"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: sidekiq_bulk_job
  version: !ruby/object:Gem::Version
- version: 0.1.4
+ version: 0.1.5
  platform: ruby
  authors:
  - scalaview
- autorequire:
+ autorequire:
  bindir: bin
  cert_chain: []
- date: 2020-11-16 00:00:00.000000000 Z
+ date: 2021-08-17 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: sidekiq
@@ -73,6 +73,7 @@ files:
  - bin/setup
  - lib/sidekiq_bulk_job.rb
  - lib/sidekiq_bulk_job/batch_runner.rb
+ - lib/sidekiq_bulk_job/bulk_error_handler.rb
  - lib/sidekiq_bulk_job/bulk_job.rb
  - lib/sidekiq_bulk_job/job_retry.rb
  - lib/sidekiq_bulk_job/monitor.rb
@@ -84,7 +85,7 @@ homepage: https://github.com/scalaview/sidekiq_bulk_job
  licenses:
  - MIT
  metadata: {}
- post_install_message:
+ post_install_message:
  rdoc_options: []
  require_paths:
  - lib
@@ -99,8 +100,9 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubygems_version: 3.0.3
- signing_key:
+ rubyforge_project:
+ rubygems_version: 2.5.2
+ signing_key:
  specification_version: 4
  summary: Collect same jobs to single worker, reduce job number and improve thread
  utilization.