sidekiq_bulk_job 0.1.1 → 0.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
- SHA256:
3
- metadata.gz: 1acd50d98090657e0c060fd63d2d1733214b964447a233b7a2a4a83b65e54db2
4
- data.tar.gz: be8c7002b9200fcf374067ff49aa0208866ba50f218beedc91b021c42ecfa2b4
2
+ SHA1:
3
+ metadata.gz: ca8157b27f077c0db69ac593ad1fcbd8c14e0eab
4
+ data.tar.gz: 6c87da78b2105230eb522ff3dc6c09f4c73c3540
5
5
  SHA512:
6
- metadata.gz: e9b0fee851c8445ef0bc811a16f95716b5c772d3542b83be95c24c60246c0ea91cae491f1b428f8681cfba9bf3464c032779af4cc14664a550ace37fec1d3bff
7
- data.tar.gz: bcf7ceea6a57f54c414c4a1194b5feb5880aaa3fabbe48f72d95016b3bdce059652cb21ed1e511363d56d17519e78bc0075be9d06b0e143d4709fab4f050f811
6
+ metadata.gz: 5ba3674183b7f3fa818d4c2bec3f8b6d3f8a632afcd98d38a3beff78e165ad8fe2078d0149dc7601e6e78ec80914f10cba35f819707f42f45cef756fe39cf546
7
+ data.tar.gz: afe1b3f3787cfe828da123984323cb0b45d4877388238b7b080e8b5ea33d1e718e5225ac6f32e8467a2f77dc7340fd791ad17a3e22e8166d3c9a0c03bf8ca3b7
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
1
1
  PATH
2
2
  remote: .
3
3
  specs:
4
- sidekiq_bulk_job (0.1.0)
4
+ sidekiq_bulk_job (0.1.4)
5
5
  sidekiq (~> 5.2.7)
6
6
 
7
7
  GEM
data/README.md CHANGED
@@ -1,8 +1,6 @@
1
1
  # SidekiqBulkJob
2
2
 
3
- Welcome to your new gem! In this directory, you'll find the files you need to be able to package up your Ruby library into a gem. Put your Ruby code in the file `lib/sidekiq_bulk_job`. To experiment with that code, run `bin/console` for an interactive prompt.
4
-
5
- TODO: Delete this and the text above, and describe your gem
3
+ A tool that collects jobs of the same class and runs them together in a batch.
6
4
 
7
5
  ## Installation
8
6
 
@@ -22,26 +20,226 @@ Or install it yourself as:
22
20
 
23
21
  ## Usage
24
22
 
25
- ###
23
+ ### Initialization:
24
+
25
+ ##### Parameters:
26
+
27
+ * redis: redis client.
28
+ * logger: logger object, default Logger.new(STDOUT).
29
+ * process_fail: a callback invoked when a job fails.
30
+ * async_delay: async flush delay, default 60 seconds.
31
+ * scheduled_delay: scheduled job delay tolerance, default 10 seconds.
32
+ * queue: default Sidekiq queue. The batch job runs on the queue the Sidekiq worker defines; this queue is used when the worker does not define one.
33
+ * batch_size: number of same-class jobs per batch, default 3000.
34
+ * prefix: redis key prefix, default SidekiqBulkJob.
35
+
36
+ ```ruby
37
+ process_fail = lambda do |job_class_name, args, exception|
38
+ # do something
39
+ # send email
40
+ end
41
+ SidekiqBulkJob.config({
42
+ redis: Redis.new,
43
+ logger: Logger.new(STDOUT),
44
+ process_fail: process_fail,
45
+ async_delay: ASYNC_DELAY,
46
+ scheduled_delay: SCHEDULED_DELAY,
47
+ queue: :test,
48
+ batch_size: BATCH_SIZE,
49
+ prefix: "SidekiqBulkJob"
50
+ })
51
+ # push a job
52
+ SidekiqBulkJob.perform_async(TestJob, 10)
53
+ ```
54
+
55
+ ### Usage
56
+
57
+ First, define a TestJob as an example:
26
58
  ```ruby
27
- process_fail = lambda do |job_class_name, args, exception|
28
- # do somethine
29
- # send email
59
+ # create a sidekiq worker, use default queue
60
+ class TestJob
61
+ include Sidekiq::Worker
62
+ sidekiq_options queue: :default
63
+
64
+ def perform(*args)
65
+ puts args
66
+ end
30
67
  end
31
- client = Redis.new
32
- logger = Logger.new(STDOUT)
33
- logger.level = Logger::WARN
34
- SidekiqBulkJob.config redis: client, logger: logger, process_fail: process_fail, queue: :default, batch_size: 3000, prefix: "SidekiqBulkJob"
68
+ ```
69
+
70
+ ##### Use SidekiqBulkJob async
35
71
 
36
- // push a job
72
+ SidekiqBulkJob collects jobs of the same class into a list. When more than `batch_size` jobs arrive within `async_delay`, a batch job is created to run them and the list is cleared; the cleared list keeps collecting newly pushed jobs. If `async_delay` elapses before `batch_size` is reached, a batch job is still created to run everything collected so far.
73
+
74
+ ```ruby
75
+ # create a sidekiq worker, use default queue
76
+ class TestJob
77
+ include Sidekiq::Worker
78
+ sidekiq_options queue: :default
79
+
80
+ def perform(*args)
81
+ puts args
82
+ end
83
+ end
84
+
85
+ # simple use
37
86
  SidekiqBulkJob.perform_async(TestJob, 10)
87
+
88
+ # this will not create 1001 separate jobs in Sidekiq
89
+ # instead two jobs are created: one batch job holding 1000 TestJobs and another holding the remaining one.
90
+ (BATCH_SIZE + 1).times do |i|
91
+ SidekiqBulkJob.perform_async(TestJob, i)
92
+ end
38
93
  ```
39
94
 
40
- ## Development
95
+ ##### Use a Sidekiq worker's batch_perform_async to run an async task
96
+
97
+ ```ruby
98
+ # same as SidekiqBulkJob.perform_async(TestJob, 10)
99
+ TestJob.batch_perform_async(10)
100
+ ```
101
+
102
+ ##### Use SidekiqBulkJob perform_at/perform_in to schedule a task
103
+
104
+ ```ruby
105
+ # run a single job 1 minute from now
106
+ SidekiqBulkJob.perform_at(1.minutes.after, TestJob, 10)
107
+ # same as below
108
+ SidekiqBulkJob.perform_in(1 * 60, TestJob, 10)
109
+ ```
110
+
111
+ ##### Use a Sidekiq worker's batch_perform_at/batch_perform_in to schedule a task
112
+
113
+ ```ruby
114
+ # same as SidekiqBulkJob.perform_at(1.minutes.after, TestJob, 10)
115
+ TestJob.batch_perform_at(1.minutes.after, 10)
116
+ # same as SidekiqBulkJob.perform_in(1 * 60, TestJob, 10)
117
+ TestJob.batch_perform_in(1.minute, 10)
118
+ ```
119
+
120
+ ##### Use the setter to configure a task
121
+
122
+ ```ruby
123
+ # set queue to test and run async
124
+ TestJob.set(queue: :test).batch_perform_async(10)
125
+ # set queue to test and run after 90 seconds
126
+ TestJob.set(queue: :test, in: 90).batch_perform_async(10)
127
+
128
+ # batch_perform_in's first parameter (the interval) is overridden by the 'in'/'at' option passed to the setter
129
+ # run after 90 seconds instead of 10 seconds
130
+ TestJob.set(queue: :test, in: 90).batch_perform_in(10, 10)
131
+ ```
132
+
133
+ ## Chinese
134
+
135
+ ### Initialization:
136
+
137
+ ##### Parameters:
41
138
 
42
- After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
139
+ * redis: redis client
140
+ * logger: logger object, default Logger.new(STDOUT)
141
+ * process_fail: a generic callback invoked when a job fails
142
+ * async_delay: async flush delay, default 60 seconds
143
+ * scheduled_delay: scheduled job delay tolerance, default 10 seconds
144
+ * queue: default queue. Jobs run on the queue set by the job itself; this queue is used when the job does not set one
145
+ * batch_size: number of same-class jobs run per batch, default 3000
146
+ * prefix: prefix for keys stored in Redis, default SidekiqBulkJob
43
147
 
44
- To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).
148
+ ```ruby
149
+ process_fail = lambda do |job_class_name, args, exception|
150
+ # do something
151
+ # send email
152
+ end
153
+ SidekiqBulkJob.config({
154
+ redis: Redis.new,
155
+ logger: Logger.new(STDOUT),
156
+ process_fail: process_fail,
157
+ async_delay: ASYNC_DELAY,
158
+ scheduled_delay: SCHEDULED_DELAY,
159
+ queue: :test,
160
+ batch_size: BATCH_SIZE,
161
+ prefix: "SidekiqBulkJob"
162
+ })
163
+ # push a job
164
+ SidekiqBulkJob.perform_async(TestJob, 10)
165
+ ```
166
+
167
+ ### Usage
168
+
169
+ Define a TestJob as an example:
170
+ ```ruby
171
+ # create a sidekiq worker, use default queue
172
+ class TestJob
173
+ include Sidekiq::Worker
174
+ sidekiq_options queue: :default
175
+
176
+ def perform(*args)
177
+ puts args
178
+ end
179
+ end
180
+ ```
181
+
182
+ ##### Use SidekiqBulkJob's async interface
183
+
184
+ SidekiqBulkJob gathers jobs of the same class into a list. If more than `batch_size` jobs arrive within `async_delay`, a batch job is created immediately to run all collected jobs and the list is cleared; the cleared list keeps collecting subsequently pushed jobs. If `batch_size` is not reached within `async_delay`, a batch job is created `async_delay` after the last job is pushed, running everything collected.
185
+ ```ruby
186
+ # create a sidekiq worker, use default queue
187
+ class TestJob
188
+ include Sidekiq::Worker
189
+ sidekiq_options queue: :default
190
+
191
+ def perform(*args)
192
+ puts args
193
+ end
194
+ end
195
+
196
+ # simple use
197
+ SidekiqBulkJob.perform_async(TestJob, 10)
198
+
199
+ # this will not create 1001 separate jobs in Sidekiq
200
+ # instead two jobs are created: one batch job holding 1000 TestJobs and another holding the remaining one.
201
+ (BATCH_SIZE + 1).times do |i|
202
+ SidekiqBulkJob.perform_async(TestJob, i)
203
+ end
204
+ ```
205
+
206
+ ##### Use a Sidekiq worker's batch_perform_async interface to run tasks asynchronously
207
+
208
+ ```ruby
209
+ # same as SidekiqBulkJob.perform_async(TestJob, 10)
210
+ TestJob.batch_perform_async(10)
211
+ ```
212
+
213
+ ##### Use SidekiqBulkJob's perform_at/perform_in interfaces to schedule tasks
214
+
215
+ ```ruby
216
+ # run a single job 1 minute from now
217
+ SidekiqBulkJob.perform_at(1.minutes.after, TestJob, 10)
218
+ # same as below
219
+ SidekiqBulkJob.perform_in(1 * 60, TestJob, 10)
220
+ ```
221
+
222
+ ##### Use a Sidekiq worker's batch_perform_at/batch_perform_in interfaces to schedule tasks
223
+
224
+ ```ruby
225
+ # same as SidekiqBulkJob.perform_at(1.minutes.after, TestJob, 10)
226
+ TestJob.batch_perform_at(1.minutes.after, 10)
227
+ # same as SidekiqBulkJob.perform_in(1 * 60, TestJob, 10)
228
+ TestJob.batch_perform_in(1.minute, 10)
229
+ ```
230
+
231
+ ##### Use the setter
232
+
233
+ ```ruby
234
+ # set queue to test and run async
235
+ TestJob.set(queue: :test).batch_perform_async(10)
236
+ # set queue to test and run after 90 seconds
237
+ TestJob.set(queue: :test, in: 90).batch_perform_async(10)
238
+
239
+ # batch_perform_in's first parameter (the interval) is overridden by the 'in'/'at' option passed to the setter
240
+ # run after 90 seconds instead of 10 seconds
241
+ TestJob.set(queue: :test, in: 90).batch_perform_in(10, 10)
242
+ ```
45
243
 
46
244
  ## Contributing
47
245
 
@@ -23,13 +23,13 @@ module SidekiqBulkJob
23
23
  end
24
24
 
25
25
  def perform_async(job_class, *args)
26
- options = Utils.symbolize_keys(@opts)
26
+ options = SidekiqBulkJob::Utils.symbolize_keys(@opts)
27
27
  if options[:at].nil? && options[:in].nil?
28
28
  payload = {
29
29
  job_class_name: job_class.to_s,
30
30
  perfrom_args: args,
31
31
  queue: options[:queue] || SidekiqBulkJob.queue
32
- }.compact
32
+ }.select { |_, value| !value.nil? }
33
33
  SidekiqBulkJob.process payload
34
34
  else
35
35
  perform_in(options[:at] || options[:in], job_class, *args)
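The hunk above swaps `Hash#compact` for `select { |_, value| !value.nil? }`. The two forms are equivalent on hashes; `Hash#compact` only appeared in Ruby 2.4, so the rewrite presumably widens Ruby compatibility. A quick standalone check (the payload values here are illustrative):

```ruby
# The payload built above drops nil values; these two forms behave identically.
payload = { job_class_name: "TestJob", perfrom_args: [10], queue: nil }

with_compact = payload.compact                            # Ruby >= 2.4 only
with_select  = payload.select { |_, value| !value.nil? }  # works on older Rubies too

raise "mismatch" unless with_compact == with_select
p with_select.keys  # [:job_class_name, :perfrom_args]
```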
@@ -45,13 +45,13 @@ module SidekiqBulkJob
45
45
 
46
46
  # Optimization to enqueue something now that is scheduled to go out now or in the past
47
47
  if ts > now.to_f
48
- options = Utils.symbolize_keys(@opts)
48
+ options = SidekiqBulkJob::Utils.symbolize_keys(@opts)
49
49
  payload = {
50
50
  job_class_name: job_class.to_s,
51
51
  at: ts,
52
52
  perfrom_args: args,
53
53
  queue: options[:queue] || SidekiqBulkJob.queue
54
- }.compact
54
+ }.select { |_, value| !value.nil? }
55
55
  SidekiqBulkJob.process payload
56
56
  else
57
57
  perform_async(job_class, *args)
@@ -64,18 +64,37 @@ module SidekiqBulkJob
64
64
 
65
65
  class << self
66
66
 
67
- attr_accessor :prefix, :redis, :queue, :batch_size, :logger, :process_fail
67
+ attr_accessor :prefix, :redis, :queue, :scheduled_delay, :async_delay, :batch_size, :logger, :process_fail
68
68
 
69
- def config(redis: , logger: , process_fail: , queue: :default, batch_size: 3000, prefix: "SidekiqBulkJob")
69
+ def config(redis: , logger: , process_fail: , async_delay: 60, scheduled_delay: 10, queue: :default, batch_size: 3000, prefix: "SidekiqBulkJob")
70
70
  if redis.nil?
71
71
  raise ArgumentError.new("redis not allow nil")
72
72
  end
73
+ if logger.nil?
74
+ raise ArgumentError.new("logger not allow nil")
75
+ end
76
+ if process_fail.nil?
77
+ raise ArgumentError.new("process_fail not allow nil")
78
+ end
79
+ if async_delay.to_f < 2
80
+ raise ArgumentError.new("async_delay not allow less than 2 seconds.")
81
+ elsif async_delay.to_f > 5 * 60
82
+ raise ArgumentError.new("async_delay not allow greater than 5 minutes.")
83
+ end
84
+ if scheduled_delay.to_f < 2
85
+ raise ArgumentError.new("scheduled_delay not allow less than 2 seconds.")
86
+ elsif scheduled_delay.to_f > 30
87
+ raise ArgumentError.new("scheduled_delay not allow greater than 30 seconds.")
88
+ end
89
+
73
90
  self.redis = redis
74
91
  self.queue = queue
75
92
  self.batch_size = batch_size
76
93
  self.prefix = prefix
77
94
  self.logger = logger
78
95
  self.process_fail = process_fail
96
+ self.async_delay = async_delay.to_f
97
+ self.scheduled_delay = scheduled_delay.to_f
79
98
  end
80
99
 
81
100
  def set(options)
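The new delay bounds can be exercised in isolation. This is a minimal sketch mirroring the checks added to `config` above; `validate_delays!` is an illustrative helper name, not part of the gem:

```ruby
# Sketch of the delay validation added to SidekiqBulkJob.config in 0.1.5:
# async_delay must lie in [2, 300] seconds, scheduled_delay in [2, 30] seconds.
def validate_delays!(async_delay:, scheduled_delay:)
  if async_delay.to_f < 2
    raise ArgumentError, "async_delay not allow less than 2 seconds."
  elsif async_delay.to_f > 5 * 60
    raise ArgumentError, "async_delay not allow greater than 5 minutes."
  end
  if scheduled_delay.to_f < 2
    raise ArgumentError, "scheduled_delay not allow less than 2 seconds."
  elsif scheduled_delay.to_f > 30
    raise ArgumentError, "scheduled_delay not allow greater than 30 seconds."
  end
  true
end

validate_delays!(async_delay: 60, scheduled_delay: 10)  # within bounds, passes
begin
  validate_delays!(async_delay: 600, scheduled_delay: 10)
rescue ArgumentError => e
  puts e.message
end
```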
@@ -113,7 +132,7 @@ module SidekiqBulkJob
113
132
  def process(job_class_name: , at: nil, perfrom_args: [], queue: self.queue)
114
133
  if at.nil?
115
134
  key = generate_key(job_class_name)
116
- client.lpush key, perfrom_args.to_json
135
+ client.lpush key, SidekiqBulkJob::Utils.dump(perfrom_args)
117
136
  bulk_run(job_class_name, key, queue: queue) if need_flush?(key)
118
137
  monitor(job_class_name, queue: queue)
119
138
  else
@@ -121,18 +140,18 @@ module SidekiqBulkJob
121
140
  args_redis_key = nil
122
141
  target = scheduled_set.find do |job|
123
142
  if job.klass == SidekiqBulkJob::ScheduledJob.to_s &&
124
- job.at.to_i.between?((at - 5).to_i, (at + 30).to_i) # 允许30秒延迟
143
+ job.at.to_i.between?((at - self.scheduled_delay).to_i, (at + self.scheduled_delay).to_i) # allow scheduled_delay seconds of drift
125
144
  _job_class_name, args_redis_key = job.args
126
145
  _job_class_name == job_class_name
127
146
  end
128
147
  end
129
148
  if !target.nil? && !args_redis_key.nil? && !args_redis_key.empty?
130
149
  # append the args to the existing job's args set
131
- client.lpush args_redis_key, perfrom_args.to_json
150
+ client.lpush args_redis_key, SidekiqBulkJob::Utils.dump(perfrom_args)
132
151
  else
133
152
  # create a new one
134
153
  args_redis_key = SecureRandom.hex
135
- client.lpush args_redis_key, perfrom_args.to_json
154
+ client.lpush args_redis_key, SidekiqBulkJob::Utils.dump(perfrom_args)
136
155
  SidekiqBulkJob::ScheduledJob.client_push("queue" => queue, "class" => SidekiqBulkJob::ScheduledJob, "at" => at, "args" => [job_class_name, args_redis_key])
137
156
  end
138
157
  end
@@ -197,7 +216,7 @@ module SidekiqBulkJob
197
216
  if !_monitor.nil?
198
217
  # TODO debug log
199
218
  else
200
- SidekiqBulkJob::Monitor.client_push("queue" => queue, "at" => (time_now + 60).to_f, "class" => SidekiqBulkJob::Monitor, "args" => [time_now.to_f, job_class_name])
219
+ SidekiqBulkJob::Monitor.client_push("queue" => queue, "at" => (time_now + self.async_delay).to_f, "class" => SidekiqBulkJob::Monitor, "args" => [time_now.to_f, job_class_name])
201
220
  end
202
221
  end
203
222
 
@@ -0,0 +1,59 @@
1
+ module SidekiqBulkJob
2
+ class BulkErrorHandler
3
+
4
+ ErrorCollection = Struct.new(:args, :exception) do
5
+ def message
6
+ exception.message
7
+ end
8
+
9
+ def backtrace
10
+ exception.backtrace
11
+ end
12
+ end
13
+
14
+ attr_accessor :job_class_name, :errors, :jid
15
+
16
+ def initialize(job_class_name, jid)
17
+ @jid = jid
18
+ @job_class_name = job_class_name
19
+ @errors = []
20
+ end
21
+
22
+ def add(job_args, exception)
23
+ errors << ErrorCollection.new(job_args, exception)
24
+ end
25
+
26
+ def backtrace
27
+ errors.map(&:backtrace).flatten
28
+ end
29
+
30
+ def args
31
+ errors.map(&:args)
32
+ end
33
+
34
+ def failed?
35
+ !errors.empty?
36
+ end
37
+
38
+ def raise_error
39
+ error = BulkError.new(errors.map(&:message).join('; '))
40
+ error.set_backtrace self.backtrace
41
+ error
42
+ end
43
+
44
+ def retry_count
45
+ SidekiqBulkJob.redis.incr jid
46
+ end
47
+
48
+ def clear
49
+ SidekiqBulkJob.redis.del jid
50
+ end
51
+
52
+ class BulkError < StandardError
53
+ def initialize(message)
54
+ super(message)
55
+ end
56
+ end
57
+
58
+ end
59
+ end
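The handler above can be exercised standalone. This trimmed sketch mirrors the new class (the Redis-backed `retry_count`/`clear` methods are omitted, and the gem's `raise_error` is renamed `build_error` here since it builds rather than raises) to show how per-args failures are aggregated into one combined error:

```ruby
# Trimmed sketch of SidekiqBulkJob::BulkErrorHandler: collects (args, exception)
# pairs and folds them into a single error for the retry machinery.
ErrorCollection = Struct.new(:args, :exception) do
  def message
    exception.message
  end

  def backtrace
    exception.backtrace
  end
end

class BulkErrorHandlerSketch
  class BulkError < StandardError; end

  attr_reader :job_class_name, :jid, :errors

  def initialize(job_class_name, jid)
    @job_class_name = job_class_name
    @jid = jid
    @errors = []
  end

  def add(job_args, exception)
    errors << ErrorCollection.new(job_args, exception)
  end

  def failed?
    !errors.empty?
  end

  def args
    errors.map(&:args)
  end

  # Joins every collected message and merges the backtraces.
  def build_error
    error = BulkError.new(errors.map(&:message).join('; '))
    error.set_backtrace(errors.map(&:backtrace).compact.flatten)
    error
  end
end

handler = BulkErrorHandlerSketch.new("TestJob", "jid-1")
[[1], [2]].each do |a|
  begin
    raise ArgumentError, "bad arg #{a.first}"
  rescue StandardError => e
    handler.add(a, e)
  end
end
puts handler.failed?              # true
puts handler.build_error.message  # bad arg 1; bad arg 2
```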
@@ -1,25 +1,37 @@
1
1
  require "sidekiq"
2
2
 
3
3
  require "sidekiq_bulk_job/job_retry"
4
+ require "sidekiq_bulk_job/bulk_error_handler"
4
5
  require "sidekiq_bulk_job/utils"
5
6
 
6
7
  module SidekiqBulkJob
7
8
  class BulkJob
8
9
  include Sidekiq::Worker
9
- sidekiq_options queue: :default, retry: false
10
+ sidekiq_options queue: :default, retry: true
10
11
 
11
12
  def perform(job_class_name, args_array)
12
- job = Utils.constantize(job_class_name)
13
+ target_name, method_name = SidekiqBulkJob::Utils.split_class_name_with_method job_class_name
14
+ job = SidekiqBulkJob::Utils.constantize(target_name)
15
+ error_handle = BulkErrorHandler.new(job_class_name, self.jid)
13
16
  args_array.each do |_args|
14
17
  begin
15
- args = JSON.parse _args
16
- job.new.send(:perform, *args)
18
+ args = SidekiqBulkJob::Utils.load _args
19
+ if SidekiqBulkJob::Utils.class_with_method?(job_class_name)
20
+ job.send(method_name, *args)
21
+ else
22
+ job.new.send(method_name, *args)
23
+ end
17
24
  rescue Exception => e
18
- SidekiqBulkJob.logger.error("#{job_class_name} Args: #{args}, Error: #{e.full_message}")
25
+ error_handle.add _args, e
26
+ SidekiqBulkJob.logger.error("#{job_class_name} Args: #{args}, Error: #{e.respond_to?(:full_message) ? e.full_message : e.message}")
19
27
  SidekiqBulkJob.fail_callback(job_class_name: job_class_name, args: args, exception: e)
20
- SidekiqBulkJob::JobRetry.new(job, args, e).push
21
28
  end
22
29
  end
30
+ if error_handle.failed?
31
+ SidekiqBulkJob::JobRetry.new(job, error_handle).push
32
+ else
33
+ error_handle.clear
34
+ end
23
35
  end
24
36
  end
25
37
  end
@@ -1,24 +1,26 @@
1
1
  require "sidekiq"
2
+ require "sidekiq/job_retry"
2
3
 
3
4
  require "sidekiq_bulk_job/utils"
4
- require 'sidekiq/job_retry'
5
+ require "sidekiq_bulk_job/bulk_error_handler"
5
6
 
6
7
  module SidekiqBulkJob
7
8
  class JobRetry
8
9
 
9
- def initialize(klass, args, exception, options={})
10
+ def initialize(klass, error_handle, options={})
10
11
  @handler = Sidekiq::JobRetry.new(options)
11
12
  @klass = klass
12
- @args = args
13
- @exception = exception
13
+ @error_handle = error_handle
14
+ @retry_count = 0
14
15
  end
15
16
 
16
17
  def push(options={})
18
+ @retry_count = SidekiqBulkJob.redis.incr @error_handle.jid
17
19
  opts = job_options(options)
18
20
  queue_as = queue(@klass) || :default
19
21
  begin
20
22
  @handler.local(SidekiqBulkJob::BulkJob, opts, queue_as) do
21
- raise @exception
23
+ raise @error_handle.raise_error
22
24
  end
23
25
  rescue Exception => e
24
26
  end
@@ -28,7 +30,12 @@ module SidekiqBulkJob
28
30
 
29
31
  def job_options(options={})
30
32
  # 0 retry: no retry and dead queue
31
- opts = { 'class' => @klass.to_s, 'args' => @args, 'retry' => 0 }.merge(options)
33
+ opts = {
34
+ 'class' => SidekiqBulkJob::BulkJob.to_s,
35
+ 'args' => @error_handle.args,
36
+ 'retry' => true,
37
+ 'retry_count' => @retry_count.to_i
38
+ }.merge(options)
32
39
  if Sidekiq::VERSION >= "6.0.2"
33
40
  Sidekiq.dump_json(opts)
34
41
  else
@@ -37,8 +44,8 @@ module SidekiqBulkJob
37
44
  end
38
45
 
39
46
  def queue(woker)
40
- if !woker.sidekiq_options.nil? && !woker.sidekiq_options.empty?
41
- sidekiq_options = Utils.symbolize_keys(woker.sidekiq_options)
47
+ if woker.included_modules.include?(Sidekiq::Worker) && !woker.sidekiq_options.nil? && !woker.sidekiq_options.empty?
48
+ sidekiq_options = SidekiqBulkJob::Utils.symbolize_keys(woker.sidekiq_options)
42
49
  if !sidekiq_options[:queue].nil?
43
50
  sidekiq_options[:queue]
44
51
  end
@@ -9,14 +9,19 @@ module SidekiqBulkJob
9
9
  sidekiq_options queue: :default, retry: false
10
10
 
11
11
  def perform(job_class_name, args_redis_key)
12
- job = Utils.constantize(job_class_name)
12
+ target_name, method_name = SidekiqBulkJob::Utils.split_class_name_with_method job_class_name
13
+ job = SidekiqBulkJob::Utils.constantize(target_name)
13
14
  args_array = SidekiqBulkJob.flush args_redis_key
14
15
  args_array.each do |_args|
15
16
  begin
16
- args = JSON.parse _args
17
- job.new.send(:perform, *args)
17
+ args = SidekiqBulkJob::Utils.load _args
18
+ if SidekiqBulkJob::Utils.class_with_method?(job_class_name)
19
+ job.send(method_name, *args)
20
+ else
21
+ job.new.send(method_name, *args)
22
+ end
18
23
  rescue Exception => e
19
- SidekiqBulkJob.logger.error("#{job_class_name} Args: #{args}, Error: #{e.full_message}")
24
+ SidekiqBulkJob.logger.error("#{job_class_name} Args: #{args}, Error: #{e.respond_to?(:full_message) ? e.full_message : e.message}")
20
25
  SidekiqBulkJob.fail_callback(job_class_name: job_class_name, args: args, exception: e)
21
26
  SidekiqBulkJob::JobRetry.new(job, args, e).push
22
27
  end
@@ -1,72 +1,101 @@
1
- module Utils
1
+ require 'yaml'
2
+ require "sidekiq/extensions/active_record"
2
3
 
3
- class << self
4
+ module SidekiqBulkJob
5
+ module Utils
4
6
 
5
- def symbolize_keys(obj)
6
- case obj
7
- when Array
8
- obj.inject([]){|res, val|
9
- res << case val
10
- when Hash, Array
11
- symbolize_keys(val)
12
- else
13
- val
14
- end
15
- res
16
- }
17
- when Hash
18
- obj.inject({}){|res, (key, val)|
19
- nkey = case key
20
- when String
21
- key.to_sym
22
- else
23
- key
24
- end
25
- nval = case val
26
- when Hash, Array
27
- symbolize_keys(val)
28
- else
29
- val
30
- end
31
- res[nkey] = nval
32
- res
33
- }
34
- else
35
- obj
7
+ class << self
8
+
9
+ def symbolize_keys(obj)
10
+ case obj
11
+ when Array
12
+ obj.inject([]){|res, val|
13
+ res << case val
14
+ when Hash, Array
15
+ symbolize_keys(val)
16
+ else
17
+ val
18
+ end
19
+ res
20
+ }
21
+ when Hash
22
+ obj.inject({}){|res, (key, val)|
23
+ nkey = case key
24
+ when String
25
+ key.to_sym
26
+ else
27
+ key
28
+ end
29
+ nval = case val
30
+ when Hash, Array
31
+ symbolize_keys(val)
32
+ else
33
+ val
34
+ end
35
+ res[nkey] = nval
36
+ res
37
+ }
38
+ else
39
+ obj
40
+ end
36
41
  end
37
- end
38
42
 
39
- def constantize(camel_cased_word)
40
- names = camel_cased_word.split("::")
43
+ def constantize(camel_cased_word)
44
+ names = camel_cased_word.split("::")
41
45
 
42
- # Trigger a built-in NameError exception including the ill-formed constant in the message.
43
- Object.const_get(camel_cased_word) if names.empty?
46
+ # Trigger a built-in NameError exception including the ill-formed constant in the message.
47
+ Object.const_get(camel_cased_word) if names.empty?
44
48
 
45
- # Remove the first blank element in case of '::ClassName' notation.
46
- names.shift if names.size > 1 && names.first.empty?
49
+ # Remove the first blank element in case of '::ClassName' notation.
50
+ names.shift if names.size > 1 && names.first.empty?
47
51
 
48
- names.inject(Object) do |constant, name|
49
- if constant == Object
50
- constant.const_get(name)
51
- else
52
- candidate = constant.const_get(name)
53
- next candidate if constant.const_defined?(name, false)
54
- next candidate unless Object.const_defined?(name)
52
+ names.inject(Object) do |constant, name|
53
+ if constant == Object
54
+ constant.const_get(name)
55
+ else
56
+ candidate = constant.const_get(name)
57
+ next candidate if constant.const_defined?(name, false)
58
+ next candidate unless Object.const_defined?(name)
59
+
60
+ # Go down the ancestors to check if it is owned directly. The check
61
+ # stops when we reach Object or the end of ancestors tree.
62
+ constant = constant.ancestors.inject(constant) do |const, ancestor|
63
+ break const if ancestor == Object
64
+ break ancestor if ancestor.const_defined?(name, false)
65
+ const
66
+ end
55
67
 
56
- # Go down the ancestors to check if it is owned directly. The check
57
- # stops when we reach Object or the end of ancestors tree.
58
- constant = constant.ancestors.inject(constant) do |const, ancestor|
59
- break const if ancestor == Object
60
- break ancestor if ancestor.const_defined?(name, false)
61
- const
68
+ # owner is in Object, so raise
69
+ constant.const_get(name, false)
62
70
  end
71
+ end
72
+ end
73
+
74
+ def class_with_method?(klass_name)
75
+ klass_name.include?('.')
76
+ end
77
+
78
+ def split_class_name_with_method(klass_name)
79
+ if class_with_method?(klass_name)
80
+ klass_name.split('.')
81
+ else
82
+ [klass_name, :perform]
83
+ end
84
+ end
85
+
86
+ def load yaml, legacy_filename = Object.new, filename: nil, fallback: false, symbolize_names: false
87
+ YAML.load yaml, legacy_filename, filename: filename, fallback: fallback, symbolize_names: symbolize_names
88
+ end
63
89
 
64
- # owner is in Object, so raise
65
- constant.const_get(name, false)
90
+ def dump o, io = nil, options = {}
91
+ marshalled = YAML.dump o, io, options
92
+ if marshalled.size > Sidekiq::Extensions::SIZE_LIMIT
93
+ SidekiqBulkJob.logger.warn { "job argument is #{marshalled.bytesize} bytes, you should refactor it to reduce the size" }
66
94
  end
95
+ marshalled
67
96
  end
97
+
68
98
  end
69
99
 
70
100
  end
71
-
72
101
  end
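BulkJob and ScheduledJob now dispatch through `Utils.split_class_name_with_method`, which lets a job target be either a worker class name or a `"ClassName.method"` string. The splitting logic can be sketched standalone:

```ruby
# Standalone sketch of the name-splitting helpers added to SidekiqBulkJob::Utils:
# "Foo.bar" dispatches Foo.bar(*args); a bare "Foo" falls back to Foo.new.perform(*args).
def class_with_method?(klass_name)
  klass_name.include?('.')
end

def split_class_name_with_method(klass_name)
  if class_with_method?(klass_name)
    klass_name.split('.')
  else
    [klass_name, :perform]
  end
end

p split_class_name_with_method("TestJob")           # ["TestJob", :perform]
p split_class_name_with_method("Reporter.rebuild")  # ["Reporter", "rebuild"]
```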
@@ -1,3 +1,3 @@
1
1
  module SidekiqBulkJob
2
- VERSION = "0.1.1"
2
+ VERSION = "0.1.5"
3
3
  end
metadata CHANGED
@@ -1,14 +1,14 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: sidekiq_bulk_job
3
3
  version: !ruby/object:Gem::Version
4
- version: 0.1.1
4
+ version: 0.1.5
5
5
  platform: ruby
6
6
  authors:
7
7
  - scalaview
8
- autorequire:
8
+ autorequire:
9
9
  bindir: bin
10
10
  cert_chain: []
11
- date: 2020-11-09 00:00:00.000000000 Z
11
+ date: 2021-08-17 00:00:00.000000000 Z
12
12
  dependencies:
13
13
  - !ruby/object:Gem::Dependency
14
14
  name: sidekiq
@@ -73,6 +73,7 @@ files:
73
73
  - bin/setup
74
74
  - lib/sidekiq_bulk_job.rb
75
75
  - lib/sidekiq_bulk_job/batch_runner.rb
76
+ - lib/sidekiq_bulk_job/bulk_error_handler.rb
76
77
  - lib/sidekiq_bulk_job/bulk_job.rb
77
78
  - lib/sidekiq_bulk_job/job_retry.rb
78
79
  - lib/sidekiq_bulk_job/monitor.rb
@@ -84,7 +85,7 @@ homepage: https://github.com/scalaview/sidekiq_bulk_job
84
85
  licenses:
85
86
  - MIT
86
87
  metadata: {}
87
- post_install_message:
88
+ post_install_message:
88
89
  rdoc_options: []
89
90
  require_paths:
90
91
  - lib
@@ -99,8 +100,9 @@ required_rubygems_version: !ruby/object:Gem::Requirement
99
100
  - !ruby/object:Gem::Version
100
101
  version: '0'
101
102
  requirements: []
102
- rubygems_version: 3.0.3
103
- signing_key:
103
+ rubyforge_project:
104
+ rubygems_version: 2.5.2
105
+ signing_key:
104
106
  specification_version: 4
105
107
  summary: Collect same jobs to single worker, reduce job number and improve thread
106
108
  utilization.