backburner 0.2.6 → 0.3.0

data/CHANGELOG.md CHANGED
@@ -1,6 +1,12 @@
1
1
  # CHANGELOG
2
2
 
3
- ## Version 0.2.7 (Unreleased)
3
+ ## Version 0.3.1 (Unreleased)
4
+
5
+ ## Version 0.3.0 (Nov 14 2012)
6
+
7
+ * Major update with support for a 'threads_on_fork' processing strategy (Thanks @ShadowBelmolve)
8
+ * Different workers have different rake tasks (Thanks @ShadowBelmolve)
9
+ * Added processing-strategy-specific examples (i.e. stress.rb) and new unit tests (Thanks @ShadowBelmolve)
4
10
 
5
11
  ## Version 0.2.6 (Nov 12 2012)
6
12
 
data/README.md CHANGED
@@ -262,6 +262,7 @@ By default, Backburner comes with the following workers built-in:
262
262
  | Worker | Description |
263
263
  | ------- | ------------------------------- |
264
264
  | `Backburner::Workers::Simple` | Single threaded, no forking worker. Simplest option. |
265
+ | `Backburner::Workers::ThreadsOnFork` | Forking worker that utilizes threads for concurrent processing. |
265
266
 
266
267
  You can select the default worker for processing with:
267
268
 
@@ -274,12 +275,19 @@ end
274
275
  or determine the worker on the fly when invoking `work`:
275
276
 
276
277
  ```ruby
277
- Backburner.work('newsletter_sender', :worker => Backburner::Workers::Threaded)
278
+ Backburner.work('newsletter_sender', :worker => Backburner::Workers::ThreadsOnFork)
278
279
  ```
279
280
 
280
- or when more official workers are supported, through alternate rake tasks.
281
- Additional workers such as `threaded`, `forking` and `threads_on_fork` will hopefully be
282
- developed in the future. If you are interested in helping, please let us know.
281
+ or through the associated rake tasks with:
282
+
283
+ ```
284
+ $ QUEUE=newsletter-sender,push-message THREADS=2 GARBAGE=1000 rake backburner:threads_on_fork:work
285
+ ```
286
+
287
+ For more information on the threads_on_fork worker, check out the
288
+ [ThreadsOnFork Worker](https://github.com/nesquena/backburner/wiki/ThreadsOnFork-worker) documentation.
289
+ Additional workers such as individual `threaded` and `forking` strategies will hopefully be contributed in the future.
290
+ If you are interested in helping out, please let us know.
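As a rough sketch of how the rake task maps onto the Ruby API (the queue names and numbers below are only illustrative; `threads_number` and `garbage_after` are the class attributes that `backburner:threads_on_fork:work` sets from `THREADS` and `GARBAGE`):

```ruby
require 'backburner'

# Global defaults, mirroring THREADS=2 GARBAGE=1000 on the rake task
Backburner::Workers::ThreadsOnFork.threads_number = 2
Backburner::Workers::ThreadsOnFork.garbage_after  = 1000

# Per-tube overrides use the tube_name:threads:garbage:retries format
Backburner.work(['newsletter-sender:4:500:3', 'push-message'],
                :worker => Backburner::Workers::ThreadsOnFork)
```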
283
291
 
284
292
  ### Default Queues
285
293
 
@@ -391,6 +399,7 @@ jobs processed by your beanstalk workers. An excellent addition to your Backburn
391
399
  * Kristen Tucker - Coming up with the gem name
392
400
  * [Tim Lee](https://github.com/timothy1ee), [Josh Hull](https://github.com/joshbuddy), [Nico Taing](https://github.com/Nico-Taing) - Helping me work through the idea
393
401
  * [Miso](http://gomiso.com) - Open-source friendly place to work
402
+ * [Renan T. Fernandes](https://github.com/ShadowBelmolve) - Added threads_on_fork worker
394
403
 
395
404
  ## Contributing
396
405
 
data/backburner.gemspec CHANGED
@@ -19,6 +19,6 @@ Gem::Specification.new do |s|
19
19
  s.add_runtime_dependency 'dante', '~> 0.1.5'
20
20
 
21
21
  s.add_development_dependency 'rake'
22
- s.add_development_dependency 'minitest', '~> 4.1.0'
22
+ s.add_development_dependency 'minitest', '3.2.0'
23
23
  s.add_development_dependency 'mocha'
24
24
  end
data/examples/custom.rb CHANGED
@@ -21,5 +21,5 @@ end
21
21
  Backburner.enqueue TestJob, 5, 3
22
22
  Backburner.enqueue TestJob, 10, 6
23
23
 
24
- # Work tasks
25
- Backburner.work("test-job")
24
+ # Work tasks using the threads_on_fork worker
25
+ Backburner.work("test-job", :worker => Backburner::Workers::ThreadsOnFork)
data/examples/stress.rb ADDED
@@ -0,0 +1,30 @@
1
+ $:.unshift "lib"
2
+ require 'backburner'
3
+
4
+ $values = []
5
+
6
+ # Define ruby job
7
+ class TestJob
8
+ include Backburner::Queue
9
+ queue "test-job"
10
+
11
+ def self.perform(value)
12
+ puts "[TestJob] Running perform with args: [#{value}]"
13
+ $values << value
14
+ puts "#{$values.size} total jobs processed"
15
+ end
16
+ end
17
+
18
+ # Configure Backburner
19
+ Backburner.configure do |config|
20
+ config.beanstalk_url = "beanstalk://127.0.0.1"
21
+ config.tube_namespace = "demo.production"
22
+ end
23
+
24
+ # Enqueue tasks
25
+ 1.upto(1000) do |i|
26
+ Backburner.enqueue TestJob, i
27
+ end
28
+
29
+ # Work tasks using the threads_on_fork worker
30
+ Backburner.work("test-job", :worker => Backburner::Workers::ThreadsOnFork)
data/lib/backburner.rb CHANGED
@@ -11,6 +11,7 @@ require 'backburner/hooks'
11
11
  require 'backburner/performable'
12
12
  require 'backburner/worker'
13
13
  require 'backburner/workers/simple'
14
+ require 'backburner/workers/threads_on_fork'
14
15
  require 'backburner/queue'
15
16
 
16
17
  module Backburner
data/lib/backburner/async_proxy.rb CHANGED
@@ -11,7 +11,7 @@ module Backburner
11
11
  # Options include `pri` (priority), `delay` (delay in secs), `ttr` (time to respond)
12
12
  #
13
13
  # @example
14
- # AsyncProxy(User, 10, :pri => 1000, :ttr => 1000)
14
+ # AsyncProxy.new(User, 10, :pri => 1000, :ttr => 1000)
15
15
  #
16
16
  def initialize(klazz, id=nil, opts={})
17
17
  @klazz, @id, @opts = klazz, id, opts
data/lib/backburner/logger.rb CHANGED
@@ -14,10 +14,12 @@ module Backburner
14
14
  end
15
15
 
16
16
  # Print out when a job completed
17
+ # Logs 'Completed' when no message is given, otherwise 'Finished' followed by the message
17
18
  def log_job_end(name, message = nil)
18
19
  ellapsed = Time.now - job_started_at
19
20
  ms = (ellapsed.to_f * 1000).to_i
20
- log_info("Finished #{name} in #{ms}ms #{message}")
21
+ action_word = message ? 'Finished' : 'Completed'
22
+ log_info("#{action_word} #{name} in #{ms}ms #{message}")
21
23
  end
22
24
 
23
25
  # Returns true if the job logging started
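The practical effect of the change above, shown as a hedged illustration (the timing and the retry message here are invented for the example; the actual message text comes from the worker):

```ruby
log_job_end("TestJob")                               # logs "Completed TestJob in 12ms "
log_job_end("TestJob", "(attempt 2 of 3, retrying)") # logs "Finished TestJob in 12ms (attempt 2 of 3, retrying)"
```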
data/lib/backburner/tasks.rb CHANGED
@@ -3,9 +3,32 @@
3
3
 
4
4
  namespace :backburner do
5
5
  # QUEUE=foo,bar,baz rake backburner:work
6
- desc "Start an backburner worker"
6
+ desc "Start backburner worker using default worker"
7
7
  task :work => :environment do
8
8
  queues = (ENV["QUEUE"] ? ENV["QUEUE"].split(',') : nil) rescue nil
9
9
  Backburner.work queues
10
10
  end
11
+
12
+ namespace :simple do
13
+ # QUEUE=foo,bar,baz rake backburner:simple:work
14
+ desc "Starts backburner worker using simple processing"
15
+ task :work => :environment do
16
+ queues = (ENV["QUEUE"] ? ENV["QUEUE"].split(',') : nil) rescue nil
17
+ Backburner.work queues, :worker => Backburner::Workers::Simple
18
+ end
19
+ end # simple
20
+
21
+ namespace :threads_on_fork do
22
+ # QUEUE=twitter:10:5000:5,parse_page,send_mail,verify_birthday THREADS=2 GARBAGE=1000 rake backburner:threads_on_fork:work
23
+ # The twitter tube will use 10 threads, be garbaged (re-forked) after 5000 executions, and retry failed jobs 5 times.
24
+ desc "Starts backburner worker using threads_on_fork processing"
25
+ task :work => :environment do
26
+ queues = (ENV["QUEUE"] ? ENV["QUEUE"].split(',') : nil) rescue nil
27
+ threads = ENV['THREADS'].to_i
28
+ garbage = ENV['GARBAGE'].to_i
29
+ Backburner::Workers::ThreadsOnFork.threads_number = threads if threads > 0
30
+ Backburner::Workers::ThreadsOnFork.garbage_after = garbage if garbage > 0
31
+ Backburner.work queues, :worker => Backburner::Workers::ThreadsOnFork
32
+ end
33
+ end # threads_on_fork
11
34
  end
data/lib/backburner/version.rb CHANGED
@@ -1,3 +1,3 @@
1
1
  module Backburner
2
- VERSION = "0.2.6"
2
+ VERSION = "0.3.0"
3
3
  end
data/lib/backburner/workers/threads_on_fork.rb ADDED
@@ -0,0 +1,239 @@
1
+ module Backburner
2
+ module Workers
3
+ class ThreadsOnFork < Worker
4
+
5
+ class << self
6
+ attr_accessor :shutdown
7
+ attr_accessor :threads_number
8
+ attr_accessor :garbage_after
9
+ attr_accessor :is_child
10
+
11
+ # Returns the pids of all children/forks that are still alive
12
+ def child_pids
13
+ return [] if is_child
14
+ @child_pids ||= []
15
+ tmp_ids = []
16
+ for id in @child_pids
17
+ next if id.to_i == Process.pid
18
+ begin
19
+ Process.kill(0, id)
20
+ tmp_ids << id
21
+ rescue Errno::ESRCH => e
22
+ end
23
+ end
24
+ @child_pids = tmp_ids if @child_pids != tmp_ids
25
+ @child_pids
26
+ end
27
+
28
+ # Send a SIGTERM signal to all children
29
+ # This is the same as a normal exit
30
+ # We are simply asking the children to exit
31
+ def stop_forks
32
+ for id in child_pids
33
+ begin
34
+ Process.kill("SIGTERM", id)
35
+ rescue Errno::ESRCH
36
+ end
37
+ end
38
+ end
39
+
40
+ # Send a SIGKILL signal to all children
41
+ # This is the equivalent of assassination
42
+ # We are KILLING those folks that don't obey us
43
+ def kill_forks
44
+ for id in child_pids
45
+ begin
46
+ Process.kill("SIGKILL", id)
47
+ rescue Errno::ESRCH
48
+ end
49
+ end
50
+ end
51
+
52
+ def finish_forks
53
+ return if is_child
54
+ ids = child_pids
55
+ if ids.length > 0
56
+ puts "[ThreadsOnFork workers] Stopping forks: #{ids.join(", ")}"
57
+ stop_forks
58
+ Kernel.sleep 1
59
+ ids = child_pids
60
+ if ids.length > 0
61
+ puts "[ThreadsOnFork workers] Killing remaining forks: #{ids.join(", ")}"
62
+ kill_forks
63
+ Process.waitall
64
+ end
65
+ end
66
+ end
67
+ end
68
+
69
+ # Custom initializer just to set @tubes_data
70
+ def initialize(*args)
71
+ @tubes_data = {}
72
+ super
73
+ end
74
+
75
+ # Processes the special tube_names format of the ThreadsOnFork worker
76
+ # The idea is tube_name:custom_threads_limit:custom_garbage_limit:custom_retries
77
+ # Any custom value can be omitted. So if you want to set just the custom_retries
78
+ # you will need to write it as 'tube_name:::10'
79
+ #
80
+ # @example
81
+ # process_tube_names(['foo:10:5:1', 'bar:2::3', 'lol'])
82
+ # => ['foo', 'bar', 'lol']
83
+ def process_tube_names(tube_names)
84
+ names = compact_tube_names(tube_names)
85
+ if names.nil?
86
+ nil
87
+ else
88
+ names.map do |name|
89
+ data = name.split(":")
90
+ tube_name = data.first
91
+ threads_number = data[1].empty? ? nil : data[1].to_i rescue nil
92
+ garbage_number = data[2].empty? ? nil : data[2].to_i rescue nil
93
+ retries_number = data[3].empty? ? nil : data[3].to_i rescue nil
94
+ @tubes_data[expand_tube_name(tube_name)] = {
95
+ :threads => threads_number,
96
+ :garbage => garbage_number,
97
+ :retries => retries_number
98
+ }
99
+ tube_name
100
+ end
101
+ end
102
+ end
103
+
104
+ def prepare
105
+ self.tube_names ||= Backburner.default_queues.any? ? Backburner.default_queues : all_existing_queues
106
+ self.tube_names = Array(self.tube_names)
107
+ tube_names.map! { |name| expand_tube_name(name) }
108
+ log_info "Working #{tube_names.size} queues: [ #{tube_names.join(', ')} ]"
109
+ end
110
+
111
+ # For each tube we will call fork_and_watch to create the fork
112
+ # The lock argument defines whether this method should block or not
113
+ def start(lock=true)
114
+ prepare
115
+ tube_names.each do |name|
116
+ fork_and_watch(name)
117
+ end
118
+
119
+ if lock
120
+ sleep 0.1 while true
121
+ end
122
+ end
123
+
124
+ # Make the fork and create a thread to watch the child process
125
+ # The exit code '99' means that the fork exited because of the garbage limit
126
+ # Any other code is an error
127
+ def fork_and_watch(name)
128
+ create_thread(name) do |tube_name|
129
+ until self.class.shutdown
130
+ pid = fork_tube(tube_name)
131
+ _, status = wait_for_process(pid)
132
+
133
+ # 99 = garbaged
134
+ if status.exitstatus != 99
135
+ log_error("Catastrophic failure: tube #{tube_name} exited with code #{status.exitstatus}.")
136
+ end
137
+ end
138
+ end
139
+ end
140
+
141
+ # This makes it easy to test
142
+ def fork_tube(name)
143
+ fork_it do
144
+ fork_inner(name)
145
+ end
146
+ end
147
+
148
+ # Here we are already in the forked child
149
+ # We will watch just the selected tube and change the configuration of
150
+ # config.max_job_retries if needed
151
+ #
152
+ # If we limit the number of threads to 1 it will just run in a loop without
153
+ # creating any extra thread.
154
+ def fork_inner(name)
155
+ watch_tube(name)
156
+
157
+ if @tubes_data[name]
158
+ config.max_job_retries = @tubes_data[name][:retries] if @tubes_data[name][:retries]
159
+ else
160
+ @tubes_data[name] = {}
161
+ end
162
+ @garbage_after = @tubes_data[name][:garbage] || self.class.garbage_after
163
+ @threads_number = (@tubes_data[name][:threads] || self.class.threads_number || 1).to_i
164
+
165
+ @runs = 0
166
+
167
+ if @threads_number == 1
168
+ run_while_can
169
+ else
170
+ threads_count = Thread.list.count
171
+ @threads_number.times do
172
+ create_thread do
173
+ run_while_can
174
+ end
175
+ end
176
+ sleep 0.1 while Thread.list.count > threads_count
177
+ end
178
+
179
+ coolest_exit
180
+ end
181
+
182
+ # Run work_one_job while we can
183
+ def run_while_can
184
+ while @garbage_after.nil? or @garbage_after > @runs
185
+ @runs += 1
186
+ work_one_job
187
+ end
188
+ end
189
+
190
+ # Shortcut for watching a tube on beanstalk connection
191
+ def watch_tube(name)
192
+ connection.tubes.watch!(name)
193
+ end
194
+
195
+ # Exit with Kernel.exit! to avoid running at_exit callbacks that belong to the
196
+ # parent process
197
+ # We use exit code 99 to signal that the fork reached its garbage limit
198
+ def coolest_exit
199
+ Kernel.exit! 99
200
+ end
201
+
202
+ # Create a thread. Easy to test
203
+ def create_thread(*args, &block)
204
+ Thread.new(*args, &block)
205
+ end
206
+
207
+ # Wait for a specific process. Easy to test
208
+ def wait_for_process(pid)
209
+ out = Process.wait2(pid)
210
+ self.class.child_pids.delete(pid)
211
+ out
212
+ end
213
+
214
+ # Forks the specified block and adds the process to the child process pool
215
+ def fork_it(&blk)
216
+ pid = Kernel.fork do
217
+ self.class.is_child = true
218
+ $0 = "[ThreadsOnFork worker] parent: #{Process.ppid}"
219
+ @connection = Connection.new(Backburner.configuration.beanstalk_url)
220
+ blk.call
221
+ end
222
+ self.class.child_pids << pid
223
+ pid
224
+ end
225
+
226
+ def connection
227
+ @connection || super
228
+ end
229
+
230
+ end
231
+ end
232
+ end
233
+
234
+ at_exit do
235
+ unless Backburner::Workers::ThreadsOnFork.is_child
236
+ Backburner::Workers::ThreadsOnFork.shutdown = true
237
+ end
238
+ Backburner::Workers::ThreadsOnFork.finish_forks
239
+ end
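To make the `tube_name:threads:garbage:retries` parsing above concrete, here is a hedged sketch of what `process_tube_names` leaves in `@tubes_data` (tube names are prefixed with the configured `tube_namespace`; `demo.test` below is simply the namespace the test suite uses):

```ruby
worker = Backburner::Workers::ThreadsOnFork.new(%w(foo:10:5000:5 bar:2 baz))

worker.tube_names
# => ["foo", "bar", "baz"]

worker.instance_variable_get("@tubes_data")
# => { "demo.test.foo" => { :threads => 10,  :garbage => 5000, :retries => 5   },
#      "demo.test.bar" => { :threads => 2,   :garbage => nil,  :retries => nil },
#      "demo.test.baz" => { :threads => nil, :garbage => nil,  :retries => nil } }
```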
data/test/async_proxy_test.rb ADDED
@@ -0,0 +1,37 @@
1
+ require File.expand_path('../test_helper', __FILE__)
2
+
3
+ class AsyncUser; def self.invoke_hook_events(*args); true; end; end
4
+
5
+ describe "Backburner::AsyncProxy class" do
6
+ before do
7
+ Backburner.default_queues.clear
8
+ end
9
+
10
+ after do
11
+ clear_jobs!("async-user")
12
+ end
13
+
14
+ describe "for method_missing enqueue" do
15
+ should "enqueue job onto worker with no args" do
16
+ @async = Backburner::AsyncProxy.new(AsyncUser, 10, :pri => 1000, :ttr => 100)
17
+ @async.foo
18
+ job, body = pop_one_job("async-user")
19
+ assert_equal "AsyncUser", body["class"]
20
+ assert_equal [10, "foo"], body["args"]
21
+ assert_equal 100, job.ttr
22
+ assert_equal 1000, job.pri
23
+ job.delete
24
+ end
25
+
26
+ should "enqueue job onto worker with args" do
27
+ @async = Backburner::AsyncProxy.new(AsyncUser, 10, :pri => 1000, :ttr => 100)
28
+ @async.bar(1, 2, 3)
29
+ job, body = pop_one_job("async-user")
30
+ assert_equal "AsyncUser", body["class"]
31
+ assert_equal [10, "bar", 1, 2, 3], body["args"]
32
+ assert_equal 100, job.ttr
33
+ assert_equal 1000, job.pri
34
+ job.delete
35
+ end
36
+ end # method_missing
37
+ end # AsyncProxy
data/test/fixtures/test_fork_jobs.rb ADDED
@@ -0,0 +1,58 @@
1
+ class ResponseJob
2
+ include Backburner::Queue
3
+ queue_priority 1000
4
+ def self.perform(data)
5
+ $worker_test_count += data['worker_test_count'].to_i if data['worker_test_count']
6
+ $worker_success = data['worker_success'] if data['worker_success']
7
+ $worker_test_count = data['worker_test_count_set'].to_i if data['worker_test_count_set']
8
+ $worker_raise = data['worker_raise'] if data['worker_raise']
9
+ end
10
+ end
11
+
12
+ class TestJobFork
13
+ include Backburner::Queue
14
+ queue_priority 1000
15
+ def self.perform(x, y)
16
+ Backburner::Workers::ThreadsOnFork.enqueue ResponseJob, [{
17
+ :worker_test_count_set => x + y
18
+ }], :queue => 'response'
19
+ end
20
+ end
21
+
22
+ class TestFailJobFork
23
+ include Backburner::Queue
24
+ def self.perform(x, y)
25
+ Backburner::Workers::ThreadsOnFork.enqueue ResponseJob, [{
26
+ :worker_raise => true
27
+ }], :queue => 'response'
28
+ end
29
+ end
30
+
31
+ class TestRetryJobFork
32
+ include Backburner::Queue
33
+ def self.perform(x, y)
34
+ $worker_test_count += 1
35
+
36
+ if $worker_test_count <= 2
37
+ Backburner::Workers::ThreadsOnFork.enqueue ResponseJob, [{
38
+ :worker_test_count => 1
39
+ }], :queue => 'response'
40
+
41
+ raise RuntimeError
42
+ else # succeeds
43
+ Backburner::Workers::ThreadsOnFork.enqueue ResponseJob, [{
44
+ :worker_test_count => 1,
45
+ :worker_success => true
46
+ }], :queue => 'response'
47
+ end
48
+ end
49
+ end
50
+
51
+ class TestAsyncJobFork
52
+ include Backburner::Performable
53
+ def self.foo(x, y)
54
+ Backburner::Workers::ThreadsOnFork.enqueue ResponseJob, [{
55
+ :worker_test_count_set => x * y
56
+ }], :queue => 'response'
57
+ end
58
+ end
data/test/helpers/templogger.rb ADDED
@@ -0,0 +1,22 @@
1
+ class Templogger
2
+ attr_reader :logger, :log_path
3
+
4
+ def initialize(root_path)
5
+ @file = Tempfile.new('foo', root_path)
6
+ @log_path = @file.path
7
+ @logger = Logger.new(@log_path)
8
+ end
9
+
10
+ # Example: wait_for_match(/Completed TestJobFork/m)
11
+ def wait_for_match(match_pattern)
12
+ sleep 0.1 until self.body =~ match_pattern
13
+ end
14
+
15
+ def body
16
+ File.read(@log_path)
17
+ end
18
+
19
+ def close
20
+ @file.close
21
+ end
22
+ end
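A hedged usage sketch for `Templogger`, following how the practical tests later in this diff use it (it assumes `tempfile` and `logger` are already loaded, e.g. by the test helper):

```ruby
templogger = Templogger.new('/tmp')
Backburner.configure { |config| config.logger = templogger.logger }

# ... enqueue jobs and start a worker here ...

templogger.wait_for_match(/Completed TestJobFork/m) # block until the log shows completion
puts templogger.body                                # full contents of the temp log file
templogger.close
```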
data/test/test_helper.rb CHANGED
@@ -4,6 +4,7 @@ require 'minitest/autorun'
4
4
  require 'mocha'
5
5
  $:.unshift File.expand_path("../../lib")
6
6
  require 'backburner'
7
+ require File.expand_path('../helpers/templogger', __FILE__)
7
8
 
8
9
  # Configure Backburner
9
10
  Backburner.configure do |config|
data/test/workers/simple_worker_test.rb CHANGED
@@ -1,6 +1,6 @@
1
- require File.expand_path('../test_helper', __FILE__)
2
- require File.expand_path('../fixtures/test_jobs', __FILE__)
3
- require File.expand_path('../fixtures/hooked', __FILE__)
1
+ require File.expand_path('../../test_helper', __FILE__)
2
+ require File.expand_path('../../fixtures/test_jobs', __FILE__)
3
+ require File.expand_path('../../fixtures/hooked', __FILE__)
4
4
 
5
5
  describe "Backburner::Workers::Basic module" do
6
6
  before do
@@ -149,7 +149,7 @@ describe "Backburner::Workers::Basic module" do
149
149
  end
150
150
  assert_match /attempt 1 of 3, retrying/, out.first
151
151
  assert_match /attempt 2 of 3, retrying/, out[1]
152
- assert_match /Finished TestRetryJob/m, out.last
152
+ assert_match /Completed TestRetryJob/m, out.last
153
153
  refute_match(/failed/, out.last)
154
154
  assert_equal 3, $worker_test_count
155
155
  assert_equal true, $worker_success
@@ -223,7 +223,7 @@ describe "Backburner::Workers::Basic module" do
223
223
  worker.work_one_job
224
224
  end
225
225
  assert_match /!!before_perform_foo!! \[nil, "foo", 10\]/, out
226
- assert_match /before_perform_foo.*Finished/m, out
226
+ assert_match /before_perform_foo.*Completed/m, out
227
227
  refute_match(/Fail ran!!/, out)
228
228
  refute_match(/HookFailError/, out)
229
229
  end # stopping perform
data/test/workers/threads_on_fork_test.rb ADDED
@@ -0,0 +1,412 @@
1
+ require File.expand_path('../../test_helper', __FILE__)
2
+ require File.expand_path('../../fixtures/test_fork_jobs', __FILE__)
3
+
4
+ describe "Backburner::Workers::ThreadsOnFork module" do
5
+
6
+ before do
7
+ Backburner.default_queues.clear
8
+ @worker_class = Backburner::Workers::ThreadsOnFork
9
+ @worker_class.shutdown = false
10
+ @worker_class.is_child = false
11
+ @worker_class.threads_number = 1
12
+ @worker_class.garbage_after = 1
13
+ @ignore_forks = false
14
+ end
15
+
16
+ after do
17
+ Backburner.configure { |config| config.max_job_retries = 0; config.retry_delay = 5; config.logger = nil }
18
+ unless @ignore_forks
19
+ if @worker_class.instance_variable_get("@child_pids").length > 0
20
+ raise "Why is there forks alive?"
21
+ end
22
+ end
23
+ end
24
+
25
+ describe "for process_tube_names method" do
26
+ it "should interpreter the job_name:threads_limit:garbage_after:retries format" do
27
+ worker = @worker_class.new(["foo:1:2:3"])
28
+ assert_equal ["foo"], worker.tube_names
29
+ end
30
+
31
+ it "should interpreter event if is missing values" do
32
+ tubes = %W(foo1:1:2:3 foo2:4:5 foo3:6 foo4 foo5::7:8 foo6:::9 foo7::10)
33
+ worker = @worker_class.new(tubes)
34
+ assert_equal %W(foo1 foo2 foo3 foo4 foo5 foo6 foo7), worker.tube_names
35
+ end
36
+
37
+ it "should store interpreted values correctly" do
38
+ tubes = %W(foo1:1:2:3 foo2:4:5 foo3:6 foo4 foo5::7:8 foo6:::9 foo7::10)
39
+ worker = @worker_class.new(tubes)
40
+ assert_equal({
41
+ "demo.test.foo1" => { :threads => 1, :garbage => 2, :retries => 3 },
42
+ "demo.test.foo2" => { :threads => 4, :garbage => 5, :retries => nil },
43
+ "demo.test.foo3" => { :threads => 6, :garbage => nil, :retries => nil },
44
+ "demo.test.foo4" => { :threads => nil, :garbage => nil, :retries => nil },
45
+ "demo.test.foo5" => { :threads => nil, :garbage => 7, :retries => 8 },
46
+ "demo.test.foo6" => { :threads => nil, :garbage => nil, :retries => 9 },
47
+ "demo.test.foo7" => { :threads => nil, :garbage => 10, :retries => nil }
48
+ }, worker.instance_variable_get("@tubes_data"))
49
+ end
50
+ end
51
+
52
+ describe "for prepare method" do
53
+ it "should watch specified tubes" do
54
+ worker = @worker_class.new(["foo", "bar"])
55
+ out = capture_stdout { worker.prepare }
56
+ assert_equal ["demo.test.foo", "demo.test.bar"], worker.tube_names
57
+ assert_match /demo\.test\.foo/, out
58
+ end # multiple
59
+
60
+ it "should watch single tube" do
61
+ worker = @worker_class.new("foo")
62
+ out = capture_stdout { worker.prepare }
63
+ assert_equal ["demo.test.foo"], worker.tube_names
64
+ assert_match /demo\.test\.foo/, out
65
+ end # single
66
+
67
+ it "should respect default_queues settings" do
68
+ Backburner.default_queues.concat(["foo", "bar"])
69
+ worker = @worker_class.new
70
+ out = capture_stdout { worker.prepare }
71
+ assert_equal ["demo.test.foo", "demo.test.bar"], worker.tube_names
72
+ assert_match /demo\.test\.foo/, out
73
+ end
74
+
75
+ it "should assign based on all tubes" do
76
+ @worker_class.any_instance.expects(:all_existing_queues).once.returns("bar")
77
+ worker = @worker_class.new
78
+ out = capture_stdout { worker.prepare }
79
+ assert_equal ["demo.test.bar"], worker.tube_names
80
+ assert_match /demo\.test\.bar/, out
81
+ end # all assign
82
+
83
+ it "should properly retrieve all tubes" do
84
+ worker = @worker_class.new
85
+ out = capture_stdout { worker.prepare }
86
+ assert_contains worker.tube_names, "demo.test.test-job-fork"
87
+ assert_match /demo\.test\.test-job-fork/, out
88
+ end # all read
89
+ end # prepare
90
+
91
+ describe "forking and threading" do
92
+
93
+ it "start should call fork_and_watch for each tube" do
94
+ worker = @worker_class.new(%W(foo bar))
95
+ worker.expects(:fork_and_watch).with("demo.test.foo").once
96
+ worker.expects(:fork_and_watch).with("demo.test.bar").once
97
+ silenced { worker.start(false) }
98
+ end
99
+
100
+ it "fork_and_watch should create a thread to fork and watch" do
101
+ worker = @worker_class.new(%(foo))
102
+ worker.expects(:create_thread).once.with("demo.test.foo")
103
+ silenced { worker.start(false) }
104
+ end
105
+
106
+ it "fork_and_watch thread should wait with wait_for_process" do
107
+ process_exit = stub('process_exit')
108
+ process_exit.expects(:exitstatus).returns(99)
109
+ worker = @worker_class.new(%(foo))
110
+ worker.expects(:wait_for_process).with(12).returns([nil, process_exit])
111
+
112
+ wc = @worker_class
113
+ # TODO: Is there a better way to do this?
114
+ worker.define_singleton_method :fork_it do
115
+ wc.shutdown = true
116
+ 12
117
+ end
118
+ def worker.create_thread(*args, &block); block.call(*args) end
119
+
120
+ out = silenced(2) { worker.start(false) }
121
+ refute_match /Catastrophic failure/, out
122
+ end
123
+
124
+ it "fork_and_watch thread should log an error if exitstatus is != 99" do
125
+ process_exit = stub('process_exit')
126
+ process_exit.expects(:exitstatus).twice.returns(0)
127
+ worker = @worker_class.new(%(foo))
128
+ worker.expects(:wait_for_process).with(12).returns([nil, process_exit])
129
+
130
+ wc = @worker_class
131
+ # TODO: Is there a better way to do this?
132
+ worker.define_singleton_method :fork_it do
133
+ wc.shutdown = true
134
+ 12
135
+ end
136
+ def worker.create_thread(*args, &block); block.call(*args) end
137
+ out = silenced(2) { worker.start(false) }
138
+ assert_match /Catastrophic failure: tube demo\.test\.foo exited with code 0\./, out
139
+ end
140
+
141
+ describe "fork_inner" do
142
+
143
+ before do
144
+ @worker_class.any_instance.expects(:coolest_exit).once
145
+ end
146
+
147
+ it "should watch just the channel it receive as argument" do
148
+ worker = @worker_class.new(%(foo))
149
+ @worker_class.expects(:threads_number).returns(1)
150
+ worker.expects(:run_while_can).once
151
+ silenced do
152
+ worker.prepare
153
+ worker.fork_inner('demo.test.bar')
154
+ end
155
+ assert_same_elements %W(demo.test.bar), @worker_class.connection.tubes.watched.map(&:name)
156
+ end
157
+
158
+ it "should not create threads if the number of threads is 1" do
159
+ worker = @worker_class.new(%(foo))
160
+ @worker_class.expects(:threads_number).returns(1)
161
+ worker.expects(:run_while_can).once
162
+ worker.expects(:create_thread).never
163
+ silenced do
164
+ worker.prepare
165
+ worker.fork_inner('demo.test.foo')
166
+ end
167
+ end
168
+
169
+ it "should create threads if the number of threads is > 1" do
170
+ worker = @worker_class.new(%(foo))
171
+ @worker_class.expects(:threads_number).returns(5)
172
+ worker.expects(:create_thread).times(5)
173
+ silenced do
174
+ worker.prepare
175
+ worker.fork_inner('demo.test.foo')
176
+ end
177
+ end
178
+
179
+ it "should create threads that call run_while_can" do
180
+ worker = @worker_class.new(%(foo))
181
+ @worker_class.expects(:threads_number).returns(5)
182
+ worker.expects(:run_while_can).times(5)
183
+ # TODO
184
+ def worker.create_thread(*args, &block); block.call(*args) end
185
+ silenced do
186
+ worker.prepare
187
+ worker.fork_inner('demo.test.foo')
188
+ end
189
+ end
190
+
191
+ it "should set @garbage_after, @threads_number and set retries if needed" do
192
+ worker = @worker_class.new(%W(foo1 foo2:10 foo3:20:30 foo4:40:50:60))
193
+ default_threads = 1
194
+ default_garbage = 5
195
+ default_retries = 100
196
+ @worker_class.expects(:threads_number).times(1).returns(default_threads)
197
+ @worker_class.expects(:garbage_after).times(2).returns(default_garbage)
198
+ @worker_class.any_instance.expects(:coolest_exit).times(3)
199
+ Backburner.configuration.max_job_retries = default_retries
200
+
201
+ worker.expects(:create_thread).times(70)
202
+ worker.expects(:run_while_can).once
203
+
204
+ silenced do
205
+ worker.prepare
206
+ worker.fork_inner('demo.test.foo1')
207
+ end
208
+
209
+ assert_equal worker.instance_variable_get("@threads_number"), default_threads
210
+ assert_equal worker.instance_variable_get("@garbage_after"), default_garbage
211
+ assert_equal Backburner.configuration.max_job_retries, default_retries
212
+
213
+ silenced do
214
+ worker.fork_inner('demo.test.foo2')
215
+ end
216
+
217
+ assert_equal worker.instance_variable_get("@threads_number"), 10
218
+ assert_equal worker.instance_variable_get("@garbage_after"), default_garbage
219
+ assert_equal Backburner.configuration.max_job_retries, default_retries
220
+
221
+ silenced do
222
+ worker.fork_inner('demo.test.foo3')
223
+ end
224
+
225
+ assert_equal worker.instance_variable_get("@threads_number"), 20
226
+ assert_equal worker.instance_variable_get("@garbage_after"), 30
227
+ assert_equal Backburner.configuration.max_job_retries, default_retries
228
+
229
+ silenced do
230
+ worker.fork_inner('demo.test.foo4')
231
+ end
232
+
233
+ assert_equal worker.instance_variable_get("@threads_number"), 40
234
+ assert_equal worker.instance_variable_get("@garbage_after"), 50
235
+ assert_equal Backburner.configuration.max_job_retries, 60
236
+ end
237
+
238
+ end
239
+
240
+ describe "cleanup on parent" do
241
+
242
+ it "child_pids should return a list of alive children pids" do
243
+ worker = @worker_class.new(%W(foo))
244
+ Kernel.expects(:fork).once.returns(12345)
245
+ Process.expects(:kill).with(0, 12345).once
246
+ Process.expects(:pid).once.returns(12346)
247
+ assert_equal [], @worker_class.child_pids
248
+ worker.fork_it {}
249
+ child_pids = @worker_class.child_pids
250
+ assert_equal [12345], child_pids
251
+ child_pids.clear
252
+ end
253
+
254
+ it "child_pids should return an empty array if is_child" do
255
+ Process.expects(:pid).never
256
+ @worker_class.is_child = true
257
+ @worker_class.child_pids << 12345
258
+ assert_equal [], @worker_class.child_pids
259
+ end
260
+
261
+ it "stop_forks should send a SIGTERM for every child" do
262
+ Process.expects(:pid).returns(12346).at_least(1)
263
+ Process.expects(:kill).with(0, 12345).at_least(1)
264
+ Process.expects(:kill).with(0, 12347).at_least(1)
265
+ Process.expects(:kill).with("SIGTERM", 12345)
266
+ Process.expects(:kill).with("SIGTERM", 12347)
267
+ @worker_class.child_pids << 12345
268
+ @worker_class.child_pids << 12347
269
+ assert_equal [12345, 12347], @worker_class.child_pids
270
+ @worker_class.stop_forks
271
+ @worker_class.child_pids.clear
272
+ end
273
+
274
+ it "kill_forks should send a SIGKILL for every child" do
275
+ Process.expects(:pid).returns(12346).at_least(1)
276
+ Process.expects(:kill).with(0, 12345).at_least(1)
277
+ Process.expects(:kill).with(0, 12347).at_least(1)
278
+ Process.expects(:kill).with("SIGKILL", 12345)
279
+ Process.expects(:kill).with("SIGKILL", 12347)
280
+ @worker_class.child_pids << 12345
281
+ @worker_class.child_pids << 12347
282
+ assert_equal [12345, 12347], @worker_class.child_pids
283
+ @worker_class.kill_forks
284
+ @worker_class.child_pids.clear
285
+ end
286
+
287
+ it "finish_forks should call stop_forks, kill_forks and Process.waitall" do
288
+ Process.expects(:pid).returns(12346).at_least(1)
289
+ Process.expects(:kill).with(0, 12345).at_least(1)
290
+ Process.expects(:kill).with(0, 12347).at_least(1)
291
+ Process.expects(:kill).with("SIGTERM", 12345)
292
+ Process.expects(:kill).with("SIGTERM", 12347)
293
+ Process.expects(:kill).with("SIGKILL", 12345)
294
+ Process.expects(:kill).with("SIGKILL", 12347)
295
+ Kernel.expects(:sleep).with(1)
296
+ Process.expects(:waitall)
297
+ @worker_class.child_pids << 12345
298
+ @worker_class.child_pids << 12347
299
+ assert_equal [12345, 12347], @worker_class.child_pids
300
+ silenced do
301
+ @worker_class.finish_forks
302
+ end
303
+ @worker_class.child_pids.clear
304
+ end
305
+
306
+ it "finish_forks should not do anything if is_child" do
307
+ @worker_class.expects(:stop_forks).never
308
+ @worker_class.is_child = true
309
+ @worker_class.child_pids << 12345
310
+ silenced do
311
+ @worker_class.finish_forks
312
+ end
313
+ end
314
+
315
+ end # cleanup on parent
316
+
317
+ describe "practical tests" do
318
+
319
+ before do
320
+ @templogger = Templogger.new('/tmp')
321
+ Backburner.configure { |config| config.logger = @templogger.logger }
322
+ $worker_test_count = 0
323
+ $worker_success = false
324
+ $worker_raise = false
325
+ clear_jobs!('response')
326
+ clear_jobs!('foo.bar.1', 'foo.bar.2', 'foo.bar.3', 'foo.bar.4', 'foo.bar.5')
327
+ @worker_class.threads_number = 1
328
+ @worker_class.garbage_after = 10
329
+ silenced do
330
+ @response_worker = @worker_class.new('response')
331
+ @response_worker.watch_tube('demo.test.response')
332
+ end
333
+ @ignore_forks = true
334
+ end
335
+
336
+ after do
337
+ @templogger.close
338
+ clear_jobs!('response')
339
+ clear_jobs!('foo.bar.1', 'foo.bar.2', 'foo.bar.3', 'foo.bar.4', 'foo.bar.5')
340
+ @worker_class.shutdown = true
341
+ silenced do
342
+ @worker_class.stop_forks
343
+ Timeout::timeout(2) { sleep 0.1 while @worker_class.child_pids.length > 0 }
344
+ @worker_class.kill_forks
345
+ Timeout::timeout(2) { sleep 0.1 while @worker_class.child_pids.length > 0 }
346
+ end
347
+ end
348
+
349
+ it "should work an enqueued job" do
350
+ @worker = @worker_class.new('foo.bar.1')
351
+ @worker.start(false)
352
+ @worker_class.enqueue TestJobFork, [1, 2], :queue => "foo.bar.1"
353
+ silenced(2) do
354
+ @templogger.wait_for_match(/Completed TestJobFork/m)
355
+ @response_worker.work_one_job
356
+ end
357
+ assert_equal 3, $worker_test_count
358
+ end # enqueue
359
+
360
+ it "should work for an async job" do
361
+ @worker = @worker_class.new('foo.bar.2')
362
+ @worker.start(false)
363
+ TestAsyncJobFork.async(:queue => 'foo.bar.2').foo(3, 5)
364
+ silenced(2) do
365
+ @templogger.wait_for_match(/Completed TestAsyncJobFork/m)
366
+ @response_worker.work_one_job
367
+ end
368
+ assert_equal 15, $worker_test_count
369
+ end # async
370
+
371
+ it "should fail quietly if there's an argument error" do
372
+ @worker = @worker_class.new('foo.bar.3')
373
+ @worker.start(false)
374
+ @worker_class.enqueue TestJobFork, ["bam", "foo", "bar"], :queue => "foo.bar.3"
375
+ silenced(5) do
376
+ @templogger.wait_for_match(/Finished TestJobFork.*attempt 1 of 1/m)
377
+ end
378
+ assert_match(/Exception ArgumentError/, @templogger.body)
379
+ assert_equal 0, $worker_test_count
380
+ end # fail, argument
381
+
382
+ it "should support retrying jobs and burying" do
383
+ Backburner.configure { |config| config.max_job_retries = 1; config.retry_delay = 0 }
384
+ @worker = @worker_class.new('foo.bar.4')
385
+ @worker.start(false)
386
+ @worker_class.enqueue TestRetryJobFork, ["bam", "foo"], :queue => 'foo.bar.4'
387
+ silenced(2) do
388
+ @templogger.wait_for_match(/Finished TestRetryJobFork.*attempt 2 of 2/m)
389
+ 2.times { @response_worker.work_one_job }
390
+ end
391
+ assert_equal 2, $worker_test_count
392
+ assert_equal false, $worker_success
393
+ end # retry, bury
394
+
395
+ it "should support retrying jobs and succeeds" do
396
+ Backburner.configure { |config| config.max_job_retries = 2; config.retry_delay = 0 }
397
+ @worker = @worker_class.new('foo.bar.5')
398
+ @worker.start(false)
399
+ @worker_class.enqueue TestRetryJobFork, ["bam", "foo"], :queue => 'foo.bar.5'
400
+ silenced(2) do
401
+ @templogger.wait_for_match(/Completed TestRetryJobFork/m)
402
+ 3.times { @response_worker.work_one_job }
403
+ end
404
+ assert_equal 3, $worker_test_count
405
+ assert_equal true, $worker_success
406
+ end # retrying, succeeds
407
+
408
+ end # practical tests
409
+
410
+ end # forking and threading
411
+
412
+ end # Backburner::Workers::ThreadsOnFork module
metadata CHANGED
@@ -1,7 +1,7 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: backburner
3
3
  version: !ruby/object:Gem::Version
4
- version: 0.2.6
4
+ version: 0.3.0
5
5
  prerelease:
6
6
  platform: ruby
7
7
  authors:
@@ -9,7 +9,7 @@ authors:
9
9
  autorequire:
10
10
  bindir: bin
11
11
  cert_chain: []
12
- date: 2012-11-13 00:00:00.000000000 Z
12
+ date: 2012-11-15 00:00:00.000000000 Z
13
13
  dependencies:
14
14
  - !ruby/object:Gem::Dependency
15
15
  name: beaneater
@@ -64,17 +64,17 @@ dependencies:
64
64
  requirement: !ruby/object:Gem::Requirement
65
65
  none: false
66
66
  requirements:
67
- - - ~>
67
+ - - '='
68
68
  - !ruby/object:Gem::Version
69
- version: 4.1.0
69
+ version: 3.2.0
70
70
  type: :development
71
71
  prerelease: false
72
72
  version_requirements: !ruby/object:Gem::Requirement
73
73
  none: false
74
74
  requirements:
75
- - - ~>
75
+ - - '='
76
76
  - !ruby/object:Gem::Version
77
- version: 4.1.0
77
+ version: 3.2.0
78
78
  - !ruby/object:Gem::Dependency
79
79
  name: mocha
80
80
  requirement: !ruby/object:Gem::Requirement
@@ -117,6 +117,7 @@ files:
117
117
  - examples/hooked.rb
118
118
  - examples/retried.rb
119
119
  - examples/simple.rb
120
+ - examples/stress.rb
120
121
  - lib/backburner.rb
121
122
  - lib/backburner/async_proxy.rb
122
123
  - lib/backburner/configuration.rb
@@ -131,19 +132,24 @@ files:
131
132
  - lib/backburner/version.rb
132
133
  - lib/backburner/worker.rb
133
134
  - lib/backburner/workers/simple.rb
135
+ - lib/backburner/workers/threads_on_fork.rb
136
+ - test/async_proxy_test.rb
134
137
  - test/back_burner_test.rb
135
138
  - test/connection_test.rb
136
139
  - test/fixtures/hooked.rb
140
+ - test/fixtures/test_fork_jobs.rb
137
141
  - test/fixtures/test_jobs.rb
142
+ - test/helpers/templogger.rb
138
143
  - test/helpers_test.rb
139
144
  - test/hooks_test.rb
140
145
  - test/job_test.rb
141
146
  - test/logger_test.rb
142
147
  - test/performable_test.rb
143
148
  - test/queue_test.rb
144
- - test/simple_worker_test.rb
145
149
  - test/test_helper.rb
146
150
  - test/worker_test.rb
151
+ - test/workers/simple_worker_test.rb
152
+ - test/workers/threads_on_fork_test.rb
147
153
  homepage: http://github.com/nesquena/backburner
148
154
  licenses: []
149
155
  post_install_message:
@@ -169,17 +175,21 @@ signing_key:
169
175
  specification_version: 3
170
176
  summary: Reliable beanstalk background job processing made easy for Ruby and Sinatra
171
177
  test_files:
178
+ - test/async_proxy_test.rb
172
179
  - test/back_burner_test.rb
173
180
  - test/connection_test.rb
174
181
  - test/fixtures/hooked.rb
182
+ - test/fixtures/test_fork_jobs.rb
175
183
  - test/fixtures/test_jobs.rb
184
+ - test/helpers/templogger.rb
176
185
  - test/helpers_test.rb
177
186
  - test/hooks_test.rb
178
187
  - test/job_test.rb
179
188
  - test/logger_test.rb
180
189
  - test/performable_test.rb
181
190
  - test/queue_test.rb
182
- - test/simple_worker_test.rb
183
191
  - test/test_helper.rb
184
192
  - test/worker_test.rb
193
+ - test/workers/simple_worker_test.rb
194
+ - test/workers/threads_on_fork_test.rb
185
195
  has_rdoc: