backburner 1.3.0 → 1.3.1
- checksums.yaml +4 -4
- data/.travis.yml +1 -0
- data/CHANGELOG.md +5 -1
- data/HOOKS.md +26 -2
- data/README.md +16 -4
- data/backburner.gemspec +2 -0
- data/lib/backburner.rb +1 -0
- data/lib/backburner/queue.rb +2 -2
- data/lib/backburner/tasks.rb +20 -9
- data/lib/backburner/version.rb +1 -1
- data/lib/backburner/workers/threading.rb +129 -0
- data/lib/backburner/workers/threads_on_fork.rb +0 -1
- data/test/workers/simple_worker_test.rb +1 -1
- data/test/workers/threading_worker_test.rb +68 -0
- metadata +45 -14
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: ffcc49440304bb0d25a3710a3d94442fd1881f51
+  data.tar.gz: 2e9318efc9a04979135fecc8edfce4e8c8912386
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: caa6aa25343e41b2e8097be38d6b5679f304b9f417f2b0559e085caa2165660fc3ab132f5a920a49360faf16629b84b2d1b740ddf8332b0106df0437458a6f40
+  data.tar.gz: 457cb6f9ce73c900e0c54face79f78c8f0f6a89b1462580208f5afa9450f17c717fd7b2dd5701299addb334e260e55baa5c5360583867268c9f4ea921c94e38b
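To reproduce the digests above from a locally fetched gem, here is a minimal Ruby sketch (the filename `backburner-1.3.1.gem` is an assumption; a `.gem` file is a tar archive whose `metadata.gz` and `data.tar.gz` members are what these SHA1/SHA512 values cover):

```ruby
# Sketch: recompute the checksums listed above for a locally downloaded gem.
# Assumes the file was fetched with `gem fetch backburner --version 1.3.1`.
require 'rubygems/package'
require 'digest'

File.open("backburner-1.3.1.gem", "rb") do |gem_file|
  Gem::Package::TarReader.new(gem_file).each do |entry|
    next unless %w[metadata.gz data.tar.gz].include?(entry.full_name)
    body = entry.read
    puts "#{entry.full_name} SHA1:   #{Digest::SHA1.hexdigest(body)}"
    puts "#{entry.full_name} SHA512: #{Digest::SHA512.hexdigest(body)}"
  end
end
```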
data/.travis.yml
CHANGED
data/CHANGELOG.md
CHANGED
@@ -1,6 +1,10 @@
 # CHANGELOG
 
-## Version 1.3.0 (February 05 2016)
+## Version 1.3.1 (April 21 2016)
+
+* Addition of thread-pool-based concurrency (@contentfree)
+
+## Version 1.3.0 (February 05 2016)
 
 * Enqueue command now responds with beanstalk response details
 
data/HOOKS.md
CHANGED
@@ -5,7 +5,7 @@ In many cases you can use a hook rather than mess around with Backburner's internals
 
 ## Job Hooks
 
-Hooks are transparently adapted from [Resque](https://github.com/
+Hooks are transparently adapted from [Resque](https://github.com/resque/resque/blob/master/docs/HOOKS.md), so
 if you are familiar with their hook API, now you can use nearly the same ones with beanstalkd and backburner!
 
 There are a variety of hooks available that are triggered during the lifecycle of a job:
@@ -56,7 +56,31 @@ class SomeJob
 end
 ```
 
-You can also setup modules to create compose-able and reusable hooks for your jobs.
+You can also setup modules to create compose-able and reusable hooks for your jobs. For example:
+
+```ruby
+module LoggedJob
+  def before_perform_log_job(*args)
+    Logger.info "About to perform #{self} with #{args.inspect}"
+  end
+end
+
+module BuriedJob
+  def on_failure_bury(e, *args)
+    Logger.info "Performing #{self} caused an exception (#{e}). Retrying..."
+    self.bury
+  end
+end
+
+class MyJob
+  extend LoggedJob
+  extend BuriedJob
+
+  def self.perform(*args)
+    # ...
+  end
+end
+```
 
 ## Worker Hooks
 
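The hook methods in the new example fire automatically once the job is worked; here is a minimal enqueue sketch using the illustrative `MyJob` class from that example (the beanstalk URL and queue name are assumptions, not part of the diff):

```ruby
# Sketch: enqueue the illustrative MyJob defined in the hooks example above.
# before_perform_log_job runs before perform; on_failure_bury runs if perform raises.
require 'backburner'

Backburner.configure do |config|
  config.beanstalk_url = "beanstalk://127.0.0.1"  # assumed local beanstalkd
end

Backburner::Worker.enqueue MyJob, [42], :queue => "my-jobs"
```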
data/README.md
CHANGED
@@ -126,8 +126,8 @@ The key options available are:
 
 ## Breaking Changes
 
-
-be put into a 'newsletter-job' queue.
+Before **v0.4.0**: Jobs were placed into default queues based on the name of the class creating the queue. i.e NewsletterJob would
+be put into a 'newsletter-job' queue. As of 0.4.0, all jobs are placed into a primary queue named "my.app.namespace.backburner-jobs"
 unless otherwise specified.
 
 ## Usage
@@ -402,6 +402,7 @@ By default, Backburner comes with the following workers built-in:
 | `Backburner::Workers::Simple` | Single threaded, no forking worker. Simplest option. |
 | `Backburner::Workers::Forking` | Basic forking worker that manages crashes and memory bloat. |
 | `Backburner::Workers::ThreadsOnFork` | Forking worker that utilizes threads for concurrent processing. |
+| `Backburner::Workers::Threading` | Utilizes thread pools for concurrent processing. |
 
 You can select the default worker for processing with:
 
@@ -423,12 +424,23 @@ or through associated rake tasks with:
 $ QUEUE=newsletter-sender,push-message THREADS=2 GARBAGE=1000 rake backburner:threads_on_fork:work
 ```
 
+When running on MRI or another Ruby implementation with a Global Interpreter Lock (GIL), do not be surprised if you're unable to saturate multiple cores, even with the threads_on_fork worker. To utilize multiple cores, you must run multiple worker processes.
+
+Additional concurrency strategies will hopefully be contributed in the future.
+If you are interested in helping out, please let us know.
+
+#### More info: Threads on Fork Worker
+
 For more information on the threads_on_fork worker, check out the
 [ThreadsOnFork Worker](https://github.com/nesquena/backburner/wiki/ThreadsOnFork-worker) documentation. Please note that the `ThreadsOnFork` worker does not work on Windows due to its lack of `fork`.
 
+#### More info: Threading Worker (thread-pool-based)
 
-
-
+Configuration options for the Threading worker are similar to the threads_on_fork worker, sans the garbage option. When running via the `backburner` CLI, it's simplest to provide the queue names and maximum number of threads in the format "{queue name}:{max threads in pool}[,{name}:{threads}]":
+
+```
+$ bundle exec backburner -q queue1:4,queue2:4 # and then other options, like environment, pidfile, app root, etc. See docs for the CLI
+```
 
 
 ### Default Queues
 
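The per-queue thread caps described above can also be set programmatically; here is a rough Ruby equivalent of the CLI example (queue names are illustrative, and `config.default_worker` is the worker-selection setting the README already documents):

```ruby
# Sketch: programmatic equivalent of `backburner -q queue1:4,queue2:4`.
require 'backburner'

Backburner.configure do |config|
  config.default_worker = Backburner::Workers::Threading
end

# "queue1:4" means: work queue1 with a thread pool capped at 4 threads.
Backburner.work("queue1:4", "queue2:4")
```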
data/backburner.gemspec
CHANGED
@@ -18,8 +18,10 @@ Gem::Specification.new do |s|
 
   s.add_runtime_dependency 'beaneater', '~> 1.0'
   s.add_runtime_dependency 'dante', '> 0.1.5'
+  s.add_runtime_dependency 'concurrent-ruby', '~> 1.0.1'
 
   s.add_development_dependency 'rake'
   s.add_development_dependency 'minitest', '3.2.0'
   s.add_development_dependency 'mocha'
+  s.add_development_dependency 'byebug'
 end
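For applications consuming this release through Bundler, the new runtime dependency is resolved automatically; a one-line Gemfile sketch:

```ruby
# Gemfile sketch: pinning this release pulls in concurrent-ruby (~> 1.0.1) as a runtime dependency.
gem 'backburner', '1.3.1'
```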
data/lib/backburner.rb
CHANGED
data/lib/backburner/queue.rb
CHANGED
@@ -54,7 +54,7 @@ module Backburner
       end
     end
 
-    # Returns or assigns queue parallel active jobs limit (only ThreadsOnFork
+    # Returns or assigns queue parallel active jobs limit (only ThreadsOnFork and Threading workers)
     #
     # @example
     #   queue_jobs_limit 5
@@ -82,7 +82,7 @@ module Backburner
       end
     end
 
-    # Returns or assigns queue retry limit (only ThreadsOnFork
+    # Returns or assigns queue retry limit (only ThreadsOnFork worker)
     #
     # @example
     #   queue_retry_limit 6
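Since `queue_jobs_limit` now applies to the Threading worker as well, a job class can cap its own pool size; a brief sketch in the style of the README's job classes (class and queue names are illustrative):

```ruby
# Sketch: per-queue thread cap declared on the job class itself.
class NewsletterJob
  include Backburner::Queue
  queue "newsletter-sender"
  queue_jobs_limit 5  # with the Threading worker, at most 5 threads work this queue

  def self.perform(email)
    # ... deliver the newsletter ...
  end
end
```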
data/lib/backburner/tasks.rb
CHANGED
@@ -5,16 +5,14 @@ namespace :backburner do
   # QUEUE=foo,bar,baz rake backburner:work
   desc "Start backburner worker using default worker"
   task :work => :environment do
-    queues = (ENV["QUEUE"] ? ENV["QUEUE"].split(',') : nil) rescue nil
-    Backburner.work queues
+    Backburner.work get_queues
   end
 
   namespace :simple do
     # QUEUE=foo,bar,baz rake backburner:simple:work
     desc "Starts backburner worker using simple processing"
     task :work => :environment do
-      queues = (ENV["QUEUE"] ? ENV["QUEUE"].split(',') : nil) rescue nil
-      Backburner.work queues, :worker => Backburner::Workers::Simple
+      Backburner.work get_queues, :worker => Backburner::Workers::Simple
     end
   end # simple
 
@@ -22,8 +20,7 @@ namespace :backburner do
     # QUEUE=foo,bar,baz rake backburner:forking:work
     desc "Starts backburner worker using fork processing"
     task :work => :environment do
-      queues = (ENV["QUEUE"] ? ENV["QUEUE"].split(',') : nil) rescue nil
-      Backburner.work queues, :worker => Backburner::Workers::Forking
+      Backburner.work get_queues, :worker => Backburner::Workers::Forking
     end
   end # forking
 
@@ -32,12 +29,26 @@ namespace :backburner do
     # twitter tube will have 10 threads, garbage after 5k executions and retry 5 times.
     desc "Starts backburner worker using threads_on_fork processing"
     task :work => :environment do
-      queues = (ENV["QUEUE"] ? ENV["QUEUE"].split(',') : nil) rescue nil
       threads = ENV['THREADS'].to_i
       garbage = ENV['GARBAGE'].to_i
       Backburner::Workers::ThreadsOnFork.threads_number = threads if threads > 0
       Backburner::Workers::ThreadsOnFork.garbage_after = garbage if garbage > 0
-      Backburner.work
+      Backburner.work get_queues, :worker => Backburner::Workers::ThreadsOnFork
     end
   end # threads_on_fork
-
+
+  namespace :threading do
+    # QUEUE=twitter:10,parse_page,send_mail,verify_bithday THREADS=2 rake backburner:threading:work
+    # twitter tube will have 10 threads
+    desc "Starts backburner worker using threading processing"
+    task :work => :environment do
+      threads = ENV['THREADS'].to_i
+      Backburner::Workers::Threading.threads_number = threads if threads > 0
+      Backburner.work get_queues, :worker => Backburner::Workers::Threading
+    end
+  end # threads_on_fork
+
+  def get_queues
+    (ENV["QUEUE"] ? ENV["QUEUE"].split(',') : nil) rescue nil
+  end
+end
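All of these tasks depend on an `:environment` task that the host application is expected to define; here is a minimal non-Rails Rakefile sketch (the bootstrap file name is an assumption):

```ruby
# Rakefile sketch for using the new backburner:threading:work task outside Rails.
require 'backburner/tasks'

task :environment do
  # Hypothetical bootstrap: configures Backburner and loads the job classes.
  require_relative 'config/boot'
end

# Then, as in the comments above:
#   QUEUE=twitter:10,parse_page THREADS=2 rake backburner:threading:work
```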
data/lib/backburner/version.rb
CHANGED
data/lib/backburner/workers/threading.rb
ADDED
@@ -0,0 +1,129 @@
+require 'concurrent'
+
+module Backburner
+  module Workers
+    class Threading < Worker
+      class << self
+        attr_accessor :shutdown
+        attr_accessor :threads_number
+      end
+
+      # Custom initializer just to set @tubes_data
+      def initialize(*args)
+        @tubes_data = {}
+        super
+        self.process_tube_options
+      end
+
+      # Used to prepare job queues before processing jobs.
+      # Setup beanstalk tube_names and watch all specified tubes for jobs.
+      #
+      # @raise [Beaneater::NotConnected] If beanstalk fails to connect.
+      # @example
+      #   @worker.prepare
+      #
+      def prepare
+        self.tube_names.map! { |name| expand_tube_name(name) }.uniq!
+        log_info "Working #{tube_names.size} queues: [ #{tube_names.join(', ')} ]"
+        @thread_pools = {}
+        @tubes_data.each do |name, config|
+          max_threads = (config[:threads] || self.class.threads_number || ::Concurrent.processor_count).to_i
+          @thread_pools[name] = (::Concurrent::ThreadPoolExecutor.new(min_threads: 1, max_threads: max_threads))
+        end
+      end
+
+      # Starts processing new jobs indefinitely.
+      # Primary way to consume and process jobs in specified tubes.
+      #
+      # @example
+      #   @worker.start
+      #
+      def start(wait=true)
+        prepare
+
+        @thread_pools.each do |tube_name, pool|
+          pool.max_length.times do
+            # Create a new connection and set it up to listen on this tube name
+            connection = new_connection.tap{ |conn| conn.tubes.watch!(tube_name) }
+            connection.on_reconnect = lambda { |conn| conn.tubes.watch!(tube_name) }
+
+            # Make it work jobs using its own connection per thread
+            pool.post(connection) { |connection|
+              loop {
+                begin
+                  work_one_job(connection)
+
+                rescue => e
+                  log_error("Exception caught in thread pool loop. Continuing. -> #{e.message}\nBacktrace: #{e.backtrace}")
+                end
+              }
+            }
+          end
+        end
+
+        wait_for_shutdown! if wait
+      end
+
+      # FIXME: We can't use this on_reconnect method since we don't know which thread
+      # pool the connection belongs to (and therefore we can't re-watch the right tubes).
+      # However, we set the individual connections' on_reconnect method in #start
+      # def on_reconnect(conn)
+      #   watch_tube(@watching_tube, conn) if @watching_tube
+      # end
+
+      # Process the special tube_names of Threading worker:
+      # The format is tube_name:custom_threads_limit
+      #
+      # @example
+      #   process_tube_names(['foo:10', 'lol'])
+      #   => ['foo', lol']
+      def process_tube_names(tube_names)
+        names = compact_tube_names(tube_names)
+        if names.nil?
+          nil
+        else
+          names.map do |name|
+            data = name.split(":")
+            tube_name = data.first
+            threads_number = data[1].empty? ? nil : data[1].to_i rescue nil
+            @tubes_data[expand_tube_name(tube_name)] = {
+              :threads => threads_number
+            }
+            tube_name
+          end
+        end
+      end
+
+      # Process the tube settings
+      # This overrides @tubes_data set by process_tube_names method. So a tube has name 'super_job:5'
+      # and the tube class has setting queue_jobs_limit 10, the result limit will be 10
+      # If the tube is known by existing beanstalkd queue, but not by class - skip it
+      #
+      def process_tube_options
+        Backburner::Worker.known_queue_classes.each do |queue|
+          next if @tubes_data[expand_tube_name(queue)].nil?
+          queue_settings = {
+            :threads => queue.queue_jobs_limit
+          }
+          @tubes_data[expand_tube_name(queue)].merge!(queue_settings){|k, v1, v2| v2.nil? ? v1 : v2 }
+        end
+      end
+
+      # Wait for the shutdown signel
+      def wait_for_shutdown!
+        while !self.class.shutdown do
+          sleep 0.5
+        end
+
+        # Shutting down
+        # FIXME: Shut down each thread's connection
+        @thread_pools.each { |name, pool| pool.kill }
+      end
+
+      def shutdown
+        Backburner::Workers::Threading.shutdown = true
+        super
+      end
+    end # Threading
+  end # Workers
+end # Backburner
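Outside the rake task and the `backburner` CLI, the worker above can be driven directly; a small sketch mirroring the tests that follow (queue names are illustrative):

```ruby
# Sketch: direct use of the new Threading worker.
require 'backburner'

worker = Backburner::Workers::Threading.new(["orders:8", "mailers"])
worker.start(false)  # builds one ThreadPoolExecutor per tube; false skips wait_for_shutdown!

# Each tube is polled by its own pool of threads, each thread using its own beanstalkd connection.
```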
data/test/workers/simple_worker_test.rb
CHANGED
@@ -2,7 +2,7 @@ require File.expand_path('../../test_helper', __FILE__)
 require File.expand_path('../../fixtures/test_jobs', __FILE__)
 require File.expand_path('../../fixtures/hooked', __FILE__)
 
-describe "Backburner::Workers::
+describe "Backburner::Workers::Simple module" do
   before do
     Backburner.default_queues.clear
     @worker_class = Backburner::Workers::Simple
data/test/workers/threading_worker_test.rb
ADDED
@@ -0,0 +1,68 @@
+require File.expand_path('../../test_helper', __FILE__)
+require File.expand_path('../../fixtures/test_jobs', __FILE__)
+require File.expand_path('../../fixtures/hooked', __FILE__)
+
+describe "Backburner::Workers::Threading worker" do
+  before do
+    Backburner.default_queues.clear
+    @worker_class = Backburner::Workers::Threading
+  end
+
+  describe "for prepare method" do
+    it "should make tube names array always unique to avoid duplication" do
+      worker = @worker_class.new(["foo", "demo.test.foo"])
+      worker.prepare
+      assert_equal ["demo.test.foo"], worker.tube_names
+    end
+
+    it 'creates a thread pool per queue' do
+      worker = @worker_class.new(%w(foo bar))
+      out = capture_stdout { worker.prepare }
+      assert_equal 2, worker.instance_variable_get("@thread_pools").keys.size
+    end
+
+    it 'uses Concurrent.processor_count if no custom thread count is provided' do
+      worker = @worker_class.new("foo")
+      out = capture_stdout { worker.prepare }
+      assert_equal ::Concurrent.processor_count, worker.instance_variable_get("@thread_pools")["demo.test.foo"].max_length
+    end
+  end # prepare
+
+  describe "for process_tube_names method" do
+    it "should interpret the job_name:threads_limit format" do
+      worker = @worker_class.new(["foo:4"])
+      assert_equal ["foo"], worker.tube_names
+    end
+
+    it "should interpret correctly even if missing values" do
+      tubes = %W(foo1:2 foo2)
+      worker = @worker_class.new(tubes)
+      assert_equal %W(foo1 foo2), worker.tube_names
+    end
+
+    it "should store interpreted values correctly" do
+      tubes = %W(foo1 foo2:2)
+      worker = @worker_class.new(tubes)
+      assert_equal({
+        "demo.test.foo1" => { :threads => nil },
+        "demo.test.foo2" => { :threads => 2 }
+      }, worker.instance_variable_get("@tubes_data"))
+    end
+  end # process_tube_names
+
+  describe 'working a queue' do
+    before do
+      @worker = @worker_class.new(["foo:3"])
+      capture_stdout { @worker.prepare }
+    end
+
+    it 'runs work_on_job per thread' do
+      clear_jobs!("foo")
+      job_count=10
+      job_count.times { @worker_class.enqueue TestJob, [1, 0], :queue => "foo" } # TestJob adds the given arguments together and then to $worker_test_count
+      @worker.start(false) # don't wait for shutdown
+      sleep 0.5 # Wait for threads to do their work
+      assert_equal job_count, $worker_test_count
+    end
+  end # working a queue
+end
metadata
CHANGED
@@ -1,55 +1,69 @@
 --- !ruby/object:Gem::Specification
 name: backburner
 version: !ruby/object:Gem::Version
-  version: 1.3.0
+  version: 1.3.1
 platform: ruby
 authors:
 - Nathan Esquenazi
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2016-02
+date: 2016-10-02 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: beaneater
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - ~>
+    - - "~>"
       - !ruby/object:Gem::Version
         version: '1.0'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - ~>
+    - - "~>"
       - !ruby/object:Gem::Version
         version: '1.0'
 - !ruby/object:Gem::Dependency
   name: dante
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - -
+    - - ">"
      - !ruby/object:Gem::Version
        version: 0.1.5
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - -
+    - - ">"
      - !ruby/object:Gem::Version
        version: 0.1.5
+- !ruby/object:Gem::Dependency
+  name: concurrent-ruby
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: 1.0.1
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: 1.0.1
 - !ruby/object:Gem::Dependency
   name: rake
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - -
+    - - ">="
      - !ruby/object:Gem::Version
        version: '0'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - -
+    - - ">="
      - !ruby/object:Gem::Version
        version: '0'
 - !ruby/object:Gem::Dependency
@@ -70,14 +84,28 @@ dependencies:
   name: mocha
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - -
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: byebug
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
      - !ruby/object:Gem::Version
        version: '0'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - -
+    - - ">="
      - !ruby/object:Gem::Version
        version: '0'
 description: Beanstalk background job processing made easy
@@ -88,8 +116,8 @@ executables:
 extensions: []
 extra_rdoc_files: []
 files:
-- .gitignore
-- .travis.yml
+- ".gitignore"
+- ".travis.yml"
 - CHANGELOG.md
 - CONTRIBUTING.md
 - Gemfile
@@ -124,6 +152,7 @@ files:
 - lib/backburner/worker.rb
 - lib/backburner/workers/forking.rb
 - lib/backburner/workers/simple.rb
+- lib/backburner/workers/threading.rb
 - lib/backburner/workers/threads_on_fork.rb
 - test/async_proxy_test.rb
 - test/back_burner_test.rb
@@ -144,6 +173,7 @@ files:
 - test/worker_test.rb
 - test/workers/forking_worker_test.rb
 - test/workers/simple_worker_test.rb
+- test/workers/threading_worker_test.rb
 - test/workers/threads_on_fork_worker_test.rb
 homepage: http://github.com/nesquena/backburner
 licenses:
@@ -155,12 +185,12 @@ require_paths:
 - lib
 required_ruby_version: !ruby/object:Gem::Requirement
   requirements:
-  - -
+  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
-  - -
+  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
 requirements: []
@@ -189,4 +219,5 @@ test_files:
 - test/worker_test.rb
 - test/workers/forking_worker_test.rb
 - test/workers/simple_worker_test.rb
+- test/workers/threading_worker_test.rb
 - test/workers/threads_on_fork_worker_test.rb