chantier 0.0.5 → 1.0.5

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
-   metadata.gz: 748fbefb75937019db2ddbf9d8ff6ffcd88c1227
-   data.tar.gz: cacf4320b37f1891393ec55a8ed43d316825d21b
+   metadata.gz: 9f7e3721971fc3b264faba767be38065cd763f97
+   data.tar.gz: 615d99c6ffccc92804ca9ce6eae1e9e4b7dda18d
  SHA512:
-   metadata.gz: 0c14a5d0828f0ed0f4491abdb6e152f027e73d457869f90e4ec3bf9630deb819f0ffd3e710814bfa94b654dc4a25403efdf1fe27800aa4b164af123d0852ff42
-   data.tar.gz: b6382976248c36e1fa908dcd3f4b80162f4534c5392de7ceecf0687446d8a5aa2b727507f3bdf6ccff466f14ff9818aa36a25ba26bb6b455dfe72befd57d9369
+   metadata.gz: 962aa2648f78ce54fdf25af4c272b4a5babb99c2f7ad7fe2edbeadd772180bde851eecea335d75e5e6983b6111ea42854afd56a7a7580ecbdebca94501dfc4d4
+   data.tar.gz: f47eb5fcdd418b5fc78e3a77716eeb42c2cf781100fc22b9528c72b44699d90a564dfdd50c0593e030140189cb68dcb779398701ab79afb9bed5b98b8701a2d3
data/.travis.yml ADDED
@@ -0,0 +1,4 @@
+ rvm:
+ - 2.0.0
+ - 2.1.2
+ cache: bundler
data/README.md ADDED
@@ -0,0 +1,81 @@
+ # chantier
+
+ Dead-simple task manager for "fire and forget" jobs. Has two interchangeable pools -
+ processes and threads. The API of those two is the same, so you can play at will and figure
+ out which one works better.
+
+ The only thing Chantier checks for is that the spun off tasks have completed. It also
+ limits the number of tasks active at the same time. Your code will block until a slot
+ becomes available for a task.
+
+     manager = Chantier::ProcessPool.new(slots = 4) # You can also use ThreadPool
+     jobs_hose.each_job do | job |
+       manager.fork_task do # this call will block until a slot becomes available
+         Churner.new(job).churn # this block runs in a subprocess
+       end
+       manager.still_running? # => most likely "true"
+     end
+
+     manager.block_until_complete! #=> Will block until all the subprocesses have terminated
+
+ If you have a finite `Enumerable` at hand you can also launch it into the
+ `ProcessPool`/`ThreadPool`, like so:
+
+     manager = Chantier::ThreadPool.new(slots = 4)
+
+     manager.map_fork(job_tickets) do | job_ticket | # job_tickets has to be an Enumerable
+       # this block will run in a thread
+       Churner.new(job_ticket).churn
+       ...
+     end
+
+ Chantier does not provide any built-in IPC or inter-thread communication features - this
+ should encourage you to write your tasks without them having to do IPC in the first place.
+
+
+ ## Managing job failure
+
+ Chantier implements what it calls `FailurePolicies`. A `Policy` is an object that works
+ like a counter for failed and successfully completed jobs. After each job, the policy
+ object will be asked whether `limit_reached?` is now true. If it is, calls to `fork_task()`
+ on the `Pool` using that failure policy will fail with an exception. There are a number of
+ standard `FailurePolicies` which can be applied to a specific `Pool` by supplying one in
+ the `failure_policy` keyword argument.
+
+ For example, to stop the `Pool` from accepting jobs if more than half of the jobs fail
+ (either by raising an exception within their threads or by exiting the forked process with
+ a non-0 exit code):
+
+     fp = Chantier::FailurePolicies::Percentage.new(50)
+     pool = Chantier::ThreadPool.new(num_threads = 5, failure_policy: fp)
+     4.times { pool.fork_task { puts "All is well" } }
+     6.times { pool.fork_task { raise "Drat!" } } # Will only run 4 times and then fail
+
+ To allow only a specific number of failures within a time period, wrap the policy in
+ a `WithinInterval` object:
+
+     # Only allow 5 failures within 3 seconds
+     counter = Chantier::FailurePolicies::Count.new(5)
+     fp = Chantier::FailurePolicies::WithinInterval.new(counter, within_seconds = 3)
+
+ You can use those to set fine-grained failure conditions based on the runtime behavior of
+ the `Pool` you are using and job duration/failure rate. Chantier pools are made to run in
+ very long loops, sometimes indefinitely - so a `FailurePolicy` can be your best friend. You
+ can also bundle those policies together.
+
+
+ ## Contributing to chantier
+
+ * Check out the latest master to make sure the feature hasn't been implemented or the bug hasn't been fixed yet.
+ * Check out the issue tracker to make sure someone hasn't already requested it and/or contributed it.
+ * Fork the project.
+ * Start a feature/bugfix branch.
+ * Commit and push until you are happy with your contribution.
+ * Make sure to add tests for it. This is important so I don't break it in a future version unintentionally.
+ * Please try not to mess with the Rakefile, version, or history. If you want to have your own version, or it is otherwise necessary, that is fine, but please isolate the change to its own commit so I can cherry-pick around it.
+
+ ## Copyright
+
+ Copyright (c) 2014 Julik Tarkhanov. See LICENSE.txt for
+ further details.
+
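The README above notes that policies can be bundled together. A minimal sketch of such a composition, using only the classes this release adds (the `jobs` collection and `Worker` class are hypothetical stand-ins):

    # Trip the pool when 5 jobs fail within any 3-second window
    counter = Chantier::FailurePolicies::Count.new(5)
    policy = Chantier::FailurePolicies::WithinInterval.new(counter, 3)
    pool = Chantier::ProcessPool.new(4, failure_policy: policy)

    jobs.each do |job|
      pool.fork_task { Worker.new(job).perform } # raises once the policy trips
    end
    pool.block_until_complete!

The pool wraps whatever policy it receives in a `MutexWrapper` and calls `arm!` on it during initialization (see the lib changes below), so the caller needs no extra locking or setup.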
data/chantier.gemspec CHANGED
@@ -2,34 +2,41 @@
  # DO NOT EDIT THIS FILE DIRECTLY
  # Instead, edit Jeweler::Tasks in Rakefile, and run 'rake gemspec'
  # -*- encoding: utf-8 -*-
- # stub: chantier 0.0.5 ruby lib
+ # stub: chantier 1.0.5 ruby lib
 
  Gem::Specification.new do |s|
    s.name = "chantier"
-   s.version = "0.0.5"
+   s.version = "1.0.5"
 
    s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
    s.require_paths = ["lib"]
    s.authors = ["Julik Tarkhanov"]
-   s.date = "2014-08-05"
+   s.date = "2014-08-07"
    s.description = " Process your jobs in parallel with a simple table of processes or threads "
    s.email = "me@julik.nl"
    s.extra_rdoc_files = [
      "LICENSE.txt",
-     "README.rdoc"
+     "README.md"
    ]
    s.files = [
      ".document",
      ".rspec",
+     ".travis.yml",
      "Gemfile",
      "LICENSE.txt",
-     "README.rdoc",
+     "README.md",
      "Rakefile",
      "chantier.gemspec",
      "lib/chantier.rb",
+     "lib/failure_policies.rb",
      "lib/process_pool.rb",
      "lib/process_pool_with_kill.rb",
      "lib/thread_pool.rb",
+     "spec/failure_policy_by_percentage_spec.rb",
+     "spec/failure_policy_counter_spec.rb",
+     "spec/failure_policy_mutex_wrapper_spec.rb",
+     "spec/failure_policy_spec.rb",
+     "spec/failure_policy_within_interval_spec.rb",
      "spec/process_pool_spec.rb",
      "spec/process_pool_with_kill_spec.rb",
      "spec/spec_helper.rb",
data/lib/chantier.rb CHANGED
@@ -1,6 +1,7 @@
  module Chantier
-   VERSION = '0.0.5'
+   VERSION = '1.0.5'
    require_relative 'process_pool'
    require_relative 'process_pool_with_kill'
    require_relative 'thread_pool'
+   require_relative 'failure_policies'
  end
data/lib/failure_policies.rb ADDED
@@ -0,0 +1,168 @@
+ module Chantier::FailurePolicies
+   # A very basic failure policy that will do nothing at all.
+   # It will always answer "nil" to limit_reached?, therefore allowing
+   # the work to proceed indefinitely. By overriding the four main methods
+   # on it you can control the policy further.
+   #
+   # Note that all calls to arm!, failure!, success! and limit_reached? are
+   # automatically protected by a Mutex - you don't need to set one up
+   # yourself.
+   class None
+     # Start counting the failures (will be triggered on the first job). You can manually
+     # call this to reset the object to its initial state (reset error counts)
+     def arm!
+     end
+
+     # Increment the failure counter
+     def failure!
+     end
+
+     # Increment the success counter
+     def success!
+     end
+
+     # Tells whether the failure policy has been triggered.
+     # Return something falsey from here if everything is in order
+     def limit_reached?
+     end
+   end
+
+   # Simplest failure policy based on overall error count.
+   #
+   #   policy = Count.new(4)
+   #   policy.arm!
+   #   policy.failure!
+   #   policy.limit_reached? # => false
+   #   #... and then
+   #   3.times { policy.failure! }
+   #   policy.limit_reached? # => true
+   class Count < None
+     def initialize(max_failures)
+       @max = max_failures
+     end
+
+     # Arm the counter, prepare all the parameters
+     def arm!
+       @count = 0
+     end
+
+     # Register a failure (simply increments the counter)
+     def failure!
+       @count += 1
+     end
+
+     # Tells whether we had too many failures
+     def limit_reached?
+       @count >= @max
+     end
+   end
+
+   # Limits the number of failures that may be registered
+   # by percentage of errors vs successful triggers.
+   #
+   #   policy = Percentage.new(40)
+   #   policy.arm!
+   #   600.times { policy.success! }
+   #   policy.limit_reached? # => false
+   #   1.times { policy.failure! }
+   #   policy.limit_reached? # => false
+   #   400.times { policy.failure! }
+   #   policy.limit_reached? # => true
+   class Percentage < None
+     def initialize(percents_failing)
+       @threshold = percents_failing
+     end
+
+     def arm!
+       @failures, @successes = 0, 0
+     end
+
+     def failure!
+       @failures += 1
+     end
+
+     def success!
+       @successes += 1
+     end
+
+     def limit_reached?
+       ratio = @failures.to_f / (@failures + @successes)
+       (ratio * 100) >= @threshold
+     end
+   end
+
+   # Limits the number of failures that may be registered
+   # within the given interval
+   #
+   #   policy = Count.new(10)
+   #   policy_within_interval = WithinInterval.new(policy, 60 * 2) # 2 minutes
+   #   policy_within_interval.arm!
+   #   #... and then, within 1 minute
+   #   10.times { policy_within_interval.failure! }
+   #   policy_within_interval.limit_reached? # => true
+   #
+   # Once the interval is passed,
+   # the error count will be reset back to 0.
+   class WithinInterval < None
+     def initialize(policy, interval_in_seconds)
+       @policy = policy
+       @interval = interval_in_seconds
+     end
+
+     def success!
+       interval_cutoff!
+       @policy.success!
+     end
+
+     def arm!
+       @policy.arm!
+       @interval_started = Time.now.utc.to_f
+       @count = 0
+     end
+
+     def failure!
+       interval_cutoff!
+       @policy.failure!
+     end
+
+     def limit_reached?
+       @policy.limit_reached?
+     end
+
+     private
+
+     def interval_cutoff!
+       t = Time.now.utc.to_f
+       if (t - @interval_started) > @interval
+         @interval_started = t
+         @policy.arm!
+       end
+     end
+
+   end
+
+   # Wraps a FailurePolicy-compatible object in a Mutex
+   # for all method calls.
+   class MutexWrapper
+     def initialize(failure_policy)
+       @policy = failure_policy
+       @mutex = Mutex.new
+     end
+
+     def arm!
+       @mutex.synchronize { @policy.arm! }
+     end
+
+     def success!
+       @mutex.synchronize { @policy.success! }
+     end
+
+     def failure!
+       @mutex.synchronize { @policy.failure! }
+     end
+
+     def limit_reached?
+       @mutex.synchronize { @policy.limit_reached? }
+     end
+   end
+ end
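Per the comment on `None`, it doubles as the extension point: overriding the four methods yields a custom policy. A hypothetical sketch (the `TripOnFirstFailure` name is invented for illustration):

    # Hypothetical custom policy: refuse new tasks after the first failure,
    # until the pool (or the caller) re-arms it.
    class TripOnFirstFailure < Chantier::FailurePolicies::None
      def arm!
        @tripped = false
      end

      def failure!
        @tripped = true
      end

      def limit_reached?
        @tripped
      end
    end

    pool = Chantier::ThreadPool.new(2, failure_policy: TripOnFirstFailure.new)

Since the pools wrap the policy in `MutexWrapper` themselves, the subclass can stay free of any locking.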
data/lib/process_pool.rb CHANGED
@@ -37,13 +37,13 @@ class Chantier::ProcessPool
-   # Initializes a new ProcessPool with the given number of workers. If max_failures is
-   # given the fork_task method will raise an exception if more than N processes spawned
-   # have been terminated with a non-0 exit status.
-   def initialize(num_procs, max_failures: nil)
-     @max_failures = max_failures && max_failures.to_i
-     @non_zero_exits = 0
-
+   # Initializes a new ProcessPool with the given number of workers. If a failure_policy
+   # is given, fork_task will raise an exception once the policy reports that too many
+   # spawned processes have terminated with a non-0 exit status.
+   def initialize(num_procs, failure_policy: Chantier::FailurePolicies::None.new)
      raise "Need at least 1 slot, given #{num_procs.to_i}" unless num_procs.to_i > 0
      @pids = [nil] * num_procs.to_i
      @semaphore = Mutex.new
+
+     @failure_policy = Chantier::FailurePolicies::MutexWrapper.new(failure_policy)
+     @failure_policy.arm!
    end
 
    # Distributes the elements in the given Enumerable to parallel workers,
@@ -68,8 +68,8 @@ class Chantier::ProcessPool
    # becomes free. Once that happens, the given block will be forked off
    # and the method will return.
    def fork_task(&blk)
-     if @max_failures && @non_zero_exits > @max_failures
-       raise "Reached error limit of processes quitting with non-0 status - limit set at #{@max_failures}"
+     if @failure_policy.limit_reached?
+       raise "Reached error limit of processes quitting with non-0 status"
      end
 
      destination_slot_idx = nil
@@ -102,27 +102,12 @@ class Chantier::ProcessPool
        @semaphore.synchronize do
          # Now we can remove that process from the process table
          @pids[destination_slot_idx] = nil
-         # and increment the error count if needed
-         @non_zero_exits += 1 unless terminated_normally
        end
+       terminated_normally ? @failure_policy.success! : @failure_policy.failure!
      end
 
-     return task_pid
-
-     # Dispatch the killer thread which kicks in after KILL_AFTER_SECONDS.
-     # Note that we do not manage the @pids table here because once the process
-     # gets terminated it will bounce back to the standard wait() above.
-     # Thread.new do
-     #   sleep KILL_AFTER_SECONDS
-     #   begin
-     #     TERMINATION_SIGNALS.each do | sig |
-     #       Process.kill(sig, task_pid)
-     #       sleep 5 # Give it some time to react
-     #     end
-     #   rescue Errno::ESRCH
-     #     # It has already quit, nothing to do
-     #   end
-     # end
+     # Make sure to return the PID afterwards
+     task_pid
    end
 
    # Tells whether some processes are still churning
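For callers, the upshot of this change is that `max_failures:` is gone and the equivalent behavior is now expressed as a policy object. A sketch of the migration, assuming a pool that previously tolerated 4 failed subprocesses:

    # Before (0.0.5):
    # pool = Chantier::ProcessPool.new(4, max_failures: 4)

    # After (1.0.5):
    fp = Chantier::FailurePolicies::Count.new(4)
    pool = Chantier::ProcessPool.new(4, failure_policy: fp)

One subtle behavioral difference is visible in the diff: the old check fired only after *more than* `max_failures` non-zero exits (`@non_zero_exits > @max_failures`), while `Count#limit_reached?` fires at the threshold itself (`@count >= @max`).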
data/lib/thread_pool.rb CHANGED
@@ -37,17 +37,15 @@ class Chantier::ThreadPool
-   # Initializes a new ProcessPool with the given number of workers. If max_failures is
-   # given the fork_task method will raise an exception if more than N threads spawned
-   # have raised during execution.
-   def initialize(num_threads, max_failures: nil)
+   # Initializes a new ThreadPool with the given number of workers. If a failure_policy
+   # is given, fork_task will raise an exception once the policy reports that too many
+   # threads have raised during execution.
+   def initialize(num_threads, failure_policy: Chantier::FailurePolicies::None.new)
      raise "Need at least 1 slot, given #{num_threads.to_i}" unless num_threads.to_i > 0
      @threads = [nil] * num_threads.to_i
      @semaphore = Mutex.new
 
-     # Failure counters
-     @failure_count = 0
-     @max_failures = max_failures && max_failures.to_i
+     @failure_policy = Chantier::FailurePolicies::MutexWrapper.new(failure_policy)
+     @failure_policy.arm!
 
      # Information on the last exception that happened
-     @aborted = false
      @last_representative_exception = nil
    end
 
@@ -72,8 +70,8 @@ class Chantier::ThreadPool
    # the thread it is called from until a slot in the thread table
    # becomes free.
    def fork_task(&blk)
-     if @last_representative_exception
-       raise "Reached error limit of #{@max_failures} (last error was #{@last_representative_exception.inspect})"
+     if @failure_policy.limit_reached?
+       raise "Reached error limit (last error was #{@last_representative_exception.inspect})"
      end
 
      destination_slot_idx = nil
@@ -118,15 +116,12 @@ class Chantier::ThreadPool
 
    def run_block_with_exception_protection(&blk)
      yield
+     @failure_policy.success!
    rescue Exception => e
-     # Register the failure and decrement the counter. If we had more than N
-     # failures stop the machine completely by raising an exception in the caller.
-     @semaphore.synchronize do
-       @failure_count += 1
-       if @max_failures && (@failure_count > @max_failures)
-         @last_representative_exception = e
-       end
-     end
+     # Register the failure with the policy. If the policy has tripped,
+     # remember the exception so that the next fork_task can report it.
+     @failure_policy.failure!
+     @last_representative_exception = e if @failure_policy.limit_reached?
    end
 
  end
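The `ThreadPool` side mirrors this: `fork_task` consults the policy, and the most recent worker exception is kept so it can be embedded in the error message. A small usage sketch (the `jobs` collection is a hypothetical stand-in):

    fp = Chantier::FailurePolicies::Count.new(10)
    pool = Chantier::ThreadPool.new(4, failure_policy: fp)
    begin
      jobs.each { |job| pool.fork_task { job.call } }
    rescue RuntimeError => e
      e.message # => "Reached error limit (last error was ...)"
    end
    pool.block_until_complete!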
data/spec/failure_policy_by_percentage_spec.rb ADDED
@@ -0,0 +1,27 @@
+ require_relative 'spec_helper'
+
+ describe Chantier::FailurePolicies::Percentage do
+   it 'performs the percentage checks' do
+     policy = described_class.new(40.0)
+     policy.arm!
+
+     599.times { policy.success! }
+     1.times { policy.failure! }
+     expect(policy).not_to be_limit_reached
+
+     399.times { policy.failure! }
+     expect(policy).to be_limit_reached
+   end
+
+   it 'resets the counts when calling arm!' do
+     policy = described_class.new(40)
+     policy.arm!
+
+     50.times { policy.failure! }
+     50.times { policy.success! }
+     expect(policy).to be_limit_reached
+
+     policy.arm!
+     expect(policy).not_to be_limit_reached
+   end
+ end
data/spec/failure_policy_counter_spec.rb ADDED
@@ -0,0 +1,28 @@
+ require_relative 'spec_helper'
+
+ describe Chantier::FailurePolicies::Count do
+   it 'performs the counts with the right responses' do
+     policy = described_class.new(12)
+
+     policy.arm!
+
+     (644 - 12).times { policy.success! }
+     expect(policy).not_to be_limit_reached
+
+     5.times { policy.failure! }
+     expect(policy).not_to be_limit_reached
+
+     (12 - 5).times { policy.failure! }
+     expect(policy).to be_limit_reached
+   end
+
+   it 'resets the counts when calling arm!' do
+     policy = described_class.new(4)
+     policy.arm!
+     4.times { policy.failure! }
+     expect(policy).to be_limit_reached
+
+     policy.arm!
+     expect(policy).not_to be_limit_reached
+   end
+ end
data/spec/failure_policy_mutex_wrapper_spec.rb ADDED
@@ -0,0 +1,106 @@
+ require_relative 'spec_helper'
+
+ def rsleep
+   sleep(rand(10) / 1200.0)
+ end
+
+ describe Chantier::FailurePolicies::MutexWrapper do
+   class NonThreadsafe
+     attr_reader :arms, :successes, :failures, :limits_reached
+
+     def initialize
+       @arms, @successes, @failures, @limits_reached = 0, 0, 0, 0
+     end
+
+     def arm!
+       @arms += 6
+       rsleep
+       @arms -= 6
+       rsleep
+       @arms += 1
+     end
+
+     def success!
+       @successes += 12
+       rsleep
+       @successes -= 12
+       rsleep
+       @successes += 1
+     end
+
+     def failure!
+       @failures += 4
+       rsleep
+       @failures -= 4
+       rsleep
+       @failures += 1
+     end
+
+     def limit_reached?
+       @limits_reached += 13
+       rsleep
+       @limits_reached -= 13
+       rsleep
+       @limits_reached += 1
+     end
+   end
+
+   it 'evaluates a non-threadsafe object in this spec' do
+     policy = NonThreadsafe.new
+
+     n = 400
+     states = []
+     threads = (1..n).map do | n |
+       Thread.new do
+         rsleep
+         policy.arm!
+         rsleep
+         policy.failure!
+         rsleep
+         policy.success!
+         rsleep
+         policy.limit_reached?
+         call_counts = [
+           policy.arms,
+           policy.failures,
+           policy.successes,
+           policy.limits_reached,
+         ]
+         states << call_counts
+       end
+     end
+
+     threads.map(&:join)
+     expect(states.uniq.length).not_to eq(n)
+   end
+
+   it 'wraps all the necessary methods' do
+     wrapped = NonThreadsafe.new
+     policy = described_class.new(wrapped)
+
+     n = 400
+     states = []
+     threads = (1..n).map do | n |
+       Thread.new do
+         rsleep
+         policy.arm!
+         rsleep
+         policy.failure!
+         rsleep
+         policy.success!
+         rsleep
+         policy.limit_reached?
+         call_counts = [
+           wrapped.arms,
+           wrapped.failures,
+           wrapped.successes,
+           wrapped.limits_reached,
+         ]
+         states << call_counts
+       end
+     end
+
+     threads.map(&:join)
+     expect(states.uniq.length).to eq(n)
+   end
+ end
data/spec/failure_policy_spec.rb ADDED
@@ -0,0 +1,10 @@
+ require_relative 'spec_helper'
+
+ describe Chantier::FailurePolicies::None do
+   it 'has all the necessary methods' do
+     expect(subject.arm!).to be_nil
+     expect(subject.failure!).to be_nil
+     expect(subject.success!).to be_nil
+     expect(subject.limit_reached?).to be_nil
+   end
+ end
data/spec/failure_policy_within_interval_spec.rb ADDED
@@ -0,0 +1,30 @@
+ require_relative 'spec_helper'
+
+ describe Chantier::FailurePolicies::WithinInterval do
+   let(:counter) { Chantier::FailurePolicies::Count.new(10) }
+
+   it 'does not cross the limit when errors are spread out' do
+     policy = described_class.new(counter, 0.5)
+     policy.arm!
+
+     10.times do
+       policy.failure!
+       sleep 0.1
+     end
+     expect(policy).not_to be_limit_reached
+   end
+
+   it 'does cross the limit when errors come in quick succession' do
+     policy = described_class.new(counter, 0.5)
+     policy.arm!
+
+     10.times do
+       policy.failure!
+       sleep 0.01
+     end
+     expect(policy).to be_limit_reached
+
+     policy.arm!
+     expect(policy).not_to be_limit_reached
+   end
+ end
data/spec/process_pool_spec.rb CHANGED
@@ -52,12 +52,13 @@ describe Chantier::ProcessPool do
 
    context 'with failures' do
      it 'raises after 4 failures' do
-       under_test = described_class.new(num_workers = 3, max_failures: '4')
+       fp = Chantier::FailurePolicies::Count.new(4)
+       under_test = described_class.new(num_workers = 3, failure_policy: fp)
        expect {
          15.times do
            under_test.fork_task { raise "I am such a failure" }
          end
-       }.to raise_error('Reached error limit of processes quitting with non-0 status - limit set at 4')
+       }.to raise_error('Reached error limit of processes quitting with non-0 status')
      end
 
      it 'runs through the jobs if max_failures is not given' do
data/spec/thread_pool_spec.rb CHANGED
@@ -48,12 +48,13 @@ describe Chantier::ThreadPool do
 
    context 'with failures' do
      it 'raises after 4 failures' do
-       under_test = described_class.new(num_workers = 3, max_failures: '4')
+       fp = Chantier::FailurePolicies::Count.new(4)
+       under_test = described_class.new(num_workers = 3, failure_policy: fp)
        expect {
          15.times do
            under_test.fork_task { raise "I am such a failure" }
          end
-       }.to raise_error('Reached error limit of 4 (last error was #<RuntimeError: I am such a failure>)')
+       }.to raise_error('Reached error limit (last error was #<RuntimeError: I am such a failure>)')
      end
 
      it 'runs through the jobs if max_failures is not given' do
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: chantier
  version: !ruby/object:Gem::Version
-   version: 0.0.5
+   version: 1.0.5
  platform: ruby
  authors:
  - Julik Tarkhanov
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2014-08-05 00:00:00.000000000 Z
+ date: 2014-08-07 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: rspec
@@ -86,19 +86,26 @@ executables: []
  extensions: []
  extra_rdoc_files:
  - LICENSE.txt
- - README.rdoc
+ - README.md
  files:
  - ".document"
  - ".rspec"
+ - ".travis.yml"
  - Gemfile
  - LICENSE.txt
- - README.rdoc
+ - README.md
  - Rakefile
  - chantier.gemspec
  - lib/chantier.rb
+ - lib/failure_policies.rb
  - lib/process_pool.rb
  - lib/process_pool_with_kill.rb
  - lib/thread_pool.rb
+ - spec/failure_policy_by_percentage_spec.rb
+ - spec/failure_policy_counter_spec.rb
+ - spec/failure_policy_mutex_wrapper_spec.rb
+ - spec/failure_policy_spec.rb
+ - spec/failure_policy_within_interval_spec.rb
  - spec/process_pool_spec.rb
  - spec/process_pool_with_kill_spec.rb
  - spec/spec_helper.rb
data/README.rdoc DELETED
@@ -1,47 +0,0 @@
- = chantier
-
- Dead-simple task manager for "fire and forget" jobs. Has two interchangeable pools - processes and threads. The API
- of those two is the same, so you can play at will and figure out which one works better.
-
- The only thing Chantier checks for is that the spun off tasks have completed. It also limits the number of tasks
- active at the same time. Your code will block until a slot becomes available for a task.
-
-   manager = Chantier::ProcessPool.new(slots = 4) # You can also use ThreadPool
-   jobs_hose.each_job do | job |
-     manager.fork_task do # this call will block until a slot becomes available
-       Churner.new(job).churn # this block runs in a subprocess
-     end
-     manager.still_running? # => most likely "true"
-   end
-
-   manager.block_until_complete! #=> Will block until all the subprocesses have terminated
-
- If you have a finite Enumerable at hand you can also launch it into the ProcessPool/ThreadPool, like so:
-
-   manager = Chantier::ThreadPool.new(slots = 4)
-
-   manager.map_fork(job_tickets) do | job_ticket | # job_tickets has to be an Enumerable
-     # this block will run in a thread
-     Churner.new(job_ticket).churn
-     ...
-   end
-
- Chantier does not provide any built-in IPC or inter-thread communication features - this should
- stimulate you to write your tasks without them having to do IPC in the first place.
-
-
- == Contributing to chantier
-
- * Check out the latest master to make sure the feature hasn't been implemented or the bug hasn't been fixed yet.
- * Check out the issue tracker to make sure someone already hasn't requested it and/or contributed it.
- * Fork the project.
- * Start a feature/bugfix branch.
- * Commit and push until you are happy with your contribution.
- * Make sure to add tests for it. This is important so I don't break it in a future version unintentionally.
- * Please try not to mess with the Rakefile, version, or history. If you want to have your own version, or is otherwise necessary, that is fine, but please isolate to its own commit so I can cherry-pick around it.
-
- == Copyright
-
- Copyright (c) 2014 Julik Tarkhanov. See LICENSE.txt for
- further details.
-