in-parallel 0.1.1 → 0.1.2

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz: 727d8d9b9bba7b35600ceb413fa49cc67051863d
-  data.tar.gz: 05ef1dd733547320a4cf270fea067114ace9d2b3
+  metadata.gz: c73c6b3c70624e4afbe733cbe691e933f2ee178d
+  data.tar.gz: 1139cdd892f6242ff9a83cdf722d1df8e5520cc7
 SHA512:
-  metadata.gz: 68a7d78e8c45443a419279684e819f01fc3167cd77f6e7a7cdb2cdf5880700eadc353c460ec4d94350e5d3b104a3dffa7d9bc44f56b3eafebe89dbd775f91568
-  data.tar.gz: 4eca6ca978053f9a08c3e0ed9d66b3e851543cbdae48fff7e9c13547633e386d97faba0f463d098f996fa287fa3bf425dcdfce05775c3ba9074d35cb9df34297
+  metadata.gz: 7efe4eaa24eb74c0a1af3a5ebb7408c14afe26c682d0f0e76d4ef9ab31efd027ef9989c1c3bf73143d129aadafdca8c2ea2a66dccc607ba0651edd1e0d9c1ecb
+  data.tar.gz: 5008f55eedcc36458011d549d6bb0ad1b87ab5f9126b9dfae00025e0923aff6d0d8a870bf7b5a1e68a53d592cfbcd4c25257c6abd90dc884a6c72d87353b0a00
data/README.md CHANGED
@@ -1,9 +1,9 @@
 # in-parallel
 A lightweight Ruby library with very simple syntax, making use of process.fork for parallelization
 
-I know there are other libraries that do parallelization, but I wanted something very simple to consume, and this was fun. I plan on using this within a test framework to enable parallel execution of some of the framework's tasks, and allow people within thier tests to execute code in parallel when wanted. This solution does not check to see how many processors you have, it just forks as many processes as you ask for. That means that it will handle a handful of parallel processes well, but could definitely overload your system with ruby processes if you try to spin up a LOT of processes. If you're looking for something simple and light-weight and on either linux or mac, then this solution could be what you want.
+The other Ruby libraries that do parallel execution all support one primary use case - crunching through a large queue of small tasks as quickly and efficiently as possible. This library primarily supports the use case of needing to run a few larger tasks in parallel and managing the stdout to make it easy to understand which processes are logging what. This library was created to be used by the Beaker test framework to enable parallel execution of some of the framework's tasks, and allow people within their tests to execute code in parallel when wanted. This solution does not check to see how many processors you have, it just forks as many processes as you ask for. That means that it will handle a handful of parallel processes well, but could definitely overload your system with ruby processes if you try to spin up a LOT of processes. If you're looking for something simple and light-weight and on either linux or mac (forking processes is not supported on Windows), then this solution could be what you want.
 
-If you are looking for something a little more production ready, you should take a look at the [parallel](https://github.com/grosser/parallel) project.
+If you are looking for something a little more production ready, you should take a look at the [parallel](https://github.com/grosser/parallel) project. In the future this library will extend the parallel gem to take advantage of all of its useful features as well.
 
 ## Methods:
 
@@ -99,8 +99,8 @@ hello world, bar
 
 ```
 
-### Array.each_in_parallel(&block)
-1. This is very similar to other solutions, except that it directly extends the Array class with an each_in_parallel method, giving you the ability to pretty simply spawn a process for any item in an array.
+### Enumerable.each_in_parallel(&block)
+1. This is very similar to other solutions, except that it directly extends the Enumerable class with an each_in_parallel method, giving you the ability to pretty simply spawn a process for any item in an array or map.
 2. Identifies the block location (or caller location if the block does not have a source_location) in the console log to make it clear which block is being executed
 
 ```ruby
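
The README section above describes `each_in_parallel` as fork-per-item over an Enumerable. As a rough standalone illustration of that pattern (a simplified sketch, not the gem's actual implementation — the real method also captures stdout and block locations), a minimal version might look like:

```ruby
# Minimal sketch: extend Enumerable with an each_in_parallel that forks
# one child process per item and returns the block results in order.
# Results come back over a pipe, serialized with Marshal.
module Enumerable
  def each_in_parallel(&block)
    children = map do |item|
      read_io, write_io = IO.pipe
      pid = fork do
        read_io.close
        # Serialize the block's return value so the parent can read it.
        Marshal.dump(block.call(item), write_io)
        write_io.close
        exit!(0)
      end
      write_io.close # parent only reads
      [pid, read_io]
    end
    children.map do |pid, read_io|
      result = Marshal.load(read_io.read) # blocks until the child closes the pipe
      read_io.close
      Process.wait(pid)
      result
    end
  end
end

results = [1, 2, 3].each_in_parallel { |n| n * 10 }
```

Because the parent reads the pipes in the original order, the returned array lines up with the input items even though the children finish in any order.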
data/in_parallel.gemspec CHANGED
@@ -10,15 +10,17 @@ Gem::Specification.new do |spec|
   spec.email = ["sam.woods@puppetlabs.com"]
 
   spec.summary = "A lightweight library to execute a handful of tasks in parallel with simple syntax"
-  spec.description = "I know there are other libraries that do parallelization, but I wanted something very " +
-    "simple to consume, and this was fun. I plan on using this within a test framework to enable parallel " +
-    "execution of some of the framework's tasks, and allow people within thier tests to execute code in " +
-    "parallel when wanted. This solution does not check to see how many processors you have, it just forks " +
-    "as many processes as you ask for. That means that it will handle a handful of parallel processes well, " +
-    "but could definitely overload your system with ruby processes if you try to spin up a LOT of processes. " +
-    "If you're looking for something simple and light-weight and on either linux or mac, then this solution " +
-    "could be what you want. If you are looking for something a little more production ready, you should take " +
-    "a look at the parallel project."
+  spec.description = "The other Ruby libraries that do parallel execution all support one primary use case " +
+    "- crunching through a large queue of small tasks as quickly and efficiently as possible. This library " +
+    "primarily supports the use case of needing to run a few larger tasks in parallel and managing the " +
+    "stdout to make it easy to understand which processes are logging what. This library was created to be " +
+    "used by the Beaker test framework to enable parallel execution of some of the framework's tasks, and " +
+    "allow people within their tests to execute code in parallel when wanted. This solution does not check " +
+    "to see how many processors you have, it just forks as many processes as you ask for. That means that it " +
+    "will handle a handful of parallel processes well, but could definitely overload your system with ruby " +
+    "processes if you try to spin up a LOT of processes. If you're looking for something simple and " +
+    "light-weight and on either linux or mac (forking processes is not supported on Windows), then this " +
+    "solution could be what you want."
   spec.homepage = "https://github.com/samwoods1/in-parallel"
   spec.license = "MIT"
 
data/in_parallel/version.rb CHANGED
@@ -1,3 +1,3 @@
-module InParallel
-  VERSION = Version = '0.1.1'
+class InParallel
+  VERSION = Version = '0.1.2'
 end
data/lib/in_parallel.rb CHANGED
@@ -1,182 +1,238 @@
-require_relative 'enumerable'
+require_relative 'parallel_enumerable'
 
 class InParallel
-  @@supported = Process.respond_to?(:fork)
-  @@outs = []
-  def self.outs
-    @@outs
+  # How many seconds between outputting to stdout that we are waiting for child processes.
+  # 0 or < 0 means no signaling.
+  @@signal_interval = 30
+  @@process_infos = []
+  @@raise_error = nil
+  def self.process_infos
+    @@process_infos
+  end
+
+  @@background_objs = []
+  @@result_id = 0
+
+  @@pids = []
+
+  # Example - will spawn 2 processes, (1 for each method) wait until they both complete, and log STDOUT:
+  # InParallel.run_in_parallel {
+  #   @result_1 = on agents[0], 'puppet agent -t'
+  #   @result_2 = on agents[1], 'puppet agent -t'
+  # }
+  # NOTE: Only supports assigning instance variables within the block, not local variables
+  def self.run_in_parallel(&block)
+    if Process.respond_to?(:fork)
+      proxy = BlankBindingParallelProxy.new(self)
+      proxy.instance_eval(&block)
+      results_map = wait_for_processes
+      # pass in the 'self' from the block.binding which is the instance of the class
+      # that contains the initial binding call.
+      # This gives us access to the local and instance variables from that context.
+      return result_lookup(proxy, eval("self", block.binding), results_map)
     end
+    puts 'Warning: Fork is not supported on this OS, executing block normally'
+    block.call
+  end
 
-  @@background_objs = []
-  @@result_id = 0
-
-  # Example - will spawn 2 processes, (1 for each method) wait until they both complete, and log STDOUT:
-  # InParallel.run_in_parallel {
-  #   @result_1 = on agents[0], 'puppet agent -t'
-  #   @result_2 = on agents[1], 'puppet agent -t'
-  # }
-  # NOTE: Only supports assigning instance variables within the block, not local variables
-  def self.run_in_parallel(&block)
-    if @@supported
-      proxy = BlankBindingParallelProxy.new(self)
-      proxy.instance_eval(&block)
-      results_map = wait_for_processes
-      # pass in the 'self' from the block.binding which is the instance of the class
-      # that contains the initial binding call.
-      # This gives us access to the local and instance variables from that context.
-      return result_lookup(proxy, eval("self", block.binding), results_map)
+  # Private method to lookup results from the results_map and replace the
+  # temp values with actual return values
+  def self.result_lookup(proxy_obj, target_obj, results_map)
+    vars = (proxy_obj.instance_variables)
+    results_map.keys.each { |tmp_result|
+      vars.each {|var|
+        if proxy_obj.instance_variable_get(var) == tmp_result
+          target_obj.instance_variable_set(var, results_map[tmp_result])
+          break
+        end
+      }
+    }
+  end
+  private_class_method :result_lookup
+
+  # Example - Will spawn a process in the background to run puppet agent on two agents and return immediately:
+  # Parallel.run_in_background {
+  #   @result = on agents[0], 'puppet agent -t'
+  #   @result_2 = on agents[1], 'puppet agent -t'
+  # }
+  # # Do something else here before waiting for the process to complete
+  #
+  # # Optionally wait for the processes to complete before continuing.
+  # # Otherwise use run_in_background(true) to clean up the process status and output immediately.
+  # Parallel.get_background_results(self)
+  # NOTE: must call get_background_results to allow instance variables in calling object to be set,
+  # otherwise @result will evaluate to "unresolved_parallel_result_0"
+  def self.run_in_background(ignore_result = true, &block)
+    if Process.respond_to?(:fork)
+      proxy = BlankBindingParallelProxy.new(self)
+      proxy.instance_eval(&block)
+
+      if ignore_result
+        Process.detach(@@process_infos.last[:pid])
+        @@process_infos.pop
+      else
+        @@background_objs << {:proxy => proxy, :target => eval("self", block.binding)}
+        return process_infos.last[:tmp_result]
       end
-      puts 'Warning: Fork is not supported on this OS, executing block normally'
-      block.call
+      return
     end
+    puts 'Warning: Fork is not supported on this OS, executing block normally'
+    result = block.call
+    return nil if ignore_result
+    result
+  end
 
-  # Private method to lookup results from the results_map and replace the
-  # temp values with actual return values
-  def self.result_lookup(proxy_obj, target_obj, results_map)
-    vars = (proxy_obj.instance_variables)
-    results_map.keys.each { |tmp_result|
-      vars.each {|var|
-        if proxy_obj.instance_variable_get(var) == tmp_result
-          target_obj.instance_variable_set(var, results_map[tmp_result])
-          break
-        end
-      }
-    }
+  def self.get_background_results
+    results_map = wait_for_processes
+    # pass in the 'self' from the block.binding which is the instance of the class
+    # that contains the initial binding call.
+    # This gives us access to the instance variables from that context.
+    @@background_objs.each {|obj|
+      return result_lookup(obj[:proxy], obj[:target], results_map)
+    }
+  end
 
+  # Waits for all processes to complete and logs STDOUT and STDERR in chunks from any processes
+  # that were triggered from this Parallel class
+  def self.wait_for_processes
+    trap(:INT) do
+      puts "Warning, received interrupt. Processing child results and exiting."
+      @@process_infos.each { |process_info|
+        # Send INT to each child process so it returns and can print stdout and stderr to console before exiting.
+        Process.kill("INT", process_info[:pid])
+      }
     end
-  private_class_method :result_lookup
-
-  # Example - Will spawn a process in the background to run puppet agent on two agents and return immediately:
-  # Parallel.run_in_background {
-  #   @result = on agents[0], 'puppet agent -t'
-  #   @result_2 = on agents[1], 'puppet agent -t'
-  # }
-  # # Do something else here before waiting for the process to complete
-  #
-  # # Optionally wait for the processes to complete before continuing.
-  # # Otherwise use run_in_background(true) to clean up the process status and output immediately.
-  # Parrallel.get_background_results(self)
-  # NOTE: must call get_background_results to allow instance variables in calling object to be set,
-  # otherwise @result will evaluate to "unresolved_parallel_result_0"
-  def self.run_in_background(ignore_result = true, &block)
-    if @@supported
-      proxy = BlankBindingParallelProxy.new(self)
-      proxy.instance_eval(&block)
-
-      if ignore_result
-        Process.detach(@@outs.last[:pid])
-        @@outs.pop
-      else
-        @@background_objs << {:proxy => proxy, :target => eval("self", block.binding)}
-        return outs.last[:tmp_result]
-      end
-      return
+    return unless Process.respond_to?(:fork)
+    # Custom process to wait so that we can do things like time out, and kill child processes if
+    # one process returns with an error before the others complete.
+    results_map = {}
+    timer = Time.now
+    while !@@process_infos.empty? do
+      if @@signal_interval > 0 && Time.now > timer + @@signal_interval
+        puts 'Waiting for child processes.'
+        timer = Time.now
       end
-    puts 'Warning: Fork is not supported on this OS, executing block normally'
-    result = block.call
-    return nil if ignore_result
-    result
-  end
-
-  def self.get_background_results
-    results_map = wait_for_processes
-    # pass in the 'self' from the block.binding which is the instance of the class
-    # that contains the initial binding call.
-    # This gives us access to the local and instance variables from that context.
-    @@background_objs.each {|obj|
-      return result_lookup(obj[:proxy], obj[:target], results_map)
-    }
-  end
-
-  # Waits for all processes to complete and logs STDOUT and STDERR in chunks from any processes
-  # that were triggered from this Parallel class
-  def self.wait_for_processes
-    return unless @@supported
-    # Wait for all processes to complete
-    statuses = Process.waitall
-
-    results_map = {}
-    # Print the STDOUT and STDERR for each process with signals for start and end
-    while !@@outs.empty? do
-      out = @@outs.shift
-      begin
-        puts "\n------ Begin output for #{out[:method_sym]} - #{out[:pid]}\n"
-        puts out[:std_out].readlines
-        puts "------ Completed output for #{out[:method_sym]} - #{out[:pid]}\n"
-        results_map[out[:tmp_result]] = Marshal.load(out[:result].read)
-      ensure
-        # close the read end pipes
-        out[:std_out].close unless out[:std_out].closed?
-        out[:result].close unless out[:result].closed?
+      @@process_infos.each {|process_info|
+        # wait up to half a second for each thread to see if it is complete, if not, check the next thread.
+        # returns immediately if the process has completed.
+        thr = process_info[:wait_thread].join(0.5)
+        unless thr.nil?
+          # the process completed, get the result and rethrow on error.
+          begin
+            # Print the STDOUT and STDERR for each process with signals for start and end
+            puts "\n------ Begin output for #{process_info[:method_sym]} - #{process_info[:pid]}\n"
+            puts File.new(process_info[:std_out], 'r').readlines
+            puts "------ Completed output for #{process_info[:method_sym]} - #{process_info[:pid]}\n"
+            result = process_info[:result].read
+            results_map[process_info[:tmp_result]] = (result.nil? || result.empty?) ? result : Marshal.load(result)
+            File.delete(process_info[:std_out])
+            # Kill all other processes and let them log their stdout before re-raising
+            # if a child process raised an error.
+            if results_map[process_info[:tmp_result]].is_a?(StandardError)
+              @@process_infos.each{|p_info|
+                begin
+                  Process.kill(0, p_info[:pid]) unless p_info[:pid] == process_info[:pid]
+                rescue StandardError
+                end
+              }
+            end
+          ensure
+            # close the read end pipe
+            process_info[:result].close unless process_info[:result].closed?
+            @@process_infos.delete(process_info)
+            @@raise_error = results_map[process_info[:tmp_result]] if results_map[process_info[:tmp_result]].is_a?(StandardError)
+            break
+          end
         end
-      end
-
-      statuses.each { |status|
-        raise("Parallel process with PID '#{status[0]}' failed: #{status[1]}") unless status[1].success?
       }
+    end
 
-    return results_map
+    # Reset the error in case the error is rescued
+    begin
+      raise @@raise_error unless @@raise_error.nil?
+    ensure
+      @@raise_error = nil
     end
 
-  # private method to execute some code in a separate process and store the STDOUT and STDERR for later retrieval
-  def self._execute_in_parallel(method_sym, obj = self, &block)
-    ret_val = nil
-    # Communicate the return value of the method or block
-    read_result, write_result = IO.pipe
-    # Store the STDOUT and STDERR of the method or block
-    read_io, write_io = IO.pipe
-    pid = fork do
-      STDOUT.reopen(write_io)
-      STDERR.reopen(write_io)
-      # Need to store this for the case of run_in_background in _execute
-      @@result_writer = write_result
-      begin
-        # close subprocess's copy of read_io since it only needs to write
-        read_io.close
-        read_result.close
-        ret_val = obj.instance_eval(&block)
-        # Write the result to the write_result IO stream.
-        # Have to serialize the value so it can be transmitted via IO
-        Marshal.dump(ret_val, write_result)
-      rescue SystemCallError => err
-        puts "error: #{err.message}"
-        write_io.write('.')
-        exit 1
-      ensure
-        write_io.close
-        write_result.close
+    return results_map
+  end
+
+  # private method to execute some code in a separate process and store the STDOUT and STDERR for later retrieval
+  def self._execute_in_parallel(method_sym, obj = self, &block)
+    ret_val = nil
+    # Communicate the return value of the method or block
+    read_result, write_result = IO.pipe
+    pid = fork do
+      exit_status = 0
+      trap(:INT) do
+        raise StandardError.new("Warning: Interrupt received; exiting...")
+      end
+      Dir.mkdir('tmp') unless Dir.exists? 'tmp'
+      write_file = File.new("tmp/parallel_process_#{Process.pid}", 'w')
+
+      # IO buffer is 64kb, which isn't much... if debug logging is turned on,
+      # this can be exceeded before a process completes.
+      # Storing output in file rather than using IO.pipe
+      STDOUT.reopen(write_file)
+      STDERR.reopen(write_file)
+
+      begin
+        # close subprocess's copy of read_result since it only needs to write
+        read_result.close
+        ret_val = obj.instance_eval(&block)
+        # Write the result to the write_result IO stream.
+        # Have to serialize the value so it can be transmitted via IO
+        if(!ret_val.nil? && ret_val.singleton_methods && ret_val.class != TrueClass && ret_val.class != FalseClass && ret_val.class != Fixnum)
+          begin
+            ret_val = ret_val.dup
+          rescue StandardError => err
+            # in case there are other types that can't dup
+          end
         end
+        Marshal.dump(ret_val, write_result) unless ret_val.nil?
+      rescue StandardError => err
+        puts "Error in process #{pid}: #{err.message}"
+        # Return the error if an error is rescued so we can re-throw in the main process.
+        Marshal.dump(err, write_result)
+        exit_status = 1
+      ensure
+        write_result.close
+        exit exit_status
       end
-    write_io.close
-    write_result.close
-    # store the IO object with the STDOUT for each pid
-    out = { :pid => pid,
-            :method_sym => method_sym,
-            :std_out => read_io,
-            :result => read_result,
-            :tmp_result => "unresolved_parallel_result_#{@@result_id}" }
-    @@outs.push(out)
-    @@result_id += 1
-    out
     end
+    write_result.close
+    # Process.detach returns a wait thread; joining it with a timeout returns nil while the process is still running.
+    # This allows us to check whether processes have exited without calling the blocking Process.wait functions.
+    wait_thread = Process.detach(pid)
+    # store the IO object with the STDOUT and waiting thread for each pid
+    process_info = { :wait_thread => wait_thread,
+                     :pid => pid,
+                     :method_sym => method_sym,
+                     :std_out => "tmp/parallel_process_#{pid}",
+                     :result => read_result,
+                     :tmp_result => "unresolved_parallel_result_#{@@result_id}" }
+    @@process_infos.push(process_info)
+    @@result_id += 1
+    process_info
+  end
 
-  # Proxy class used to wrap each method execution in a block and run it in parallel
-  # A block from Parallel.run_in_parallel is executed with a binding of an instance of this class
-  class BlankBindingParallelProxy < BasicObject
-    # Don't worry about running methods like puts or other basic stuff in parallel
-    include ::Kernel
-
-    def initialize(obj)
-      @object = obj
-      @result_id = 0
-    end
+  # Proxy class used to wrap each method execution in a block and run it in parallel
+  # A block from Parallel.run_in_parallel is executed with a binding of an instance of this class
+  class BlankBindingParallelProxy < BasicObject
+    # Don't worry about running methods like puts or other basic stuff in parallel
+    include ::Kernel
 
-    # All methods within the block should show up as missing (unless defined in :Kernel)
-    def method_missing(method_sym, *args, &block)
-      out = ::InParallel._execute_in_parallel(method_sym) {@object.send(method_sym, *args, &block)}
-      puts "Forked process for '#{method_sym}' - PID = '#{out[:pid]}'\n"
-      out[:tmp_result]
-    end
+    def initialize(obj)
+      @object = obj
+      @result_id = 0
+    end
 
+    # All methods within the block should show up as missing (unless defined in :Kernel)
+    def method_missing(method_sym, *args, &block)
+      out = ::InParallel._execute_in_parallel(method_sym) {@object.send(method_sym, *args, &block)}
+      puts "Forked process for '#{method_sym}' - PID = '#{out[:pid]}'\n"
+      out[:tmp_result]
     end
+
   end
+end
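
The rewritten `wait_for_processes` above replaces the blocking `Process.waitall` with `Process.detach` wait threads that are polled with `join(0.5)`. A self-contained sketch of that non-blocking wait idea (the sleep durations and exit codes are hypothetical, not the gem's code):

```ruby
# Process.detach gives back a wait thread; Thread#join with a timeout
# returns nil while the child is still running, and the thread itself
# once the child has exited, so the parent can poll several children
# without blocking in Process.wait.
pids = 2.times.map do |i|
  fork do
    sleep(i * 0.2) # hypothetical child work
    exit!(i)       # exit status doubles as a result code here
  end
end
wait_threads = pids.map { |pid| [pid, Process.detach(pid)] }

statuses = {}
until wait_threads.empty?
  wait_threads.delete_if do |pid, thr|
    # Poll this child for up to 0.1s; a non-nil return means it finished.
    if thr.join(0.1)
      statuses[pid] = thr.value.exitstatus # thr.value is a Process::Status
      true
    end
  end
end
```

Because each `join` call has a short timeout, one slow child never stops the parent from collecting output from children that have already finished, which is what lets the gem kill siblings early when one child returns an error.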
data/lib/enumerable.rb → data/lib/parallel_enumerable.rb RENAMED
@@ -4,9 +4,9 @@
 #   on agent, 'puppet agent -t'
 # }
 module Enumerable
-  def each_in_parallel(&block)
-    if Process.respond_to?(:fork)
-      method_sym = "#{caller_locations[0]}"
+  def each_in_parallel(method_sym=nil, &block)
+    if Process.respond_to?(:fork) && size > 1
+      method_sym ||= "#{caller_locations[0]}"
       each do |item|
         out = InParallel._execute_in_parallel(method_sym) {block.call(item)}
         puts "'each_in_parallel' forked process for '#{method_sym}' - PID = '#{out[:pid]}'\n"
@@ -14,7 +14,7 @@ module Enumerable
     # return the array of values, no need to look up from the map.
     return InParallel.wait_for_processes.values
   end
-  puts 'Warning: Fork is not supported on this OS, executing block normally'
+  puts 'Warning: Fork is not supported on this OS, executing block normally' unless Process.respond_to? :fork
   block.call
   each(&block)
 end
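
The `_execute_in_parallel` changes above lean on serializing a rescued exception through the result pipe so the parent can detect and re-raise a child's failure. A self-contained sketch of that round-trip (simplified; stripping the backtrace before `Marshal.dump` is an extra precaution not taken in the gem, since a live backtrace may not always marshal cleanly):

```ruby
# A child that raises writes the exception object itself to the result
# pipe; the parent deserializes it and can re-raise in its own context.
read_result, write_result = IO.pipe
pid = fork do
  read_result.close
  begin
    raise ArgumentError, 'boom in child'
  rescue StandardError => err
    err.set_backtrace([]) # drop the backtrace so the object marshals cleanly
    Marshal.dump(err, write_result)
  end
  write_result.close
  exit!(1) # non-zero exit signals failure, matching the gem's exit_status
end
write_result.close
result = Marshal.load(read_result.read)
read_result.close
Process.wait(pid)
status = $?.exitstatus
# A real caller would `raise result if result.is_a?(StandardError)` here.
```

Sending the exception object, rather than just an exit code, is what lets the parent surface the child's error class and message instead of a generic "process failed" report.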
metadata CHANGED
@@ -1,54 +1,56 @@
 --- !ruby/object:Gem::Specification
 name: in-parallel
 version: !ruby/object:Gem::Version
-  version: 0.1.1
+  version: 0.1.2
 platform: ruby
 authors:
 - samwoods1
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2016-05-09 00:00:00.000000000 Z
+date: 2016-05-27 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: bundler
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - ~>
+    - - "~>"
       - !ruby/object:Gem::Version
         version: '1.11'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - ~>
+    - - "~>"
       - !ruby/object:Gem::Version
         version: '1.11'
 - !ruby/object:Gem::Dependency
   name: rake
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - ~>
+    - - "~>"
      - !ruby/object:Gem::Version
        version: '10.0'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - ~>
+    - - "~>"
      - !ruby/object:Gem::Version
        version: '10.0'
-description: I know there are other libraries that do parallelization, but I wanted
-  something very simple to consume, and this was fun. I plan on using this within
-  a test framework to enable parallel execution of some of the framework's tasks,
-  and allow people within thier tests to execute code in parallel when wanted. This
-  solution does not check to see how many processors you have, it just forks as many
-  processes as you ask for. That means that it will handle a handful of parallel processes
-  well, but could definitely overload your system with ruby processes if you try to
-  spin up a LOT of processes. If you're looking for something simple and light-weight
-  and on either linux or mac, then this solution could be what you want. If you are
-  looking for something a little more production ready, you should take a look at
-  the parallel project.
+description: The other Ruby libraries that do parallel execution all support one primary
+  use case - crunching through a large queue of small tasks as quickly and efficiently
+  as possible. This library primarily supports the use case of needing to run a few
+  larger tasks in parallel and managing the stdout to make it easy to understand which
+  processes are logging what. This library was created to be used by the Beaker test
+  framework to enable parallel execution of some of the framework's tasks, and allow
+  people within their tests to execute code in parallel when wanted. This solution
+  does not check to see how many processors you have, it just forks as many processes
+  as you ask for. That means that it will handle a handful of parallel processes well,
+  but could definitely overload your system with ruby processes if you try to spin
+  up a LOT of processes. If you're looking for something simple and light-weight and
+  on either linux or mac (forking processes is not supported on Windows), then this
+  solution could be what you want.
 email:
 - sam.woods@puppetlabs.com
 executables: []
@@ -61,8 +63,8 @@ files:
 - Rakefile
 - in_parallel.gemspec
 - in_parallel/version.rb
-- lib/enumerable.rb
 - lib/in_parallel.rb
+- lib/parallel_enumerable.rb
 homepage: https://github.com/samwoods1/in-parallel
 licenses:
 - MIT
@@ -73,17 +75,17 @@ require_paths:
 - lib
 required_ruby_version: !ruby/object:Gem::Requirement
   requirements:
-  - - '>='
+  - - ">="
     - !ruby/object:Gem::Version
       version: '0'
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
-  - - '>='
+  - - ">="
     - !ruby/object:Gem::Version
       version: '0'
 requirements: []
 rubyforge_project:
-rubygems_version: 2.4.7
+rubygems_version: 2.5.1
 signing_key:
 specification_version: 4
 summary: A lightweight library to execute a handful of tasks in parallel with simple