in-parallel 0.1.1 → 0.1.2
- checksums.yaml +4 -4
- data/README.md +4 -4
- data/in_parallel.gemspec +11 -9
- data/in_parallel/version.rb +2 -2
- data/lib/in_parallel.rb +215 -159
- data/lib/{enumerable.rb → parallel_enumerable.rb} +4 -4
- metadata +23 -21
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: c73c6b3c70624e4afbe733cbe691e933f2ee178d
+  data.tar.gz: 1139cdd892f6242ff9a83cdf722d1df8e5520cc7
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 7efe4eaa24eb74c0a1af3a5ebb7408c14afe26c682d0f0e76d4ef9ab31efd027ef9989c1c3bf73143d129aadafdca8c2ea2a66dccc607ba0651edd1e0d9c1ecb
+  data.tar.gz: 5008f55eedcc36458011d549d6bb0ad1b87ab5f9126b9dfae00025e0923aff6d0d8a870bf7b5a1e68a53d592cfbcd4c25257c6abd90dc884a6c72d87353b0a00
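The values above are plain SHA1/SHA512 hex digests of the gem's metadata.gz and data.tar.gz archives. A minimal sketch of the kind of check RubyGems performs against checksums.yaml, using only Ruby's stdlib Digest (the file and digest here are illustrative, not taken from this gem):

```ruby
require 'digest'
require 'tempfile'

# Compare a file's SHA1 hex digest against an expected value.
def checksum_matches?(path, expected_sha1)
  Digest::SHA1.file(path).hexdigest == expected_sha1
end

# Demo on a throwaway file with known contents ("hello").
file = Tempfile.new('checksum_demo')
file.write('hello')
file.close

puts Digest::SHA1.file(file.path).hexdigest
# SHA1("hello") is the well-known digest below.
raise 'checksum mismatch' unless checksum_matches?(file.path, 'aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d')
```

`Digest::SHA512.file(...)` works the same way for the SHA512 entries.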
data/README.md
CHANGED
@@ -1,9 +1,9 @@
 # in-parallel
 A lightweight Ruby library with very simple syntax, making use of process.fork for parallelization
 
-
+The other Ruby librarys that do parallel execution all support one primary use case - crunching through a large queue of small tasks as quickly and efficiently as possible. This library primarily supports the use case of needing to run a few larger tasks in parallel and managing the stdout to make it easy to understand which processes are logging what. This library was created to be used by the Beaker test framework to enable parallel execution of some of the framework's tasks, and allow people within thier tests to execute code in parallel when wanted. This solution does not check to see how many processors you have, it just forks as many processes as you ask for. That means that it will handle a handful of parallel processes well, but could definitely overload your system with ruby processes if you try to spin up a LOT of processes. If you're looking for something simple and light-weight and on either linux or mac (forking processes is not supported on Windows), then this solution could be what you want.
 
-If you are looking for something a little more production ready, you should take a look at the [parallel](https://github.com/grosser/parallel) project.
+If you are looking for something a little more production ready, you should take a look at the [parallel](https://github.com/grosser/parallel) project. In the future this library will extend the in the parallel gem to take advantage of all of it's useful features as well.
 
 ## Methods:
 
@@ -99,8 +99,8 @@ hello world, bar
 
 ```
 
-### 
-1. This is very similar to other solutions, except that it directly extends the 
+### Enumerable.each_in_parallel(&block)
+1. This is very similar to other solutions, except that it directly extends the Enumerable class with an each_in_parallel method, giving you the ability to pretty simply spawn a process for any item in an array or map.
 2. Identifies the block location (or caller location if the block does not have a source_location) in the console log to make it clear which block is being executed
 
 ```ruby
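The fork-per-item pattern that `each_in_parallel` is built on can be sketched with nothing but `Process.fork` and `Process.wait2`. This is a simplified stand-in for illustration, not the gem's implementation (no stdout management or result marshalling):

```ruby
# Fork one child process per item; each child does its "work" and exits.
items = [1, 2, 3]
pids = items.map do |item|
  fork do
    # Child process: handle one item independently of its siblings.
    puts "child #{Process.pid} handling #{item}"
    exit 0
  end
end

# Parent waits for every child and checks that all exited successfully.
statuses = pids.map { |pid| Process.wait2(pid).last }
raise 'a child process failed' unless statuses.all?(&:success?)
puts "all #{items.size} children finished"
```

This only works where `Process.respond_to?(:fork)` is true (linux/mac), which is exactly the guard the gem uses.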
data/in_parallel.gemspec
CHANGED
@@ -10,15 +10,17 @@ Gem::Specification.new do |spec|
   spec.email = ["sam.woods@puppetlabs.com"]
 
   spec.summary = "A lightweight library to execute a handful of tasks in parallel with simple syntax"
-  spec.description = "
-    "
-    "
-    "
-    "
-    "
-    "
-    "a
+  spec.description = "The other Ruby librarys that do parallel execution all support one primary use case " +
+    "- crunching through a large queue of small tasks as quickly and efficiently as possible. This library " +
+    "primarily supports the use case of needing to run a few larger tasks in parallel and managing the " +
+    "stdout to make it easy to understand which processes are logging what. This library was created to be " +
+    "used by the Beaker test framework to enable parallel execution of some of the framework's tasks, and " +
+    "allow people within thier tests to execute code in parallel when wanted. This solution does not check " +
+    "to see how many processors you have, it just forks as many processes as you ask for. That means that it " +
+    "will handle a handful of parallel processes well, but could definitely overload your system with ruby " +
+    "processes if you try to spin up a LOT of processes. If you're looking for something simple and " +
+    "light-weight and on either linux or mac (forking processes is not supported on Windows), then this " +
+    "solution could be what you want."
   spec.homepage = "https://github.com/samwoods1/in-parallel"
   spec.license = "MIT"
 
data/in_parallel/version.rb
CHANGED
@@ -1,3 +1,3 @@
-
-VERSION = Version = '0.1.
+class InParallel
+  VERSION = Version = '0.1.2'
 end
data/lib/in_parallel.rb
CHANGED
@@ -1,182 +1,238 @@
-require_relative '
+require_relative 'parallel_enumerable'
 
 class InParallel
-
-
-
-
+  # How many seconds between outputting to stdout that we are waiting for child processes.
+  # 0 or < 0 means no signaling.
+  @@signal_interval = 30
+  @@process_infos = []
+  @@raise_error = nil
+  def self.process_infos
+    @@process_infos
+  end
+
+  @@background_objs = []
+  @@result_id = 0
+
+  @@pids = []
+
+  # Example - will spawn 2 processes, (1 for each method) wait until they both complete, and log STDOUT:
+  # InParallel.run_in_parallel {
+  #   @result_1 = on agents[0], 'puppet agent -t'
+  #   @result_2 = on agents[1], 'puppet agent -t'
+  # }
+  # NOTE: Only supports assigning instance variables within the block, not local variables
+  def self.run_in_parallel(&block)
+    if Process.respond_to?(:fork)
+      proxy = BlankBindingParallelProxy.new(self)
+      proxy.instance_eval(&block)
+      results_map = wait_for_processes
+      # pass in the 'self' from the block.binding which is the instance of the class
+      # that contains the initial binding call.
+      # This gives us access to the local and instance variables from that context.
+      return result_lookup(proxy, eval("self", block.binding), results_map)
     end
+    puts 'Warning: Fork is not supported on this OS, executing block normally'
+    block.call
+  end
 
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+  # Private method to lookup results from the results_map and replace the
+  # temp values with actual return values
+  def self.result_lookup(proxy_obj, target_obj, results_map)
+    vars = (proxy_obj.instance_variables)
+    results_map.keys.each { |tmp_result|
+      vars.each {|var|
+        if proxy_obj.instance_variable_get(var) == tmp_result
+          target_obj.instance_variable_set(var, results_map[tmp_result])
+          break
+        end
+      }
+    }
+  end
+  private_class_method :result_lookup
+
+  # Example - Will spawn a process in the background to run puppet agent on two agents and return immediately:
+  # Parallel.run_in_background {
+  #   @result = on agents[0], 'puppet agent -t'
+  #   @result_2 = on agents[1], 'puppet agent -t'
+  # }
+  # # Do something else here before waiting for the process to complete
+  #
+  # # Optionally wait for the processes to complete before continuing.
+  # # Otherwise use run_in_background(true) to clean up the process status and output immediately.
+  # Parrallel.get_background_results(self)
+  # NOTE: must call get_background_results to allow instance variables in calling object to be set,
+  # otherwise @result will evaluate to "unresolved_parallel_result_0"
+  def self.run_in_background(ignore_result = true, &block)
+    if Process.respond_to?(:fork)
+      proxy = BlankBindingParallelProxy.new(self)
+      proxy.instance_eval(&block)
+
+      if ignore_result
+        Process.detach(@@process_infos.last[:pid])
+        @@process_infos.pop
+      else
+        @@background_objs << {:proxy => proxy, :target => eval("self", block.binding)}
+        return process_infos.last[:tmp_result]
       end
-
-      block.call
+      return
     end
+    puts 'Warning: Fork is not supported on this OS, executing block normally'
+    result = block.call
+    return nil if ignore_result
+    result
+  end
 
-
-
-
-
-
-
-
-
-
-    end
-  }
-  }
+  def self.get_background_results
+    results_map = wait_for_processes
+    # pass in the 'self' from the block.binding which is the instance of the class
+    # that contains the initial binding call.
+    # This gives us access to the instance variables from that context.
+    @@background_objs.each {|obj|
+      return result_lookup(obj[:proxy], obj[:target], results_map)
+    }
+  end
 
+  # Waits for all processes to complete and logs STDOUT and STDERR in chunks from any processes
+  # that were triggered from this Parallel class
+  def self.wait_for_processes
+    trap(:INT) do
+      puts "Warning, recieved interrupt. Processing child results and exiting."
+      @@proces_infos.each { |process_info|
+        # Send INT to each child process so it returns and can print stdout and stderr to console before exiting.
+        Process.kill("INT", process_info[:pid])
+      }
    end
-
-
-    #
-
-
-
-
-
-
-    # # Optionally wait for the processes to complete before continuing.
-    # # Otherwise use run_in_background(true) to clean up the process status and output immediately.
-    # Parrallel.get_background_results(self)
-    # NOTE: must call get_background_results to allow instance variables in calling object to be set,
-    # otherwise @result will evaluate to "unresolved_parallel_result_0"
-    def self.run_in_background(ignore_result = true, &block)
-      if @@supported
-        proxy = BlankBindingParallelProxy.new(self)
-        proxy.instance_eval(&block)
-
-        if ignore_result
-          Process.detach(@@outs.last[:pid])
-          @@outs.pop
-        else
-          @@background_objs << {:proxy => proxy, :target => eval("self", block.binding)}
-          return outs.last[:tmp_result]
-        end
-        return
+    return unless Process.respond_to?(:fork)
+    # Custom process to wait so that we can do things like time out, and kill child processes if
+    # one process returns with an error before the others complete.
+    results_map = {}
+    timer = Time.now
+    while !@@process_infos.empty? do
+      if @@signal_interval > 0 && Time.now > timer + @@signal_interval
+        puts 'Waiting for child processes.'
+        timer = Time.now
       end
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-      results_map[out[:tmp_result]] = Marshal.load(out[:result].read)
-    ensure
-      # close the read end pipes
-      out[:std_out].close unless out[:std_out].closed?
-      out[:result].close unless out[:result].closed?
+      @@process_infos.each {|process_info|
+        # wait up to half a second for each thread to see if it is complete, if not, check the next thread.
+        # returns immediately if the process has completed.
+        thr = process_info[:wait_thread].join(0.5)
+        unless thr.nil?
+          # the process completed, get the result and rethrow on error.
+          begin
+            # Print the STDOUT and STDERR for each process with signals for start and end
+            puts "\n------ Begin output for #{process_info[:method_sym]} - #{process_info[:pid]}\n"
+            puts File.new(process_info[:std_out], 'r').readlines
+            puts "------ Completed output for #{process_info[:method_sym]} - #{process_info[:pid]}\n"
+            result = process_info[:result].read
+            results_map[process_info[:tmp_result]] = (result.nil? || result.empty?) ? result : Marshal.load(result)
+            File.delete(process_info[:std_out])
+            # Kill all other processes and let them log their stdout before re-raising
+            # if a child process raised an error.
+            if results_map[process_info[:tmp_result]].is_a?(StandardError)
+              @@process_infos.each{|p_info|
+                begin
+                  Process.kill(0, p_info[:pid]) unless p_info[:pid] == process_info[:pid]
+                rescue StandardError
+                end
+              }
+            end
+          ensure
+            # close the read end pipe
+            process_info[:result].close unless process_info[:result].closed?
+            @@process_infos.delete(process_info)
+            @@raise_error = results_map[process_info[:tmp_result]] if results_map[process_info[:tmp_result]].is_a?(StandardError)
+            break
+          end
         end
-      end
-
-      statuses.each { |status|
-        raise("Parallel process with PID '#{status[0]}' failed: #{status[1]}") unless status[1].success?
       }
+    end
 
-
+    # Reset the error in case the error is rescued
+    begin
+      raise @@raise_error unless @@raise_error.nil?
+    ensure
+      @@raise_error = nil
    end
 
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+    return results_map
+  end
+
+  # private method to execute some code in a separate process and store the STDOUT and STDERR for later retrieval
+  def self._execute_in_parallel(method_sym, obj = self, &block)
+    ret_val = nil
+    # Communicate the return value of the method or block
+    read_result, write_result = IO.pipe
+    pid = fork do
+      exit_status = 0
+      trap(:INT) do
+        raise StandardError.new("Warning: Interrupt received; exiting...")
+      end
+      Dir.mkdir('tmp') unless Dir.exists? 'tmp'
+      write_file = File.new("tmp/parallel_process_#{Process.pid}", 'w')
+
+      # IO buffer is 64kb, which isn't much... if debug logging is turned on,
+      # this can be exceeded before a process completes.
+      # Storing output in file rather than using IO.pipe
+      STDOUT.reopen(write_file)
+      STDERR.reopen(write_file)
+
+      begin
+        # close subprocess's copy of read_result since it only needs to write
+        read_result.close
+        ret_val = obj.instance_eval(&block)
+        # Write the result to the write_result IO stream.
+        # Have to serialize the value so it can be transmitted via IO
+        if(!ret_val.nil? && ret_val.singleton_methods && ret_val.class != TrueClass && ret_val.class != FalseClass && ret_val.class != Fixnum)
+          begin
+            ret_val = ret_val.dup
+          rescue StandardError => err
+            #in case there are other types that can't dup
+          end
        end
+        Marshal.dump(ret_val, write_result) unless ret_val.nil?
+      rescue StandardError => err
+        puts "Error in process #{pid}: #{err.message}"
+        # Return the error if an error is rescued so we can re-throw in the main process.
+        Marshal.dump(err, write_result)
+        exit_status = 1
+      ensure
+        write_result.close
+        exit exit_status
      end
-    write_io.close
-    write_result.close
-    # store the IO object with the STDOUT for each pid
-    out = { :pid => pid,
-            :method_sym => method_sym,
-            :std_out => read_io,
-            :result => read_result,
-            :tmp_result => "unresolved_parallel_result_#{@@result_id}" }
-    @@outs.push(out)
-    @@result_id += 1
-    out
    end
+    write_result.close
+    # Process.detach returns a thread that will be nil if the process is still running and thr if not.
+    # This allows us to check to see if processes have exited without having to call the blocking Process.wait functions.
+    wait_thread = Process.detach(pid)
+    # store the IO object with the STDOUT and waiting thread for each pid
+    process_info = { :wait_thread => wait_thread,
+                     :pid => pid,
+                     :method_sym => method_sym,
+                     :std_out => "tmp/parallel_process_#{pid}",
+                     :result => read_result,
+                     :tmp_result => "unresolved_parallel_result_#{@@result_id}" }
+    @@process_infos.push(process_info)
+    @@result_id += 1
+    process_info
+  end
 
-
-
-
-
-
-
-  def initialize(obj)
-    @object = obj
-    @result_id = 0
-  end
+  # Proxy class used to wrap each method execution in a block and run it in parallel
+  # A block from Parallel.run_in_parallel is executed with a binding of an instance of this class
+  class BlankBindingParallelProxy < BasicObject
+    # Don't worry about running methods like puts or other basic stuff in parallel
+    include ::Kernel
 
-
-
-
-
-    out[:tmp_result]
-  end
+    def initialize(obj)
+      @object = obj
+      @result_id = 0
+    end
 
+    # All methods within the block should show up as missing (unless defined in :Kernel)
+    def method_missing(method_sym, *args, &block)
+      out = ::InParallel._execute_in_parallel(method_sym) {@object.send(method_sym, *args, &block)}
+      puts "Forked process for '#{method_sym}' - PID = '#{out[:pid]}'\n"
+      out[:tmp_result]
    end
+
   end
+end
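Two techniques in the rewrite above are worth isolating: returning an arbitrary object from a forked child by Marshal-dumping it over an `IO.pipe`, and using the thread returned by `Process.detach` to poll for completion without a blocking `Process.wait`. A minimal stdlib-only sketch of both (not the gem's code; the hash payload is illustrative):

```ruby
# Parent/child result passing via Marshal over a pipe, plus non-blocking wait.
read_result, write_result = IO.pipe

pid = fork do
  read_result.close            # child only writes
  value = { answer: 6 * 7 }    # any marshalable return value
  Marshal.dump(value, write_result)
  write_result.close
  exit 0
end

write_result.close             # parent only reads
wait_thread = Process.detach(pid)

# join(timeout) returns nil while the child is still running and the thread
# once it has exited, so several children could be polled in a loop instead
# of blocking in Process.wait on any one of them.
sleep 0.05 until wait_thread.join(0.1)

result = Marshal.load(read_result.read)
read_result.close
puts result[:answer]    # => 42
```

Note the same caveat the gem's comments raise: a pipe's buffer is small, which is why the new code writes child stdout to a file and reserves the pipe for the marshaled result.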
data/lib/{enumerable.rb → parallel_enumerable.rb}
RENAMED
@@ -4,9 +4,9 @@
 #   on agent, 'puppet agent -t'
 # }
 module Enumerable
-  def each_in_parallel(&block)
-    if Process.respond_to?(:fork)
-      method_sym
+  def each_in_parallel(method_sym=nil, &block)
+    if Process.respond_to?(:fork) && size > 1
+      method_sym ||= "#{caller_locations[0]}"
       each do |item|
         out = InParallel._execute_in_parallel(method_sym) {block.call(item)}
         puts "'each_in_parallel' forked process for '#{method_sym}' - PID = '#{out[:pid]}'\n"
@@ -14,7 +14,7 @@ module Enumerable
       # return the array of values, no need to look up from the map.
       return InParallel.wait_for_processes.values
     end
-    puts 'Warning: Fork is not supported on this OS, executing block normally'
+    puts 'Warning: Fork is not supported on this OS, executing block normally' unless Process.respond_to? :fork
     block.call
    each(&block)
  end
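The new default `method_sym ||= "#{caller_locations[0]}"` labels each forked block with the caller's source location when no explicit name is given. How that defaulting behaves can be sketched in isolation (`default_label` is a hypothetical stand-in, not a gem method):

```ruby
# caller_locations returns structured caller frames; interpolating frame 0
# yields a "file:line:in `method'" string, which makes a readable default label.
def default_label(explicit = nil)
  explicit || "#{caller_locations[0]}"
end

puts default_label             # e.g. "script.rb:10:in `<main>'"
puts default_label('custom')   # an explicit label wins over the caller location
```

This is why console output from `each_in_parallel` identifies which call site forked each process even when the block has no name.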
metadata
CHANGED
@@ -1,54 +1,56 @@
 --- !ruby/object:Gem::Specification
 name: in-parallel
 version: !ruby/object:Gem::Version
-  version: 0.1.
+  version: 0.1.2
 platform: ruby
 authors:
 - samwoods1
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2016-05-
+date: 2016-05-27 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: bundler
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - ~>
+    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.11'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - ~>
+    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.11'
 - !ruby/object:Gem::Dependency
   name: rake
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - ~>
+    - - "~>"
      - !ruby/object:Gem::Version
        version: '10.0'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - ~>
+    - - "~>"
      - !ruby/object:Gem::Version
        version: '10.0'
-description:
-
-
-
-
-
-
-
-
-
-
+description: The other Ruby librarys that do parallel execution all support one primary
+  use case - crunching through a large queue of small tasks as quickly and efficiently
+  as possible. This library primarily supports the use case of needing to run a few
+  larger tasks in parallel and managing the stdout to make it easy to understand which
+  processes are logging what. This library was created to be used by the Beaker test
+  framework to enable parallel execution of some of the framework's tasks, and allow
+  people within thier tests to execute code in parallel when wanted. This solution
+  does not check to see how many processors you have, it just forks as many processes
+  as you ask for. That means that it will handle a handful of parallel processes well,
+  but could definitely overload your system with ruby processes if you try to spin
+  up a LOT of processes. If you're looking for something simple and light-weight and
+  on either linux or mac (forking processes is not supported on Windows), then this
+  solution could be what you want.
 email:
 - sam.woods@puppetlabs.com
 executables: []
@@ -61,8 +63,8 @@ files:
 - Rakefile
 - in_parallel.gemspec
 - in_parallel/version.rb
-- lib/enumerable.rb
 - lib/in_parallel.rb
+- lib/parallel_enumerable.rb
 homepage: https://github.com/samwoods1/in-parallel
 licenses:
 - MIT
@@ -73,17 +75,17 @@ require_paths:
 - lib
 required_ruby_version: !ruby/object:Gem::Requirement
   requirements:
-  - -
+  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
-  - -
+  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
 requirements: []
 rubyforge_project:
-rubygems_version: 2.
+rubygems_version: 2.5.1
 signing_key:
 specification_version: 4
 summary: A lightweight library to execute a handful of tasks in parallel with simple