ProcessorPool 1.0.0

Binary file
@@ -0,0 +1,6 @@
+ == 1.0.0a / 2008-01-27
+
+ * 1 major enhancement
+   * Birthday!
+   * Initial check-in of gem
+
@@ -0,0 +1,13 @@
+ .DS_Store
+ History.txt
+ Manifest.txt
+ README.txt
+ Rakefile
+ bin/processor_pool
+ lib/processor/processor.rb
+ lib/processor/processor_instance_info.rb
+ lib/processor/remote_directory.rb
+ lib/processor_pool.rb
+ test/helper.rb
+ test/test_processor.rb
+ test/test_processor_instance_info.rb
@@ -0,0 +1,123 @@
+ ProcessorPool
+ by Ari Lerner and Ron Evans
+ CitrusByte
+ http://blog.citrusbyte.com
+
+ == DESCRIPTION:
+
+ Provides a simple load-balancing solution for Amazon's EC2 - S3 backend.
+
+ Example uses include an upload server, or an image and video processing and transcoding server, each running separately from (and without blocking) the application server.
+
+ == FEATURES/PROBLEMS:
+
+ * Need to find a keep-alive solution
+
+ == SYNOPSIS:
+
+   require "rubygems"
+   require "lib/processor.rb"
+
+   # Set the access_key_id and the secret_access_key provided by Amazon
+   access_key_id, secret_access_key = '11H31W1044QZ9QQF48G2', 'a8Xb46qggIeDfm1y/RUL48Ullq87MxgOKCzCmw1c'
+
+   # And start the processors!
+   ProcessorPool.start(access_key_id, secret_access_key)
+
+   # ProcessorPool is built on top of Sinatra and uses the same idioms.
+   # For more information, check out Sinatra at http://sinatra.rubyforge.org/:
+
+   get '/' do
+     "Processing data"
+   end
+
+ == IMPLEMENTATION
+
+ # Uploading and processing
+
+ # To use it without inline uploading:
+ # Upload as you normally would, but in the controller pass the variables to the offsite storage and continue on. This won't block the Rails mongrel from continuing. You do still need to check that the file actually gets uploaded and processed at this point.
+
+ # Inline uploading:
+ # In Rails, you can redirect the upload to the processor pool in the form. For instance:
+
+   <% form_for @media, :id => "fileform", :url => upload_url, :multipart => true do %>
+
+ # To do the inline uploading, you should call a javascript method to interact with the user and let them know the uploading has begun. For instance:
+
+   <button class="button" id="buttonUpload" onclick="return beginFileUpload();">Upload</button>
+
+   <script type="text/javascript">
+     function beginFileUpload()
+     {
+       // todo: some kind of feedback that upload has started...
+       $("#uploading").show();
+       ...
+
+ # For ease of use, we recommend embedding this helper function somewhere:
+
+   def url_for_upload_server
+     if RAILS_ENV == 'production'
+       # determine the upload server to try to use
+       p = Processors.get_random_processor(::SERVER_POOL_BUCKET)
+       "http://#{p.hostname}:4567/new"
+     else
+       "http://localhost:4567/new"
+     end
+   end
+
+ # The rest is an exercise for you to complete.
+
+ # Other ideas using a similar methodology: upload videos and do the processing there, or run any sort of processor-intensive computation off-site.
+
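The selection logic behind url_for_upload_server can be sketched without Rails or S3. This is only a stand-in for Processors.get_random_processor (the hostnames and the Candidate struct are invented for illustration); it shows the uniform random pick and the URL shape:

```ruby
# Stand-in for a registered processor: only the hostname matters for
# building the upload URL (port 4567 is Sinatra's default).
Candidate = Struct.new(:hostname) do
  def url_for
    "http://#{hostname}:4567/new"
  end
end

# A pretend pool of registered processors (names are made up).
pool = [Candidate.new("ec2-a.example.com"), Candidate.new("ec2-b.example.com")]

# Uniform random choice, as in Processors.get_random_processor.
chosen = pool[rand(pool.size)]

puts chosen.url_for   # e.g. "http://ec2-a.example.com:4567/new"
```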
+ == CONVENIENCE METHODS
+
+ # There are a number of convenience methods included with processor_pool. It is intended to work with Amazon's S3 service.
+
+   AWS::S3::S3Object.store('/folder/to/greeting.txt', 'hello world!', 'ron', :use_virtual_directories => true)
+
+ # Alternatively, you can use the included RemoteDirectory class
+
+   dir = RemoteDirectory.new("/folder/name")
+
+   # Save one file to the S3 bucket
+   dir.save_file_to_s3("movie.mpg", "movies", "encoded")
+
+   # Copy all the files in the directory to the S3 bucket
+   dir.copy_to_s3("movies", "encoded")
+
+   dir.files # Returns a list of all the files in the directory
+
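RemoteDirectory#files is plain Dir.glob underneath. A minimal stand-alone version (the list_files helper and the file names are illustrative, not part of the gem) behaves like this:

```ruby
require 'fileutils'
require 'tmpdir'

# Minimal version of RemoteDirectory#files: list the entries of a
# directory via Dir.glob, returning [] when the directory is unusable.
def list_files(dirname)
  return [] unless dirname && File.directory?(dirname)
  Dir.chdir(dirname) { Dir.glob("*") }
end

Dir.mktmpdir do |dir|
  FileUtils.touch(File.join(dir, "movie.mpg"))
  FileUtils.touch(File.join(dir, "clip.mov"))
  p list_files(dir).sort  # => ["clip.mov", "movie.mpg"]
end
```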
+ == REQUIREMENTS:
+
+ * aws/s3
+ * net/ping
+ * open-uri
+ * sinatra
+
+ == INSTALL:
+
+ * sudo gem install processor_pool
+
+ == LICENSE:
+
+ (The MIT License)
+
+ Copyright (c) 2008 Ari Lerner, CitrusByte and Ron Evans
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ 'Software'), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
@@ -0,0 +1,19 @@
+ # -*- ruby -*-
+
+ require 'rubygems'
+ require 'hoe'
+ $:.unshift(File.dirname(__FILE__) + "/lib")
+ require 'processor_pool.rb'
+
+ Hoe.new('ProcessorPool', ProcessorPool::VERSION) do |p|
+   p.rubyforge_name = 'processorpool'
+   p.name = "ProcessorPool"
+   p.author = 'Ari Lerner and Ron Evans'
+   p.email = 'ari.lerner@citrusbyte.com'
+   p.summary = 'Provides a simple load-balancing solution for Amazon\'s EC2 - S3 backend'
+   p.description = p.paragraphs_of('README.txt', 2..5).join("\n\n")
+   # take the last line of the README's first paragraph (the URL), not the byline
+   p.url = p.paragraphs_of('README.txt', 0).first.split(/\n/).last
+   p.changes = p.paragraphs_of('History.txt', 0..1).join("\n\n")
+ end
+
+ # vim: syntax=Ruby
File without changes
@@ -0,0 +1,107 @@
+ require 'rubygems'
+ require 'aws/s3'
+ require 'net/ping'
+ require File.join(File.dirname(__FILE__), *%w[processor_instance_info])
+
+ =begin rdoc
+ Processors is the list of Processors from which
+ a process can choose.
+ =end
+ class Processors
+   # returns an array of the available active processors
+   def self.registered(bucket)
+     processors = []
+     # get the list of servers that have registered themselves
+     AWS::S3::Bucket.objects(bucket).each {|o|
+       processors << Processor.new(bucket, o.key)
+     }
+     processors
+   end
+
+   # returns a random processor from the pool of processors
+   def self.get_random_processor(bucket)
+     processors = registered(bucket)
+     return nil if processors.empty?
+     processors[rand(processors.size)]
+   end
+ end
+
+ =begin rdoc
+ A Processor is a link in the processing chain. This is where
+ the processor gets chosen and the application runs on top of
+ the chosen processor.
+ =end
+ class Processor
+   attr_reader :bucket, :hostname
+
+   def initialize(bucket, hostname = Processor.get_current_hostname)
+     @bucket = bucket
+     @hostname = hostname
+   end
+
+   # make this processor available to the pool
+   def register
+     AWS::S3::S3Object.store(@hostname, 'ready', @bucket) if @hostname && @bucket
+   end
+
+   # remove this processor from the pool
+   def unregister
+     AWS::S3::S3Object.delete(@hostname, @bucket) if @hostname && @bucket
+   end
+
+   # check to see if this processor has been properly registered
+   def registered?
+     AWS::S3::S3Object.exists?(@hostname, @bucket)
+   end
+
+   # is this processor working?
+   def active?
+     # check to see if this server is pingable
+     Net::PingExternal.new(@hostname).ping
+   end
+
+   # equality operator: two processors are the same if their hostnames match
+   def ==(other)
+     @hostname == other.hostname
+   end
+
+   # inequality operator
+   def not_equal?(other)
+     @hostname != other.hostname
+   end
+
+   # return the current unix hostname
+   def self.get_current_hostname
+     `hostname`.chomp
+   end
+
+   # return the URL needed to try to upload to this processor
+   def url_for
+     "http://#{@hostname}:4567/new"
+   end
+ end
+
+ =begin rdoc
88
+ MissingInformation exception is thrown when there is missing information
89
+ required to run the processor
90
+ =end
91
+ class MissingInformation < Exception
92
+ @@missing_info_messages = {
93
+ :access_key_id => "Must provide an access_key_id",
94
+ :secret_access_key => "Must provide a secret_access_key"
95
+ }
96
+ attr_reader :i
97
+ # Add the missing information key here
98
+ def initialize(i=nil)
99
+ super
100
+ @i = i
101
+ end
102
+ # Takes the missing symbol key and translates it to a message
103
+ def message
104
+ "Missing information: #{@@missing_info_messages[@i]}"
105
+ end
106
+ alias :to_s :message
107
+ end
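Registration checks elsewhere in the gem rely on Array#include?, which in turn uses the hostname-based Processor#== above. The comparison can be traced in isolation (the Node struct and hostnames here are stand-ins, since a real Processor pulls in aws/s3):

```ruby
# Hostname-based equality, mirroring Processor#==: the bucket is ignored,
# so two entries with the same hostname count as the same processor.
Node = Struct.new(:bucket, :hostname) do
  def ==(other)
    hostname == other.hostname
  end
end

pool = [Node.new("pool", "host-a"), Node.new("pool", "host-b")]
candidate = Node.new("other-bucket", "host-a")

# Array#include? calls element == candidate, hitting the override above.
p pool.include?(candidate)  # => true
```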
@@ -0,0 +1,23 @@
+ require 'open-uri'
+
+ # Reads the EC2 instance metadata / user-data for this processor instance.
+ # User-data is expected as pipe-delimited fields:
+ #   server_pool_bucket|server_host_name|storage_bucket
+ class ProcessorInstanceInfo
+   def self.host_name
+     @@host_name ||= open('http://169.254.169.254/latest/meta-data/public-hostname').read
+   end
+
+   def self.server_pool_bucket
+     @@server_pool_bucket ||= open('http://169.254.169.254/latest/user-data').read.split('|')[0]
+   end
+
+   def self.server_host_name
+     @@server_host_name ||= open('http://169.254.169.254/latest/user-data').read.split('|')[1]
+   end
+
+   def self.storage_bucket
+     @@storage_bucket ||= open('http://169.254.169.254/latest/user-data').read.split('|')[2]
+   end
+ end
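The pipe-delimited user-data layout implied by the split indices above can be exercised without touching the metadata service (the field values here are invented):

```ruby
# Hypothetical user-data string in the layout ProcessorInstanceInfo's
# accessors imply: "server_pool_bucket|server_host_name|storage_bucket".
user_data = "my-pool-bucket|app.example.com|my-storage-bucket"

server_pool_bucket, server_host_name, storage_bucket = user_data.split('|')

p server_pool_bucket  # => "my-pool-bucket"
p server_host_name    # => "app.example.com"
p storage_bucket      # => "my-storage-bucket"
```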
@@ -0,0 +1,85 @@
+ require 'fileutils'
+ require 'aws/s3'
+
+ # This is an extension to S3Object that supports the emerging 'standard' for virtual folders on S3.
+ # For example:
+ #   S3Object.store('/folder/to/greeting.txt', 'hello world!', 'ron', :use_virtual_directories => true)
+ #
+ # This will create an object in S3 that mimics a folder, as far as the S3 GUI browsers like
+ # the S3 Firefox Extension or Bucket Explorer are concerned.
+ module AWS
+   module S3
+     class S3Object
+       class << self
+
+         alias :original_store :store
+         def store(key, data, bucket = nil, options = {})
+           store_folders(key, bucket, options) if options[:use_virtual_directories]
+           original_store(key, data, bucket, options)
+         end
+
+         def store_folders(key, bucket = nil, options = {})
+           folders = key.split("/")
+           folders.slice!(0)
+           folders.pop
+           current_folder = "/"
+           folders.each {|folder|
+             current_folder += folder
+             store_folder(current_folder, bucket, options)
+             current_folder += "/"
+           }
+         end
+
+         def store_folder(key, bucket = nil, options = {})
+           original_store(key + "_$folder$", "", bucket, options) # store the magic entry that emulates a folder
+         end
+       end
+     end
+   end
+ end
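The folder-marker keys that store_folders derives from an object key are pure string manipulation, so they can be traced outside the S3 hook. This sketch repeats the same split/slice/pop walk (folder_keys is an illustrative name, not part of the gem):

```ruby
# Trace which "virtual folder" marker keys store_folders would create
# for a given object key (no S3 calls involved).
def folder_keys(key)
  folders = key.split("/")
  folders.slice!(0)   # drop the empty segment before the leading "/"
  folders.pop         # drop the filename itself
  keys = []
  current_folder = "/"
  folders.each do |folder|
    current_folder += folder
    keys << current_folder + "_$folder$"  # the magic folder-marker suffix
    current_folder += "/"
  end
  keys
end

p folder_keys("/folder/to/greeting.txt")
# => ["/folder_$folder$", "/folder/to_$folder$"]
```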
+
+
+ class RemoteDirectory
+   # pass in the string with the name of the directory, like "/path/to/dir".
+   def initialize(dirname)
+     @dirname = dirname
+     @deleted = false
+   end
+
+   # returns all of the files in this directory. wraps Dir.glob.
+   def files
+     result = []
+     return result if @dirname.nil? || @deleted
+     Dir.chdir(@dirname) do
+       result = Dir.glob("*")
+     end
+     result
+   end
+
+   # copy each of the files in this directory up to S3. pass in the bucket like "my-bucket",
+   # and the prefix for storing files within the bucket like "path/to"
+   def copy_to_s3(bucket, prefix, use_virtual_directories = false)
+     files.each {|f|
+       save_file_to_s3(f, bucket, prefix, use_virtual_directories)
+     }
+   end
+
+   # delete all of the files in this directory, and the directory itself
+   def delete!
+     # todo: implement this convenience method
+
+     # return if @deleted
+     # FileUtils.rm files
+     # Dir.rmdir(@dirname)
+     # @deleted = true
+   end
+
+   # save an individual file up to S3 storage
+   def save_file_to_s3(filename, bucket, prefix, use_virtual_directories = false)
+     options = {:access => :public_read}
+     options[:use_virtual_directories] = true if use_virtual_directories
+     AWS::S3::S3Object.store("/" + prefix + "/" + filename, open(File.join(@dirname, filename)), bucket, options)
+   end
+ end
@@ -0,0 +1,57 @@
+ require 'open-uri'
+ require "rubygems"
+ require "builder"
+ require File.join(File.dirname(__FILE__), *%w[processor processor])
+
+ class ProcessorPool
+   VERSION = '1.0.0'
+   attr_reader :started
+
+   Dir["processor/**.rb"].each {|f| require f}
+
+   class << self
+     # Initiates the S3 connection
+     def start(access_key_id, secret_access_key)
+       raise MissingInformation.new(:access_key_id) unless access_key_id
+       raise MissingInformation.new(:secret_access_key) unless secret_access_key
+
+       get_instance_data
+       establish_connection_with_s3(access_key_id, secret_access_key)
+
+       current_processor = Processor.new(@@server_pool_bucket, @@host_name)
+       processors = Processors.registered(@@server_pool_bucket)
+
+       register_self_to_pool(processors, current_processor)
+       check_other_processors_in_pool(processors, current_processor)
+
+       @started = true
+       # Require sinatra for the user to use
+       require "sinatra"
+     end
+
+     # determine which server pool to use by making a REST query to get the instance metadata
+     def get_instance_data
+       @@server_pool_bucket = ProcessorInstanceInfo.server_pool_bucket
+       @@host_name = ProcessorInstanceInfo.host_name
+     end
+
+     # Establish connection with S3 using the user-supplied credentials
+     def establish_connection_with_s3(access_key_id, secret_access_key)
+       AWS::S3::Base.establish_connection!(
+         :access_key_id => access_key_id,
+         :secret_access_key => secret_access_key,
+         :persistent => false
+       )
+     end
+
+     # make ourselves available in the pool, if not yet registered
+     def register_self_to_pool(processors, current_processor)
+       current_processor.register unless processors.include?(current_processor)
+     end
+
+     # check all of the registered processors in pool, except ourselves
+     def check_other_processors_in_pool(processors, current_processor)
+       processors.each {|p|
+         p.unregister if p.not_equal?(current_processor) && !p.active?
+       }
+     end
+   end
+ end
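The registration bookkeeping in start (register yourself if absent, sweep dead peers) can be sketched against an in-memory stand-in for the S3-backed registry. FakePool and all names below are illustrative, not part of the gem:

```ruby
# In-memory stand-in for the S3 bucket used by register/unregister:
# a bucket is just a Hash of hostname => status.
class FakePool
  def initialize
    @bucket = {}
  end
  def register(hostname)
    @bucket[hostname] = 'ready'
  end
  def unregister(hostname)
    @bucket.delete(hostname)
  end
  def registered?(hostname)
    @bucket.key?(hostname)
  end
  def hostnames
    @bucket.keys
  end
end

pool = FakePool.new
pool.register("node-a")
pool.register("node-b")

# start-style sweep: drop peers that are no longer reachable
dead = ["node-b"]
pool.hostnames.each {|h| pool.unregister(h) if dead.include?(h) }

p pool.registered?("node-a")  # => true
p pool.registered?("node-b")  # => false
```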
@@ -0,0 +1,74 @@
+ require File.dirname(__FILE__) + '/../lib/processor_pool'
+
+ %w(test/spec).each do |library|
+   begin
+     require library
+   rescue LoadError
+     STDERR.puts "== Cannot run tests without #{library}"
+   end
+ end
+
+ PRODUCTION_MODE = false
+ SERVER_POOL_BUCKET = "processor_pool_bucket_test" # SET THIS
+ ACCESS_KEY_ID = '' # AND THIS
+ SECRET_ACCESS_KEY = '' # AND FINALLY THIS
+
+ AWS::S3::Base.establish_connection!(
+   :access_key_id => ACCESS_KEY_ID,
+   :secret_access_key => SECRET_ACCESS_KEY
+ )
+
+ begin
+   Processor.new(SERVER_POOL_BUCKET, "test1").register
+ rescue
+   raise "You must set the access_key_id and secret_access_key in test/helper.rb to run these tests"
+ end
+
+
27
+ class Test::Unit::TestCase
28
+ def file_exists?(temp_directory_prefix, test_id, file)
29
+ return File.exists?(File.join(temp_directory_prefix, test_id, file))
30
+ end
31
+
32
+ def temp_directory_prefix
33
+ File.dirname(__FILE__) + "/temp"
34
+ end
35
+
36
+ def copy_working_file(test_id, file, newfile=nil)
37
+ test_id = test_id + "/" + newfile if newfile
38
+ FileUtils.cp file, temp_directory_prefix + "/" + test_id
39
+ end
40
+
41
+ def setup_temp_directory(test_id)
42
+ FileUtils.makedirs(temp_directory_prefix + "/" + test_id) if not File.exists?(temp_directory_prefix + "/" + test_id)
43
+ end
44
+
45
+ def clean_temp_directory(test_id)
46
+ FileUtils.rm Dir.glob(File.join(temp_directory_prefix, test_id, "/*"))
47
+ end
48
+ end
49
+ module Test::Unit::AssertDifference
50
+ def assert_difference(object, method = nil, difference = 1)
51
+ initial_value = object.send(method)
52
+ yield
53
+ assert_equal initial_value + difference, object.send(method), "#{object}##{method}"
54
+ end
55
+
56
+ def assert_no_difference(object, method, &block)
57
+ assert_difference object, method, 0, &block
58
+ end
59
+ end
+ # Weird gem fix
+ module Kernel
+   def memoize(reload = true, storage = nil)
+     storage = "@#{storage || __method__}"
+     if reload
+       instance_variable_set(storage, nil)
+     elsif cache = instance_variable_get(storage)
+       return cache
+     end
+     instance_variable_set(storage, yield)
+   end
+ end
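A self-contained check of the caching behavior memoize is meant to provide: cache the block's result under an instance variable, and skip the block on a hit. The Counter class is purely illustrative, and the memoize shown here caches correctly in the non-reload branch:

```ruby
module Kernel
  # Cache the block's result in an instance variable; when reload is
  # false, return the cached value without re-running the block.
  def memoize(reload = true, storage = nil)
    storage = "@#{storage || __method__}"
    if reload
      instance_variable_set(storage, nil)
    elsif cache = instance_variable_get(storage)
      return cache
    end
    instance_variable_set(storage, yield)
  end
end

class Counter
  attr_reader :calls
  def initialize
    @calls = 0
  end
  def value
    memoize(false, :value) { @calls += 1; 42 }
  end
end

c = Counter.new
c.value
c.value
p c.calls  # => 1  (the block ran only once)
```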
@@ -0,0 +1,67 @@
+ require File.dirname(__FILE__) + '/helper'
+
+ context "Processors when there are no processors registered" do
+   specify "should return an array of registered pools when requested" do
+     Processors.registered(SERVER_POOL_BUCKET).class.should == Array
+   end
+   specify "should return an empty array if there are no registered pools" do
+     Processors.registered(SERVER_POOL_BUCKET).empty?.should == true
+   end
+   specify "should return nil if there are no processors when asking for a random processor" do
+     Processors.get_random_processor(SERVER_POOL_BUCKET).should == nil
+   end
+ end
+
+ context "Processors when there are processors registered" do
+   before(:all) do
+     Processor.new(SERVER_POOL_BUCKET, "test1").register
+     Processor.new(SERVER_POOL_BUCKET, "test2").register
+     Processor.new(SERVER_POOL_BUCKET, "test3").register
+   end
+   after(:all) do
+     Processors.registered(SERVER_POOL_BUCKET).each {|p|
+       p.unregister
+     }
+   end
+   specify "should return a non-empty array when there are registered pools" do
+     Processors.registered(SERVER_POOL_BUCKET).empty?.should == false
+   end
+   specify "should return a random processor when asking for a random processor" do
+     Processors.get_random_processor(SERVER_POOL_BUCKET).class.should == Processor
+   end
+ end
+
+ context "Processor in general, when registering and unregistering" do
+   setup do
+     @processor = Processor.new(SERVER_POOL_BUCKET)
+   end
+   after(:all) do
+     Processors.registered(SERVER_POOL_BUCKET).each {|p|
+       p.unregister
+     }
+   end
+   specify "should initialize with a bucket" do
+     @processor.bucket.should == SERVER_POOL_BUCKET
+   end
+   specify "should increase the pool size when registering" do
+     cnt = Processors.registered(SERVER_POOL_BUCKET).size
+     @processor.register
+     (cnt + 1).should == Processors.registered(SERVER_POOL_BUCKET).size
+   end
+   specify "should decrease the pool size when unregistering" do
+     @processor.register
+     cnt = Processors.registered(SERVER_POOL_BUCKET).size
+     @processor.unregister
+     (cnt - 1).should == Processors.registered(SERVER_POOL_BUCKET).size
+   end
+   specify "should register within the pool" do
+     @processor.register
+     Processors.registered(SERVER_POOL_BUCKET).include?(@processor).should == true
+   end
+   specify "should be able to tell if it is registered" do
+     @processor.register
+     @processor.registered?.should == true
+   end
+   specify "should be able to tell that it is not registered" do
+     @processor.unregister
+     @processor.registered?.should == false
+   end
+ end
@@ -0,0 +1,17 @@
+ require File.dirname(__FILE__) + '/helper'
+
+ context "Processor Instance Info class methods" do
+   specify "it should respond to the method host_name" do
+     ProcessorInstanceInfo.respond_to?(:host_name).should == true
+   end
+   specify "should respond to the method server_pool_bucket" do
+     ProcessorInstanceInfo.respond_to?(:server_pool_bucket).should == true
+   end
+   specify "should respond to the method server_host_name" do
+     ProcessorInstanceInfo.respond_to?(:server_host_name).should == true
+   end
+   specify "should respond to the method storage_bucket" do
+     ProcessorInstanceInfo.respond_to?(:storage_bucket).should == true
+   end
+ end
+
metadata ADDED
@@ -0,0 +1,70 @@
+ --- !ruby/object:Gem::Specification
+ rubygems_version: 0.9.4
+ specification_version: 1
+ name: ProcessorPool
+ version: !ruby/object:Gem::Version
+   version: 1.0.0
+ date: 2008-02-12 00:00:00 -08:00
+ summary: Provides a simple load-balancing solution for Amazon's EC2 - S3 backend
+ require_paths:
+ - lib
+ email: ari.lerner@citrusbyte.com
+ homepage: " by Ari Lerner and Ron Evans"
+ rubyforge_project: processorpool
+ description: "Usage would include an upload server, a image and video processing and transcoding server, all non-blocking separate from an application server. == FEATURES/PROBLEMS: * Need to find a keep-alive solution == SYNOPSIS: require \"rubygems\" require \"lib/processor.rb\""
+ autorequire:
+ default_executable:
+ bindir: bin
+ has_rdoc: true
+ required_ruby_version: !ruby/object:Gem::Version::Requirement
+   requirements:
+   - - ">"
+     - !ruby/object:Gem::Version
+       version: 0.0.0
+   version:
+ platform: ruby
+ signing_key:
+ cert_chain:
+ post_install_message:
+ authors:
+ - Ari Lerner and Ron Evans
+ files:
+ - .DS_Store
+ - History.txt
+ - Manifest.txt
+ - README.txt
+ - Rakefile
+ - bin/processor_pool
+ - lib/processor/processor.rb
+ - lib/processor/processor_instance_info.rb
+ - lib/processor/remote_directory.rb
+ - lib/processor_pool.rb
+ - test/helper.rb
+ - test/test_processor.rb
+ - test/test_processor_instance_info.rb
+ test_files:
+ - test/test_processor.rb
+ - test/test_processor_instance_info.rb
+ rdoc_options:
+ - --main
+ - README.txt
+ extra_rdoc_files:
+ - History.txt
+ - Manifest.txt
+ - README.txt
+ executables:
+ - processor_pool
+ extensions: []
+
+ requirements: []
+
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: hoe
+   version_requirement:
+   version_requirements: !ruby/object:Gem::Version::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 1.5.0
+     version: