nexia_worker_roulette 0.1.11

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: 4ea8533bab4bfc550f23023bfa81f0befbcd107d
+   data.tar.gz: 0daa96c1b20b1c1621bbb4098b5726ff645e283a
+ SHA512:
+   metadata.gz: 507641dc68cffaaccdb6c5077e38039af0afce8b348f2c240a60048b76b04f3472a5246b04bfd671c6b3a41b5ee080e469844cc95020e6cc8f1457f8faca1d4f
+   data.tar.gz: 42b459a4d9288efd387c5c15d56c490c857de003d09d16d1f2aeec6bd3f2d39e4afa9ffd12a9faddd7454ae46c78d798d273ffc1ff93455fdb1e78568b4d722d
data/.agignore ADDED
@@ -0,0 +1,2 @@
+ coverage/*
+ vendor/bundle/*
data/.gitignore ADDED
@@ -0,0 +1,21 @@
+ *.gem
+ *.rbc
+ .bundle
+ .config
+ .yardoc
+ Gemfile.lock
+ InstalledFiles
+ _yardoc
+ coverage
+ doc/
+ lib/bundler/man
+ pkg
+ rdoc
+ spec/reports
+ test/tmp
+ test/version_tmp
+ tmp
+ tmp/*
+ coverage/*
+ vendor/bundle
+ results.xml
data/.rspec ADDED
@@ -0,0 +1,2 @@
+ --color
+ --format doc
data/.simplecov ADDED
@@ -0,0 +1,16 @@
+ require 'simplecov'
+ require 'simplecov-rcov'
+
+ class SimpleCov::Formatter::MergedFormatter
+   def format(result)
+     SimpleCov::Formatter::HTMLFormatter.new.format(result)
+     SimpleCov::Formatter::RcovFormatter.new.format(result)
+   end
+ end
+ SimpleCov.formatter = SimpleCov::Formatter::MergedFormatter
+
+ SimpleCov.start do
+   add_filter "_spec.rb"
+
+   SimpleCov.minimum_coverage 99
+ end
data/Gemfile ADDED
@@ -0,0 +1,9 @@
+ source 'https://rubygems.org'
+
+ # Specify your gem's dependencies in worker_roulette.gemspec
+ gemspec
+
+ # Gemfile
+ group :development do
+   gem 'evented-spec', :git => 'https://github.com/ruby-amqp/evented-spec.git'
+ end
data/Guardfile ADDED
@@ -0,0 +1,3 @@
+ guard 'rspec' do
+   watch(%r{.*})
+ end
data/LICENSE.txt ADDED
File without changes
data/README.md ADDED
@@ -0,0 +1,114 @@
+ # WorkerRoulette
+
+ WorkerRoulette is designed to allow large numbers of unique devices, processes, users, or whatever to communicate over individual channels without messing up the order of their messages. WorkerRoulette was created to solve two otherwise hard problems. First, other messaging solutions (I'm looking at you, RabbitMQ) are not designed to handle very large numbers of queues (millions); because WorkerRoulette is built on top of Redis, we have successfully tested it running with millions of queues. Second, other messaging systems assume one (or more) of three things: 1. Your message consumers know the routing key of the messages they are interested in processing; 2. Your messages can wait so that only one consumer processes them at a time; 3. You love to write complicated code to put your messages back in order. Sometimes, none of these things is true, and that is where WorkerRoulette comes in.
+
+ WorkerRoulette lets you have thousands of competing consumers (distributed over as many machines as you'd like) processing ordered messages from millions of totally unknown message providers. It does all this and ensures that the messages sent from each message provider are processed in exactly the order it sent them.
+
+ ## Use
+ ```ruby
+ size_of_connection_pool = 100
+
+ # Start it up
+ # The config takes size for the connection pool size, evented to specify whether to use evented redis or not,
+ # polling_time to dictate the number of seconds to wait before checking the job queue
+ # (use higher numbers for high traffic systems), then the normal redis config
+ WorkerRoulette.start(size: size_of_connection_pool, evented: false, host: 'localhost', timeout: 5, db: 1, polling_time: 2)
+
+ # Enqueue some work
+ sender_id = :shady
+ foreman = WorkerRoulette.foreman(sender_id)
+ foreman.enqueue_work_order('hello')
+
+ # Pull it off (headers always include the unique id of the sender)
+ tradesman = WorkerRoulette.tradesman
+ work_orders = tradesman.work_orders! # drain the queue of the next available sender
+ work_orders # => {'headers' => {'sender' => :shady}, 'payload' => ['hello']}
+
+ # Enqueue some more from someone else
+ other_sender_id = :the_real_slim_shady
+ other_foreman = WorkerRoulette.foreman(other_sender_id, 'some_namespace')
+ other_foreman.enqueue_work_order({'can you get me' => 'the number nine?'}, {'custom' => 'headers'})
+
+ # Have the same worker pull that off
+ work_orders = tradesman.work_orders! # drain the queue of the next available sender
+ work_orders # => {'headers' => {'sender' => :the_real_slim_shady, 'custom' => 'headers'},
+             #     'payload' => [{'can you get me' => 'the number nine?'}]}
+
+ # Have your workers wait for work to come in
+ foreman.enqueue_work_order('will I see you later?')
+ foreman.enqueue_work_order('can you give me back my dime?')
+
+ # And they will pull it off as it comes, as long as it comes. If the job board is empty (i.e. there is no work to do),
+ # this method will poll redis every "polling_time" seconds until it finds work,
+ # at which point it will continue processing jobs until the job board is empty again
+ tradesman.wait_for_work_orders do |work_orders| # drain the queue of the next available sender
+   work_orders.first # => ['will I see you later?', 'can you give me back my dime?']
+ end
+ ```
+
+ ## Channels
+ You can also namespace your work orders over a channel, in case you have several sorts of competing consumers who should not step on each other's toes:
+ ```ruby
+ tradesman = WorkerRoulette.tradesman('good_channel')
+ tradesman.should_receive(:work_orders!).and_call_original
+
+ good_foreman = WorkerRoulette.foreman('foreman', 'good_channel')
+ bad_foreman  = WorkerRoulette.foreman('foreman', 'bad_channel')
+
+ good_foreman.enqueue_work_order('some old fashion work')
+ bad_foreman.enqueue_work_order('evil biddings you should not carry out')
+
+ tradesman.wait_for_work_orders do |work|
+   work.to_s.should match("some old fashion work") # only got the work from the good foreman
+   work.to_s.should_not match("evil")              # channels let us ignore the other's evil orders
+ end
+ ```
+
+ ## Performance
+ Running the performance tests on my laptop, the numbers break down like this:
+ ### Evented
+ - Polling: ~11,500 read-write round-trips / second
+ - Manual: ~6,000 read-write round-trips / second
+
+ ### Non-Evented
+ - Polling: ~10,000 read-write round-trips / second
+ - Manual: ~5,500 read-write round-trips / second
+
+ To run the perf tests yourself, run `bundle exec rake spec:perf`
+
+ ## Redis Version
+ WorkerRoulette uses Redis' Lua scripting feature to achieve such high throughput and therefore requires a version of Redis that supports Lua scripting (>= Redis 2.6).
+
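+ Under the hood, scripts are cached on the Redis server and invoked by their SHA1 digest. As a rough illustration of that pattern (not part of this gem's API), here is a minimal sketch using the `redis` gem directly; the script and key names below are made up for the example:
+ ```ruby
+ require 'redis'
+ require 'digest/sha1'
+
+ redis  = Redis.new
+ # A stand-in script: push a payload onto a per-sender list and return the new length
+ script = "redis.call('RPUSH', KEYS[1], ARGV[1]); return redis.call('LLEN', KEYS[1])"
+ sha    = Digest::SHA1.hexdigest(script)
+
+ begin
+   redis.evalsha(sha, ['example_sender'], ['hello'])  # run the cached script; no script body over the wire
+ rescue Redis::CommandError
+   redis.eval(script, ['example_sender'], ['hello'])  # first run on a fresh server: EVAL also caches it under its SHA1
+ end
+ ```
+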
+ ## Caveat Emptor
+ While WorkerRoulette does promise to keep the messages of each sender processed in order by competing consumers, it does NOT guarantee the order in which the queues themselves will be processed. In general, work is processed in FIFO order, but for performance reasons this has been left as a loose FIFO. For example, if Abdul enqueues some ordered messages ('1', '2', and '3') and then so do Mark and Wanda, Mark's messages may be processed first, then likely Abdul's, and then Wanda's. However, even though Mark jumped the line, Abdul's messages will still be processed in the order he enqueued them ('1', '2', then '3').
+
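+ For example, a rough sketch of this guarantee using the API from the Use section above (the sender names are illustrative):
+ ```ruby
+ abdul = WorkerRoulette.foreman(:abdul)
+ mark  = WorkerRoulette.foreman(:mark)
+
+ ['1', '2', '3'].each { |msg| abdul.enqueue_work_order(msg) }
+ ['a', 'b', 'c'].each { |msg| mark.enqueue_work_order(msg) }
+
+ tradesman = WorkerRoulette.tradesman
+
+ # Either sender's queue may be drained first (the loose FIFO between senders)...
+ first_batch  = tradesman.work_orders!
+ second_batch = tradesman.work_orders!
+
+ # ...but each batch keeps its sender's payloads in the order they were enqueued, e.g.
+ # {'headers' => {'sender' => :abdul}, 'payload' => ['1', '2', '3']}
+ ```
+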
+ ## Installation
+
+ Add this line to your application's Gemfile:
+
+     gem 'worker_roulette'
+
+ And then execute:
+
+     $ bundle
+
+ Or install it yourself as:
+
+     $ gem install worker_roulette
+
+ ## Run the specs
+
+     $ bundle exec rake spec:ci
+
+ ## Run the performance tests
+
+     $ bundle exec rake spec:perf
+
+
+ ## Contributing
+
+ 1. Fork it
+ 2. Create your feature branch (`git checkout -b my-new-feature`)
+ 3. Commit your changes (`git commit -am 'Add some feature'`)
+ 4. Push to the branch (`git push origin my-new-feature`)
+ 5. Create new Pull Request
data/Rakefile ADDED
@@ -0,0 +1,32 @@
+ #!/usr/bin/env rake
+ require "bundler/gem_tasks"
+ require "rspec/core/rake_task"
+
+ RSpec::Core::RakeTask.new
+
+ task :default => :spec
+ task :test => :spec
+
+ def rspec_out_file
+   require 'rspec_junit_formatter'
+   "-f RspecJunitFormatter -o results.xml"
+ end
+
+ namespace :spec do
+   desc "Run all unit and integration tests"
+   task :ci do
+     rspec_out_file = nil # this local shadows the rspec_out_file helper above, so no JUnit output options are passed here
+     sh "bundle exec rspec #{rspec_out_file} spec"
+   end
+
+   desc "Run perf tests"
+   task :perf do
+     rspec_out_file = nil # unused; the perf script is run directly
+     sh "bundle exec ruby ./spec/benchmark/perf_test.rb"
+   end
+
+   desc "Run all tests and generate coverage xml"
+   task :cov do
+     sh "bundle exec rspec #{rspec_out_file} spec"
+   end
+ end
data/lib/worker_roulette/foreman.rb ADDED
@@ -0,0 +1,71 @@
+ module WorkerRoulette
+   class Foreman
+     attr_reader :sender, :namespace, :channel
+
+     LUA_ENQUEUE_WORK_ORDERS = <<-HERE
+       local counter_key = KEYS[1]
+       local job_board_key = KEYS[2]
+       local sender_key = KEYS[3]
+       local channel = KEYS[4]
+
+       local work_order = ARGV[1]
+       local job_notification = ARGV[2]
+       local redis_call = redis.call
+       local zscore = 'ZSCORE'
+       local incr = 'INCR'
+       local zadd = 'ZADD'
+       local rpush = 'RPUSH'
+       local publish = 'PUBLISH'
+       local zcard = 'ZCARD'
+       local del = 'DEL'
+
+       local function enqueue_work_orders(work_order, job_notification)
+         redis_call(rpush, sender_key, work_order)
+
+         -- called when a work order from a new sender is added
+         if (redis_call(zscore, job_board_key, sender_key) == false) then
+           local count = redis_call(incr, counter_key)
+           redis_call(zadd, job_board_key, count, sender_key)
+         end
+       end
+
+       enqueue_work_orders(work_order, job_notification)
+     HERE
+
+     def initialize(redis_pool, sender, namespace = nil)
+       @sender = sender
+       @namespace = namespace
+       @redis_pool = redis_pool
+       @channel = namespace || WorkerRoulette::JOB_NOTIFICATIONS
+       @lua = Lua.new(@redis_pool)
+     end
+
+     def enqueue_work_order(work_order, headers = {}, &callback)
+       work_order = {'headers' => default_headers.merge(headers), 'payload' => work_order}
+       enqueue_work_order_without_headers(work_order, &callback)
+     end
+
+     def enqueue_work_order_without_headers(work_order, &callback)
+       @lua.call(LUA_ENQUEUE_WORK_ORDERS, [counter_key, job_board_key, sender_key, @channel],
+         [WorkerRoulette.dump(work_order), WorkerRoulette::JOB_NOTIFICATIONS], &callback)
+     end
+
+     def job_board_key
+       @job_board_key ||= WorkerRoulette.job_board_key(@namespace)
+     end
+
+     def counter_key
+       @counter_key ||= WorkerRoulette.counter_key(@namespace)
+     end
+
+     def sender_key
+       @sender_key ||= WorkerRoulette.sender_key(@sender, @namespace)
+     end
+
+     private
+
+     def default_headers
+       Hash['sender' => sender]
+     end
+   end
+ end
data/lib/worker_roulette/lua.rb ADDED
@@ -0,0 +1,50 @@
+ module WorkerRoulette
+   class Lua
+     Thread.main[:worker_roulette_lua_script_cache] = Hash.new
+
+     def initialize(connection_pool)
+       @connection_pool = connection_pool
+     end
+
+     def call(lua_script, keys_accessed = [], args = [], &callback)
+       @connection_pool.with do |redis|
+         evalsha(redis, lua_script, keys_accessed, args, &callback)
+       end
+     end
+
+     def sha(lua_script)
+       Thread.main[:worker_roulette_lua_script_cache][lua_script] ||= Digest::SHA1.hexdigest(lua_script)
+     end
+
+     def cache
+       Thread.main[:worker_roulette_lua_script_cache].dup
+     end
+
+     def clear_cache!
+       Thread.main[:worker_roulette_lua_script_cache] = {}
+     end
+
+     def eval(redis, lua_script, keys_accessed, args, &callback)
+       results = redis.eval(lua_script, keys_accessed.length, *keys_accessed, *args)
+       results.callback(&callback) if callback
+       results.errback {|err_msg| raise EM::Hiredis::RedisError.new(err_msg)}
+     end
+
+     def evalsha(redis, lua_script, keys_accessed, args, &callback)
+       if redis.class == EM::Hiredis::Client
+         results = redis.evalsha(sha(lua_script), keys_accessed.length, *keys_accessed, *args)
+         results.callback(&callback) if callback
+         results.errback {eval(redis, lua_script, keys_accessed, args, &callback)}
+       else
+         begin
+           results = redis.evalsha(sha(lua_script), keys_accessed, args)
+         rescue Redis::CommandError
+           results = redis.eval(lua_script, keys_accessed, args)
+         ensure
+           return callback.call results if callback
+         end
+       end
+       results
+     end
+   end
+ end
data/lib/worker_roulette/tradesman.rb ADDED
@@ -0,0 +1,125 @@
+ module WorkerRoulette
+   class Tradesman
+     attr_reader :last_sender, :remaining_jobs, :timer
+
+     LUA_DRAIN_WORK_ORDERS = <<-HERE
+       local empty_string = ""
+       local job_board_key = KEYS[1]
+       local last_sender_key = KEYS[2] or empty_string
+       local sender_key = ARGV[1] or empty_string
+       local redis_call = redis.call
+       local lock_key_prefix = "L*:"
+       local lock_value = 1
+       local ex = "EX"
+       local nx = "NX"
+       local get = "GET"
+       local set = "SET"
+       local del = "DEL"
+       local lrange = "LRANGE"
+       local zrank = "ZRANK"
+       local zrange = "ZRANGE"
+       local zrem = "ZREM"
+       local zcard = 'ZCARD'
+
+       local function drain_work_orders(job_board_key, last_sender_key, sender_key)
+
+         --kill lock for last_sender_key
+         if last_sender_key ~= empty_string then
+           local last_sender_lock_key = lock_key_prefix .. last_sender_key
+           redis_call(del, last_sender_lock_key)
+         end
+
+         if sender_key == empty_string then
+           sender_key = redis_call(zrange, job_board_key, 0, 0)[1] or empty_string
+
+           -- return if job_board is empty
+           if sender_key == empty_string then
+             return {empty_string, {}, 0}
+           end
+         end
+
+         local lock_key = lock_key_prefix .. sender_key
+         local was_not_locked = redis_call(set, lock_key, lock_value, ex, 3, nx)
+
+         if was_not_locked then
+           local work_orders = redis_call(lrange, sender_key, 0, -1)
+           redis_call(del, sender_key)
+
+           redis_call(zrem, job_board_key, sender_key)
+           local remaining_jobs = redis_call(zcard, job_board_key) or 0
+
+           return {sender_key, work_orders, remaining_jobs}
+         else
+           local sender_index = redis_call(zrank, job_board_key, sender_key)
+           local next_index = sender_index + 1
+           local next_sender_key = redis_call(zrange, job_board_key, next_index, next_index)[1]
+           if next_sender_key then
+             return drain_work_orders(job_board_key, empty_string, next_sender_key)
+           else
+             -- return if job_board is empty
+             return {empty_string, {}, 0}
+           end
+         end
+       end
+
+       return drain_work_orders(job_board_key, last_sender_key, empty_string)
+     HERE
+
+     def initialize(redis_pool, evented, namespace = nil, polling_time = WorkerRoulette::DEFAULT_POLLING_TIME)
+       @evented = evented
+       @polling_time = polling_time
+       @redis_pool = redis_pool
+       @namespace = namespace
+       @channel = namespace || WorkerRoulette::JOB_NOTIFICATIONS
+       @lua = Lua.new(@redis_pool)
+       @remaining_jobs = 0
+     end
+
+     def wait_for_work_orders(&on_message_callback)
+       return unless on_message_callback
+       work_orders! do |work|
+         on_message_callback.call(work) if work.any?
+         if @evented
+           evented_drain_work_queue!(&on_message_callback)
+         else
+           non_evented_drain_work_queue!(&on_message_callback)
+         end
+       end
+     end
+
+     def work_orders!(&callback)
+       @lua.call(LUA_DRAIN_WORK_ORDERS, [job_board_key, @last_sender], [nil]) do |results|
+         sender_key = results[0]
+         work_orders = results[1]
+         @remaining_jobs = results[2]
+         @last_sender = sender_key.split(':').last
+         work = work_orders.map {|work_order| WorkerRoulette.load(work_order)}
+         callback.call work if callback
+         work
+       end
+     end
+
+     def job_board_key
+       @job_board_key ||= WorkerRoulette.job_board_key(@namespace)
+     end
+
+     private
+
+     def evented_drain_work_queue!(&on_message_callback)
+       if remaining_jobs > 0
+         EM.next_tick {wait_for_work_orders(&on_message_callback)}
+       else
+         EM.add_timer(@polling_time) { wait_for_work_orders(&on_message_callback) }
+       end
+     end
+
+     def non_evented_drain_work_queue!(&on_message_callback)
+       if remaining_jobs > 0
+         wait_for_work_orders(&on_message_callback)
+       else
+         sleep @polling_time
+         wait_for_work_orders(&on_message_callback)
+       end
+     end
+   end
+ end
data/lib/worker_roulette/version.rb ADDED
@@ -0,0 +1,3 @@
+ module WorkerRoulette
+   VERSION = '0.1.11'
+ end
data/lib/worker_roulette.rb ADDED
@@ -0,0 +1,96 @@
+ require "worker_roulette/version"
+ require 'oj'
+ require 'redis'
+ require 'hiredis'
+ require 'em-hiredis'
+ require 'connection_pool'
+ require "digest/sha1"
+
+ Dir[File.join(File.dirname(__FILE__),'worker_roulette','**','*.rb')].sort.each { |file| require file.gsub(".rb", "")}
+
+ module WorkerRoulette
+   class WorkerRoulette
+     JOB_BOARD = "job_board"
+     JOB_NOTIFICATIONS = "new_job_ready"
+     DEFAULT_POLLING_TIME = 2
+
+     def self.dump(obj)
+       Oj.dump(obj)
+     rescue Oj::ParseError => e
+       {'error' => e, 'unparsable_string' => obj}
+     end
+
+     def self.load(json)
+       Oj.load(json)
+     rescue Oj::ParseError => e
+       {'error' => e, 'unparsable_string' => json}
+     end
+
+     def self.job_board_key(namespace = nil)
+       "#{namespace + ':' if namespace}#{WorkerRoulette::JOB_BOARD}"
+     end
+
+     def self.sender_key(sender, namespace = nil)
+       "#{namespace + ':' if namespace}#{sender}"
+     end
+
+     def self.counter_key(namespace = nil)
+       "#{namespace + ':' if namespace}counter_key"
+     end
+
+     def self.start(config = {})
+       instance = new(config)
+       instance
+     end
+
+     private_class_method :new
+
+     def initialize(config = {})
+       @redis_config = { host: 'localhost', port: 6379, db: 14, driver: :hiredis, timeout: 5, evented: false, pool_size: 10, polling_time: DEFAULT_POLLING_TIME}.merge(config)
+       @pool_config = Hash[size: @redis_config.delete(:pool_size), timeout: @redis_config.delete(:timeout)]
+       @evented = @redis_config.delete(:evented)
+       @polling_time = @redis_config.delete(:polling_time)
+
+       @foreman_connection_pool = ConnectionPool.new(@pool_config) {new_redis}
+       @tradesman_connection_pool = ConnectionPool.new(@pool_config) {new_redis}
+     end
+
+     def foreman(sender, namespace = nil)
+       raise "WorkerRoulette not Started" unless @foreman_connection_pool
+       Foreman.new(@foreman_connection_pool, sender, namespace)
+     end
+
+     def tradesman(namespace = nil, polling_time = nil)
+       raise "WorkerRoulette not Started" unless @tradesman_connection_pool
+       Tradesman.new(@tradesman_connection_pool, @evented, namespace, polling_time || @polling_time)
+     end
+
+     def tradesman_connection_pool
+       @tradesman_connection_pool
+     end
+
+     def pool_size
+       (@pool_config ||= {})[:size]
+     end
+
+     def redis_config
+       (@redis_config ||= {}).dup
+     end
+
+     def polling_time
+       @polling_time
+     end
+
+     private
+
+     def new_redis
+       if @evented
+         require 'eventmachine'
+         redis = EM::Hiredis::Client.new(@redis_config[:host], @redis_config[:port], @redis_config[:password], @redis_config[:db])
+         redis.connect
+       else
+         Redis.new(@redis_config)
+       end
+     end
+   end
+ end
data/spec/benchmark/perf_test.rb ADDED
@@ -0,0 +1,39 @@
+ require 'worker_roulette'
+
+ def pub(iterations)
+   instance = WorkerRoulette::WorkerRoulette.start(evented: false)
+   work_order = {'ding dong' => "hello_foreman_" * 100}
+   iterations.times do |iteration|
+     sender = 'sender_' + (30_000 * rand).to_i.to_s
+     foreman = instance.foreman(sender, 'good_channel')
+     foreman.enqueue_work_order(work_order)
+     puts "published: #{iteration}" if iteration % 10_000 == 0
+   end
+ end
+
+ def sub(iterations)
+   instance = WorkerRoulette::WorkerRoulette.start(evented: true)
+   @tradesman = instance.tradesman('good_channel')
+   @received = 0
+   @tradesman.wait_for_work_orders do |work|
+     @received += work.length
+     puts @received if @received % (iterations / 10) == 0
+   end
+ end
+
+ def start(action, iterations = 1_000_000)
+   EM.kqueue = true
+   socket_max = 50_000
+   EventMachine.set_descriptor_table_size(socket_max)
+
+   EM.run do
+     Signal.trap("INT") {
+       EM.stop_event_loop
+     }
+     Signal.trap("TERM") {
+       EM.stop_event_loop
+     }
+
+     self.send(action, iterations)
+   end
+ end