dcell 0.0.0 → 0.0.1
- data/.gitignore +2 -0
- data/.rspec +4 -0
- data/CHANGES.md +8 -0
- data/Gemfile +2 -0
- data/README.md +209 -0
- data/Rakefile +3 -0
- data/benchmarks/messaging.rb +67 -0
- data/benchmarks/receiver.rb +37 -0
- data/dcell.gemspec +10 -3
- data/lib/celluloid/README +8 -0
- data/lib/celluloid/zmq.rb +28 -0
- data/lib/celluloid/zmq/mailbox.rb +13 -0
- data/lib/celluloid/zmq/reactor.rb +74 -0
- data/lib/dcell.rb +92 -2
- data/lib/dcell/actor_proxy.rb +4 -0
- data/lib/dcell/application.rb +6 -0
- data/lib/dcell/celluloid_ext.rb +57 -0
- data/lib/dcell/directory.rb +23 -0
- data/lib/dcell/global.rb +23 -0
- data/lib/dcell/mailbox_proxy.rb +61 -0
- data/lib/dcell/messages.rb +67 -0
- data/lib/dcell/node.rb +120 -0
- data/lib/dcell/registries/redis_adapter.rb +86 -0
- data/lib/dcell/registries/zk_adapter.rb +122 -0
- data/lib/dcell/responses.rb +16 -0
- data/lib/dcell/router.rb +71 -0
- data/lib/dcell/rspec.rb +1 -0
- data/lib/dcell/server.rb +80 -0
- data/lib/dcell/version.rb +1 -1
- data/spec/celluloid/zmq/mailbox_spec.rb +6 -0
- data/spec/dcell/actor_proxy_spec.rb +60 -0
- data/spec/dcell/celluloid_ext_spec.rb +21 -0
- data/spec/dcell/directory_spec.rb +8 -0
- data/spec/dcell/global_spec.rb +21 -0
- data/spec/dcell/node_spec.rb +23 -0
- data/spec/dcell/registries/redis_adapter_spec.rb +6 -0
- data/spec/dcell/registries/zk_adapter_spec.rb +11 -0
- data/spec/spec_helper.rb +16 -0
- data/spec/support/helpers.rb +40 -0
- data/spec/support/registry_examples.rb +35 -0
- data/spec/test_node.rb +33 -0
- data/tasks/rspec.task +7 -0
- data/tasks/zookeeper.task +58 -0
- metadata +111 -7
data/.gitignore
CHANGED
data/.rspec
ADDED
data/CHANGES.md
ADDED
data/Gemfile
CHANGED
data/README.md
ADDED
@@ -0,0 +1,209 @@
+DCell
+=====
+
+DCell is a simple and easy way to build distributed applications in Ruby.
+Somewhat similar to DRb, DCell lets you easily expose Ruby objects as network
+services, and call them remotely just like you would any other Ruby object.
+However, unlike DRb all objects in the system are concurrent. You can create
+and register several available services on a given node, obtain handles to
+them, and easily pass these handles around the network just like any other
+objects.
+
+DCell is a distributed extension to Celluloid, which provides concurrent
+objects for Ruby with many of the features of Erlang, such as the ability
+to supervise objects and restart them when they crash, and also link to
+other objects and receive event notifications when they crash. This makes
+it easier to build robust, fault-tolerant distributed systems.
+
+You can read more about Celluloid at: http://celluloid.github.com
+
+Supported Platforms
+-------------------
+
+DCell works on Ruby 1.9.2/1.9.3, JRuby 1.6 (in 1.9 mode), and Rubinius 2.0.
+
+To use JRuby in 1.9 mode, you'll need to pass the "--1.9" command line
+option to the JRuby executable, or set the "JRUBY_OPTS=--1.9" environment
+variable:
+
+    export JRUBY_OPTS=--1.9
+
+(Note: I'd recommend putting the above in your .bashrc/.zshrc/etc in
+general. 1.9 is the future, time to embrace it)
+
+Celluloid works on Rubinius in either 1.8 or 1.9 mode.
+
+All components, including the 0MQ bindings and the Redis and Zookeeper
+adapters, are certified to work on the above platforms. The 0MQ binding is
+FFI. The Redis adapter is pure Ruby. The Zookeeper adapter uses an MRI-style
+native extension but also supplies a pure-Java backend for JRuby.
+
+Prerequisites
+-------------
+
+DCell requires 0MQ. On OS X, this is available through Homebrew by running:
+
+    brew install zeromq
+
+DCell keeps the state of all connected nodes and global configuration data
+in a service it calls the "registry". There are presently two supported
+registry services:
+
+* Redis (Fast and Loose): Redis is a persistent data structures server.
+  It's simple and easy to use for development and prototyping, but lacks a
+  good distribution story.
+
+* Zookeeper (Serious Business): Zookeeper is a high-performance coordination
+  service for distributed applications. It exposes common services such as
+  naming, configuration management, synchronization, and group management.
+  Unfortunately, it has slightly more annoying client-side dependencies and is
+  more difficult to deploy than Redis.
+
+You may pick either one of these services to use as DCell's registry. The
+default is Redis.
+
+To install a local copy of Redis on OS X with Homebrew, run:
+
+    brew install redis
+
+To install a local copy of Zookeeper for testing purposes, run:
+
+    rake zookeeper:install
+
+and to start it, run:
+
+    rake zookeeper:start
+
+Configuration
+-------------
+
+The simplest way to configure and start DCell is with the following:
+
+    require 'dcell'
+
+    DCell.start
+
+This configures DCell with all the default options; however, there are many
+options you can override, e.g.:
+
+    DCell.start :id => "node42", :addr => "tcp://127.0.0.1:2042"
+
+DCell identifies each node with a unique node ID, which defaults to your
+hostname. Each node needs to be reachable over 0MQ, and the addr option
+specifies the 0MQ address where the host can be reached. When giving a tcp://
+URL, you *must* specify an IP address and not a hostname.
+
+To join a cluster you'll need to provide the location of the registry server.
+This can be done through the "registry" configuration key:
+
+    DCell.start :id => "node24", :addr => "tcp://127.0.0.1:2042",
+                :registry => {
+                  :adapter => 'redis',
+                  :host    => 'mycluster.example.org',
+                  :port    => 6379
+                }
+
+When configuring DCell to use Redis, use the following options:
+
+- **adapter**: "redis" (*optional, alternatively "zk"*)
+- **host**: hostname or IP address of the Redis server (*optional, default localhost*)
+- **port**: port of the Redis server (*optional, default 6379*)
+- **password**: password to the Redis server (*optional*)
+
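
Putting the options above together, a complete Redis-backed start call might look like the sketch below. Only keys from the option list are used; the node ID, address, host, and password values are placeholders:

    require 'dcell'

    DCell.start :id       => "node42",
                :addr     => "tcp://127.0.0.1:2042",
                :registry => {
                  :adapter  => 'redis',
                  :host     => 'redis.example.org',
                  :port     => 6379,
                  :password => 's3kr3t'
                }
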
+Usage
+-----
+
+You've now configured a single node in a DCell cluster. You can obtain the
+DCell::Node object representing the local node by calling DCell.me:
+
+    >> DCell.start
+    => #<Celluloid::Supervisor(DCell::Application):0xed6>
+    >> DCell.me
+    => #<DCell::Node[cryptosphere.local] @addr="tcp://127.0.0.1:7777">
+
+DCell::Node objects are the entry point for locating actors on the system.
+DCell.me returns the local node. Other nodes can be obtained by their
+node IDs:
+
+    >> node = DCell::Node["cryptosphere.local"]
+    => #<DCell::Node[cryptosphere.local] @addr="tcp://127.0.0.1:7777">
+
+DCell::Node.all returns all connected nodes in the cluster:
+
+    >> DCell::Node.all
+    => [#<DCell::Node[test_node] @addr="tcp://127.0.0.1:21264">, #<DCell::Node[cryptosphere.local] @addr="tcp://127.0.0.1:7777">]
+
+DCell::Node is a Ruby Enumerable. You can iterate across all nodes with
+DCell::Node.each.
+
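
For instance, iterating with the Enumerable API is a quick way to list every node currently registered in the cluster; this sketch simply prints each node's inspect output:

    DCell::Node.each do |node|
      puts node.inspect
    end
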
+Once you've obtained a node, you can look up services it exports and call them
+just like you'd invoke methods on any other Ruby object:
+
+    >> node = DCell::Node["cryptosphere.local"]
+    => #<DCell::Node[cryptosphere.local] @addr="tcp://127.0.0.1:7777">
+    >> time_server = node[:time_server]
+    => #<Celluloid::Actor(TimeServer:0xee8)>
+    >> time_server.time
+    => "The time is: 2011-11-10 20:23:47 -0800"
+
+You can also find all available services on a node with DCell::Node#all:
+
+    >> node = DCell::Node["cryptosphere.local"]
+    => #<DCell::Node[cryptosphere.local] @addr="tcp://127.0.0.1:7777">
+    >> node.all
+    => [:time_server]
+
+Registering Actors
+------------------
+
+All services exposed by DCell must take the form of registered Celluloid actors.
+What follows is an extremely brief introduction to creating and registering
+actors, but for more information, you should definitely [read the Celluloid
+documentation](http://celluloid.github.com).
+
+DCell exposes all Celluloid actors you've registered directly onto the network.
+The best way to register an actor is by supervising it. Below is an example of
+how to create an actor and register it on the network:
+
+    class TimeServer
+      include Celluloid
+
+      def time
+        "The time is: #{Time.now}"
+      end
+    end
+
+Now that we've defined the TimeServer, we're going to supervise it and register
+it in the local registry:
+
+    >> TimeServer.supervise_as :time_server
+    => #<Celluloid::Supervisor(TimeServer):0xee4>
+
+Supervising actors means that if they crash, they're automatically restarted
+and registered under the same name. We can access registered actors by using
+Celluloid::Actor#[]:
+
+    >> Celluloid::Actor[:time_server]
+    => #<Celluloid::Actor(TimeServer:0xee8)>
+    >> Celluloid::Actor[:time_server].time
+    => "The time is: 2011-11-10 20:17:48 -0800"
+
+This same actor is now available using the DCell::Node#[] syntax:
+
+    >> node = DCell.me
+    => #<DCell::Node[cryptosphere.local] @addr="tcp://127.0.0.1:1870">
+    >> node[:time_server].time
+    => "The time is: 2011-11-10 20:28:27 -0800"
+
+Globals
+-------
+
+DCell provides a registry global for storing configuration data and actors you
+wish to publish globally to the entire cluster:
+
+    >> actor = Celluloid::Actor[:dcell_server]
+    => #<Celluloid::Actor(DCell::Server:0xf2e) @addr="tcp://127.0.0.1:7777">
+    >> DCell::Global[:sweet_server] = actor
+    => #<Celluloid::Actor(DCell::Server:0xf2e) @addr="tcp://127.0.0.1:7777">
+    >> DCell::Global[:sweet_server]
+    => #<Celluloid::Actor(DCell::Server:0xf2e) @addr="tcp://127.0.0.1:7777">
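
DCell::Global can also hold plain configuration values, not just actors. A minimal sketch, where the :pool_size key and its value are arbitrary placeholders and we assume the registry round-trips plain Ruby values unchanged:

    >> DCell::Global[:pool_size] = 25
    => 25
    >> DCell::Global[:pool_size]
    => 25
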
data/Rakefile
CHANGED
data/benchmarks/messaging.rb
ADDED
@@ -0,0 +1,67 @@
+#!/usr/bin/env ruby
+
+require 'benchmark'
+
+require 'rubygems'
+require 'bundler'
+Bundler.setup
+
+require 'dcell'
+DCell.setup
+DCell.run!
+
+RECEIVER_PORT = 12345
+
+$receiver_pid = Process.spawn Gem.ruby, File.expand_path("../receiver.rb", __FILE__)
+STDERR.print "Waiting for test node to start up..."
+
+socket = nil
+30.times do
+  begin
+    socket = TCPSocket.open("127.0.0.1", RECEIVER_PORT)
+    break if socket
+  rescue Errno::ECONNREFUSED
+    STDERR.print "."
+    sleep 1
+  end
+end
+
+if socket
+  STDERR.puts " done!"
+  socket.close
+else
+  STDERR.puts " FAILED!"
+  raise "couldn't connect to test node!"
+end
+
+class AsyncPerformanceTest
+  include Celluloid
+
+  def initialize(progenator, n = 10000)
+    @n = n
+    @receiver = progenator.spawn_async_receiver(n, current_actor)
+  end
+
+  def run
+    @n.times { @receiver.increment! }
+    wait :complete
+  end
+
+  def complete
+    signal :complete
+  end
+end
+
+receiver = DCell::Node['benchmark_receiver']
+progenator = receiver[:progenator]
+
+test = AsyncPerformanceTest.new progenator
+time = Benchmark.measure { test.run }.real
+messages_per_second = 1 / time * 10000
+
+puts "messages_per_second: #{"%0.2f" % messages_per_second}"
+
+Process.kill 9, $receiver_pid
+Process.wait $receiver_pid rescue nil
+
+exit 0
data/benchmarks/receiver.rb
ADDED
@@ -0,0 +1,37 @@
+require 'rubygems'
+require 'bundler'
+Bundler.setup
+
+require 'dcell'
+DCell.setup :id => 'benchmark_receiver', :addr => 'tcp://127.0.0.1:12345'
+
+class AsyncReceiver
+  include Celluloid
+  attr_reader :count
+
+  def initialize(n, actor)
+    @n, @actor = n, actor
+    @count = 0
+  end
+
+  def increment
+    @count += 1
+    @actor.complete! if @count == @n
+    @count
+  end
+end
+
+class Progenator
+  include Celluloid
+
+  def spawn_async_receiver(n, actor)
+    AsyncReceiver.new(n, actor)
+  end
+end
+
+class BenchmarkApplication < Celluloid::Application
+  supervise DCell::Application
+  supervise Progenator, :as => :progenator
+end
+
+BenchmarkApplication.run
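
The driver and receiver lean on Celluloid's bang-method convention: a call such as @receiver.increment! or @actor.complete! is delivered asynchronously and returns immediately, whereas the plain method name blocks for the result. A standalone illustration of that difference (Counter is a made-up class, not part of DCell):

    require 'celluloid'

    class Counter
      include Celluloid
      attr_reader :count

      def initialize
        @count = 0
      end

      def increment
        @count += 1
      end
    end

    counter = Counter.new
    counter.increment!       # asynchronous: fire-and-forget, returns immediately
    counter.increment        # synchronous: blocks until the actor replies
    puts counter.count       # => 2
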
data/dcell.gemspec
CHANGED
@@ -10,12 +10,19 @@ Gem::Specification.new do |s|
   s.homepage = "http://github.com/tarcieri/dcell"
   s.summary = "An asynchronous distributed object framework based on Celluloid"
   s.description = "DCell is an distributed object framework based on Celluloid built on 0MQ and Zookeeper"
-
+
   s.files = `git ls-files`.split("\n")
   s.test_files = `git ls-files -- {test,spec,features}/*`.split("\n")
   s.executables = `git ls-files -- bin/*`.split("\n").map{ |f| File.basename(f) }
   s.require_paths = ["lib"]
-
-  s.add_dependency "celluloid"
+
+  s.add_dependency "celluloid", ">= 0.6.2"
+  s.add_dependency "ffi"
   s.add_dependency "ffi-rzmq"
+  s.add_dependency "redis"
+  s.add_dependency "redis-namespace"
+
+  s.add_development_dependency "rake"
+  s.add_development_dependency "rspec", ">= 2.7.0"
+  #s.add_development_dependency "zk"
 end
data/lib/celluloid/README
ADDED
@@ -0,0 +1,8 @@
+Ideally DCell would use Celluloid::IO to monitor 0MQ sockets. Unfortunately,
+ffi-rzmq presently does not present an API for obtaining an IO object from a
+0MQ socket.
+
+This directory contains a temporary workaround, Celluloid::ZMQ, which uses
+a ZMQ::Poller to monitor for 0MQ activity. This implementation is nowhere
+near ideal, but it works and provides a temporary solution until
+ffi-rzmq implements an API for retrieving IO objects.
data/lib/celluloid/zmq.rb
ADDED
@@ -0,0 +1,28 @@
+require 'ffi-rzmq'
+
+require 'celluloid'
+require 'celluloid/zmq/mailbox'
+require 'celluloid/zmq/reactor'
+
+module Celluloid
+  # Actors which run alongside 0MQ operations
+  # This is a temporary hack (hopefully) until ffi-rzmq exposes IO objects for
+  # 0MQ sockets that can be used with Celluloid::IO
+  module ZMQ
+    def self.included(klass)
+      klass.send :include, ::Celluloid
+      klass.use_mailbox Celluloid::ZMQ::Mailbox
+    end
+
+    # Wait for the given IO object to become readable
+    def wait_readable(socket)
+      # Law of demeter be damned!
+      current_actor.mailbox.reactor.wait_readable(socket)
+    end
+
+    # Wait for the given IO object to become writeable
+    def wait_writeable(socket)
+      current_actor.mailbox.reactor.wait_writeable(socket)
+    end
+  end
+end
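
As a usage sketch (not part of the gem): an actor class that mixes in Celluloid::ZMQ picks up the ZMQ-aware mailbox and can use wait_readable / wait_writeable to park itself until a socket is ready. Listener, @socket, and process below are placeholder names:

    require 'celluloid/zmq'

    class Listener
      include Celluloid::ZMQ  # also includes Celluloid and swaps in Celluloid::ZMQ::Mailbox

      def initialize(socket)
        @socket = socket      # a 0MQ socket created and connected elsewhere
      end

      def handle_next
        wait_readable @socket # suspends this actor's fiber until the reactor sees POLLIN
        process @socket       # application-specific receive/handling would go here
      end

      def process(socket)
        # placeholder
      end
    end
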
data/lib/celluloid/zmq/mailbox.rb
ADDED
@@ -0,0 +1,13 @@
+module Celluloid
+  module ZMQ
+    # A Celluloid mailbox for Actors that wait on 0MQ sockets
+    class Mailbox < Celluloid::IO::Mailbox
+      def initialize
+        @messages = []
+        @lock = Mutex.new
+        @waker = Celluloid::IO::Waker.new
+        @reactor = Reactor.new(@waker)
+      end
+    end
+  end
+end
data/lib/celluloid/zmq/reactor.rb
ADDED
@@ -0,0 +1,74 @@
+module Celluloid
+  module ZMQ
+    # React to incoming 0MQ and Celluloid events. This is kinda sorta supposed
+    # to resemble the Reactor design pattern.
+    class Reactor
+      def initialize(waker)
+        @waker = waker
+        @poller = ::ZMQ::Poller.new
+        @readers = {}
+        @writers = {}
+
+        # FIXME: The way things are presently implemented is super ghetto
+        # The ZMQ::Poller should be able to wait on the waker somehow
+        # but I can't get it to work :(
+        #result = @poller.register(nil, ::ZMQ::POLLIN, @waker.io.fileno)
+        #
+        #unless ::ZMQ::Util.resultcode_ok?(result)
+        #  raise "couldn't register waker with 0MQ poller"
+        #end
+      end
+
+      # Wait for the given ZMQ socket to become readable
+      def wait_readable(socket)
+        monitor_zmq socket, @readers, ::ZMQ::POLLIN
+      end
+
+      # Wait for the given ZMQ socket to become writeable
+      def wait_writeable(socket)
+        monitor_zmq socket, @writers, ::ZMQ::POLLOUT
+      end
+
+      # Monitor the given ZMQ socket with the given options
+      def monitor_zmq(socket, set, type)
+        if set.has_key? socket
+          raise ArgumentError, "another method is already waiting on #{socket.inspect}"
+        else
+          set[socket] = Fiber.current
+        end
+
+        @poller.register socket, type
+        Fiber.yield
+
+        @poller.deregister socket, type
+        socket
+      end
+
+      # Run the reactor, waiting for events, and calling the given block if
+      # the reactor is awoken by the waker
+      def run_once
+        # FIXME: This approach is super ghetto. Find some way to make the
+        # ZMQ::Poller wait on the waker's file descriptor
+        if @poller.size == 0
+          readable, _ = select [@waker.io]
+          yield if readable and readable.include? @waker.io
+        else
+          if ::ZMQ::Util.resultcode_ok? @poller.poll(100)
+            @poller.readables.each do |sock|
+              fiber = @readers.delete sock
+              fiber.resume if fiber
+            end
+
+            @poller.writables.each do |sock|
+              fiber = @writers.delete sock
+              fiber.resume if fiber
+            end
+          end
+
+          readable, _ = select [@waker.io], [], [], 0
+          yield if readable and readable.include? @waker.io
+        end
+      end
+    end
+  end
+end
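
The core trick in Reactor is the pairing of Fiber.yield in monitor_zmq with fiber.resume in run_once: the waiting task parks itself, and the poll loop wakes it once its socket is ready. A standalone illustration of that pattern in plain Ruby (not DCell code):

    waiter = Fiber.new do
      # ...corresponds to monitor_zmq up to the yield: record interest, then park
      Fiber.yield
      # ...corresponds to the code after the yield: deregister and carry on
      puts "socket reported ready"
    end

    waiter.resume   # runs until Fiber.yield, i.e. until the task is parked
    # ...the real reactor would call @poller.poll here...
    waiter.resume   # what run_once does via `fiber.resume if fiber`
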