eventbox 0.1.0 → 1.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1 @@
+ [![MyQueue calls](https://raw.github.com/larskanis/eventbox/master/docs/images/my_queue_calls.svg?sanitize=true)](https://www.rubydoc.info/gems/eventbox/file/README.md#my_queue_image)
@@ -0,0 +1 @@
+ {include:file:docs/images/my_queue_calls.svg}
data/docs/server.md CHANGED
@@ -1,6 +1,13 @@
+ ## A TCP server implementation with tracking of startup and shutdown
+
  Race-free server startup and shutdown can be a tricky task.
  The following example illustrates, how a TCP server can be started and interrupted properly.

+ For startup it makes use of {Eventbox::CompletionProc yield} and {Eventbox::CompletionProc#raise} to complete `MyServer.new` either successfully or with the forwarded exception raised by `TCPServer.new`.
+
+ For the shutdown it makes use of {Eventbox::Action#raise} to send a `Stop` signal to the blocking `accept` method.
+ The `Stop` instance carries the {Eventbox::CompletionProc} which is used to signal that the shutdown has finished by returning from `MyServer#stop`.
+
  ```ruby
  require "eventbox"
  require "socket"
@@ -8,36 +15,39 @@ require "socket"
  class MyServer < Eventbox
    yield_call def init(bind, port, result)
      @count = 0
-     @server = start_serving(bind, port, result)
+     @server = start_serving(bind, port, result) # Start an action to handle incoming connections
    end

    action def start_serving(bind, port, init_done)
      serv = TCPServer.new(bind, port)
    rescue => err
-     init_done.raise err
+     init_done.raise err # complete MyServer.new with an exception
    else
-     init_done.yield
+     init_done.yield # complete MyServer.new without exception

-     loop do
+     loop do # accept all connection requests until Stop is received
        begin
+         # enable interruption by the Stop class for the duration of the `accept` call
          conn = Thread.handle_interrupt(Stop => :on_blocking) do
-           serv.accept
+           serv.accept # wait for the next connection request to come in
          end
        rescue Stop => st
          serv.close
-         st.stopped.yield
-         break
+         st.stopped.yield # let MyServer#stop return
+         break # and exit the action
        else
-         MyConnection.new(conn, self)
+         MyConnection.new(conn, self) # Handle each client by its own instance
        end
      end
    end

+   # A simple example for a shared resource to be used by several threads
    sync_call def count
-     @count += 1
+     @count += 1 # atomically increment the counter
    end

    yield_call def stop(result)
+     # Don't return from `stop` externally, but wait until the server is down
      @server.raise(Stop.new(result))
    end

@@ -49,11 +59,12 @@ class MyServer < Eventbox
    end
  end

+ # Each call to `MyConnection.new` starts a new thread to do the communication.
  class MyConnection < Eventbox
    action def init(conn, server)
      conn.write "Hello #{server.count}"
    ensure
-     conn.close
+     conn.close # Don't wait for an answer but just close the client connection
    end
  end
  ```
@@ -61,15 +72,15 @@ end
  The server can now be started like so.

  ```ruby
- s = MyServer.new('localhost', 12345)
+ s = MyServer.new('localhost', 12345) # Open a TCP socket

- 10.times.map do
+ 10.times.map do # run 10 client connections in parallel
    Thread.new do
      TCPSocket.new('localhost', 12345).read
    end
- end.each { |th| p th.value }
+ end.each { |th| p th.value } # and print their responses

- s.stop
+ s.stop # shutdown the server socket
  ```

  It prints some output like this:
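The `Stop`/`handle_interrupt` mechanics used in `start_serving` can be tried in isolation with plain Ruby threads. The following is a minimal sketch (no Eventbox involved; the `Stop` class, the one-client setup and the `sleep` are illustrative stand-ins) showing how an exception class is deferred everywhere except while the thread blocks inside the `:on_blocking` region:

```ruby
require "socket"

Stop = Class.new(RuntimeError)   # illustrative signal class

server = TCPServer.new("localhost", 0)
acceptor = Thread.new do
  conns = []
  # Defer Stop everywhere in this thread ...
  Thread.handle_interrupt(Stop => :never) do
    loop do
      # ... except while we block in `accept`.
      conn = Thread.handle_interrupt(Stop => :on_blocking) { server.accept }
      conns << conn
    end
  rescue Stop
    server.close
  end
  conns.size
end

TCPSocket.new("localhost", server.addr[1]).close  # one client connects
sleep 0.1                                         # crude synchronization, good enough for a sketch
acceptor.raise(Stop)                              # ask the thread to shut down
p acceptor.value                                  # => 1
```

The `rescue Stop` clause sits outside the `:on_blocking` block but inside the `:never` region, so the cleanup itself cannot be cut short by a second signal - the same structure `MyServer` relies on.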
data/docs/threadpool.md CHANGED
@@ -1,9 +1,12 @@
+ ## A thread-pool implementation making use of Eventbox
+
  The following class implements a thread-pool with a fixed number of threads to be borrowed by the `pool` method.
- It shows how the action method `start_pool_thread` makes use of the private yield_call `next_job` to query, wait for and retrieve an object from the event scope.
+ It shows how the action method `start_pool_thread` makes use of the private {Eventbox.yield_call yield_call} `next_job` to query, wait for and retrieve an object from the event scope.

  This kind of object is the block that is given to `pool`.
  Although all closures (blocks, procs and lambdas) are wrapped in a way that allows safe calls from the event scope, it is just passed through to the action scope and retrieved as the result value of `next_job`.
  When this happens, the wrapping is automatically removed, so that the pure block given to `pool` is called in `start_pool_thread`.
+ That way each action thread runs one block at a time, but all started action threads process the blocks concurrently.

  ```ruby
  class ThreadPool < Eventbox
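The underlying pattern - workers that ask for the next job, wait if none is queued, and run the retrieved block - can also be shown without Eventbox. A minimal plain-Ruby sketch (not the gem's implementation; `TinyThreadPool` is made up for illustration):

```ruby
class TinyThreadPool
  def initialize(size)
    @jobs = Queue.new
    @threads = size.times.map do
      Thread.new do
        # Each worker pops the next job, blocking while the queue is empty,
        # and runs the retrieved block - one block at a time per worker.
        while (job = @jobs.pop)
          job.call
        end
      end
    end
  end

  def pool(&block)
    @jobs << block
  end

  def shutdown
    @threads.size.times { @jobs << nil }  # one stop marker per worker
    @threads.each(&:join)
  end
end

tp = TinyThreadPool.new(3)
5.times { |i| tp.pool { puts "job #{i} in #{Thread.current}" } }
tp.shutdown
```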
data/eventbox.gemspec CHANGED
@@ -21,11 +21,6 @@ Gem::Specification.new do |spec|
    spec.bindir = "exe"
    spec.executables = spec.files.grep(%r{^exe/}) { |f| File.basename(f) }
    spec.require_paths = ["lib"]
-   spec.required_ruby_version = "~> 2.3"
+   spec.required_ruby_version = ">= 3.2"
    spec.metadata["yard.run"] = "yri" # use "yard" to build full HTML docs.
-
-   spec.add_development_dependency "bundler", ">= 1.16", "< 3"
-   spec.add_development_dependency "rake", "~> 10.0"
-   spec.add_development_dependency "minitest", "~> 5.0"
-   spec.add_development_dependency "yard", "~> 0.9"
  end
@@ -21,54 +21,55 @@ class Eventbox
        decls = []
        convs = []
        rets = []
+       kwrets = []
        parameters.each_with_index do |(t, n), i|
          €var = n.to_s.start_with?("€")
          case t
          when :req
            decls << n
            if €var
-             convs << "#{n} = WrappedObject.new(#{n}, source_event_loop, :#{n})"
+             convs << "#{n} = Sanitizer.wrap_object(#{n}, source_event_loop, target_event_loop, :#{n})"
            end
            rets << n
          when :opt
            decls << "#{n}=nil"
            if €var
-             convs << "#{n} = #{n} ? WrappedObject.new(#{n}, source_event_loop, :#{n}) : []"
+             convs << "#{n} = #{n} ? Sanitizer.wrap_object(#{n}, source_event_loop, target_event_loop, :#{n}) : []"
            end
            rets << "*#{n}"
          when :rest
            decls << "*#{n}"
            if €var
-             convs << "#{n}.map!{|v| WrappedObject.new(v, source_event_loop, :#{n}) }"
+             convs << "#{n}.map!{|v| Sanitizer.wrap_object(v, source_event_loop, target_event_loop, :#{n}) }"
            end
            rets << "*#{n}"
          when :keyreq
            decls << "#{n}:"
            if €var
-             convs << "#{n} = WrappedObject.new(#{n}, source_event_loop, :#{n})"
+             convs << "#{n} = Sanitizer.wrap_object(#{n}, source_event_loop, target_event_loop, :#{n})"
            end
-           rets << "#{n}: #{n}"
+           kwrets << "#{n}: #{n}"
          when :key
            decls << "#{n}:nil"
            if €var
-             convs << "#{n} = #{n} ? {#{n}: WrappedObject.new(#{n}, source_event_loop, :#{n})} : {}"
+             convs << "#{n} = #{n} ? {#{n}: Sanitizer.wrap_object(#{n}, source_event_loop, target_event_loop, :#{n})} : {}"
            else
              convs << "#{n} = #{n} ? {#{n}: #{n}} : {}"
            end
-           rets << "**#{n}"
+           kwrets << "**#{n}"
          when :keyrest
            decls << "**#{n}"
            if €var
-             convs << "#{n}.each{|k, v| #{n}[k] = WrappedObject.new(v, source_event_loop, :#{n}) }"
+             convs << "#{n}.transform_values!{|v| Sanitizer.wrap_object(v, source_event_loop, target_event_loop, :#{n}) }"
            end
-           rets << "**#{n}"
+           kwrets << "**#{n}"
          when :block
            if €var
              raise "block to `#{name}' can't be wrapped"
            end
          end
        end
-       code = "#{is_proc ? :proc : :lambda} do |source_event_loop#{decls.map{|s| ",#{s}"}.join }| # #{name}\n #{convs.join("\n")}\n [#{rets.join(",")}]\nend"
+       code = "#{is_proc ? :proc : :lambda} do |source_event_loop, target_event_loop#{decls.map{|s| ",#{s}"}.join }| # #{name}\n #{convs.join("\n")}\n [[#{rets.join(",")}],{#{kwrets.join(",")}}]\nend"
        instance_eval(code, "wrapper code defined in #{__FILE__}:#{__LINE__} for #{name}")
      end
    end
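To make the string template at the end of this hunk concrete: for a hypothetical method `def transfer(data, €socket, timeout: nil)` the builder would now generate roughly the following proc (reconstructed by hand from the template and pretty-printed, not taken from the gem's actual output):

```ruby
# Hypothetical input method:
#   def transfer(data, €socket, timeout: nil)
#
# Generated wrapper source:
lambda do |source_event_loop, target_event_loop, data, €socket, timeout: nil| # transfer
  # €-prefixed arguments are wrapped for the target event loop ...
  €socket = Sanitizer.wrap_object(€socket, source_event_loop, target_event_loop, :€socket)
  # ... optional keywords are normalized to an (empty) hash ...
  timeout = timeout ? {timeout: timeout} : {}
  # ... and positional and keyword arguments are now returned separately,
  # which is what the new [[rets], {kwrets}] return shape is for.
  [[data, €socket], {**timeout}]
end
```

The split return value matches the `*args, **kwargs` forwarding added to `async_call`, `sync_call` and `yield_call` further down in this diff.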
@@ -36,7 +36,7 @@ class Eventbox
    #
    # The created method can be safely called from any thread.
    # All method arguments are passed through the {Sanitizer}.
-   # Arguments prefixed by a € sign are automatically passed as {Eventbox::WrappedObject}.
+   # Arguments prefixed by a +€+ sign are automatically passed as {Eventbox::ExternalObject}.
    #
    # The method itself might not do any blocking calls or expensive computations - this would impair responsiveness of the {Eventbox} instance.
    # Instead use {action} in these cases.
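As a usage illustration of the +€+ prefix mentioned above (a rough sketch with made-up class and method names, not an example from the gem): an argument such as `€client` below is not deep-copied into the event scope but arrives there as a wrapped reference to the external object.

```ruby
require "eventbox"

class Registry < Eventbox
  async_call def init
    @clients = []
  end

  # `€client` carries the € prefix, so the external object is not copied;
  # the event scope receives it as an Eventbox::ExternalObject.
  async_call def register(€client)
    @clients << €client
  end

  sync_call def size
    @clients.size
  end
end

r = Registry.new
r.register($stdout)
p r.size   # => 1
```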
@@ -47,13 +47,13 @@ class Eventbox
    def async_call(name, &block)
      unbound_method = self.instance_method(name)
      wrapper = ArgumentWrapper.build(unbound_method, name)
-     with_block_or_def(name, block) do |*args, &cb|
+     with_block_or_def(name, block) do |*args, **kwargs, &cb|
        if @__event_loop__.event_scope?
          # Use the correct method within the class hierarchy, instead of just self.send(*args).
          # Otherwise super() would start an infinite recursion.
-         unbound_method.bind(eventbox).call(*args, &cb)
+         unbound_method.bind_call(eventbox, *args, **kwargs, &cb)
        else
-         @__event_loop__.async_call(eventbox, name, args, cb, wrapper)
+         @__event_loop__.async_call(eventbox, name, args, kwargs, cb, wrapper)
        end
        self
      end
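One change that repeats in this and the following hunks is `bind(eventbox).call(...)` becoming `bind_call(eventbox, ...)`. `UnboundMethod#bind_call` (available since Ruby 2.7, well within the new `>= 3.2` requirement) behaves the same but skips the intermediate `Method` object. A quick standalone illustration:

```ruby
push = Array.instance_method(:push)

a = [1, 2]
push.bind(a).call(3)   # binds, allocates a Method object, then calls => [1, 2, 3]
push.bind_call(a, 4)   # same effect without the intermediate object  => [1, 2, 3, 4]
p a                    # => [1, 2, 3, 4]
```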
@@ -70,20 +70,20 @@ class Eventbox
    # Blocks are executed by the same thread that calls the {sync_call} method to that time.
    #
    # All method arguments as well as the result value are passed through the {Sanitizer}.
-   # Arguments prefixed by a € sign are automatically passed as {Eventbox::WrappedObject}.
+   # Arguments prefixed by a +€+ sign are automatically passed as {Eventbox::ExternalObject}.
    #
    # The method itself might not do any blocking calls or expensive computations - this would impair responsiveness of the {Eventbox} instance.
    # Instead use {action} in these cases.
    def sync_call(name, &block)
      unbound_method = self.instance_method(name)
      wrapper = ArgumentWrapper.build(unbound_method, name)
-     with_block_or_def(name, block) do |*args, &cb|
+     with_block_or_def(name, block) do |*args, **kwargs, &cb|
        if @__event_loop__.event_scope?
-         unbound_method.bind(eventbox).call(*args, &cb)
+         unbound_method.bind_call(eventbox, *args, **kwargs, &cb)
        else
          answer_queue = Queue.new
-         sel = @__event_loop__.sync_call(eventbox, name, args, cb, answer_queue, wrapper)
-         @__event_loop__.callback_loop(answer_queue, sel)
+         sel = @__event_loop__.sync_call(eventbox, name, args, kwargs, cb, answer_queue, wrapper)
+         @__event_loop__.callback_loop(answer_queue, sel, name)
        end
      end
    end
@@ -106,7 +106,7 @@ class Eventbox
    # Blocks are executed by the same thread that calls the {yield_call} method to that time.
    #
    # All method arguments as well as the result value are passed through the {Sanitizer}.
-   # Arguments prefixed by a € sign are automatically passed as {Eventbox::WrappedObject}.
+   # Arguments prefixed by a +€+ sign are automatically passed as {Eventbox::ExternalObject}.
    #
    # The method itself as well as the Proc object might not do any blocking calls or expensive computations - this would impair responsiveness of the {Eventbox} instance.
    # Instead use {action} in these cases.
@@ -115,37 +115,38 @@ class Eventbox
      wrapper = ArgumentWrapper.build(unbound_method, name)
      with_block_or_def(name, block) do |*args, **kwargs, &cb|
        if @__event_loop__.event_scope?
-         @__event_loop__.safe_yield_result(args, name)
-         args << kwargs unless kwargs.empty?
-         unbound_method.bind(eventbox).call(*args, &cb)
+         @__event_loop__.internal_yield_result(args, name)
+         unbound_method.bind(eventbox).call(*args, **kwargs, &cb)
          self
        else
          answer_queue = Queue.new
          sel = @__event_loop__.yield_call(eventbox, name, args, kwargs, cb, answer_queue, wrapper)
-         @__event_loop__.callback_loop(answer_queue, sel)
+         @__event_loop__.callback_loop(answer_queue, sel, name)
        end
      end
    end

    # Threadsafe write access to instance variables.
-   def attr_writer(name)
-     async_call(define_method("#{name}=") do |value|
-       instance_variable_set("@#{name}", value)
-     end)
+   def attr_writer(*names)
+     super
+     names.each do |name|
+       async_call(:"#{name}=")
+     end
    end

    # Threadsafe read access to instance variables.
-   def attr_reader(name)
-     sync_call(define_method("#{name}") do
-       instance_variable_get("@#{name}")
-     end)
+   def attr_reader(*names)
+     super
+     names.each do |name|
+       sync_call(:"#{name}")
+     end
    end

    # Threadsafe read and write access to instance variables.
    #
-   # Attention: Be careful with read-modify-write operations - they are *not* atomic but are executed as two independent operations.
+   # Attention: Be careful with read-modify-write operations like "+=" - they are *not* atomic but are executed as two independent operations.
    #
-   # This will lose counter increments, since `counter` is incremented in a non-atomic manner:
+   # This will lose counter increments, since +counter+ is incremented in a non-atomic manner:
    #   attr_accessor :counter
    #   async_call def start
    #     10.times { do_something }
@@ -164,9 +165,12 @@ class Eventbox
    #   async_call def increment(by)
    #     @counter += by
    #   end
-   def attr_accessor(name)
-     attr_reader name
-     attr_writer name
+   def attr_accessor(*names)
+     super
+     names.each do |name|
+       async_call(:"#{name}=")
+       sync_call(:"#{name}")
+     end
    end

    # Define a private method for asynchronous execution.
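The warning above concerns read-modify-write from the external scope; doing the increment inside a single event-scope call keeps it atomic. A small runnable sketch against the new gem version (the `Counter` class is illustrative, not part of the gem):

```ruby
require "eventbox"

class Counter < Eventbox
  async_call def init
    @count = 0
  end

  attr_reader :count   # threadsafe reader, defined via sync_call as shown in this hunk

  # The whole read-modify-write happens inside one event-scope call,
  # so concurrent increments cannot interleave.
  async_call def increment(by = 1)
    @count += by
  end
end

c = Counter.new
10.times.map { Thread.new { 1_000.times { c.increment } } }.each(&:join)
p c.count   # => 10000
```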
@@ -192,7 +196,7 @@ class Eventbox
    #     str              # => "value1"
    #     action.current?  # => true
    #     # `action' can be passed to event scope or external scope,
-   #     # in order to send a signals per Action#raise
+   #     # in order to send a signal per Action#raise
    #   end
    #
    def action(name, &block)
@@ -219,7 +223,7 @@ class Eventbox

  # An Action object is thin wrapper for a Ruby thread.
  #
- # It is returned by {Eventbox#action} and optionally passed as last argument to action methods.
+ # It is returned by {Eventbox::Boxable#action action methods} and optionally passed as last argument to action methods.
  # It can be used to interrupt the program execution by an exception.
  #
  # However in contrast to ruby's builtin threads, any interruption must be explicit allowed.
@@ -228,7 +232,7 @@ class Eventbox
  # It is raised by {Eventbox#shutdown!} and is delivered as soon as a blocking operation is executed.
  #
  # An Action object can be used to stop the action while blocking operations.
- # It should be made sure, that the `rescue` statement is outside of the block to `handle_interrupt`.
+ # It should be made sure, that the +rescue+ statement is outside of the block to +handle_interrupt+.
  # Otherwise it could happen, that the rescuing code is interrupted by the signal.
  # Sending custom signals to an action works like:
  #
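In the spirit of the `MyServer` example earlier in this diff, a rough sketch of sending a custom signal to an action (the `Watcher` class and the `Refresh` signal are made up for illustration, not taken from the gem's docs):

```ruby
require "eventbox"

class Watcher < Eventbox
  class Refresh < RuntimeError; end   # illustrative custom signal

  async_call def init
    @watcher = watch                  # starting the action returns its Action object
  end

  async_call def refresh
    @watcher.raise(Refresh)           # deliver the signal via Action#raise
  end

  action def watch
    loop do
      begin
        # Interrupts are deferred by default within an action; allow Refresh
        # only while we are blocking in sleep.
        Thread.handle_interrupt(Refresh => :on_blocking) do
          sleep
        end
      rescue Refresh
        # The rescue clause is outside of the handle_interrupt block, so the
        # handler itself is not cut short by a second Refresh.
        puts "refreshed"
      end
    end
  end
end

w = Watcher.new
w.refresh   # the action prints "refreshed" once the signal is delivered
```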
@@ -266,10 +270,9 @@ class Eventbox
    #
    # This method does nothing if the action is already finished.
    #
-   # If {raise} is called within the action (#current? returns `true`), all exceptions are delivered immediately.
-   # This happens regardless of the current interrupt mask set by `Thread.handle_interrupt`.
+   # If {raise} is called within the action ({#current?} returns +true+), all exceptions are delivered immediately.
+   # This happens regardless of the current interrupt mask set by +Thread.handle_interrupt+.
    def raise(*args)
-     # ignore raise, if sent from the action thread
      if AbortAction === args[0] || (Module === args[0] && args[0].ancestors.include?(AbortAction))
        ::Kernel.raise InvalidAccess, "Use of Eventbox::AbortAction is not allowed - use Action#abort or a custom exception subclass"
      end
@@ -294,5 +297,10 @@ class Eventbox
      def join
        @thread.join
      end
+
+     # @private
+     def terminate
+       @thread.terminate
+     end
    end
  end
@@ -0,0 +1,47 @@
+ # frozen-string-literal: true
+
+ class Eventbox
+   module CallContext
+     # @private
+     def __answer_queue__
+       @__answer_queue__
+     end
+
+     # @private
+     attr_writer :__answer_queue__
+   end
+
+   class BlockingExternalCallContext
+     include CallContext
+   end
+
+   class ActionCallContext
+     include CallContext
+
+     # @private
+     def initialize(event_loop)
+       answer_queue = Queue.new
+       meth = proc do
+         event_loop.callback_loop(answer_queue, nil, self.class)
+       end
+       @action = event_loop.start_action(meth, self.class, [])
+
+       def answer_queue.gc_stop(object_id)
+         close
+       end
+       ObjectSpace.define_finalizer(self, answer_queue.method(:gc_stop))
+
+       @__answer_queue__ = answer_queue
+     end
+
+     # The action that drives the call context.
+     attr_reader :action
+
+     # Terminate the call context and the driving action.
+     #
+     # The method returns immediately and the corresponding action is terminated asynchronously.
+     def shutdown!
+       @__answer_queue__.close
+     end
+   end
+ end
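A side note on `ActionCallContext#initialize` above: the finalizer callback is a singleton method bound to the queue rather than a block, because a block defined inside `initialize` would close over `self` and thereby keep alive the very object the finalizer is supposed to watch. A standalone sketch of that pattern (the `Resource` class is illustrative):

```ruby
class Resource
  def initialize
    queue = Queue.new

    # A block here would capture `self` and prevent the Resource from ever
    # being garbage collected; a singleton method on the queue does not.
    def queue.gc_stop(object_id)
      close
    end
    ObjectSpace.define_finalizer(self, queue.method(:gc_stop))

    @queue = queue
  end
end

r = Resource.new
r = nil
GC.start   # the finalizer eventually closes the queue once the Resource is collected
```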