standard-procedure-plumbing 0.3.3 → 0.4.1
- checksums.yaml +4 -4
- data/README.md +114 -122
- data/lib/plumbing/actor/async.rb +49 -0
- data/lib/plumbing/actor/inline.rb +33 -0
- data/lib/plumbing/actor/kernel.rb +10 -0
- data/lib/plumbing/{valve → actor}/rails.rb +1 -1
- data/lib/plumbing/actor/threaded.rb +76 -0
- data/lib/plumbing/actor/transporter.rb +61 -0
- data/lib/plumbing/actor.rb +93 -0
- data/lib/plumbing/config.rb +8 -8
- data/lib/plumbing/junction.rb +4 -3
- data/lib/plumbing/pipe.rb +3 -3
- data/lib/plumbing/version.rb +1 -1
- data/lib/plumbing.rb +1 -1
- data/spec/become_equal_to_matcher.rb +3 -2
- data/spec/examples/{valve_spec.rb → actor_spec.rb} +39 -34
- data/spec/examples/await_spec.rb +43 -0
- data/spec/examples/pipe_spec.rb +46 -10
- data/spec/plumbing/a_pipe.rb +14 -10
- data/spec/plumbing/actor/transporter_spec.rb +159 -0
- data/spec/plumbing/actor_spec.rb +310 -0
- data/spec/plumbing/custom_filter_spec.rb +1 -1
- metadata +28 -11
- data/lib/plumbing/valve/async.rb +0 -43
- data/lib/plumbing/valve/inline.rb +0 -20
- data/lib/plumbing/valve/message.rb +0 -5
- data/lib/plumbing/valve/threaded.rb +0 -67
- data/lib/plumbing/valve.rb +0 -71
- data/spec/plumbing/valve_spec.rb +0 -171
checksums.yaml
CHANGED
```diff
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: ace6c01658b88c08c34d48406f983fdad449bc4f122b8dfed026853841b48d35
+  data.tar.gz: 96b4458d0c667e10bb47fb8881de2e21df0ff40243cf07ff30e2e22f29892625
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: a5ee7af05314dbb45d9f46a302f6102f93237334224f82f1c34cdd5c252829039bf6b4bc18a562fcdb5d76a8ca229a484b0f3d3e123b7e5efd46d302f50c72b4
+  data.tar.gz: 005f6cee1eb899207659955ea5a4179133c65e07db913e8f3f9c730cfca8947684927c9b99b675e9f54dc4ea4820159d709d61b146113a993783cec2a0c5842c
```
data/README.md
CHANGED
````diff
@@ -1,16 +1,18 @@
 # Plumbing
 
+Actors, Observers and Data Pipelines.
+
 ## Configuration
 
-The most important configuration setting is the `mode`, which governs how
+The most important configuration setting is the `mode`, which governs how background tasks are handled.
 
-By default it is `:inline`, so every command or query is handled synchronously. This is the ruby behaviour you know and love.
+By default it is `:inline`, so every command or query is handled synchronously. This is the ruby behaviour you know and love (although see the section on `await` below).
 
-
+`:async` mode handles tasks using fibers (via the [Async gem](https://socketry.github.io/async/index.html)). Your code should include the "async" gem in its bundle, as Plumbing does not load it by default.
 
-
+`:threaded` mode handles tasks using a thread pool via [Concurrent Ruby](https://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promises.html). Your code should include the "concurrent-ruby" gem in its bundle, as Plumbing does not load it by default.
 
-
+However, `:threaded` mode is not safe for Ruby on Rails applications. In this case, use `:threaded_rails` mode, which is identical to `:threaded`, except it wraps the tasks in the Rails executor. This ensures your actors do not interfere with the Rails framework. Note that Concurrent Ruby's default `:io` scheduler will create extra threads at times of high demand, which may put pressure on the ActiveRecord database connection pool. A future version of plumbing will allow the thread pool to be adjusted with a maximum number of threads, preventing contention with the connection pool.
 
 The `timeout` setting is used when performing queries - it defaults to 30s.
 
@@ -154,140 +156,150 @@ You can also verify that the output generated is as expected by defining a `post
 # => ["first", "external", "third"]
 ```
 
-## Plumbing::
+## Plumbing::Actor - safe asynchronous objects
 
-An [actor](https://en.wikipedia.org/wiki/Actor_model) defines the messages an object can receive, similar to a regular object.
+An [actor](https://en.wikipedia.org/wiki/Actor_model) defines the messages an object can receive, similar to a regular object.
+However, in traditional object-orientated programming, a thread of execution moves from one object to another. If there are multiple threads, then each object may be accessed concurrently, leading to race conditions or data-integrity problems - and very hard to track bugs.
 
-
+Actors are different. Conceptually, each actor has its own thread of execution, isolated from every other actor in the system. When one actor sends a message to another actor, the receiver does not execute its method in the caller's thread. Instead, it places the message on a queue and waits until its own thread is free to process the work. If the caller would like to access the return value from the method, then it must wait until the receiver has finished processing.
 
-
+This means each actor is only ever accessed by a single thread and the vast majority of concurrency issues are eliminated.
 
-
-- Queries return a value so the caller blocks until the actor has returned a value
-- However, if you call a query and pass `ignore_result: true` then the query will not block, although you will not be able to access the return value - this is for commands that do something and then return a result based on that work (which you may or may not be interested in - see Plumbing::Pipe#add_observer)
-- None of the above applies if the `Plumbing mode` is set to `:inline` (which is the default) - in this case, the actor behaves like normal ruby code
+[Plumbing::Actor](/lib/plumbing/actor.rb) allows you to define the `async` public interface to your objects. Calling `.start` builds a proxy to the actual instance of your object and ensures that any messages sent are handled in a manner appropriate to the current mode - immediately for inline mode, using fibers for async mode and using threads for threaded and threaded_rails mode.
 
-
+When sending messages to an actor, this just works.
 
-
+However, as the caller, you do not have direct access to the return values of the messages that you send. Instead, you must call `#value` - or alternatively, wrap your call in `await { ... }`. The block form of `await` is added into ruby's `Kernel` so it is available everywhere. It is also safe to use with non-actors (in which case it just returns the original value from the block).
 
-
+```ruby
+@actor = MyActor.start name: "Alice"
 
-
+@actor.name.value
+# => "Alice"
 
-
+await { @actor.name }
+# => "Alice"
 
-
-
+await { "Bob" }
+# => "Bob"
+```
 
-
-attr_reader :name, :job_title
+This then makes the caller's thread block until the receiver's thread has finished its work and returned a value. Or if the receiver raises an exception, that exception is then re-raised in the calling thread.
 
-
-query :name, :job_title, :greet_slowly
-command :promote
+The actor model does not eliminate every possible concurrency issue. If you use `value` or `await`, it is possible to deadlock yourself.
 
-
-@name = name
-@job_title = "Sales assistant"
-end
+Actor A, running in Thread 1, sends a message to Actor B and then awaits the result, meaning Thread 1 is blocked. Actor B, running in Thread 2, starts to work, but needs to ask Actor A a question. So it sends a message to Actor A and awaits the result. Thread 2 is now blocked, waiting for Actor A to respond. But Actor A, running in Thread 1, is blocked, waiting for Actor B to respond.
 
-
-sleep 0.5
-@job_title = "Sales manager"
-end
+This potential deadlock only occurs if you use `value` or `await` and have actors that call back into each other. If your objects are strictly layered, or you never use `value` or `await` (perhaps, instead using a Pipe to observe events), then this particular deadlock should not occur. However, just in case, every call to `value` has a timeout defaulting to 30s.
 
-
-sleep 0.2
-"H E L L O"
-end
-end
-```
+### Inline actors
 
-
+Even though inline mode is not asynchronous, you must still use `value` or `await` to access the results from another actor. However, as deadlocks are impossible in a single thread, there is no timeout.
 
-
-require "plumbing"
+### Async actors
 
-
+Using async mode is probably the easiest way to add concurrency to your application. It uses fibers to allow for "concurrency but not parallelism" - that is, execution will happen in the background but your objects or data will never be accessed by two things at the exact same time.
 
-
-# => "Alice"
-puts @person.job_title
-# => "Sales assistant"
+### Threaded actors
 
-
-# this will block for 0.5 seconds
-puts @person.job_title
-# => "Sales manager"
+Using threaded (or threaded_rails) mode gives you concurrency and parallelism. If all your public objects are actors and you are careful about callbacks then the actor model will keep your code safe. But there are a couple of extra things to consider.
 
-
-
+Firstly, when you pass parameters or return results between threads, those objects are "transported" across the boundaries.
+Most objects are cloned. Hashes, keyword arguments and arrays have their contents recursively transported. And any object that uses `GlobalID::Identification` (for example, ActiveRecord models) is marshalled into a GlobalID, then unmarshalled back into its original object. This is to prevent the same object from being amended in both the caller's and receiver's threads.
 
-
-# this will block for 0.2 seconds (as the mode is :inline) before returning nil
-```
+Secondly, when you pass a block (or Proc parameter) to another actor, the block/proc will be executed in the receiver's thread. This means you must not access any variables that would normally be in scope for your block (whether local variables or instance variables of other objects - see note below). This is because you will be accessing them from a different thread to where they were defined, leading to potential race conditions. And, if you access any actors, you must not use `value` or `await` or you risk a deadlock. If you are within an actor and need to pass a block or proc parameter, you should use the `safely` method to ensure that your block is run within the context of the calling actor, not the receiving actor.
 
-
+For example, when defining a custom filter, the filter adds itself as an observer to its source. The source triggers the `received` method on the filter, which will run in the context of the source. So the custom filter uses `safely` to move back into its own context and access its instance variables.
 
 ```ruby
-
-
+class EveryThirdEvent < Plumbing::CustomFilter
+  def initialize source:
+    super
+    @events = []
+  end
 
-
-
+  def received event
+    safely do
+      @events << event
+      if @events.count >= 3
+        @events.clear
+        self << event
+      end
+    end
+  end
+end
+```
 
-
-# => "Alice"
-puts @person.job_title
-# => "Sales assistant"
+(Note: we break that rule in the specs for Pipe objects - we use a block observer that sets the value on a local variable. That's because it is a controlled situation where we know there are only two threads involved and we are explicitly waiting for the second thread to complete. For almost every app that uses actors, there will be multiple threads and it will be impossible to predict the access patterns.)
 
-
-# this will return immediately without blocking
-puts @person.job_title
-# => "Sales manager" (this will block for 0.5s because #job_title query will not start until the #promote command has completed)
+### Constructing actors
 
-
-# this will block for 0.2 seconds before returning "H E L L O"
+Instead of constructing your object with `.new`, use `.start`. This builds a proxy object that wraps the target instance and dispatches messages through a safe mechanism. Only messages that have been defined as part of the actor are available in this proxy - so you don't have to worry about callers bypassing the actor's internal context.
 
-
-
-
+### Referencing actors
+
+If you're within a method inside your actor and you want to pass a reference to yourself, instead of using `self`, you should use `proxy` (which is also aliased as `as_actor` or `async`).
 
-
+Also be aware that if you use actors in one place, you need to use them everywhere - especially if you're using threads. This is because, as the actor sends messages to its collaborators, those calls will be made from within the actor's internal context. If the collaborators are also actors, the subsequent messages will be handled correctly; if not, data consistency bugs could occur. This does not mean that every class needs to be an actor, just your "public API" classes which may be accessed from multiple actors or other threads.
+
+### Usage
+
+[Defining an actor](/spec/examples/actor_spec.rb)
 
 ```ruby
 require "plumbing"
-require "concurrent"
 
-
+class Employee
+  include Plumbing::Actor
+  async :name, :job_title, :greet_slowly, :promote
+  attr_reader :name, :job_title
+
+  def initialize(name)
+    @name = name
+    @job_title = "Sales assistant"
+  end
+
+  private
+
+  def promote
+    sleep 0.5
+    @job_title = "Sales manager"
+  end
+
+  def greet_slowly
+    sleep 0.2
+    "H E L L O"
+  end
+end
+
 @person = Employee.start "Alice"
 
-
-# =>
-
-#
+await { @person.name }
+# => "Alice"
+await { @person.job_title }
+# => "Sales assistant"
 
-
-
-
-# => "Sales manager" (this will block for 0.5s because #job_title query will not start until the #promote command has completed)
+# by using `await`, we will block until `greet_slowly` has returned a value
+await { @person.greet_slowly }
+# => "H E L L O"
 
+# this time, we're not awaiting the result, so this will run in the background (unless we're using inline mode)
 @person.greet_slowly
-# this will block for 0.2 seconds before returning "H E L L O"
 
-
-
+# this will run in the background
+@person.promote
+# this will block - it will not return until the previous calls, #greet_slowly, #promote, and this call to #job_title have completed
+await { @person.job_title }
+# => "Sales manager"
 ```
 
-
 ## Plumbing::Pipe - a composable observer
 
 [Observers](https://ruby-doc.org/3.3.0/stdlibs/observer/Observable.html) in Ruby are a pattern where objects (observers) register their interest in another object (the observable). This pattern is common throughout programming languages (event listeners in Javascript, the dependency protocol in [Smalltalk](https://en.wikipedia.org/wiki/Smalltalk)).
 
 [Plumbing::Pipe](lib/plumbing/pipe.rb) makes observers "composable". Instead of simply just registering for notifications from a single observable, we can build sequences of pipes. These sequences can filter notifications and route them to different listeners, or merge multiple sources into a single stream of notifications.
 
-Pipes are implemented as
+Pipes are implemented as actors, meaning that event notifications can be dispatched asynchronously. The observer's callback will be triggered from within the pipe's internal context, so you should immediately trigger a command on another actor to maintain safety.
 
 ### Usage
 
@@ -332,12 +344,17 @@ Pipes are implemented as valves, meaning that event notifications can be dispatc
   end
 
   def received event
-    #
-
-    #
-
-
-
+    # #received is called in the context of the `source` actor
+    # in order to safely access the `EveryThirdEvent` instance variables
+    # we need to move into the context of our own actor
+    safely do
+      # store this event into our buffer
+      @events << event
+      # if this is the third event we've received then clear the buffer and broadcast the latest event
+      if @events.count >= 3
+        @events.clear
+        self << event
+      end
     end
   end
 end
@@ -374,31 +391,6 @@ Pipes are implemented as valves, meaning that event notifications can be dispatc
 @second_source.notify "two"
 # => "two"
 ```
-
-[Dispatching events asynchronously (using Fibers)](/spec/examples/pipe_spec.rb):
-```ruby
-require "plumbing"
-require "async"
-
-Plumbing.configure mode: :async
-
-Sync do
-  @first_source = Plumbing::Pipe.start
-  @second_source = Plumbing::Pipe.start
-
-  @junction = Plumbing::Junction.start @first_source, @second_source
-
-  @filter = Plumbing::Filter.start source: @junction do |event|
-    %w[one-one two-two].include? event.type
-  end
-
-  @first_source.notify "one-one"
-  @first_source.notify "one-two"
-  @second_source.notify "two-one"
-  @second_source.notify "two-two"
-end
-```
-
 ## Plumbing::RubberDuck - duck types and type-casts
 
 Define an [interface or protocol](https://en.wikipedia.org/wiki/Interface_(object-oriented_programming)) specifying which messages you expect to be able to send.
@@ -488,7 +480,7 @@ Then:
 ```ruby
 require 'plumbing'
 
-# Set the mode for your
+# Set the mode for your Actors and Pipes
 Plumbing.config mode: :async
 ```
 
````
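
The deadlock described in the README's `value`/`await` section can be sketched directly. The `Alpha` and `Beta` classes below are hypothetical (they are not part of the gem or its specs); the sketch assumes `:threaded` mode and the `async`, `await` and `proxy` API documented above.

```ruby
require "plumbing"

Plumbing.config mode: :threaded

class Alpha
  include Plumbing::Actor
  async :query_beta, :answer

  def query_beta beta
    # Alpha's context blocks here, waiting for Beta's reply
    await { beta.ask_alpha proxy }
  end

  def answer = 42
end

class Beta
  include Plumbing::Actor
  async :ask_alpha

  def ask_alpha alpha
    # Beta blocks here waiting for Alpha - but Alpha is still busy inside
    # #query_beta, so neither side can make progress until the timeout expires
    await { alpha.answer }
  end
end

@alpha = Alpha.start
@beta = Beta.start

# deadlocks: both awaits give up only after the configured timeout (30s by default)
@alpha.query_beta(@beta).value
```

If `Beta#ask_alpha` sent its message without awaiting the result (or notified Alpha through a Pipe instead), both actors would stay unblocked.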
data/lib/plumbing/actor/async.rb
ADDED
```ruby
require "async"
require "async/semaphore"
require "timeout"

module Plumbing
  module Actor
    class Async
      attr_reader :target

      def initialize target
        @target = target
        @queue = []
        @semaphore = ::Async::Semaphore.new(1)
      end

      # Send the message to the target and wrap the result
      def send_message message_name, *args, &block
        task = @semaphore.async do
          @target.send message_name, *args, &block
        end
        Result.new(task)
      end

      def safely(&)
        send_message(:perform_safely, &)
        nil
      end

      def within_actor? = true

      def stop
        # do nothing
      end

      Result = Data.define(:task) do
        def value
          Timeout.timeout(Plumbing::Actor.timeout) do
            task.wait
          end
        end
      end
      private_constant :Result
    end

    def self.timeout
      Plumbing.config.timeout
    end
  end
end
```
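
For context, here is a minimal sketch of how this backend surfaces through the public API. It assumes the `Employee` actor from the README example is already defined and wraps the calls in a `Sync` block to provide the fiber reactor (as the gem's earlier async examples did); it is not taken from the gem's own specs.

```ruby
require "plumbing"
require "async"

Plumbing.config mode: :async

Sync do
  @person = Employee.start "Alice"

  # the message is wrapped in a fiber task via the semaphore above;
  # the caller receives a result object rather than the value itself
  greeting = @person.greet_slowly

  # #value waits for the task to finish, up to the configured timeout
  greeting.value
  # => "H E L L O"
end
```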
data/lib/plumbing/actor/inline.rb
ADDED
```ruby
module Plumbing
  module Actor
    class Inline
      def initialize target
        @target = target
      end

      # Send the message to the target and wrap the result
      def send_message(message_name, *, &)
        value = @target.send(message_name, *, &)
        Result.new(value)
      rescue => ex
        Result.new(ex)
      end

      def safely(&)
        send_message(:perform_safely, &)
        nil
      end

      def within_actor? = true

      def stop
        # do nothing
      end

      Result = Data.define(:result) do
        def value = result.is_a?(Exception) ? raise(result) : result
      end
      private_constant :Result
    end
  end
end
```
data/lib/plumbing/actor/threaded.rb
ADDED
```ruby
require "concurrent/array"
require "concurrent/mvar"
require "concurrent/immutable_struct"
require "concurrent/promises"
require_relative "transporter"

module Plumbing
  module Actor
    class Threaded
      attr_reader :target

      def initialize target
        @target = target
        @queue = Concurrent::Array.new
        @mutex = Thread::Mutex.new
      end

      # Send the message to the target and wrap the result
      def send_message message_name, *args, &block
        Message.new(@target, message_name, Plumbing::Actor.transporter.marshal(*args), block, Concurrent::MVar.new).tap do |message|
          @mutex.synchronize do
            @queue << message
            send_messages if @queue.any?
          end
        end
      end

      def safely(&)
        send_message(:perform_safely, &)
        nil
      end

      def within_actor? = @mutex.owned?

      def stop
        within_actor? ? @queue.clear : @mutex.synchronize { @queue.clear }
      end

      protected

      def future(&) = Concurrent::Promises.future(&)

      private

      def send_messages
        future do
          @mutex.synchronize do
            message = @queue.shift
            message&.call
          end
        end
      end

      class Message < Concurrent::ImmutableStruct.new(:target, :message_name, :packed_args, :unsafe_block, :result)
        def call
          args = Plumbing::Actor.transporter.unmarshal(*packed_args)
          value = target.send message_name, *args, &unsafe_block

          result.put Plumbing::Actor.transporter.marshal(value)
        rescue => ex
          result.put ex
        end

        def value
          value = Plumbing::Actor.transporter.unmarshal(*result.take(Plumbing.config.timeout)).first
          raise value if value.is_a? Exception
          value
        end
      end
    end

    def self.transporter
      @transporter ||= Plumbing::Actor::Transporter.new
    end
  end
end
```
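
As a point of reference, here is a minimal sketch (not part of the gem) of the `Concurrent::MVar` handoff that `Message#call` and `Message#value` rely on: one thread puts the result, the calling thread takes it, blocking up to a timeout.

```ruby
require "concurrent/mvar"

mvar = Concurrent::MVar.new

producer = Thread.new do
  sleep 0.1
  mvar.put "done"   # hand the value to whoever is waiting
end

mvar.take(5)   # blocks until #put is called, or until 5 seconds have elapsed
# => "done"

producer.join
```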
data/lib/plumbing/actor/transporter.rb
ADDED
```ruby
require "global_id"

module Plumbing
  module Actor
    class Transporter
      def marshal *arguments
        pack_array arguments
      end

      def unmarshal *arguments
        unpack_array arguments
      end

      private

      def pack argument
        case argument
        when GlobalID::Identification then pack_global_id argument
        when Array then pack_array argument
        when Hash then pack_hash argument
        else argument.clone
        end
      end

      def pack_array arguments
        arguments.map { |a| pack a }
      end

      def pack_hash arguments
        arguments.transform_values { |v| pack v }
      end

      def pack_global_id argument
        argument.to_global_id.to_s
      end

      def unpack argument
        case argument
        when String then unpack_string argument
        when Array then unpack_array argument
        when Hash then unpack_hash argument
        else argument
        end
      end

      def unpack_array arguments
        arguments.map { |a| unpack a }
      end

      def unpack_hash arguments
        arguments.to_h do |key, value|
          [key, unpack(value)]
        end
      end

      def unpack_string argument
        argument.start_with?("gid://") ? GlobalID::Locator.locate(argument) : argument
      end
    end
  end
end
```
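
A quick sketch of the round trip using plain Ruby values (illustrative only, not from the gem's specs; it assumes the `global_id` gem is installed, since the file above requires it, and leaves out GlobalID-backed records because those also need a configured `GlobalID::Locator`, as in a Rails app):

```ruby
require "plumbing/actor/transporter"

transporter = Plumbing::Actor::Transporter.new

# strings are cloned; arrays and hashes have their contents packed recursively
packed = transporter.marshal "Alice", [1, 2, 3], {role: "admin"}
# => ["Alice", [1, 2, 3], {role: "admin"}]

# unmarshalling reverses the process; strings are only treated specially
# if they look like a GlobalID ("gid://...")
transporter.unmarshal(*packed)
# => ["Alice", [1, 2, 3], {role: "admin"}]
```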