standard-procedure-plumbing 0.3.3 → 0.4.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: ba7883be2c006839c549d70a37c96dced0cc0e38af1093676a3503bbbd9dcd95
- data.tar.gz: 9ad82790ba2badcc614a795b559347b6413e810e35fe01db572f1a0f2ccbeb49
+ metadata.gz: ace6c01658b88c08c34d48406f983fdad449bc4f122b8dfed026853841b48d35
+ data.tar.gz: 96b4458d0c667e10bb47fb8881de2e21df0ff40243cf07ff30e2e22f29892625
  SHA512:
- metadata.gz: 57f478c6b91598bdc88028ac99bceff692a34d620809ac71b3d89b24930794d8db5d9723b0cde844deaf3b9973bdda3c68ee7c77e1a8c9c824748b5278e5c606
- data.tar.gz: 2dd95476896445746356141cbe5925490d9d7e4b96b9a60aefd0841b8b8851990d235c866bfb4073a8a0b5adb9588af1f88eff43988969ac1a5960ef091483ed
+ metadata.gz: a5ee7af05314dbb45d9f46a302f6102f93237334224f82f1c34cdd5c252829039bf6b4bc18a562fcdb5d76a8ca229a484b0f3d3e123b7e5efd46d302f50c72b4
+ data.tar.gz: 005f6cee1eb899207659955ea5a4179133c65e07db913e8f3f9c730cfca8947684927c9b99b675e9f54dc4ea4820159d709d61b146113a993783cec2a0c5842c
data/README.md CHANGED
@@ -1,16 +1,18 @@
  # Plumbing

+ Actors, Observers and Data Pipelines.
+
  ## Configuration

- The most important configuration setting is the `mode`, which governs how messages are handled by Valves.
+ The most important configuration setting is the `mode`, which governs how background tasks are handled.

- By default it is `:inline`, so every command or query is handled synchronously. This is the ruby behaviour you know and love.
+ By default it is `:inline`, so every command or query is handled synchronously. This is the Ruby behaviour you know and love (although see the section on `await` below).

- If it is set to `:async`, commands and queries will be handled asynchronously using fibers (via the [Async gem](https://socketry.github.io/async/index.html)). Your code should include the "async" gem in its bundle, as Plumbing does not load it by default.
+ `:async` mode handles tasks using fibers (via the [Async gem](https://socketry.github.io/async/index.html)). Your code should include the "async" gem in its bundle, as Plumbing does not load it by default.

- If it is set to `:threaded`, commands and queries will be handled asynchronously by a thread pool (via [Concurrent Ruby](https://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promises.html)), using the default `:io` executor. Your code should include the "concurrent-ruby" gem in its bundle, as Plumbing does not load it by default.
+ `:threaded` mode handles tasks using a thread pool (via [Concurrent Ruby](https://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promises.html)). Your code should include the "concurrent-ruby" gem in its bundle, as Plumbing does not load it by default.

- If you want to use threads in a Rails application, set the mode to `:rails`. This ensures that the work is wrapped in the Rails executor (which prevents multi-threading issues in the framework). At present, the `:io` executor may cause issues as we may exceed the number of database connections in the Rails' connection pool. We will fix this at some point in the future.
+ However, `:threaded` mode is not safe for Ruby on Rails applications. In this case, use `:threaded_rails` mode, which is identical to `:threaded`, except it wraps each task in the Rails executor. This ensures your actors do not interfere with the Rails framework. Note that Concurrent Ruby's default `:io` scheduler will create extra threads at times of high demand, which may put pressure on the ActiveRecord database connection pool. A future version of Plumbing will allow the thread pool to be capped at a maximum number of threads, preventing contention with the connection pool.

  The `timeout` setting is used when performing queries - it defaults to 30s.

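As a rough illustration of the settings above (the `Plumbing.config mode:` call appears later in this README; passing `timeout:` in the same call is an assumption based on the `Plumbing.config.timeout` reader used by the actor classes further down this diff):

```ruby
require "plumbing"

# choose one mode per process: :inline (default), :async, :threaded or :threaded_rails
Plumbing.config mode: :threaded, timeout: 10 # timeout: keyword is assumed; queries default to 30s
```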
 
@@ -154,140 +156,150 @@ You can also verify that the output generated is as expected by defining a `post
  # => ["first", "external", "third"]
  ```

- ## Plumbing::Valve - safe asynchronous objects
+ ## Plumbing::Actor - safe asynchronous objects

- An [actor](https://en.wikipedia.org/wiki/Actor_model) defines the messages an object can receive, similar to a regular object. However, a normal object if accessed concurrently can have data consistency issues and race conditions leading to hard-to-reproduce bugs. Actors, however, ensure that, no matter which thread (or fiber) is sending the message, the internal processing of the message (the method definition) is handled sequentially. This means the internal state of an object is never accessed concurrently, eliminating those issues.
+ An [actor](https://en.wikipedia.org/wiki/Actor_model) defines the messages an object can receive, similar to a regular object.
+ However, in traditional object-oriented programming, a thread of execution moves from one object to another. If there are multiple threads, each object may be accessed concurrently, leading to race conditions, data-integrity problems and very hard-to-track bugs.

- [Plumbing::Valve](/lib/plumbing/valve.rb) ensures that all messages received are channelled into a concurrency-safe queue. This allows you to take an existing class and ensures that messages received via its public API are made concurrency-safe.
+ Actors are different. Conceptually, each actor has its own thread of execution, isolated from every other actor in the system. When one actor sends a message to another actor, the receiver does not execute its method in the caller's thread. Instead, it places the message on a queue and waits until its own thread is free to process the work. If the caller wants the return value from the method, it must wait until the receiver has finished processing.

- Include the Plumbing::Valve module into your class, define the messages the objects can respond to and set the `Plumbing` configuration to set the desired concurrency model. Messages themselves are split into two categories: commands and queries.
+ This means each actor is only ever accessed by a single thread, and the vast majority of concurrency issues are eliminated.

- - Commands have no return value so when the message is sent, the caller does not block, the task is called asynchronously and the caller continues immediately
- - Queries return a value so the caller blocks until the actor has returned a value
- - However, if you call a query and pass `ignore_result: true` then the query will not block, although you will not be able to access the return value - this is for commands that do something and then return a result based on that work (which you may or may not be interested in - see Plumbing::Pipe#add_observer)
- - None of the above applies if the `Plumbing mode` is set to `:inline` (which is the default) - in this case, the actor behaves like normal ruby code
+ [Plumbing::Actor](/lib/plumbing/actor.rb) allows you to define the `async` public interface to your objects. Calling `.start` builds a proxy to the actual instance of your object and ensures that any messages sent are handled in a manner appropriate to the current mode - immediately for inline mode, using fibers for async mode and using threads for threaded and threaded_rails modes.

- Instead of constructing your object with `.new`, use `.start`. This builds a proxy object that wraps the target instance and dispatches messages through a safe mechanism. Only messages that have been defined as part of the valve are available in this proxy - so you don't have to worry about callers bypassing the valve's internal context.
+ When sending messages to an actor, this just works.

- Even when using actors, there is one condition where concurrency may cause issues. If object A makes a query to object B which in turn makes a query back to object A, you will hit a deadlock. This is because A is waiting on the response from B but B is now querying, and waiting for, A. This does not apply to commands because they do not wait for a response. However, when writing queries, be careful who you interact with - the configuration allows you to set a timeout (defaulting to 30s) in case this happens.
+ However, as the caller, you do not have direct access to the return values of the messages that you send. Instead, you must call `#value` - or, alternatively, wrap your call in `await { ... }`. The block form of `await` is added to Ruby's `Kernel`, so it is available everywhere. It is also safe to use with non-actors (in which case it just returns the original value from the block).

- Also be aware that if you use valves in one place, you need to use them everywhere - especially if you're using threads or ractors (coming soon). This is because as the valve sends messages to its collaborators, those calls will be made from within the valve's internal context. If the collaborators are also valves, the subsequent messages will be handled correctly, if not, data consistency bugs could occur.
+ ```ruby
+ @actor = MyActor.start name: "Alice"

- ### Usage
+ @actor.name.value
+ # => "Alice"

- [Defining an actor](/spec/examples/valve_spec.rb)
+ await { @actor.name }
+ # => "Alice"

- ```ruby
- require "plumbing"
+ await { "Bob" }
+ # => "Bob"
+ ```

- class Employee
-   attr_reader :name, :job_title
+ Calling `value` (or wrapping the call in `await`) makes the caller's thread block until the receiver's thread has finished its work and returned a value. If the receiver raises an exception, that exception is re-raised in the calling thread.

-   include Plumbing::Valve
-   query :name, :job_title, :greet_slowly
-   command :promote
+ The actor model does not eliminate every possible concurrency issue. If you use `value` or `await`, it is possible to deadlock yourself.

-   def initialize(name)
-     @name = name
-     @job_title = "Sales assistant"
-   end
+ Actor A, running in Thread 1, sends a message to Actor B and then awaits the result, so Thread 1 is blocked. Actor B, running in Thread 2, starts to work but needs to ask Actor A a question, so it sends a message to Actor A and awaits the result. Thread 2 is now blocked, waiting for Actor A to respond - but Actor A, running in Thread 1, is still blocked, waiting for Actor B to respond.

-   def promote
-     sleep 0.5
-     @job_title = "Sales manager"
-   end
+ This potential deadlock only occurs if you use `value` or `await` and have actors that call back into each other (see the sketch below). If your objects are strictly layered, or you never use `value` or `await` (perhaps instead using a Pipe to observe events), then this particular deadlock should not occur. However, just in case, every call to `value` has a timeout defaulting to 30s.

-   def greet_slowly
-     sleep 0.2
-     "H E L L O"
-   end
- end
- ```
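To make the failure mode concrete, here is a rough sketch of two actors that deadlock by awaiting each other. The class and method names (`Alpha`, `Beta`, `ping`, `pong`, `answer`) are hypothetical, invented purely for illustration:

```ruby
class Alpha
  include Plumbing::Actor
  async :ping, :answer

  private

  # blocks Alpha's thread waiting for Beta to answer
  def ping(beta) = beta.pong(proxy).value
  def answer = 42
end

class Beta
  include Plumbing::Actor
  async :pong

  private

  # blocks Beta's thread waiting for Alpha, which is already waiting for us
  def pong(alpha) = alpha.answer.value
end

@alpha = Alpha.start
@beta = Beta.start
# in :threaded mode the two actors wait on each other until the timeout fires
@alpha.ping(@beta).value
```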
+ ### Inline actors

- [Acting inline](/spec/examples/valve_spec.rb) with no concurrency
+ Even though inline mode is not asynchronous, you must still use `value` or `await` to access the results from another actor. However, as deadlocks are impossible in a single thread, there is no timeout.

- ```ruby
- require "plumbing"
+ ### Async actors

- @person = Employee.start "Alice"
+ Using async mode is probably the easiest way to add concurrency to your application. It uses fibers to provide "concurrency but not parallelism" - that is, execution happens in the background, but your objects and data will never be accessed by two things at exactly the same time. A sketch of this mode follows below.

- puts @person.name
- # => "Alice"
- puts @person.job_title
- # => "Sales assistant"
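A rough sketch of async mode, reusing the `Employee` actor defined in the Usage section below (the `Sync` block comes from the async gem, as in the examples this diff removes):

```ruby
require "plumbing"
require "async"

Plumbing.config mode: :async

Sync do
  @person = Employee.start "Alice"

  # runs in a fiber; await blocks this fiber until the result is ready
  await { @person.greet_slowly }
  # => "H E L L O"
end
```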
+ ### Threaded actors

- @person.promote
- # this will block for 0.5 seconds
- puts @person.job_title
- # => "Sales manager"
+ Using threaded (or threaded_rails) mode gives you concurrency and parallelism. If all your public objects are actors and you are careful about callbacks, then the actor model will keep your code safe. But there are a couple of extra things to consider.

- @person.greet_slowly
- # this will block for 0.2 seconds before returning "H E L L O"
+ Firstly, when you pass parameters or return results between threads, those objects are "transported" across the boundary.
+ Most objects are cloned. Hashes, keyword arguments and arrays have their contents recursively transported, and any object that uses `GlobalID::Identification` (for example, ActiveRecord models) is marshalled into a GlobalID, then unmarshalled back into its original object. This prevents the same object from being amended in both the caller's and the receiver's threads.

- @person.greet_slowly(ignore_result: true)
- # this will block for 0.2 seconds (as the mode is :inline) before returning nil
- ```
+ Secondly, when you pass a block (or Proc parameter) to another actor, the block/proc will be executed in the receiver's thread. This means you must not access any variables that would normally be in scope for your block, whether local variables or instance variables of other objects (see the note below). This is because you would be accessing them from a different thread to the one where they were defined, leading to potential race conditions. And, if you access any actors, you must not use `value` or `await`, or you risk a deadlock. If you are within an actor and need to pass a block or proc parameter, use the `safely` method to ensure that your block is run within the context of the calling actor, not the receiving actor.

- [Using fibers](/spec/examples/valve_spec.rb) with concurrency but no parallelism
+ For example, when defining a custom filter, the filter adds itself as an observer to its source. The source triggers the `received` method on the filter, which will run in the context of the source. So the custom filter uses `safely` to move back into its own context and access its instance variables.

  ```ruby
- require "plumbing"
- require "async"
+ class EveryThirdEvent < Plumbing::CustomFilter
+   def initialize source:
+     super
+     @events = []
+   end

- Plumbing.configure mode: :async
- @person = Employee.start "Alice"
+   def received event
+     safely do
+       @events << event
+       if @events.count >= 3
+         @events.clear
+         self << event
+       end
+     end
+   end
+ end
+ ```

- puts @person.name
- # => "Alice"
- puts @person.job_title
- # => "Sales assistant"
+ (Note: we break that rule in the specs for Pipe objects - we use a block observer that sets the value of a local variable. That is because it is a controlled situation where we know there are only two threads involved and we are explicitly waiting for the second thread to complete. For almost every app that uses actors, there will be multiple threads and it will be impossible to predict the access patterns.)

- @person.promote
- # this will return immediately without blocking
- puts @person.job_title
- # => "Sales manager" (this will block for 0.5s because #job_title query will not start until the #promote command has completed)
+ ### Constructing actors

- @person.greet_slowly
- # this will block for 0.2 seconds before returning "H E L L O"
+ Instead of constructing your object with `.new`, use `.start`. This builds a proxy object that wraps the target instance and dispatches messages through a safe mechanism. Only messages that have been defined as part of the actor are available in this proxy - so you don't have to worry about callers bypassing the actor's internal context.

- @person.greet_slowly(ignore_result: true)
- # this will not block and returns nil
- ```
+ ### Referencing actors
+
+ If you're within a method inside your actor and you want to pass a reference to yourself, instead of using `self`, use `proxy` (which is also aliased as `as_actor` or `async`) - see the sketch at the end of this section.

- [Using threads](/spec/examples/valve_spec.rb) with concurrency and some parallelism
+ Also be aware that if you use actors in one place, you need to use them everywhere - especially if you're using threads. This is because the actor sends messages to its collaborators from within its own internal context. If the collaborators are also actors, the subsequent messages will be handled correctly; if not, data-consistency bugs could occur. This does not mean that every class needs to be an actor - just your "public API" classes, which may be accessed from multiple actors or other threads.
+
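A rough sketch of passing a self-reference from inside an actor, using `proxy` as described above (the `Supervisor` and `Worker` classes and their methods are hypothetical, invented for illustration):

```ruby
class Supervisor
  include Plumbing::Actor
  async :start_work, :work_completed

  private

  def start_work(worker)
    # pass `proxy` rather than `self`, so the worker talks to us through the actor proxy
    worker.run_for proxy
  end

  def work_completed
    @completed = true
  end
end

class Worker
  include Plumbing::Actor
  async :run_for

  private

  # reports back to the supervisor without awaiting a result
  def run_for(supervisor) = supervisor.work_completed
end
```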
+ ### Usage
+
+ [Defining an actor](/spec/examples/actor_spec.rb)

  ```ruby
  require "plumbing"
- require "concurrent"

- Plumbing.configure mode: :threaded
+ class Employee
+   include Plumbing::Actor
+   async :name, :job_title, :greet_slowly, :promote
+   attr_reader :name, :job_title
+
+   def initialize(name)
+     @name = name
+     @job_title = "Sales assistant"
+   end
+
+   private
+
+   def promote
+     sleep 0.5
+     @job_title = "Sales manager"
+   end
+
+   def greet_slowly
+     sleep 0.2
+     "H E L L O"
+   end
+ end
+
  @person = Employee.start "Alice"

- puts @person.name
- # => "Alice"
- puts @person.job_title
- # => "Sales assistant"
+ await { @person.name }
+ # => "Alice"
+ await { @person.job_title }
+ # => "Sales assistant"

- @person.promote
- # this will return immediately without blocking
- puts @person.job_title
- # => "Sales manager" (this will block for 0.5s because #job_title query will not start until the #promote command has completed)
+ # by using `await`, we block until `greet_slowly` has returned a value
+ await { @person.greet_slowly }
+ # => "H E L L O"

+ # this time, we're not awaiting the result, so this will run in the background (unless we're using inline mode)
  @person.greet_slowly
- # this will block for 0.2 seconds before returning "H E L L O"

- @person.greet_slowly(ignore_result: true)
- # this will not block and returns nil
+ # this will also run in the background
+ @person.promote
+ # this will block - it will not return until the previous calls to #greet_slowly and #promote, and this call to #job_title, have completed
+ await { @person.job_title }
+ # => "Sales manager"
  ```

-
  ## Plumbing::Pipe - a composable observer

  [Observers](https://ruby-doc.org/3.3.0/stdlibs/observer/Observable.html) in Ruby are a pattern where objects (observers) register their interest in another object (the observable). This pattern is common throughout programming languages (event listeners in Javascript, the dependency protocol in [Smalltalk](https://en.wikipedia.org/wiki/Smalltalk)).

  [Plumbing::Pipe](lib/plumbing/pipe.rb) makes observers "composable". Instead of just registering for notifications from a single observable, we can build sequences of pipes. These sequences can filter notifications and route them to different listeners, or merge multiple sources into a single stream of notifications.

- Pipes are implemented as valves, meaning that event notifications can be dispatched asynchronously. The observer's callback will be triggered from within the pipe's internal context so you should immediately trigger a command on another valve to maintain safety.
+ Pipes are implemented as actors, meaning that event notifications can be dispatched asynchronously. The observer's callback will be triggered from within the pipe's internal context, so you should immediately trigger a command on another actor to maintain safety.

  ### Usage

@@ -332,12 +344,17 @@ Pipes are implemented as valves, meaning that event notifications can be dispatc
    end

    def received event
-     # store this event into our buffer
-     @events << event
-     # if this is the third event we've received then clear the buffer and broadcast the latest event
-     if @events.count >= 3
-       @events.clear
-       self << event
+     # #received is called in the context of the `source` actor
+     # in order to safely access the `EveryThirdEvent` instance variables
+     # we need to move into the context of our own actor
+     safely do
+       # store this event into our buffer
+       @events << event
+       # if this is the third event we've received then clear the buffer and broadcast the latest event
+       if @events.count >= 3
+         @events.clear
+         self << event
+       end
      end
    end
  end
@@ -374,31 +391,6 @@ Pipes are implemented as valves, meaning that event notifications can be dispatc
  @second_source.notify "two"
  # => "two"
  ```
-
- [Dispatching events asynchronously (using Fibers)](/spec/examples/pipe_spec.rb):
- ```ruby
- require "plumbing"
- require "async"
-
- Plumbing.configure mode: :async
-
- Sync do
-   @first_source = Plumbing::Pipe.start
-   @second_source = Plumbing::Pipe.start
-
-   @junction = Plumbing::Junction.start @first_source, @second_source
-
-   @filter = Plumbing::Filter.start source: @junction do |event|
-     %w[one-one two-two].include? event.type
-   end
-
-   @first_source.notify "one-one"
-   @first_source.notify "one-two"
-   @second_source.notify "two-one"
-   @second_source.notify "two-two"
- end
- ```
-
  ## Plumbing::RubberDuck - duck types and type-casts

  Define an [interface or protocol](https://en.wikipedia.org/wiki/Interface_(object-oriented_programming)) specifying which messages you expect to be able to send.
@@ -488,7 +480,7 @@ Then:
  ```ruby
  require 'plumbing'

- # Set the mode for your Valves and Pipes
+ # Set the mode for your Actors and Pipes
  Plumbing.config mode: :async
  ```

@@ -0,0 +1,49 @@
+ require "async"
+ require "async/semaphore"
+ require "timeout"
+
+ module Plumbing
+   module Actor
+     class Async
+       attr_reader :target
+
+       def initialize target
+         @target = target
+         @queue = []
+         @semaphore = ::Async::Semaphore.new(1)
+       end
+
+       # Send the message to the target and wrap the result
+       def send_message message_name, *args, &block
+         task = @semaphore.async do
+           @target.send message_name, *args, &block
+         end
+         Result.new(task)
+       end
+
+       def safely(&)
+         send_message(:perform_safely, &)
+         nil
+       end
+
+       def within_actor? = true
+
+       def stop
+         # do nothing
+       end
+
+       Result = Data.define(:task) do
+         def value
+           Timeout.timeout(Plumbing::Actor.timeout) do
+             task.wait
+           end
+         end
+       end
+       private_constant :Result
+     end
+
+     def self.timeout
+       Plumbing.config.timeout
+     end
+   end
+ end
@@ -0,0 +1,33 @@
+ module Plumbing
+   module Actor
+     class Inline
+       def initialize target
+         @target = target
+       end
+
+       # Send the message to the target and wrap the result
+       def send_message(message_name, *, &)
+         value = @target.send(message_name, *, &)
+         Result.new(value)
+       rescue => ex
+         Result.new(ex)
+       end
+
+       def safely(&)
+         send_message(:perform_safely, &)
+         nil
+       end
+
+       def within_actor? = true
+
+       def stop
+         # do nothing
+       end
+
+       Result = Data.define(:result) do
+         def value = result.is_a?(Exception) ? raise(result) : result
+       end
+       private_constant :Result
+     end
+   end
+ end
@@ -0,0 +1,10 @@
+ module Plumbing
+   module Actor
+     ::Kernel.class_eval do
+       def await &block
+         result = block.call
+         result.respond_to?(:value) ? result.send(:value) : result
+       end
+     end
+   end
+ end
@@ -1,7 +1,7 @@
  require_relative "threaded"

  module Plumbing
-   module Valve
+   module Actor
      class Rails < Threaded
        protected

@@ -0,0 +1,76 @@
+ require "concurrent/array"
+ require "concurrent/mvar"
+ require "concurrent/immutable_struct"
+ require "concurrent/promises"
+ require_relative "transporter"
+
+ module Plumbing
+   module Actor
+     class Threaded
+       attr_reader :target
+
+       def initialize target
+         @target = target
+         @queue = Concurrent::Array.new
+         @mutex = Thread::Mutex.new
+       end
+
+       # Send the message to the target and wrap the result
+       def send_message message_name, *args, &block
+         Message.new(@target, message_name, Plumbing::Actor.transporter.marshal(*args), block, Concurrent::MVar.new).tap do |message|
+           @mutex.synchronize do
+             @queue << message
+             send_messages if @queue.any?
+           end
+         end
+       end
+
+       def safely(&)
+         send_message(:perform_safely, &)
+         nil
+       end
+
+       def within_actor? = @mutex.owned?
+
+       def stop
+         within_actor? ? @queue.clear : @mutex.synchronize { @queue.clear }
+       end
+
+       protected
+
+       def future(&) = Concurrent::Promises.future(&)
+
+       private
+
+       def send_messages
+         future do
+           @mutex.synchronize do
+             message = @queue.shift
+             message&.call
+           end
+         end
+       end
+
+       class Message < Concurrent::ImmutableStruct.new(:target, :message_name, :packed_args, :unsafe_block, :result)
+         def call
+           args = Plumbing::Actor.transporter.unmarshal(*packed_args)
+           value = target.send message_name, *args, &unsafe_block
+
+           result.put Plumbing::Actor.transporter.marshal(value)
+         rescue => ex
+           result.put ex
+         end
+
+         def value
+           value = Plumbing::Actor.transporter.unmarshal(*result.take(Plumbing.config.timeout)).first
+           raise value if value.is_a? Exception
+           value
+         end
+       end
+     end
+
+     def self.transporter
+       @transporter ||= Plumbing::Actor::Transporter.new
+     end
+   end
+ end
@@ -0,0 +1,61 @@
+ require "global_id"
+
+ module Plumbing
+   module Actor
+     class Transporter
+       def marshal *arguments
+         pack_array arguments
+       end
+
+       def unmarshal *arguments
+         unpack_array arguments
+       end
+
+       private
+
+       def pack argument
+         case argument
+         when GlobalID::Identification then pack_global_id argument
+         when Array then pack_array argument
+         when Hash then pack_hash argument
+         else argument.clone
+         end
+       end
+
+       def pack_array arguments
+         arguments.map { |a| pack a }
+       end
+
+       def pack_hash arguments
+         arguments.transform_values { |v| pack v }
+       end
+
+       def pack_global_id argument
+         argument.to_global_id.to_s
+       end
+
+       def unpack argument
+         case argument
+         when String then unpack_string argument
+         when Array then unpack_array argument
+         when Hash then unpack_hash argument
+         else argument
+         end
+       end
+
+       def unpack_array arguments
+         arguments.map { |a| unpack a }
+       end
+
+       def unpack_hash arguments
+         arguments.to_h do |key, value|
+           [key, unpack(value)]
+         end
+       end
+
+       def unpack_string argument
+         argument.start_with?("gid://") ? GlobalID::Locator.locate(argument) : argument
+       end
+     end
+   end
+ end
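A rough sketch of how this Transporter behaves when moving arguments between threads, following the `pack`/`unpack` rules above (the `Person` ActiveRecord model and the "my-app" GlobalID app name are hypothetical):

```ruby
transporter = Plumbing::Actor::Transporter.new

# plain objects and collections are cloned recursively
packed = transporter.marshal "Alice", {role: "admin"}, [1, 2, 3]
transporter.unmarshal(*packed)
# => ["Alice", {role: "admin"}, [1, 2, 3]]

# anything that includes GlobalID::Identification (e.g. a hypothetical Person
# ActiveRecord model) is packed as a "gid://..." string, then located again on unmarshal
packed = transporter.marshal Person.find(1)
# => ["gid://my-app/Person/1"]
transporter.unmarshal(*packed)
# => [#<Person id: 1>]
```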