concurrent-ruby-edge 0.4.1 → 0.5.0
- checksums.yaml +4 -4
- data/CHANGELOG.md +26 -0
- data/README.md +36 -4
- data/lib-edge/concurrent-edge.rb +4 -0
- data/lib-edge/concurrent/actor/reference.rb +3 -0
- data/lib-edge/concurrent/edge/cancellation.rb +78 -112
- data/lib-edge/concurrent/edge/channel.rb +450 -0
- data/lib-edge/concurrent/edge/erlang_actor.rb +1545 -0
- data/lib-edge/concurrent/edge/processing_actor.rb +83 -64
- data/lib-edge/concurrent/edge/promises.rb +80 -110
- data/lib-edge/concurrent/edge/throttle.rb +167 -141
- data/lib-edge/concurrent/edge/version.rb +3 -0
- metadata +8 -5
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: e86341312f902de51400e475c3f05e35592c6c8178974e6a1bb58b16e91230ac
+  data.tar.gz: dafef294e63a9fa19760978ef848b60bb6ec2b2e6db0cb948a893d2d379d2eeb
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 6251909073feaacc9bf1dbaacf192a1fa71cf264ed9a0cca60a7039151b27458252ccc857377761d4c2a104610e6f978d983f75bd474eebbb3f5b1e3d68ecb56
+  data.tar.gz: 7e201108e4e985a4afa99e8b6ab280420c20b0e07e1dc83988555c11f6fd52383068f59af935000fccaa41a01717251ba6827cfe48c4404c35faf29791b845c0
data/CHANGELOG.md
CHANGED
@@ -1,5 +1,31 @@
 ## Current
 
+## Release v1.1.5, edge v0.5.0 (10 mar 2019)
+
+concurrent-ruby:
+
+* fix potential leak of context on JRuby and Java 7
+
+concurrent-ruby-edge:
+
+* Add finalized Concurrent::Cancellation
+* Add finalized Concurrent::Throttle
+* Add finalized Concurrent::Promises::Channel
+* Add new Concurrent::ErlangActor
+
+## Release v1.1.4 (14 Dec 2018)
+
+* (#780) Remove java_alias of 'submit' method of Runnable to let executor service work on java 11
+* (#776) Fix NameError on defining a struct with a name which is already taken in an ancestor
+
+## Release v1.1.3 (7 Nov 2018)
+
+* (#775) fix partial require of the gem (although not officially supported)
+
+## Release v1.1.2 (6 Nov 2018)
+
+* (#773) more defensive 1.9.3 support
+
 ## Release v1.1.1, edge v0.4.1 (1 Nov 2018)
 
 * (#768) add support for 1.9.3 back
data/README.md
CHANGED
@@ -42,8 +42,8 @@ appreciate your help. Would you like to contribute? Great! Have a look at
 ## Thread Safety
 
 *Concurrent Ruby makes one of the strongest thread safety guarantees of any Ruby concurrency
-library, providing consistent behavior and guarantees on all
-(MRI/CRuby, JRuby,
+library, providing consistent behavior and guarantees on all four of the main Ruby interpreters
+(MRI/CRuby, JRuby, Rubinius, TruffleRuby).*
 
 Every abstraction in this library is thread safe. Specific thread safety guarantees are documented
 with each abstraction.
@@ -59,7 +59,7 @@ Concurrent Ruby is also the only Ruby library which provides a full suite of thr
 immutable variable types and data structures.
 
 We've also initiated discussion to document [memory model](docs-source/synchronization.md) of Ruby which
-would provide consistent behaviour and guarantees on all
+would provide consistent behaviour and guarantees on all four of the main Ruby interpreters
 (MRI/CRuby, JRuby, Rubinius, TruffleRuby).
 
 ## Features & Documentation
@@ -224,6 +224,35 @@ be obeyed though. Features developed in `concurrent-ruby-edge` are expected to m
   *Status: will be moved to core soon.*
 * [LockFreeStack](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/LockFreeStack.html)
   *Status: missing documentation and tests.*
+* [Promises::Channel](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Promises/Channel.html)
+  A first in, first out channel that accepts messages with the push family of methods and returns
+  messages with the pop family of methods.
+  Pop and push operations can be represented as futures, see `#pop_op` and `#push_op`.
+  The capacity of the channel can be limited to support back pressure, use the capacity option in `#initialize`.
+  The `#pop` method blocks and `#pop_op` returns a pending future if there is no message in the channel.
+  If the capacity is limited the `#push` method blocks and `#push_op` returns a pending future.
+* [Cancellation](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Cancellation.html)
+  The Cancellation abstraction provides cooperative cancellation.
+
+  The standard methods `Thread#raise` or `Thread#kill` available in Ruby
+  are very dangerous (see the linked blog posts below).
+  Therefore concurrent-ruby provides an alternative.
+
+  * <https://jvns.ca/blog/2015/11/27/why-rubys-timeout-is-dangerous-and-thread-dot-raise-is-terrifying/>
+  * <http://www.mikeperham.com/2015/05/08/timeout-rubys-most-dangerous-api/>
+  * <http://blog.headius.com/2008/02/rubys-threadraise-threadkill-timeoutrb.html>
+
+  It provides an object which represents a task which can be executed;
+  the task has to get a reference to the object and periodically, cooperatively, check that it is not cancelled.
+  Good practices to make tasks cancellable:
+  * check cancellation every cycle of a loop which does significant work,
+  * do all blocking actions in a loop with a timeout, then on timeout check cancellation
+    and if ok block again with the timeout.
+* [Throttle](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/Throttle.html)
+  A tool for managing the concurrency level of tasks.
+* [ErlangActor](http://ruby-concurrency.github.io/concurrent-ruby/master/Concurrent/ErlangActor.html)
+  An actor implementation which precisely matches Erlang actor behaviour.
+  Requires at least Ruby 2.1, otherwise it is not loaded.
 
 ## Supported Ruby versions
 
@@ -339,11 +368,14 @@ and to the past maintainers
 * [Paweł Obrok](https://github.com/obrok)
 * [Lucas Allan](https://github.com/lucasallan)
 
+and to [Ruby Association](https://www.ruby.or.jp/en/) for sponsoring a project
+["Enhancing Ruby’s concurrency tooling"](https://www.ruby.or.jp/en/news/20181106) in 2018.
+
 ## License and Copyright
 
 *Concurrent Ruby* is free software released under the
 [MIT License](http://www.opensource.org/licenses/MIT).
 
-The *Concurrent Ruby* [logo](https://
+The *Concurrent Ruby* [logo](https://raw.githubusercontent.com/ruby-concurrency/concurrent-ruby/master/docs-source/logo/concurrent-ruby-logo-300x300.png) was
 designed by [David Jones](https://twitter.com/zombyboy). It is Copyright © 2014
 [Jerry D'Antonio](https://twitter.com/jerrydantonio). All Rights Reserved.
data/lib-edge/concurrent-edge.rb
CHANGED
@@ -1,5 +1,7 @@
 require 'concurrent'
 
+require 'concurrent/edge/version'
+
 require 'concurrent/actor'
 require 'concurrent/agent'
 require 'concurrent/channel'
@@ -10,5 +12,7 @@ require 'concurrent/edge/lock_free_queue'
 
 require 'concurrent/edge/cancellation'
 require 'concurrent/edge/throttle'
+require 'concurrent/edge/channel'
 
 require 'concurrent/edge/processing_actor'
+require 'concurrent/edge/erlang_actor' if Concurrent.ruby_version :>=, 2, 1, 0
data/lib-edge/concurrent/edge/cancellation.rb
CHANGED
@@ -1,139 +1,105 @@
+require 'concurrent/concern/deprecation'
+
 module Concurrent
 
-  #
-  #
+  # TODO (pitr-ch 27-Mar-2016): cooperation with mutex, condition, select etc?
+  # TODO (pitr-ch 10-Dec-2018): integrate with enumerator?
+  #   token.cancelable(array.each_with_index).each do |v, i|
+  #     # stops iterating when cancelled
+  #   end
+  #   token.cancelable(array).each_with_index do |v, i|
+  #     # stops iterating when cancelled
+  #   end
+
+  # The Cancellation abstraction provides cooperative cancellation.
+  #
+  # The standard methods `Thread#raise` or `Thread#kill` available in Ruby
+  # are very dangerous (see the linked blog posts below).
+  # Therefore concurrent-ruby provides an alternative.
+  # * <https://jvns.ca/blog/2015/11/27/why-rubys-timeout-is-dangerous-and-thread-dot-raise-is-terrifying/>
+  # * <http://www.mikeperham.com/2015/05/08/timeout-rubys-most-dangerous-api/>
+  # * <http://blog.headius.com/2008/02/rubys-threadraise-threadkill-timeoutrb.html>
+  #
+  # It provides an object which represents a task which can be executed,
+  # the task has to get the reference to the object and periodically cooperatively check that it is not cancelled.
+  # Good practices to make tasks cancellable:
+  # * check cancellation every cycle of a loop which does significant work,
+  # * do all blocking actions in a loop with a timeout then on timeout check cancellation
+  #   and if ok block again with the timeout
   #
-  #
-  #   # Create new cancellation. `cancellation` is used for cancelling, `token` is passed down to
-  #   # tasks for cooperative cancellation
-  #   cancellation, token = Concurrent::Cancellation.create
-  #   Thread.new(token) do |token|
-  #     # Count 1+1 (simulating some other meaningful work) repeatedly
-  #     # until the token is cancelled through cancellation.
-  #     token.loop_until_canceled { 1+1 }
-  #   end
-  #   sleep 0.1
-  #   cancellation.cancel # Stop the thread by cancelling
+  # The idea was inspired by <https://msdn.microsoft.com/en-us/library/dd537607(v=vs.110).aspx>
   # @!macro warn.edge
+  #
+  # {include:file:docs-source/cancellation.out.md}
   class Cancellation < Synchronization::Object
     safe_initialization!
 
-    #
-    #
-    #
-    # @
-
-
-    #   cancellation, token = Concurrent::Cancellation.create
-    # @return [Array(Cancellation, Cancellation::Token)]
-    def self.create(resolvable_future_or_event = Promises.resolvable_event, *resolve_args)
-      cancellation = new(resolvable_future_or_event, *resolve_args)
-      [cancellation, cancellation.token]
+    # Create Cancellation which will cancel itself in given time
+    #
+    # @!macro promises.param.intended_time
+    # @return [Cancellation]
+    def self.timeout(intended_time)
+      new Concurrent::Promises.schedule(intended_time)
     end
 
-
+    # Creates the cancellation object.
+    #
+    # @param [Promises::Future, Promises::Event] origin of the cancellation.
+    #   When it is resolved the cancellation is canceled.
+    # @example
+    #   cancellation, origin = Concurrent::Cancellation.new
+    # @see #to_ary
+    def initialize(origin = Promises.resolvable_event)
+      super()
+      @Origin = origin
+    end
 
-    #
-    # @return [
-
-
+    # Allows multi-assignment of the Cancellation object
+    # @return [Array(Cancellation, Promises::Future), Array(Cancellation, Promises::Event)]
+    # @example
+    #   cancellation = Concurrent::Cancellation.new
+    #   cancellation, origin = Concurrent::Cancellation.new
+    def to_ary
+      [self, @Origin]
     end
 
-    #
-    # @return [
-
-
-      !!@Cancel.resolve(*@ResolveArgs, raise_on_repeated_call)
+    # The event or future which is the origin of the cancellation
+    # @return [Promises::Future, Promises::Event]
+    def origin
+      @Origin
     end
 
     # Is the cancellation cancelled?
+    # Respectively, was the origin of the cancellation resolved.
     # @return [true, false]
     def canceled?
-      @
+      @Origin.resolved?
     end
 
-    #
-    # @
-
-
+    # Raise error when cancelled
+    # @param [#exception] error to be raised
+    # @raise the error
+    # @return [self]
+    def check!(error = CancelledOperationError)
+      raise error if canceled?
+      self
     end
 
-
-
-
-
-
-
-      @
-      @Token = Token.new @Cancel.with_hidden_resolvable
-      @ResolveArgs = resolve_args
+    # Creates a new Cancellation which is cancelled when the first
+    # of the supplied cancellations or self is cancelled.
+    #
+    # @param [Cancellation] cancellations to combine
+    # @return [Cancellation] new cancellation
+    def join(*cancellations)
+      Cancellation.new Promises.any_event(*[@Origin, *cancellations.map(&:origin)])
     end
 
-    #
-
-
-
-    # @return [Event] Event which will be resolved when the token is cancelled.
-    def to_event
-      @Cancel.to_event
-    end
-
-    # @return [Future] Future which will be resolved when the token is cancelled with arguments passed in
-    #   {Cancellation.create} .
-    def to_future
-      @Cancel.to_future
-    end
-
-    # Is the token cancelled?
-    # @return [true, false]
-    def canceled?
-      @Cancel.resolved?
-    end
-
-    # Repeatedly evaluates block until the token is {#canceled?}.
-    # @yield to the block repeatedly.
-    # @yieldreturn [Object]
-    # @return [Object] last result of the block
-    def loop_until_canceled(&block)
-      until canceled?
-        result = block.call
-      end
-      result
-    end
-
-    # Raise error when cancelled
-    # @param [#exception] error to be risen
-    # @raise the error
-    # @return [self]
-    def raise_if_canceled(error = CancelledOperationError)
-      raise error if canceled?
-      self
-    end
-
-    # Creates a new token which is cancelled when any of the tokens is.
-    # @param [Token] tokens to combine
-    # @return [Token] new token
-    def join(*tokens, &block)
-      block ||= -> token_list { Promises.any_event(*token_list.map(&:to_event)) }
-      self.class.new block.call([@Cancel, *tokens])
-    end
-
-    # Short string representation.
-    # @return [String]
-    def to_s
-      format '%s canceled:%s>', super[0..-2], canceled?
-    end
-
-    alias_method :inspect, :to_s
-
-    private
-
-    def initialize(cancel)
-      @Cancel = cancel
-    end
+    # Short string representation.
+    # @return [String]
+    def to_s
+      format '%s %s>', super[0..-2], canceled? ? 'canceled' : 'pending'
     end
 
-
-    # TODO (pitr-ch 27-Mar-2016): examples (scheduled to be cancelled in 10 sec)
+    alias_method :inspect, :to_s
   end
 end
data/lib-edge/concurrent/edge/channel.rb
ADDED
@@ -0,0 +1,450 @@
+# @!macro warn.edge
+module Concurrent
+  module Promises
+
+    # A first in first out channel that accepts messages with push family of methods and returns
+    # messages with pop family of methods.
+    # Pop and push operations can be represented as futures, see {#pop_op} and {#push_op}.
+    # The capacity of the channel can be limited to support back pressure, use capacity option in {#initialize}.
+    # {#pop} method blocks and {#pop_op} returns pending future if there is no message in the channel.
+    # If the capacity is limited the {#push} method blocks and {#push_op} returns pending future.
+    #
+    # {include:file:docs-source/channel.out.md}
+    # @!macro warn.edge
+    class Channel < Concurrent::Synchronization::Object
+
+      # TODO (pitr-ch 06-Jan-2019): rename to Conduit?, to be able to place it into Concurrent namespace?
+      # TODO (pitr-ch 14-Jan-2019): better documentation, do few examples from go
+      # TODO (pitr-ch 12-Dec-2018): implement channel closing,
+      #   - as a child class? To also have a channel which cannot be closed.
+      # TODO (pitr-ch 26-Dec-2016): replace with lock-free implementation, at least getting a message when available should be lock free same goes for push with space available
+
+      # @!macro channel.warn.blocks
+      #   @note This function potentially blocks current thread until it can continue.
+      #     Be careful it can deadlock.
+      #
+      # @!macro channel.param.timeout
+      #   @param [Numeric] timeout the maximum time in second to wait.
+
+      safe_initialization!
+
+      # Default capacity of the Channel, makes it accept unlimited number of messages.
+      UNLIMITED_CAPACITY = ::Object.new
+      UNLIMITED_CAPACITY.singleton_class.class_eval do
+        include Comparable
+
+        def <=>(other)
+          1
+        end
+
+        def to_s
+          'unlimited'
+        end
+      end
+
+      NOTHING = Object.new
+      private_constant :NOTHING
+
+      # An object which matches anything (with #===)
+      ANY = Object.new.tap do |any|
+        def any.===(other)
+          true
+        end
+
+        def any.to_s
+          'ANY'
+        end
+      end
+
+      # Create channel.
+      # @param [Integer, UNLIMITED_CAPACITY] capacity the maximum number of messages which can be stored in the channel.
+      def initialize(capacity = UNLIMITED_CAPACITY)
+        super()
+        @Capacity = capacity
+        @Mutex = Mutex.new
+        # TODO (pitr-ch 28-Jan-2019): consider linked lists or other data structures for following attributes, things are being deleted from the middle
+        @Probes = []
+        @Messages = []
+        @PendingPush = []
+      end
+
+      # Push the message into the channel if there is space available.
+      # @param [Object] message
+      # @return [true, false]
+      def try_push(message)
+        @Mutex.synchronize { ns_try_push(message) }
+      end
+
+      # Returns future which will fulfill when the message is pushed to the channel.
+      # @!macro chanel.operation_wait
+      #   If it is later waited on the operation with a timeout e.g. `channel.pop_op.wait(1)`
+      #   it will not prevent the channel to fulfill the operation later after the timeout.
+      #   The operation has to be either processed later
+      #   ```ruby
+      #   pop_op = channel.pop_op
+      #   if pop_op.wait(1)
+      #     process_message pop_op.value
+      #   else
+      #     pop_op.then { |message| log_unprocessed_message message }
+      #   end
+      #   ```
+      #   or the operation can be prevented from completion after timing out by using
+      #   `channel.pop_op.wait(1, [true, nil, nil])`.
+      #   It will fulfill the operation on timeout preventing channel from doing the operation,
+      #   e.g. popping a message.
+      #
+      # @param [Object] message
+      # @return [ResolvableFuture(self)]
+      def push_op(message)
+        @Mutex.synchronize do
+          if ns_try_push(message)
+            Promises.fulfilled_future self
+          else
+            pushed = Promises.resolvable_future
+            @PendingPush.push message, pushed
+            return pushed
+          end
+        end
+      end
+
+      # Blocks current thread until the message is pushed into the channel.
+      #
+      # @!macro channel.warn.blocks
+      # @param [Object] message
+      # @!macro channel.param.timeout
+      # @return [self, true, false] self implies timeout was not used, true implies timeout was used
+      #   and it was pushed, false implies it was not pushed within timeout.
+      def push(message, timeout = nil)
+        pushed_op = @Mutex.synchronize do
+          return timeout ? true : self if ns_try_push(message)
+
+          pushed = Promises.resolvable_future
+          # TODO (pitr-ch 06-Jan-2019): clear timed out pushes in @PendingPush, null messages
+          @PendingPush.push message, pushed
+          pushed
+        end
+
+        result = pushed_op.wait!(timeout, [true, self, nil])
+        result == pushed_op ? self : result
+      end
+
+      # @!macro promises.channel.try_pop
+      #   Pop a message from the channel if there is one available.
+      #   @param [Object] no_value returned when there is no message available
+      #   @return [Object, no_value] message or nil when there is no message
+      def try_pop(no_value = nil)
+        try_pop_matching ANY, no_value
+      end
+
+      # @!macro promises.channel.try_pop
+      # @!macro promises.channel.param.matcher
+      #   @param [#===] matcher only consider message which matches `matcher === a_message`
+      def try_pop_matching(matcher, no_value = nil)
+        @Mutex.synchronize do
+          message = ns_shift_message matcher
+          return message if message != NOTHING
+          message = ns_consume_pending_push matcher
+          return message != NOTHING ? message : no_value
+        end
+      end
+
+      # @!macro promises.channel.pop_op
+      #   Returns a future which will become fulfilled with a value from the channel when one is available.
+      #   @!macro chanel.operation_wait
+      #
+      #   @param [ResolvableFuture] probe the future which will be fulfilled with a channel value
+      #   @return [Future(Object)] the probe, its value will be the message when available.
+      def pop_op(probe = Promises.resolvable_future)
+        @Mutex.synchronize { ns_pop_op(ANY, probe, false) }
+      end
+
+      # @!macro promises.channel.pop_op
+      # @!macro promises.channel.param.matcher
+      def pop_op_matching(matcher, probe = Promises.resolvable_future)
+        @Mutex.synchronize { ns_pop_op(matcher, probe, false) }
+      end
+
+      # @!macro promises.channel.pop
+      #   Blocks current thread until a message is available in the channel for popping.
+      #
+      #   @!macro channel.warn.blocks
+      #   @!macro channel.param.timeout
+      #   @param [Object] timeout_value a value returned by the method when it times out
+      #   @return [Object, nil] message or nil when timed out
+      def pop(timeout = nil, timeout_value = nil)
+        pop_matching ANY, timeout, timeout_value
+      end
+
+      # @!macro promises.channel.pop
+      # @!macro promises.channel.param.matcher
+      def pop_matching(matcher, timeout = nil, timeout_value = nil)
+        # TODO (pitr-ch 27-Jan-2019): should it try to match pending pushes if it fails to match in the buffer? Maybe only if the size is zero. It could be surprising if it's used as a throttle it might be expected that it will not pop if buffer is full of messages which di not match, it might it expected it will block until the message is added to the buffer
+        #   that it returns even if the buffer is full. User might expect that it has to be in the buffer first.
+        probe = @Mutex.synchronize do
+          message = ns_shift_message matcher
+          if message == NOTHING
+            message = ns_consume_pending_push matcher
+            return message if message != NOTHING
+          else
+            new_message = ns_consume_pending_push ANY
+            @Messages.push new_message unless new_message == NOTHING
+            return message
+          end
+
+          probe = Promises.resolvable_future
+          @Probes.push probe, false, matcher
+          probe
+        end
+
+        probe.value!(timeout, timeout_value, [true, timeout_value, nil])
+      end
+
+      # @!macro promises.channel.peek
+      #   Behaves as {#try_pop} but it does not remove the message from the channel
+      #   @param [Object] no_value returned when there is no message available
+      #   @return [Object, no_value] message or nil when there is no message
+      def peek(no_value = nil)
+        peek_matching ANY, no_value
+      end
+
+      # @!macro promises.channel.peek
+      # @!macro promises.channel.param.matcher
+      def peek_matching(matcher, no_value = nil)
+        @Mutex.synchronize do
+          message = ns_shift_message matcher, false
+          return message if message != NOTHING
+          message = ns_consume_pending_push matcher, false
+          return message != NOTHING ? message : no_value
+        end
+      end
+
+      # @!macro promises.channel.try_select
+      #   If message is available in the receiver or any of the provided channels
+      #   the channel message pair is returned. If there is no message nil is returned.
+      #   The returned channel is the origin of the message.
+      #
+      #   @param [Channel, ::Array<Channel>] channels
+      #   @return [::Array(Channel, Object), nil]
+      #     pair [channel, message] if one of the channels is available for reading
+      def try_select(channels)
+        try_select_matching ANY, channels
+      end
+
+      # @!macro promises.channel.try_select
+      # @!macro promises.channel.param.matcher
+      def try_select_matching(matcher, channels)
+        message = nil
+        channel = [self, *channels].find do |ch|
+          message = ch.try_pop_matching(matcher, NOTHING)
+          message != NOTHING
+        end
+        channel ? [channel, message] : nil
+      end
+
+      # @!macro promises.channel.select_op
+      #   When message is available in the receiver or any of the provided channels
+      #   the future is fulfilled with a channel message pair.
+      #   The returned channel is the origin of the message.
+      #   @!macro chanel.operation_wait
+      #
+      #   @param [Channel, ::Array<Channel>] channels
+      #   @param [ResolvableFuture] probe the future which will be fulfilled with the message
+      #   @return [ResolvableFuture(::Array(Channel, Object))] a future which is fulfilled with
+      #     pair [channel, message] when one of the channels is available for reading
+      def select_op(channels, probe = Promises.resolvable_future)
+        select_op_matching ANY, channels, probe
+      end
+
+      # @!macro promises.channel.select_op
+      # @!macro promises.channel.param.matcher
+      def select_op_matching(matcher, channels, probe = Promises.resolvable_future)
+        [self, *channels].each { |ch| ch.partial_select_op matcher, probe }
+        probe
+      end
+
+      # @!macro promises.channel.select
+      #   As {#select_op} but does not return future,
+      #   it blocks the current thread instead until there is a message available
+      #   in the receiver or in any of the channels.
+      #
+      #   @!macro channel.warn.blocks
+      #   @param [Channel, ::Array<Channel>] channels
+      #   @!macro channel.param.timeout
+      #   @return [::Array(Channel, Object), nil] message or nil when timed out
+      #   @see #select_op
+      def select(channels, timeout = nil)
+        select_matching ANY, channels, timeout
+      end
+
+      # @!macro promises.channel.select
+      # @!macro promises.channel.param.matcher
+      def select_matching(matcher, channels, timeout = nil)
+        probe = select_op_matching(matcher, channels)
+        probe.value!(timeout, nil, [true, nil, nil])
+      end
+
+      # @return [Integer] The number of messages currently stored in the channel.
+      def size
+        @Mutex.synchronize { @Messages.size }
+      end
+
+      # @return [Integer] Maximum capacity of the Channel.
+      def capacity
+        @Capacity
+      end
+
+      # @return [String] Short string representation.
+      def to_s
+        format '%s capacity taken %s of %s>', super[0..-2], size, @Capacity
+      end
+
+      alias_method :inspect, :to_s
+
+      class << self
+
+        # @see #try_select
+        # @return [::Array(Channel, Object)]
+        def try_select(channels)
+          channels.first.try_select(channels[1..-1])
+        end
+
+        # @see #select_op
+        # @return [Future(::Array(Channel, Object))]
+        def select_op(channels, probe = Promises.resolvable_future)
+          channels.first.select_op(channels[1..-1], probe)
+        end
+
+        # @see #select
+        # @return [::Array(Channel, Object), nil]
+        def select(channels, timeout = nil)
+          channels.first.select(channels[1..-1], timeout)
+        end
+
+        # @see #try_select_matching
+        # @return [::Array(Channel, Object)]
+        def try_select_matching(matcher, channels)
+          channels.first.try_select_matching(matcher, channels[1..-1])
+        end
+
+        # @see #select_op_matching
+        # @return [Future(::Array(Channel, Object))]
+        def select_op_matching(matcher, channels, probe = Promises.resolvable_future)
+          channels.first.select_op_matching(matcher, channels[1..-1], probe)
+        end
+
+        # @see #select_matching
+        # @return [::Array(Channel, Object), nil]
+        def select_matching(matcher, channels, timeout = nil)
+          channels.first.select_matching(matcher, channels[1..-1], timeout)
+        end
+      end
+
+      # @!visibility private
+      def partial_select_op(matcher, probe)
+        @Mutex.synchronize { ns_pop_op(matcher, probe, true) }
+      end
+
+      private
+
+      def ns_pop_op(matcher, probe, include_channel)
+        message = ns_shift_message matcher
+
+        # got message from buffer
+        if message != NOTHING
+          if probe.fulfill(include_channel ? [self, message] : message, false)
+            new_message = ns_consume_pending_push ANY
+            @Messages.push new_message unless new_message == NOTHING
+          else
+            @Messages.unshift message
+          end
+          return probe
+        end
+
+        # no message in buffer, try to pair with a pending push
+        i = 0
+        while true
+          message, pushed = @PendingPush[i, 2]
+          break if pushed.nil?
+
+          if matcher === message
+            value = include_channel ? [self, message] : message
+            if Promises::Resolvable.atomic_resolution(probe => [true, value, nil],
+                                                      pushed => [true, self, nil])
+              @PendingPush[i, 2] = []
+              return probe
+            end
+
+            if probe.resolved?
+              return probe
+            end
+
+            # so pushed.resolved? has to be true, remove the push
+            @PendingPush[i, 2] = []
+          end
+
+          i += 2
+        end
+
+        # no push to pair with
+        # TODO (pitr-ch 11-Jan-2019): clear up probes when timed out, use callback
+        @Probes.push probe, include_channel, matcher if probe.pending?
+        return probe
+      end
+
+      def ns_consume_pending_push(matcher, remove = true)
+        i = 0
+        while true
+          message, pushed = @PendingPush[i, 2]
+          return NOTHING unless pushed
+
+          if matcher === message
+            resolved = pushed.resolved?
+            @PendingPush[i, 2] = [] if remove || resolved
+            # can fail if timed-out, so try without error
+            if remove ? pushed.fulfill(self, false) : !resolved
+              # pushed fulfilled so actually push the message
+              return message
+            end
+          end
+
+          i += 2
+        end
+      end
+
+      def ns_try_push(message)
+        i = 0
+        while true
+          probe, include_channel, matcher = @Probes[i, 3]
+          break unless probe
+          if matcher === message && probe.fulfill(include_channel ? [self, message] : message, false)
+            @Probes[i, 3] = []
+            return true
+          end
+          i += 3
+        end
+
+        if @Capacity > @Messages.size
+          @Messages.push message
+          true
+        else
+          false
+        end
+      end
+
+      def ns_shift_message(matcher, remove = true)
+        i = 0
+        while true
+          message = @Messages.fetch(i, NOTHING)
+          return NOTHING if message == NOTHING
+
+          if matcher === message
+            @Messages.delete_at i if remove
+            return message
+          end
+
+          i += 1
+        end
+      end
+    end
+  end
+end