concurrent-ruby 0.7.1-x64-mingw32 → 0.7.2-x64-mingw32

checksums.yaml CHANGED
@@ -1,15 +1,15 @@
  ---
  !binary "U0hBMQ==":
    metadata.gz: !binary |-
-     N2E3ZjExMmMwZTAzN2ZhYzkyODA3ZWJkODFlMDc2OGQwMmYwMmRhNQ==
+     ZTY4MWIzYzA5NDU1OTJjZTIxNzZlMGNkZjA4NzlmZGFmZDI4NmRkMQ==
    data.tar.gz: !binary |-
-     Y2NkYWRiNjFmOTE0NTJhYWQwZTBhYWQxYjA5MjJjY2Q4ZjIwMjM4ZA==
+     NWRkMGIyM2Q0MTE3MjlmMTc5NTgzOTQ4ZGIxOWFkMTVlZTMwNTE5NA==
  SHA512:
    metadata.gz: !binary |-
-     YzE4N2FiODY5MzlkMTYxZGYyM2Q2YWY5MDJlNmE0MzU2NjZhNjk4Njc4ODhj
-     NjA1ZDc0NGUyMGVkMGVkMWFiOTRhMDRkM2M4YWU3YmNiMzQzYmY0Mzg4MDJj
-     OTAwMjFlZjY5OTA1YWJjMjViODc4YjY4MjdiMGNkMzU1MGFkZWQ=
+     MzA0NGE3MzA5MDRhNDI3MDY5YzVlODU1YzM1ZDA4NDViMzdlMWRiZjliNWJj
+     MTdlNGI5YTYxNTJjYzkyYWUwOWI3MGM5NzI3N2YyOWEyZTY3NzE5MjA2Mjk0
+     MGI2NjI1NTcxNzY1NjA3OTdkMTlmN2U1MTI4Y2E0Y2JkMmJkMDA=
    data.tar.gz: !binary |-
-     YTc3NWJmMGMyMDQyOGYyYjIwYTMxZjQ4ZWJjYTUwNWU1YTZiYTdjZjA1MmUx
-     MmRlY2MwMWEzY2RmMmVlYjM5M2VhM2NjNDA3OGE1MTBjODJlOTExOGYzMzRl
-     ZTdmMTJkMDhlNTE3ZWUzNTU4MDJiOTA3MGM2NmQ2MzJmNTI1YmI=
+     NWYzZjMwNDM3M2EyNGYzYjYzZjIyOTcyMTY1NjQzMGJhM2FkYjI5M2MyOTJl
+     NGY5Zjk1MzA0N2JlNmNlYzIyYWZlMDU1ZWQ1OTI4M2ZhMGQyN2FmMTUwZjAz
+     YmU1ZDBhOThjNDQ3MmJmZWEwZWQwNzI3OTMyN2FhYWE1YzhhMWU=
data/CHANGELOG.md CHANGED
@@ -1,3 +1,20 @@
+ ### Next Release v0.7.2 (24 January 2015)
+
+ * New `Semaphore` class based on [java.util.concurrent.Semaphore](http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Semaphore.html)
+ * New `Promise.all?` and `Promise.any?` class methods
+ * Renamed `:overflow_policy` on thread pools to `:fallback_policy`
+ * Thread pools still accept the `:overflow_policy` option but display a warning
+ * Thread pools now implement `fallback_policy` behavior when not running (rather than universally rejecting tasks)
+ * Fixed minor `set_deref_options` constructor bug in `Promise` class
+ * Fixed minor `require` bug in `ThreadLocalVar` class
+ * Fixed race condition bug in `TimerSet` class
+ * Fixed race condition bug in `TimerSet` class
+ * Fixed signal bug in `TimerSet#post` method
+ * Numerous non-functional updates to clear warning when running in debug mode
+ * Fixed more intermittently failing tests
+ * Tests now run on new Travis build environment
+ * Multiple documentation updates
+
  ## Current Release v0.7.1 (4 December 2014)

  Please see the [roadmap](https://github.com/ruby-concurrency/concurrent-ruby/issues/142) for more information on the next planned release.
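The headline addition in this release is the counting `Semaphore`. As an editorial sketch of the concept (built only on Ruby's stdlib `Mutex` and `ConditionVariable` rather than the gem itself, so the class name here is illustrative):

```ruby
# Minimal counting-semaphore sketch using only the standard library.
# It illustrates the behavior the new Concurrent::Semaphore provides:
# acquire blocks until a permit is free; release returns one.
class TinySemaphore
  def initialize(count)
    @mutex = Mutex.new
    @cond  = ConditionVariable.new
    @free  = count
  end

  # Block until a permit is available, then take it.
  def acquire
    @mutex.synchronize do
      @cond.wait(@mutex) while @free < 1
      @free -= 1
    end
  end

  # Return a permit and wake one waiting thread.
  def release
    @mutex.synchronize do
      @free += 1
      @cond.signal
    end
  end
end

sem = TinySemaphore.new(2)
sem.acquire
sem.acquire                               # both permits taken
t = Thread.new { sem.acquire; :got_it }   # this acquire blocks
sem.release                               # hand a permit back
p t.value  # :got_it
```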
data/README.md CHANGED
@@ -54,19 +54,18 @@ This library contains a variety of concurrency abstractions at high and low leve

  ### High-level, general-purpose asynchronous concurrency abstractions

- * [Actor](./doc/actor/main.md): Implements the Actor Model, where concurrent actors exchange messages.
- * [Agent](./doc/agent.md): A single atomic value that represents an identity.
- * [Async](./doc/async.md): A mixin module that provides simple asynchronous behavior to any standard class/object or object.
- * [Future](./doc/future.md): An asynchronous operation that produces a value.
- * [Dataflow](./doc/dataflow.md): Built on Futures, Dataflow allows you to create a task that will be scheduled when all of its data dependencies are available.
- * [Promise](./doc/promise.md): Similar to Futures, with more features.
- * [ScheduledTask](./doc/scheduled_task.md): Like a Future scheduled for a specific future time.
+ * [Actor](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Actor.html): Implements the Actor Model, where concurrent actors exchange messages.
+ * [Agent](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Agent.html): A single atomic value that represents an identity.
+ * [Async](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Async.html): A mixin module that provides simple asynchronous behavior to any standard class or object.
+ * [Future](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Future.html): An asynchronous operation that produces a value.
+ * [Dataflow](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Dataflow.html): Built on Futures, Dataflow allows you to create a task that will be scheduled when all of its data dependencies are available.
+ * [Promise](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Promise.html): Similar to Futures, with more features.
+ * [ScheduledTask](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/ScheduledTask.html): Like a Future scheduled for a specific future time.
  * [TimerTask](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/TimerTask.html): A Thread that periodically wakes up to perform work at regular intervals.

-
  ### Java-inspired ThreadPools and other executors

- * See [ThreadPool](./doc/thread_pools.md) overview, which also contains a list of other Executors available.
+ * See [ThreadPool](http://ruby-concurrency.github.io/concurrent-ruby/file.thread_pools.html) overview, which also contains a list of other Executors available.

  ### Thread-safe Observers

@@ -75,6 +74,7 @@ This library contains a variety of concurrency abstractions at high and low leve
  * [CopyOnWriteObserverSet](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/CopyOnWriteObserverSet.html)

  ### Thread synchronization classes and algorithms
+
  Lower-level abstractions mainly used as building blocks.

  * [condition](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Condition.html)
@@ -82,10 +82,12 @@ Lower-level abstractions mainly used as building blocks.
  * [cyclic barrier](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/CyclicBarrier.html)
  * [event](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Event.html)
  * [exchanger](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Exchanger.html)
+ * [semaphore](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Semaphore.html)
  * [timeout](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent.html#timeout-class_method)
  * [timer](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent.html#timer-class_method)

  ### Thread-safe variables
+
  Lower-level abstractions mainly used as building blocks.

  * [AtomicBoolean](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/AtomicBoolean.html)
@@ -94,9 +96,7 @@ Lower-level abstractions mainly used as building blocks.
  * [I-Structures](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/IVar.html) (IVar)
  * [M-Structures](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/MVar.html) (MVar)
  * [thread-local variables](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/ThreadLocalVar.html)
- * [software transactional memory](./doc/tvar.md) (TVar)
-
-
+ * [software transactional memory](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/TVar.html) (TVar)

  ## Installing and Building

Binary file
@@ -81,13 +81,13 @@ module Concurrent
      super unless @delegate.respond_to?(method)
      Async::validate_argc(@delegate, method, *args)

-     self.define_singleton_method(method) do |*args|
-       Async::validate_argc(@delegate, method, *args)
+     self.define_singleton_method(method) do |*args2|
+       Async::validate_argc(@delegate, method, *args2)
        ivar = Concurrent::IVar.new
        value, reason = nil, nil
        @serializer.post(@executor.value) do
          begin
-           value = @delegate.send(method, *args, &block)
+           value = @delegate.send(method, *args2, &block)
          rescue => reason
            # caught
          ensure
@@ -25,8 +25,8 @@ module Concurrent
  class MutexAtomicFixnum

    # http://stackoverflow.com/questions/535721/ruby-max-integer
-   MIN_VALUE = -(2**(0.size * 8 -2))
-   MAX_VALUE = (2**(0.size * 8 -2) -1)
+   MIN_VALUE = -(2**(0.size * 8 - 2))
+   MAX_VALUE = (2**(0.size * 8 - 2) - 1)

    # @!macro [attach] atomic_fixnum_method_initialize
    #
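The whitespace fix above touches the native-integer bounds trick: `0.size` is the byte width of a machine word, and two bits are unavailable (the sign bit plus Ruby's immediate-value tag bit). The identity can be checked in plain Ruby:

```ruby
# 0.size is the byte width of a native integer (8 on 64-bit builds).
# One bit goes to the sign and one to Ruby's Fixnum tag,
# leaving 0.size * 8 - 2 usable magnitude bits.
MIN_VALUE = -(2**(0.size * 8 - 2))
MAX_VALUE = (2**(0.size * 8 - 2) - 1)

# Two's-complement ranges are asymmetric by one:
p MAX_VALUE + MIN_VALUE  # -1
```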
@@ -0,0 +1,232 @@
+ require 'concurrent/atomic/condition'
+
+ module Concurrent
+   class MutexSemaphore
+     # @!macro [attach] semaphore_method_initialize
+     #
+     # Create a new `Semaphore` with the initial `count`.
+     #
+     # @param [Fixnum] count the initial count
+     #
+     # @raise [ArgumentError] if `count` is not an integer or is less than zero
+     def initialize(count)
+       unless count.is_a?(Fixnum) && count >= 0
+         fail ArgumentError, 'count must be a non-negative integer'
+       end
+       @mutex = Mutex.new
+       @condition = Condition.new
+       @free = count
+     end
+
+     # @!macro [attach] semaphore_method_acquire
+     #
+     # Acquires the given number of permits from this semaphore,
+     # blocking until all are available.
+     #
+     # @param [Fixnum] permits Number of permits to acquire
+     #
+     # @raise [ArgumentError] if `permits` is not an integer or is less than
+     #   one
+     #
+     # @return [Nil]
+     def acquire(permits = 1)
+       unless permits.is_a?(Fixnum) && permits > 0
+         fail ArgumentError, 'permits must be an integer greater than zero'
+       end
+       @mutex.synchronize do
+         try_acquire_timed(permits, nil)
+         nil
+       end
+     end
+
+     # @!macro [attach] semaphore_method_available_permits
+     #
+     # Returns the current number of permits available in this semaphore.
+     #
+     # @return [Integer]
+     def available_permits
+       @mutex.synchronize { @free }
+     end
+
+     # @!macro [attach] semaphore_method_drain_permits
+     #
+     # Acquires and returns all permits that are immediately available.
+     #
+     # @return [Integer]
+     def drain_permits
+       @mutex.synchronize do
+         @free.tap { |_| @free = 0 }
+       end
+     end
+
+     # @!macro [attach] semaphore_method_try_acquire
+     #
+     # Acquires the given number of permits from this semaphore,
+     # only if all are available at the time of invocation or within
+     # `timeout` interval
+     #
+     # @param [Fixnum] permits the number of permits to acquire
+     #
+     # @param [Fixnum] timeout the number of seconds to wait for the counter
+     #   or `nil` to return immediately
+     #
+     # @raise [ArgumentError] if `permits` is not an integer or is less than
+     #   one
+     #
+     # @return [Boolean] `false` if no permits are available, `true` when
+     #   acquired a permit
+     def try_acquire(permits = 1, timeout = nil)
+       unless permits.is_a?(Fixnum) && permits > 0
+         fail ArgumentError, 'permits must be an integer greater than zero'
+       end
+       @mutex.synchronize do
+         if timeout.nil?
+           try_acquire_now(permits)
+         else
+           try_acquire_timed(permits, timeout)
+         end
+       end
+     end
+
+     # @!macro [attach] semaphore_method_release
+     #
+     # Releases the given number of permits, returning them to the semaphore.
+     #
+     # @param [Fixnum] permits Number of permits to return to the semaphore.
+     #
+     # @raise [ArgumentError] if `permits` is not a number or is less than one
+     #
+     # @return [Nil]
+     def release(permits = 1)
+       unless permits.is_a?(Fixnum) && permits > 0
+         fail ArgumentError, 'permits must be an integer greater than zero'
+       end
+       @mutex.synchronize do
+         @free += permits
+         permits.times { @condition.signal }
+       end
+       nil
+     end
+
+     # @!macro [attach] semaphore_method_reduce_permits
+     #
+     # @api private
+     #
+     # Shrinks the number of available permits by the indicated reduction.
+     #
+     # @param [Fixnum] reduction Number of permits to remove.
+     #
+     # @raise [ArgumentError] if `reduction` is not an integer or is negative
+     #
+     # @raise [ArgumentError] if `@free` - `@reduction` is less than zero
+     #
+     # @return [Nil]
+     def reduce_permits(reduction)
+       unless reduction.is_a?(Fixnum) && reduction >= 0
+         fail ArgumentError, 'reduction must be a non-negative integer'
+       end
+       @mutex.synchronize { @free -= reduction }
+       nil
+     end
+
+     private
+
+     def try_acquire_now(permits)
+       if @free >= permits
+         @free -= permits
+         true
+       else
+         false
+       end
+     end
+
+     def try_acquire_timed(permits, timeout)
+       remaining = Condition::Result.new(timeout)
+       while !try_acquire_now(permits) && remaining.can_wait?
+         @condition.signal
+         remaining = @condition.wait(@mutex, remaining.remaining_time)
+       end
+       remaining.can_wait? ? true : false
+     end
+   end
+
+   if RUBY_PLATFORM == 'java'
+
+     # @!macro semaphore
+     #
+     # A counting semaphore. Conceptually, a semaphore maintains a set of permits. Each {#acquire} blocks if necessary
+     # until a permit is available, and then takes it. Each {#release} adds a permit,
+     # potentially releasing a blocking acquirer.
+     # However, no actual permit objects are used; the Semaphore just keeps a count of the number available and
+     # acts accordingly.
+     class JavaSemaphore
+       # @!macro semaphore_method_initialize
+       def initialize(count)
+         unless count.is_a?(Fixnum) && count >= 0
+           fail(ArgumentError,
+                'count must be an integer greater than or equal to zero')
+         end
+         @semaphore = java.util.concurrent.Semaphore.new(count)
+       end
+
+       # @!macro semaphore_method_acquire
+       def acquire(permits = 1)
+         unless permits.is_a?(Fixnum) && permits > 0
+           fail ArgumentError, 'permits must be an integer greater than zero'
+         end
+         @semaphore.acquire(permits)
+       end
+
+       # @!macro semaphore_method_available_permits
+       def available_permits
+         @semaphore.availablePermits
+       end
+
+       # @!macro semaphore_method_drain_permits
+       def drain_permits
+         @semaphore.drainPermits
+       end
+
+       # @!macro semaphore_method_try_acquire
+       def try_acquire(permits = 1, timeout = nil)
+         unless permits.is_a?(Fixnum) && permits > 0
+           fail ArgumentError, 'permits must be an integer greater than zero'
+         end
+         if timeout.nil?
+           @semaphore.tryAcquire(permits)
+         else
+           @semaphore.tryAcquire(permits,
+                                 timeout,
+                                 java.util.concurrent.TimeUnit::SECONDS)
+         end
+       end
+
+       # @!macro semaphore_method_release
+       def release(permits = 1)
+         unless permits.is_a?(Fixnum) && permits > 0
+           fail ArgumentError, 'permits must be an integer greater than zero'
+         end
+         @semaphore.release(permits)
+         true
+       end
+
+       # @!macro semaphore_method_reduce_permits
+       def reduce_permits(reduction)
+         unless reduction.is_a?(Fixnum) && reduction >= 0
+           fail ArgumentError, 'reduction must be a non-negative integer'
+         end
+         @semaphore.reducePermits(reduction)
+       end
+     end
+
+     # @!macro semaphore
+     class Semaphore < JavaSemaphore
+     end
+
+   else
+
+     # @!macro semaphore
+     class Semaphore < MutexSemaphore
+     end
+   end
+ end
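One small idiom worth noting in `drain_permits`: `tap` yields the receiver and then returns it, so the old value of `@free` is returned even though the block zeroes the counter. The same pattern in isolation (class name illustrative, not part of the gem):

```ruby
# tap returns its receiver, so drain hands back the pre-reset count
# while the block resets the counter as a side effect.
class Counter
  def initialize(n)
    @free = n
  end

  def drain
    @free.tap { @free = 0 }
  end
end

c = Counter.new(5)
p c.drain  # 5 (old count is returned)
p c.drain  # 0 (already drained)
```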
@@ -8,3 +8,5 @@ require 'concurrent/atomic/cyclic_barrier'
  require 'concurrent/atomic/count_down_latch'
  require 'concurrent/atomic/event'
  require 'concurrent/atomic/synchronization'
+ require 'concurrent/atomic/semaphore'
+ require 'concurrent/atomic/thread_local_var'
@@ -102,7 +102,7 @@ module Concurrent
      max_threads: [20, Concurrent.processor_count * 15].max,
      idletime: 2 * 60, # 2 minutes
      max_queue: 0, # unlimited
-     overflow_policy: :abort # raise an exception
+     fallback_policy: :abort # raise an exception
    )
  end

@@ -112,7 +112,7 @@ module Concurrent
      max_threads: [2, Concurrent.processor_count].max,
      idletime: 10 * 60, # 10 minutes
      max_queue: [20, Concurrent.processor_count * 15].max,
-     overflow_policy: :abort # raise an exception
+     fallback_policy: :abort # raise an exception
    )
  end
  end
@@ -5,7 +5,12 @@ require 'concurrent/atomic/event'
  module Concurrent

  module Executor
-
+   # The policy defining how rejected tasks (tasks received once the
+   # queue size reaches the configured `max_queue`, or after the
+   # executor has shut down) are handled. Must be one of the values
+   # specified in `FALLBACK_POLICIES`.
+   attr_reader :fallback_policy
+
    # @!macro [attach] executor_module_method_can_overflow_question
    #
    # Does the task queue have a maximum size?
@@ -17,6 +22,31 @@ module Concurrent
      false
    end

+   # Handler which executes the `fallback_policy` once the queue size
+   # reaches `max_queue`.
+   #
+   # @param [Array] args the arguments to the task which is being handled.
+   #
+   # @!visibility private
+   def handle_fallback(*args)
+     case @fallback_policy
+     when :abort
+       raise RejectedExecutionError
+     when :discard
+       false
+     when :caller_runs
+       begin
+         yield(*args)
+       rescue => ex
+         # let it fail
+         log DEBUG, ex
+       end
+       true
+     else
+       fail "Unknown fallback policy #{@fallback_policy}"
+     end
+   end
+
    # @!macro [attach] executor_module_method_serialized_question
    #
    # Does this executor guarantee serialization of its operations?
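The new `handle_fallback` centralizes rejection handling for all pools: `:abort` raises, `:discard` silently drops the task, `:caller_runs` executes it inline on the submitting thread. A standalone sketch of the same dispatch (top-level method and error class are illustrative, not the gem's API):

```ruby
# Sketch of the three fallback policies applied to a rejected task.
class RejectedExecutionError < StandardError; end

def handle_fallback(policy, *args)
  case policy
  when :abort       then raise RejectedExecutionError
  when :discard     then false               # drop the task, report failure
  when :caller_runs then yield(*args); true  # run inline on this thread
  else fail "Unknown fallback policy #{policy}"
  end
end

res = handle_fallback(:discard) { :never_runs }
p res  # false

res = handle_fallback(:caller_runs, 2) { |n| n * 2 }
p res  # true (the block ran on the calling thread)
```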
@@ -63,6 +93,9 @@ module Concurrent
    include Executor
    include Logging

+   # The set of possible fallback policies that may be set at thread pool creation.
+   FALLBACK_POLICIES = [:abort, :discard, :caller_runs]
+
    # @!macro [attach] executor_method_post
    #
    # Submit a task to the executor for asynchronous processing.
@@ -78,7 +111,8 @@ module Concurrent
    def post(*args, &task)
      raise ArgumentError.new('no block given') unless block_given?
      mutex.synchronize do
-       return false unless running?
+       # If the executor is shut down, reject this task
+       return handle_fallback(*args, &task) unless running?
        execute(*args, &task)
        true
      end
@@ -210,16 +244,20 @@ module Concurrent
    include Executor
    java_import 'java.lang.Runnable'

+   # The set of possible fallback policies that may be set at thread pool creation.
+   FALLBACK_POLICIES = {
+     abort:       java.util.concurrent.ThreadPoolExecutor::AbortPolicy,
+     discard:     java.util.concurrent.ThreadPoolExecutor::DiscardPolicy,
+     caller_runs: java.util.concurrent.ThreadPoolExecutor::CallerRunsPolicy
+   }.freeze
+
    # @!macro executor_method_post
-   def post(*args)
+   def post(*args, &task)
      raise ArgumentError.new('no block given') unless block_given?
-     if running?
-       executor_submit = @executor.java_method(:submit, [Runnable.java_class])
-       executor_submit.call { yield(*args) }
-       true
-     else
-       false
-     end
+     return handle_fallback(*args, &task) unless running?
+     executor_submit = @executor.java_method(:submit, [Runnable.java_class])
+     executor_submit.call { yield(*args) }
+     true
    rescue Java::JavaUtilConcurrent::RejectedExecutionException
      raise RejectedExecutionError
    end
@@ -28,7 +28,7 @@ module Concurrent
      return false unless running?

      event = Concurrent::Event.new
-     internal_executor.post do
+     @internal_executor.post do
        begin
          task.call(*args)
        ensure
@@ -39,8 +39,5 @@ module Concurrent

      true
    end
-
-   private
-   attr_reader :internal_executor
  end
  end
@@ -10,19 +10,20 @@ if RUBY_PLATFORM == 'java'
    # Create a new thread pool.
    #
    # @param [Hash] opts the options defining pool behavior.
-   # @option opts [Symbol] :overflow_policy (`:abort`) the overflow policy
+   # @option opts [Symbol] :fallback_policy (`:abort`) the fallback policy
    #
-   # @raise [ArgumentError] if `overflow_policy` is not a known policy
+   # @raise [ArgumentError] if `fallback_policy` is not a known policy
    #
    # @see http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Executors.html#newCachedThreadPool--
    def initialize(opts = {})
-     @overflow_policy = opts.fetch(:overflow_policy, :abort)
+     @fallback_policy = opts.fetch(:fallback_policy, opts.fetch(:overflow_policy, :abort))
+     warn '[DEPRECATED] :overflow_policy is deprecated terminology, please use :fallback_policy instead' if opts.has_key?(:overflow_policy)
      @max_queue = 0

-     raise ArgumentError.new("#{@overflow_policy} is not a valid overflow policy") unless OVERFLOW_POLICIES.keys.include?(@overflow_policy)
+     raise ArgumentError.new("#{@fallback_policy} is not a valid fallback policy") unless FALLBACK_POLICIES.keys.include?(@fallback_policy)

      @executor = java.util.concurrent.Executors.newCachedThreadPool
-     @executor.setRejectedExecutionHandler(OVERFLOW_POLICIES[@overflow_policy].new)
+     @executor.setRejectedExecutionHandler(FALLBACK_POLICIES[@fallback_policy].new)

      set_shutdown_hook
    end
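Every constructor in this release accepts both option names via nested `fetch` calls, so the new key wins when both are given and a warning fires only when the deprecated key is actually passed. The pattern in isolation (the wrapper method is illustrative; the option names are the diff's):

```ruby
# Prefer :fallback_policy; fall back to the deprecated :overflow_policy,
# then to the default. Warn only when the old key was actually used.
def resolve_policy(opts)
  if opts.key?(:overflow_policy)
    warn '[DEPRECATED] :overflow_policy is deprecated terminology, please use :fallback_policy instead'
  end
  opts.fetch(:fallback_policy, opts.fetch(:overflow_policy, :abort))
end

p resolve_policy({})                             # :abort
p resolve_policy(overflow_policy: :discard)      # :discard (with a warning)
p resolve_policy(fallback_policy: :caller_runs)  # :caller_runs
```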
@@ -10,10 +10,10 @@ if RUBY_PLATFORM == 'java'
    # Create a new thread pool.
    #
    # @param [Hash] opts the options defining pool behavior.
-   # @option opts [Symbol] :overflow_policy (`:abort`) the overflow policy
+   # @option opts [Symbol] :fallback_policy (`:abort`) the fallback policy
    #
    # @raise [ArgumentError] if `num_threads` is less than or equal to zero
-   # @raise [ArgumentError] if `overflow_policy` is not a known policy
+   # @raise [ArgumentError] if `fallback_policy` is not a known policy
    #
    # @see http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Executors.html#newFixedThreadPool-int-
    def initialize(num_threads, opts = {})
@@ -24,16 +24,6 @@ if RUBY_PLATFORM == 'java'
      }.merge(opts)
      super(opts)

-
-     #@overflow_policy = opts.fetch(:overflow_policy, :abort)
-     #@max_queue = 0
-     #
-     #raise ArgumentError.new('number of threads must be greater than zero') if num_threads < 1
-     #raise ArgumentError.new("#{@overflow_policy} is not a valid overflow policy") unless OVERFLOW_POLICIES.keys.include?(@overflow_policy)
-     #
-     #@executor = java.util.concurrent.Executors.newFixedThreadPool(num_threads)
-     #@executor.setRejectedExecutionHandler(OVERFLOW_POLICIES[@overflow_policy].new)
-
      set_shutdown_hook
    end
  end
@@ -10,11 +10,17 @@ if RUBY_PLATFORM == 'java'

    # Create a new thread pool.
    #
+   # @option opts [Symbol] :fallback_policy (:discard) the policy
+   #   for handling new tasks that are received when the queue size
+   #   has reached `max_queue` or after the executor has shut down
+   #
    # @see http://docs.oracle.com/javase/tutorial/essential/concurrency/pools.html
    # @see http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Executors.html
    # @see http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ExecutorService.html
    def initialize(opts = {})
      @executor = java.util.concurrent.Executors.newSingleThreadExecutor
+     @fallback_policy = opts.fetch(:fallback_policy, :discard)
+     raise ArgumentError.new("#{@fallback_policy} is not a valid fallback policy") unless FALLBACK_POLICIES.keys.include?(@fallback_policy)
      set_shutdown_hook
    end
  end
@@ -20,26 +20,14 @@ if RUBY_PLATFORM == 'java'
    # before being reclaimed.
    DEFAULT_THREAD_IDLETIMEOUT = 60

-   # The set of possible overflow policies that may be set at thread pool creation.
-   OVERFLOW_POLICIES = {
-     abort:       java.util.concurrent.ThreadPoolExecutor::AbortPolicy,
-     discard:     java.util.concurrent.ThreadPoolExecutor::DiscardPolicy,
-     caller_runs: java.util.concurrent.ThreadPoolExecutor::CallerRunsPolicy
-   }.freeze
-
    # The maximum number of threads that may be created in the pool.
    attr_reader :max_length

    # The maximum number of tasks that may be waiting in the work queue at any one time.
    # When the queue size reaches `max_queue` subsequent tasks will be rejected in
-   # accordance with the configured `overflow_policy`.
+   # accordance with the configured `fallback_policy`.
    attr_reader :max_queue

-   # The policy defining how rejected tasks (tasks received once the queue size reaches
-   # the configured `max_queue`) are handled. Must be one of the values specified in
-   # `OVERFLOW_POLICIES`.
-   attr_reader :overflow_policy
-
    # Create a new thread pool.
    #
    # @param [Hash] opts the options which configure the thread pool
@@ -52,14 +40,15 @@ if RUBY_PLATFORM == 'java'
    #   number of seconds a thread may be idle before being reclaimed
    # @option opts [Integer] :max_queue (DEFAULT_MAX_QUEUE_SIZE) the maximum
    #   number of tasks allowed in the work queue at any one time; a value of
-   #   zero means the queue may grow without bounnd
-   # @option opts [Symbol] :overflow_policy (:abort) the policy for handling new
-   #   tasks that are received when the queue size has reached `max_queue`
+   #   zero means the queue may grow without bound
+   # @option opts [Symbol] :fallback_policy (:abort) the policy for handling new
+   #   tasks that are received when the queue size has reached
+   #   `max_queue` or the executor has shut down
    #
    # @raise [ArgumentError] if `:max_threads` is less than one
    # @raise [ArgumentError] if `:min_threads` is less than zero
-   # @raise [ArgumentError] if `:overflow_policy` is not one of the values specified
-   #   in `OVERFLOW_POLICIES`
+   # @raise [ArgumentError] if `:fallback_policy` is not one of the values specified
+   #   in `FALLBACK_POLICIES`
    #
    # @see http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ThreadPoolExecutor.html
    def initialize(opts = {})
@@ -67,12 +56,13 @@ if RUBY_PLATFORM == 'java'
      max_length = opts.fetch(:max_threads, DEFAULT_MAX_POOL_SIZE).to_i
      idletime = opts.fetch(:idletime, DEFAULT_THREAD_IDLETIMEOUT).to_i
      @max_queue = opts.fetch(:max_queue, DEFAULT_MAX_QUEUE_SIZE).to_i
-     @overflow_policy = opts.fetch(:overflow_policy, :abort)
+     @fallback_policy = opts.fetch(:fallback_policy, opts.fetch(:overflow_policy, :abort))
+     warn '[DEPRECATED] :overflow_policy is deprecated terminology, please use :fallback_policy instead' if opts.has_key?(:overflow_policy)

      raise ArgumentError.new('max_threads must be greater than zero') if max_length <= 0
      raise ArgumentError.new('min_threads cannot be less than zero') if min_length < 0
      raise ArgumentError.new('min_threads cannot be more than max_threads') if min_length > max_length
-     raise ArgumentError.new("#{@overflow_policy} is not a valid overflow policy") unless OVERFLOW_POLICIES.keys.include?(@overflow_policy)
+     raise ArgumentError.new("#{fallback_policy} is not a valid fallback policy") unless FALLBACK_POLICIES.include?(@fallback_policy)

      if @max_queue == 0
        queue = java.util.concurrent.LinkedBlockingQueue.new
@@ -83,7 +73,7 @@ if RUBY_PLATFORM == 'java'
      @executor = java.util.concurrent.ThreadPoolExecutor.new(
        min_length, max_length,
        idletime, java.util.concurrent.TimeUnit::SECONDS,
-       queue, OVERFLOW_POLICIES[@overflow_policy].new)
+       queue, FALLBACK_POLICIES[@fallback_policy].new)

      set_shutdown_hook
    end
@@ -8,18 +8,18 @@ module Concurrent
    # Create a new thread pool.
    #
    # @param [Hash] opts the options defining pool behavior.
-   #   number of seconds a thread may be idle before it is reclaimed
+   # @option opts [Symbol] :fallback_policy (`:abort`) the fallback policy
    #
-   # @raise [ArgumentError] if `overflow_policy` is not a known policy
+   # @raise [ArgumentError] if `fallback_policy` is not a known policy
    def initialize(opts = {})
-     overflow_policy = opts.fetch(:overflow_policy, :abort)
+     fallback_policy = opts.fetch(:fallback_policy, opts.fetch(:overflow_policy, :abort))

-     raise ArgumentError.new("#{overflow_policy} is not a valid overflow policy") unless OVERFLOW_POLICIES.include?(overflow_policy)
+     raise ArgumentError.new("#{fallback_policy} is not a valid fallback policy") unless FALLBACK_POLICIES.include?(fallback_policy)

      opts = opts.merge(
        min_threads: 0,
        max_threads: DEFAULT_MAX_POOL_SIZE,
-       overflow_policy: overflow_policy,
+       fallback_policy: fallback_policy,
        max_queue: DEFAULT_MAX_QUEUE_SIZE,
        idletime: DEFAULT_THREAD_IDLETIMEOUT
      )
@@ -9,20 +9,20 @@ module Concurrent
    #
    # @param [Integer] num_threads the number of threads to allocate
    # @param [Hash] opts the options defining pool behavior.
-   # @option opts [Symbol] :overflow_policy (`:abort`) the overflow policy
+   # @option opts [Symbol] :fallback_policy (`:abort`) the fallback policy
    #
    # @raise [ArgumentError] if `num_threads` is less than or equal to zero
-   # @raise [ArgumentError] if `overflow_policy` is not a known policy
+   # @raise [ArgumentError] if `fallback_policy` is not a known policy
    def initialize(num_threads, opts = {})
-     overflow_policy = opts.fetch(:overflow_policy, :abort)
+     fallback_policy = opts.fetch(:fallback_policy, opts.fetch(:overflow_policy, :abort))

      raise ArgumentError.new('number of threads must be greater than zero') if num_threads < 1
-     raise ArgumentError.new("#{overflow_policy} is not a valid overflow policy") unless OVERFLOW_POLICIES.include?(overflow_policy)
+     raise ArgumentError.new("#{fallback_policy} is not a valid fallback policy") unless FALLBACK_POLICIES.include?(fallback_policy)

      opts = {
        min_threads: num_threads,
        max_threads: num_threads,
-       overflow_policy: overflow_policy,
+       fallback_policy: fallback_policy,
        max_queue: DEFAULT_MAX_QUEUE_SIZE,
        idletime: DEFAULT_THREAD_IDLETIMEOUT,
      }.merge(opts)
@@ -9,12 +9,18 @@ module Concurrent

    # Create a new thread pool.
    #
+   # @option opts [Symbol] :fallback_policy (:discard) the policy for
+   #   handling new tasks that are received when the queue size has
+   #   reached `max_queue` or after the executor has shut down
+   #
    # @see http://docs.oracle.com/javase/tutorial/essential/concurrency/pools.html
    # @see http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Executors.html
    # @see http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ExecutorService.html
    def initialize(opts = {})
      @queue = Queue.new
      @thread = nil
+     @fallback_policy = opts.fetch(:fallback_policy, :discard)
+     raise ArgumentError.new("#{@fallback_policy} is not a valid fallback policy") unless FALLBACK_POLICIES.include?(@fallback_policy)
      init_executor
    end

@@ -23,9 +23,6 @@ module Concurrent
    # before being reclaimed.
    DEFAULT_THREAD_IDLETIMEOUT = 60

-   # The set of possible overflow policies that may be set at thread pool creation.
-   OVERFLOW_POLICIES = [:abort, :discard, :caller_runs]
-
    # The maximum number of threads that may be created in the pool.
    attr_reader :max_length

@@ -46,14 +43,9 @@ module Concurrent

    # The maximum number of tasks that may be waiting in the work queue at any one time.
    # When the queue size reaches `max_queue` subsequent tasks will be rejected in
-   # accordance with the configured `overflow_policy`.
+   # accordance with the configured `fallback_policy`.
    attr_reader :max_queue

-   # The policy defining how rejected tasks (tasks received once the queue size reaches
-   # the configured `max_queue`) are handled. Must be one of the values specified in
-   # `OVERFLOW_POLICIES`.
-   attr_reader :overflow_policy
-
    # Create a new thread pool.
    #
    # @param [Hash] opts the options which configure the thread pool
@@ -66,14 +58,15 @@ module Concurrent
  #   number of seconds a thread may be idle before being reclaimed
  # @option opts [Integer] :max_queue (DEFAULT_MAX_QUEUE_SIZE) the maximum
  #   number of tasks allowed in the work queue at any one time; a value of
- #   zero means the queue may grow without bounnd
- # @option opts [Symbol] :overflow_policy (:abort) the policy for handling new
- #   tasks that are received when the queue size has reached `max_queue`
+ #   zero means the queue may grow without bound
+ # @option opts [Symbol] :fallback_policy (:abort) the policy for handling new
+ #   tasks that are received when the queue size has reached
+ #   `max_queue` or the executor has shut down
  #
  # @raise [ArgumentError] if `:max_threads` is less than one
  # @raise [ArgumentError] if `:min_threads` is less than zero
- # @raise [ArgumentError] if `:overflow_policy` is not one of the values specified
- #   in `OVERFLOW_POLICIES`
+ # @raise [ArgumentError] if `:fallback_policy` is not one of the values specified
+ #   in `FALLBACK_POLICIES`
  #
  # @see http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ThreadPoolExecutor.html
  def initialize(opts = {})
@@ -81,11 +74,12 @@ module Concurrent
  @max_length = opts.fetch(:max_threads, DEFAULT_MAX_POOL_SIZE).to_i
  @idletime = opts.fetch(:idletime, DEFAULT_THREAD_IDLETIMEOUT).to_i
  @max_queue = opts.fetch(:max_queue, DEFAULT_MAX_QUEUE_SIZE).to_i
- @overflow_policy = opts.fetch(:overflow_policy, :abort)
+ @fallback_policy = opts.fetch(:fallback_policy, opts.fetch(:overflow_policy, :abort))
+ warn '[DEPRECATED] :overflow_policy is deprecated terminology, please use :fallback_policy instead' if opts.has_key?(:overflow_policy)
 
  raise ArgumentError.new('max_threads must be greater than zero') if @max_length <= 0
  raise ArgumentError.new('min_threads cannot be less than zero') if @min_length < 0
- raise ArgumentError.new("#{overflow_policy} is not a valid overflow policy") unless OVERFLOW_POLICIES.include?(@overflow_policy)
+ raise ArgumentError.new("#{fallback_policy} is not a valid fallback policy") unless FALLBACK_POLICIES.include?(@fallback_policy)
  raise ArgumentError.new('min_threads cannot be more than max_threads') if min_length > max_length
 
  init_executor
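The constructor change above keeps backward compatibility for the renamed option: `:fallback_policy` wins when present, a legacy `:overflow_policy` is still honored with a deprecation warning, and `:abort` remains the default. A minimal standalone sketch of that lookup (the method name `resolve_fallback_policy` is illustrative, not the gem's):

```ruby
# Sketch of the backward-compatible option rename: prefer the new key,
# fall back to the deprecated key, default to :abort.
def resolve_fallback_policy(opts)
  if opts.key?(:overflow_policy)
    warn '[DEPRECATED] :overflow_policy is deprecated terminology, please use :fallback_policy instead'
  end
  opts.fetch(:fallback_policy, opts.fetch(:overflow_policy, :abort))
end

resolve_fallback_policy({})                         #=> :abort
resolve_fallback_policy(overflow_policy: :discard)  #=> :discard
resolve_fallback_policy(fallback_policy: :caller_runs,
                        overflow_policy: :discard)  #=> :caller_runs
```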
@@ -169,7 +163,7 @@ module Concurrent
      @scheduled_task_count += 1
      @queue << [args, task]
    else
-     handle_overflow(*args, &task) if @max_queue != 0 && @queue.length >= @max_queue
+     handle_fallback(*args, &task) if @max_queue != 0 && @queue.length >= @max_queue
    end
  end
 
@@ -224,29 +218,6 @@ module Concurrent
    capacity
  end
 
- # Handler which executes the `overflow_policy` once the queue size
- # reaches `max_queue`.
- #
- # @param [Array] args the arguments to the task which is being handled.
- #
- # @!visibility private
- def handle_overflow(*args)
-   case @overflow_policy
-   when :abort
-     raise RejectedExecutionError
-   when :discard
-     false
-   when :caller_runs
-     begin
-       yield(*args)
-     rescue => ex
-       # let it fail
-       log DEBUG, ex
-     end
-     true
-   end
- end
-
  # Scan all threads in the pool and reclaim any that are dead or
  # have been idle too long. Will check the last time the pool was
  # pruned and only run if the configured garbage collection
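The removed handler survives under the new name (`handle_fallback`), with the same three-way dispatch. A standalone sketch of that dispatch, runnable outside the gem (a plain `RuntimeError` stands in for `Concurrent::RejectedExecutionError`, and the logging is dropped):

```ruby
# Illustrative version of the fallback dispatch: validate the policy,
# then abort, discard, or run the task on the calling thread.
FALLBACK_POLICIES = [:abort, :discard, :caller_runs]

def handle_fallback(policy, *args, &task)
  raise ArgumentError, "#{policy} is not a valid fallback policy" unless FALLBACK_POLICIES.include?(policy)
  case policy
  when :abort
    raise 'RejectedExecutionError'   # reject loudly
  when :discard
    false                            # silently drop the task
  when :caller_runs
    task.call(*args)                 # run synchronously on the calling thread
    true
  end
end

handle_fallback(:discard) { 2 + 2 }     #=> false
handle_fallback(:caller_runs) { 2 + 2 } #=> true
```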
@@ -13,6 +13,7 @@ module Concurrent
    @parent = parent
    @mutex = Mutex.new
    @last_activity = Time.now.to_f
+   @thread = nil
  end
 
  # @!visibility private
@@ -12,7 +12,7 @@ module Concurrent
 
  Job = Struct.new(:executor, :args, :block) do
    def call
-     block.call *args
+     block.call(*args)
    end
  end
 
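The change above only parenthesizes a bare splat call (`block.call *args` triggers Ruby's ambiguous-argument warning). A self-contained version of the `Job` struct showing the behavior:

```ruby
# Self-contained Job struct as in the diff; the parenthesized call
# invokes the stored block with the stored arguments splatted out.
Job = Struct.new(:executor, :args, :block) do
  def call
    block.call(*args)
  end
end

job = Job.new(nil, [2, 3], ->(a, b) { a + b })
job.call #=> 5
```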
@@ -40,17 +40,15 @@ module Concurrent
  # * `idletime`: The number of seconds that a thread may be idle before being reclaimed.
  # * `max_queue`: The maximum number of tasks that may be waiting in the work queue at
  #   any one time. When the queue size reaches `max_queue` subsequent tasks will be
- #   rejected in accordance with the configured `overflow_policy`.
- # * `overflow_policy`: The policy defining how rejected tasks are handled. #
+ #   rejected in accordance with the configured `fallback_policy`.
+ # * `fallback_policy`: The policy defining how rejected tasks are handled. #
  #
- # Three overflow policies are supported:
+ # Three fallback policies are supported:
  #
  # * `:abort`: Raise a `RejectedExecutionError` exception and discard the task.
- # * `:discard`: Silently discard the task and return `nil` as the task result.
+ # * `:discard`: Discard the task and return false.
  # * `:caller_runs`: Execute the task on the calling thread.
  #
- # {include:file:doc/thread_pools.md}
- #
  # @note When running on the JVM (JRuby) this class will inherit from `JavaThreadPoolExecutor`.
  #   On all other platforms it will inherit from `RubyThreadPoolExecutor`.
  #
@@ -55,10 +55,10 @@ module Concurrent
        @queue.push(Task.new(time, args, task))
        @timer_executor.post(&method(:process_tasks))
      end
-
-     true
    end
 
+   @condition.signal
+   true
  end
 
  # For a timer, #kill is like an orderly shutdown, except we need to manually
@@ -129,8 +129,20 @@ module Concurrent
      interval = task.time - Time.now.to_f
 
      if interval <= 0
+       # We need to remove the task from the queue before passing
+       # it to the executor, to avoid race conditions where we pass
+       # the peek'ed task to the executor and then pop a different
+       # one that's been added in the meantime.
+       #
+       # Note that there's no race condition between the peek and
+       # this pop - this pop could retrieve a different task from
+       # the peek, but that task would be due to fire now anyway
+       # (because @queue is a priority queue, and this thread is
+       # the only reader, so whatever timer is at the head of the
+       # queue now must have the same pop time, or a closer one, as
+       # when we peeked).
+       task = mutex.synchronize { @queue.pop }
        @task_executor.post(*task.args, &task.op)
-       mutex.synchronize { @queue.pop }
      else
        mutex.synchronize do
          @condition.wait(mutex, [interval, 60].min)
@@ -49,7 +49,7 @@ module Concurrent
  # @return self
  # @param [Object] key
  def unregister(key)
-   @data.update { |h| h.dup.tap { |h| h.delete(key) } }
+   @data.update { |h| h.dup.tap { |j| j.delete(key) } }
    self
  end
 
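The fix renames the inner block variable so it no longer shadows the outer `h`. The copy-on-write idiom itself, isolated from the gem:

```ruby
# Copy-on-write delete: duplicate the hash, mutate only the copy,
# leaving the original (which other threads may hold) untouched.
original = { a: 1, b: 2 }
trimmed  = original.dup.tap { |copy| copy.delete(:a) }

original #=> {:a=>1, :b=>2} (unchanged)
trimmed  #=> {:b=>2}
```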
@@ -46,7 +46,7 @@ module Concurrent
  # as #add_observer but it can be used for chaining
  # @return [Observable] self
  def with_observer(*args, &block)
-   add_observer *args, &block
+   add_observer(*args, &block)
    self
  end
 
@@ -5,7 +5,167 @@ require 'concurrent/options_parser'
 
  module Concurrent
 
- # {include:file:doc/promise.md}
+ PromiseExecutionError = Class.new(StandardError)
+
+ # Promises are inspired by the JavaScript [Promises/A](http://wiki.commonjs.org/wiki/Promises/A)
+ # and [Promises/A+](http://promises-aplus.github.io/promises-spec/) specifications.
+ #
+ # > A promise represents the eventual value returned from the single completion of an operation.
+ #
+ # Promises are similar to futures and share many of the same behaviours. Promises are far more
+ # robust, however. Promises can be chained in a tree structure where each promise may have zero
+ # or more children. Promises are chained using the `then` method. The result of a call to `then`
+ # is always another promise. Promises are resolved asynchronously (with respect to the main thread)
+ # but in a strict order: parents are guaranteed to be resolved before their children, children
+ # before their younger siblings. The `then` method takes two parameters: an optional block to
+ # be executed upon parent resolution and an optional callable to be executed upon parent failure.
+ # The result of each promise is passed to each of its children upon resolution. When a promise
+ # is rejected all its children will be summarily rejected and will receive the reason.
+ #
+ # Promises have four possible states: *unscheduled*, *pending*, *rejected*, and *fulfilled*. A
+ # Promise created using `.new` will be *unscheduled*. It is scheduled by calling the `execute`
+ # method. Upon execution the Promise and all its children will be set to *pending*. When a promise
+ # is *pending* it will remain in that state until processing is complete. A completed Promise is
+ # either *rejected*, indicating that an exception was thrown during processing, or *fulfilled*,
+ # indicating it succeeded. If a Promise is *fulfilled* its `value` will be updated to reflect
+ # the result of the operation. If *rejected* the `reason` will be updated with a reference to
+ # the thrown exception. The predicate methods `unscheduled?`, `pending?`, `rejected?`, and
+ # `fulfilled?` can be called at any time to obtain the state of the Promise, as can the `state`
+ # method, which returns a symbol. A Promise created using `.execute` will be *pending*, a Promise
+ # created using `.fulfill(value)` will be *fulfilled* with the given value and a Promise created
+ # using `.reject(reason)` will be *rejected* with the given reason.
+ #
+ # Retrieving the value of a promise is done through the `value` (alias: `deref`) method. Obtaining
+ # the value of a promise is a potentially blocking operation. When a promise is *rejected* a call
+ # to `value` will return `nil` immediately. When a promise is *fulfilled* a call to `value` will
+ # immediately return the current value. When a promise is *pending* a call to `value` will block
+ # until the promise is either *rejected* or *fulfilled*. A *timeout* value can be passed to `value`
+ # to limit how long the call will block. If `nil` the call will block indefinitely. If `0` the call
+ # will not block. Any other integer or float value will indicate the maximum number of seconds to block.
+ #
+ # Promises run on the global thread pool.
+ #
+ # ### Examples
+ #
+ # Start by requiring promises
+ #
+ # ```ruby
+ # require 'concurrent'
+ # ```
+ #
+ # Then create one
+ #
+ # ```ruby
+ # p = Concurrent::Promise.execute do
+ #   # do something
+ #   42
+ # end
+ # ```
+ #
+ # Promises can be chained using the `then` method. The `then` method accepts a block, to be executed
+ # on fulfillment, and a callable argument to be executed on rejection. The result of each promise
+ # is passed as the block argument to chained promises.
+ #
+ # ```ruby
+ # p = Concurrent::Promise.new{10}.then{|x| x * 2}.then{|result| result - 10 }.execute
+ # ```
+ #
+ # And so on, and so on, and so on...
+ #
+ # ```ruby
+ # p = Concurrent::Promise.fulfill(20).
+ #     then{|result| result - 10 }.
+ #     then{|result| result * 3 }.
+ #     then{|result| result % 5 }.execute
+ # ```
+ #
+ # The initial state of a newly created Promise depends on the state of its parent:
+ # - if parent is *unscheduled* the child will be *unscheduled*
+ # - if parent is *pending* the child will be *pending*
+ # - if parent is *fulfilled* the child will be *pending*
+ # - if parent is *rejected* the child will be *pending* (but will ultimately be *rejected*)
+ #
+ # Promises are executed asynchronously from the main thread. By the time a child Promise finishes
+ # initialization it may be in a different state than its parent (by the time a child is created its parent
+ # may have completed execution and changed state). Despite being asynchronous, however, the order of
+ # execution of Promise objects in a chain (or tree) is strictly defined.
+ #
+ # There are multiple ways to create and execute a new `Promise`. All provide identical behavior:
+ #
+ # ```ruby
+ # # create, operate, then execute
+ # p1 = Concurrent::Promise.new{ "Hello World!" }
+ # p1.state #=> :unscheduled
+ # p1.execute
+ #
+ # # create and immediately execute
+ # p2 = Concurrent::Promise.new{ "Hello World!" }.execute
+ #
+ # # execute during creation
+ # p3 = Concurrent::Promise.execute{ "Hello World!" }
+ # ```
+ #
+ # Once the `execute` method is called a `Promise` becomes `pending`:
+ #
+ # ```ruby
+ # p = Concurrent::Promise.execute{ "Hello, world!" }
+ # p.state    #=> :pending
+ # p.pending? #=> true
+ # ```
+ #
+ # Wait a little bit, and the promise will resolve and provide a value:
+ #
+ # ```ruby
+ # p = Concurrent::Promise.execute{ "Hello, world!" }
+ # sleep(0.1)
+ #
+ # p.state      #=> :fulfilled
+ # p.fulfilled? #=> true
+ # p.value      #=> "Hello, world!"
+ # ```
+ #
+ # If an exception occurs, the promise will be rejected and will provide
+ # a reason for the rejection:
+ #
+ # ```ruby
+ # p = Concurrent::Promise.execute{ raise StandardError.new("Here comes the Boom!") }
+ # sleep(0.1)
+ #
+ # p.state     #=> :rejected
+ # p.rejected? #=> true
+ # p.reason    #=> "#<StandardError: Here comes the Boom!>"
+ # ```
+ #
+ # #### Rejection
+ #
+ # When a promise is rejected all its children will be rejected and will receive the rejection `reason`
+ # as the rejection callable parameter:
+ #
+ # ```ruby
+ # p = Concurrent::Promise.execute{ Thread.pass; raise StandardError }
+ #
+ # c1 = p.then(Proc.new{ |reason| 42 })
+ # c2 = p.then(Proc.new{ |reason| raise 'Boom!' })
+ #
+ # sleep(0.1)
+ #
+ # c1.state #=> :rejected
+ # c2.state #=> :rejected
+ # ```
+ #
+ # Once a promise is rejected it will continue to accept children, which will be rejected immediately
+ # (they will be executed asynchronously).
+ #
+ # #### Aliases
+ #
+ # The `then` method is the most generic alias: it accepts a block to be executed upon parent fulfillment
+ # and a callable to be executed upon parent rejection. At least one of them should be passed. The default
+ # block is `{ |result| result }`, which fulfills the child with the parent value. The default callable is
+ # `{ |reason| raise reason }`, which rejects the child with the parent reason.
+ #
+ # - `on_success { |result| ... }` is the same as `then {|result| ... }`
+ # - `rescue { |reason| ... }` is the same as `then(Proc.new { |reason| ... } )`
+ # - `rescue` is aliased by `catch` and `on_error`
  class Promise
    # TODO unify promise and future to single class, with dataflow
    include Obligation
@@ -44,6 +204,7 @@ module Concurrent
    @children = []
 
    init_obligation
+   set_deref_options(opts)
  end
 
  # @return [Promise]
@@ -100,7 +261,7 @@ module Concurrent
  # @return [Promise]
  def on_success(&block)
    raise ArgumentError.new('no block given') unless block_given?
-   self.then &block
+   self.then(&block)
  end
 
  # @return [Promise]
@@ -168,8 +329,66 @@ module Concurrent
    self.class.zip(self, *others)
  end
 
+ # Aggregates a collection of promises and executes the `then` condition
+ # if all aggregated promises succeed. Executes the `rescue` handler with
+ # a `Concurrent::PromiseExecutionError` if any of the aggregated promises
+ # fail. Upon execution, any aggregated promises that were not already
+ # executed will be executed.
+ #
+ # @!macro [attach] promise_self_aggregate
+ #
+ # The returned promise will not yet have been executed. Additional `#then`
+ # and `#rescue` handlers may still be provided. Once the returned promise
+ # is executed the aggregated promises will also be executed (if they have
+ # not been executed already). The results of the aggregated promises will
+ # be checked upon completion. The necessary `#then` and `#rescue` blocks
+ # on the aggregating promise will then be executed as appropriate. If the
+ # `#rescue` handlers are executed the raised exception will be
+ # `Concurrent::PromiseExecutionError`.
+ #
+ # @param [Array] promises Zero or more promises to aggregate
+ # @return [Promise] an unscheduled (not executed) promise that aggregates
+ #   the promises given as arguments
+ def self.all?(*promises)
+   aggregate(:all?, *promises)
+ end
+
+ # Aggregates a collection of promises and executes the `then` condition
+ # if any aggregated promise succeeds. Executes the `rescue` handler with
+ # a `Concurrent::PromiseExecutionError` if none of the aggregated promises
+ # succeed. Upon execution, any aggregated promises that were not already
+ # executed will be executed.
+ #
+ # @!macro promise_self_aggregate
+ def self.any?(*promises)
+   aggregate(:any?, *promises)
+ end
+
  protected
 
+ # Aggregate a collection of zero or more promises under a composite promise,
+ # execute the aggregated promises and collect them into a standard Ruby array,
+ # call the given Ruby `Enumerable` predicate (such as `any?`, `all?`, `none?`,
+ # or `one?`) on the collection to check the success or failure of each, then
+ # execute the composite's `#then` handlers if the predicate returns `true` or
+ # the composite's `#rescue` handlers if the predicate returns `false`.
+ #
+ # @!macro promise_self_aggregate
+ def self.aggregate(method, *promises)
+   composite = Promise.new do
+     completed = promises.collect do |promise|
+       promise.execute if promise.unscheduled?
+       promise.wait
+       promise
+     end
+     unless completed.empty? || completed.send(method){ |promise| promise.fulfilled? }
+       raise PromiseExecutionError
+     end
+   end
+   composite
+ end
+
  def set_pending
    mutex.synchronize do
      @state = :pending
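The heart of `aggregate` is an `Enumerable` predicate over the completed promises, with an empty collection counting as success (the `completed.empty? ||` guard). An illustrative reduction using plain state symbols in place of promises (`aggregate_ok?` is a hypothetical name, not gem API):

```ruby
# Mirrors the `completed.empty? || completed.send(method) { ... }` check:
# an empty aggregate succeeds, otherwise the predicate decides.
def aggregate_ok?(predicate, states)
  states.empty? || states.send(predicate) { |state| state == :fulfilled }
end

aggregate_ok?(:all?, [:fulfilled, :fulfilled]) #=> true
aggregate_ok?(:all?, [:fulfilled, :rejected])  #=> false
aggregate_ok?(:any?, [:fulfilled, :rejected])  #=> true
aggregate_ok?(:any?, [])                       #=> true
```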
@@ -26,7 +26,7 @@ module Concurrent
  def execute
    if compare_and_set_state(:pending, :unscheduled)
      @schedule_time = TimerSet.calculate_schedule_time(@intended_time)
-     Concurrent::timer(@schedule_time.to_f - Time.now.to_f) { @executor.post &method(:process_task) }
+     Concurrent::timer(@schedule_time.to_f - Time.now.to_f) { @executor.post(&method(:process_task)) }
      self
    end
  end
@@ -317,7 +317,7 @@ module Concurrent
  def execute_task(completion)
    return unless @running.true?
    Concurrent::timer(execution_interval, completion, &method(:timeout_task))
-   success, value, reason = @executor.execute(self)
+   _success, value, reason = @executor.execute(self)
    if completion.try?
      self.value = value
      schedule_next_task
@@ -1,3 +1,3 @@
  module Concurrent
-   VERSION = '0.7.1'
+   VERSION = '0.7.2'
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: concurrent-ruby
  version: !ruby/object:Gem::Version
-   version: 0.7.1
+   version: 0.7.2
  platform: x64-mingw32
  authors:
  - Jerry D'Antonio
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2014-12-05 00:00:00.000000000 Z
+ date: 2015-01-24 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: ref
@@ -91,6 +91,7 @@ files:
  - lib/concurrent/atomic/count_down_latch.rb
  - lib/concurrent/atomic/cyclic_barrier.rb
  - lib/concurrent/atomic/event.rb
+ - lib/concurrent/atomic/semaphore.rb
  - lib/concurrent/atomic/synchronization.rb
  - lib/concurrent/atomic/thread_local_var.rb
  - lib/concurrent/atomic_reference/concurrent_update_error.rb
@@ -177,7 +178,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
      version: '0'
  requirements: []
  rubyforge_project:
- rubygems_version: 2.4.4
+ rubygems_version: 2.4.5
  signing_key:
  specification_version: 4
  summary: Modern concurrency tools for Ruby. Inspired by Erlang, Clojure, Scala, Haskell,