concurrent-ruby 1.0.0 → 1.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +14 -1
- data/README.md +4 -8
- data/lib/concurrent/array.rb +1 -1
- data/lib/concurrent/async.rb +2 -2
- data/lib/concurrent/atomic_reference/jruby+truffle.rb +1 -1
- data/lib/concurrent/atomics.rb +1 -0
- data/lib/concurrent/dataflow.rb +5 -3
- data/lib/concurrent/delay.rb +1 -0
- data/lib/concurrent/executor/fixed_thread_pool.rb +5 -5
- data/lib/concurrent/executor/ruby_thread_pool_executor.rb +18 -3
- data/lib/concurrent/executor/thread_pool_executor.rb +15 -9
- data/lib/concurrent/executor/timer_set.rb +3 -3
- data/lib/concurrent/future.rb +2 -2
- data/lib/concurrent/hash.rb +1 -1
- data/lib/concurrent/map.rb +50 -1
- data/lib/concurrent/options.rb +0 -2
- data/lib/concurrent/promise.rb +2 -2
- data/lib/concurrent/scheduled_task.rb +2 -2
- data/lib/concurrent/synchronization.rb +2 -0
- data/lib/concurrent/synchronization/abstract_lockable_object.rb +1 -1
- data/lib/concurrent/synchronization/lockable_object.rb +3 -1
- data/lib/concurrent/synchronization/object.rb +5 -3
- data/lib/concurrent/synchronization/rbx_object.rb +2 -0
- data/lib/concurrent/synchronization/truffle_lockable_object.rb +9 -0
- data/lib/concurrent/synchronization/truffle_object.rb +32 -0
- data/lib/concurrent/timer_task.rb +2 -0
- data/lib/concurrent/utility/engine.rb +8 -0
- data/lib/concurrent/utility/processor_counter.rb +2 -0
- data/lib/concurrent/version.rb +2 -2
- metadata +5 -3
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-metadata.gz:
-data.tar.gz:
+metadata.gz: cf265efdf2111b4e83c6d8bd20f0d0d3552b746c
+data.tar.gz: ead40801aecaef1362729dfd1249757cd671fc83
 SHA512:
-metadata.gz:
-data.tar.gz:
+metadata.gz: 2af252bdc7618b58f1898fa1562d14dc9e2c28ef7f2b3ee5ee95d91ab4831698f1c3154c782f01b958e46396c1325b1e6393e2118da24d78681f872a30c93d50
+data.tar.gz: af77e8aeb320a96f252b8b8ed5d19324ca3886fabaf92cff889750951779023fbf70f5d74b59d0f7a79cba614129b8545671435ef2e35e788010cd52e2178e7e
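For reference, these published digests cover the `metadata.gz` and `data.tar.gz` entries inside the gem package. A minimal verification sketch using Ruby's standard `digest` library (file names assume the gem has already been fetched and unpacked):

```ruby
require 'digest'

# Assumes the artifact was fetched and unpacked first, e.g.:
#   gem fetch concurrent-ruby -v 1.0.1
#   tar -xf concurrent-ruby-1.0.1.gem   # a .gem file is a plain tar archive
%w[metadata.gz data.tar.gz].each do |file|
  puts "#{file} SHA1:   #{Digest::SHA1.file(file).hexdigest}"
  puts "#{file} SHA512: #{Digest::SHA512.file(file).hexdigest}"
end
```

The output can be compared line by line against the values in checksums.yaml above.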
data/CHANGELOG.md
CHANGED
@@ -1,4 +1,17 @@
-## Current Release v1.0.
+## Current Release v1.0.1 (27 February 2016)
+
+* Fix "uninitialized constant Concurrent::ReentrantReadWriteLock" error.
+* Better handling of `autoload` vs. `require`.
+* Improved API for Edge `Future` zipping.
+* Fix reference leak in Edge `Future` constructor .
+* Fix bug which prevented thread pools from surviving a `fork`.
+* Fix bug in which `TimerTask` did not correctly specify all its dependencies.
+* Improved support for JRuby+Truffle
+* Improved error messages.
+* Improved documentation.
+* Updated README and CONTRIBUTING.
+
+### Release v1.0.0 (13 November 2015)
 
 * Rename `attr_volatile_with_cas` to `attr_atomic`
 * Add `clear_each` to `LockFreeStack`
data/README.md
CHANGED
@@ -45,9 +45,9 @@
 
 ### Supported Ruby versions
 
-MRI 1.9.3, 2.0
+MRI 1.9.3, 2.0 and above, JRuby 1.7x in 1.9 mode, JRuby 9000, and Rubinius 2.x are supported.
 This gem should be fully compatible with any interpreter that is compliant with Ruby 1.9.3 or newer.
-Java 8 is preferred for JRuby but every Java version on which JRuby 9000 runs
+Java 8 is preferred for JRuby but every Java version on which JRuby 9000 runs is supported.
 
 ## Thread Safety
 
@@ -59,14 +59,10 @@ It is critical to remember, however, that Ruby is a language of mutable referenc
 
 ## Features & Documentation
 
-We have a roadmap guiding our work toward the [v1.0.0 release](https://github.com/ruby-concurrency/concurrent-ruby/issues/257).
-
 The primary site for documentation is the automatically generated [API documentation](http://ruby-concurrency.github.io/concurrent-ruby/frames.html)
 
 We also have a [mailing list](http://groups.google.com/group/concurrent-ruby) and [IRC (gitter)](https://gitter.im/ruby-concurrency/concurrent-ruby).
 
-This library contains a variety of concurrency abstractions at high and low levels. One of the high-level abstractions is likely to meet most common needs.
-
 #### General-purpose Concurrency Abstractions
 
 * [Async](http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Async.html): A mixin module that provides simple asynchronous behavior to a class. Loosely based on Erlang's [gen_server](http://www.erlang.org/doc/man/gen_server.html).
@@ -182,7 +178,7 @@ gem install concurrent-ruby
 or add the following line to Gemfile:
 
 ```ruby
-gem 'concurrent-ruby'
+gem 'concurrent-ruby', require: 'concurrent'
 ```
 
 and run `bundle install` from your shell.
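With the updated Gemfile entry, Bundler requires the gem under the `concurrent` feature name. A quick sanity check outside Bundler (the `Future` call is only an illustration):

```ruby
require 'concurrent'

# Packaged as concurrent-ruby, required as 'concurrent'.
future = Concurrent::Future.execute { 1 + 1 }
puts future.value          # => 2
puts Concurrent::VERSION   # => "1.0.1"
```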
@@ -198,7 +194,7 @@ gem install concurrent-ruby-edge
 or add the following line to Gemfile:
 
 ```ruby
-gem 'concurrent-ruby-edge'
+gem 'concurrent-ruby-edge', require: 'concurrent-edge'
 ```
 
 and run `bundle install` from your shell.
data/lib/concurrent/array.rb
CHANGED
data/lib/concurrent/async.rb
CHANGED
@@ -58,7 +58,7 @@ module Concurrent
 # end
 # ```
 #
-# When defining a constructor it is
+# When defining a constructor it is critical that the first line be a call to
 # `super` with no arguments. The `super` method initializes the background
 # thread and other asynchronous components.
 #
@@ -153,7 +153,7 @@ module Concurrent
 # subtly different.
 #
 # When internal state is accessed via the `async` and `await` proxy methods,
-# the returned value represents the object's
+# the returned value represents the object's state *at the time the call is
 # processed*, which may *not* be the state of the object at the time the call
 # is made.
 #
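A small sketch of the behavior documented above, with an illustrative class (not part of the library): `super` must be the first call in the constructor, and both proxy methods return an `IVar` whose value reflects the object's state at the time the call is processed.

```ruby
require 'concurrent'

class Counter
  include Concurrent::Async

  def initialize
    super       # must come first; sets up the async machinery
    @count = 0
  end

  def increment
    @count += 1
  end

  def count
    @count
  end
end

counter = Counter.new
counter.async.increment       # fire-and-forget; returns an IVar immediately
ivar = counter.await.count    # blocks until this call has been processed
puts ivar.value               # => 1 (calls are processed in order, one at a time)
```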
data/lib/concurrent/atomic_reference/jruby+truffle.rb
CHANGED
@@ -1 +1 @@
-require 'concurrent/atomic_reference/
+require 'concurrent/atomic_reference/mutex_atomic'
data/lib/concurrent/atomics.rb
CHANGED
@@ -48,5 +48,6 @@ require 'concurrent/atomic/cyclic_barrier'
 require 'concurrent/atomic/count_down_latch'
 require 'concurrent/atomic/event'
 require 'concurrent/atomic/read_write_lock'
+require 'concurrent/atomic/reentrant_read_write_lock'
 require 'concurrent/atomic/semaphore'
 require 'concurrent/atomic/thread_local_var'
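The added require is what resolves the "uninitialized constant Concurrent::ReentrantReadWriteLock" error listed in the changelog; with 1.0.1 a plain `require 'concurrent'` is enough. A brief usage sketch:

```ruby
require 'concurrent'

lock  = Concurrent::ReentrantReadWriteLock.new
cache = {}

writer = Thread.new do
  lock.with_write_lock { cache[:answer] = 42 }            # exclusive
end

readers = 4.times.map do
  Thread.new { lock.with_read_lock { cache[:answer] } }   # shared
end

([writer] + readers).each(&:join)
puts cache[:answer]   # => 42
```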
data/lib/concurrent/dataflow.rb
CHANGED
@@ -39,7 +39,7 @@ module Concurrent
 call_dataflow(:value, executor, *inputs, &block)
 end
 module_function :dataflow_with
-
+
 def dataflow!(*inputs, &block)
 dataflow_with!(Concurrent.global_io_executor, *inputs, &block)
 end
@@ -50,12 +50,14 @@ module Concurrent
 end
 module_function :dataflow_with!
 
-private
+private
 
 def call_dataflow(method, executor, *inputs, &block)
 raise ArgumentError.new('an executor must be provided') if executor.nil?
 raise ArgumentError.new('no block given') unless block_given?
-
+unless inputs.all? { |input| input.is_a? IVar }
+raise ArgumentError.new("Not all dependencies are IVars.\nDependencies: #{ inputs.inspect }")
+end
 
 result = Future.new(executor: executor) do
 values = inputs.map { |input| input.send(method) }
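The new guard gives a clearer error when a dataflow dependency is not an `IVar`. A sketch of both the normal path and the failure path:

```ruby
require 'concurrent'

a = Concurrent::Future.execute { 2 }
b = Concurrent::Future.execute { 3 }

# Futures are IVars, so they are valid dependencies.
sum = Concurrent.dataflow(a, b) { |x, y| x + y }
puts sum.value   # => 5

# A plain integer is not an IVar, so the check shown above raises.
begin
  Concurrent.dataflow(1) { |x| x }
rescue ArgumentError => e
  puts e.message   # "Not all dependencies are IVars. ..."
end
```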
data/lib/concurrent/delay.rb
CHANGED
data/lib/concurrent/executor/fixed_thread_pool.rb
CHANGED
@@ -116,8 +116,8 @@ module Concurrent
 #
 # * `idletime`: The number of seconds that a thread may be idle before being reclaimed.
 # * `max_queue`: The maximum number of tasks that may be waiting in the work queue at
-# any one time. When the queue size reaches `max_queue`
-# rejected in accordance with the configured `fallback_policy`.
+# any one time. When the queue size reaches `max_queue` and no new threads can be created,
+# subsequent tasks will be rejected in accordance with the configured `fallback_policy`.
 # * `auto_terminate`: When true (default) an `at_exit` handler will be registered which
 # will stop the thread pool when the application exits. See below for more information
 # on shutting down thread pools.
@@ -146,7 +146,7 @@ module Concurrent
 # On some runtime platforms (most notably the JVM) the application will not
 # exit until all thread pools have been shutdown. To prevent applications from
 # "hanging" on exit all thread pools include an `at_exit` handler that will
-# stop the thread pool when the application
+# stop the thread pool when the application exits. This handler uses a brute
 # force method to stop the pool and makes no guarantees regarding resources being
 # used by any tasks still running. Registration of this `at_exit` handler can be
 # prevented by setting the thread pool's constructor `:auto_terminate` option to
@@ -171,8 +171,8 @@ module Concurrent
 
 # @!macro [attach] fixed_thread_pool
 #
-# A thread pool
-#
+# A thread pool that reuses a fixed number of threads operating off an unbounded queue.
+# At any point, at most `num_threads` will be active processing tasks. When all threads are busy new
 # tasks `#post` to the thread pool are enqueued until a thread becomes available.
 # Should a thread crash for any reason the thread will immediately be removed
 # from the pool and replaced.
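A sketch of the pool behavior described in the restored documentation above: a fixed number of worker threads consuming an unbounded queue.

```ruby
require 'concurrent'

pool = Concurrent::FixedThreadPool.new(2)   # at most two tasks run concurrently

10.times do |i|
  pool.post { sleep(0.05); puts "task #{i} ran on thread #{Thread.current.object_id}" }
end

pool.shutdown
pool.wait_for_termination
```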
data/lib/concurrent/executor/ruby_thread_pool_executor.rb
CHANGED
@@ -131,6 +131,7 @@ module Concurrent
 @scheduled_task_count = 0
 @completed_task_count = 0
 @largest_length = 0
+@ruby_pid = $$ # detects if Ruby has forked
 
 @gc_interval = opts.fetch(:gc_interval, @idletime / 2.0).to_i # undocumented
 @next_gc_time = Concurrent.monotonic_time + @gc_interval
@@ -143,6 +144,8 @@ module Concurrent
 
 # @!visibility private
 def ns_execute(*args, &task)
+ns_reset_if_forked
+
 if ns_assign_worker(*args, &task) || ns_enqueue(*args, &task)
 @scheduled_task_count += 1
 else
@@ -150,21 +153,21 @@ module Concurrent
 end
 
 ns_prune_pool if @next_gc_time < Concurrent.monotonic_time
-# raise unless @ready.empty? || @queue.empty? # assert
 end
 
 # @!visibility private
 def ns_shutdown_execution
+ns_reset_if_forked
+
 if @pool.empty?
 # nothing to do
 stopped_event.set
 end
+
 if @queue.empty?
 # no more tasks will be accepted, just stop all workers
 @pool.each(&:stop)
 end
-
-# raise unless @ready.empty? || @queue.empty? # assert
 end
169
172
|
|
170
173
|
# @!visibility private
|
@@ -273,6 +276,18 @@ module Concurrent
|
|
273
276
|
@next_gc_time = Concurrent.monotonic_time + @gc_interval
|
274
277
|
end
|
275
278
|
|
279
|
+
def ns_reset_if_forked
|
280
|
+
if $$ != @ruby_pid
|
281
|
+
@queue.clear
|
282
|
+
@ready.clear
|
283
|
+
@pool.clear
|
284
|
+
@scheduled_task_count = 0
|
285
|
+
@completed_task_count = 0
|
286
|
+
@largest_length = 0
|
287
|
+
@ruby_pid = $$
|
288
|
+
end
|
289
|
+
end
|
290
|
+
|
276
291
|
# @!visibility private
|
277
292
|
class Worker
|
278
293
|
include Concern::Logging
|
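The `@ruby_pid` bookkeeping and `ns_reset_if_forked` above are the "thread pools surviving a `fork`" fix from the changelog: when the pool notices that `$$` has changed it discards the parent's now-dead workers and queue. A hedged sketch, MRI/POSIX only since `fork` is unavailable on JRuby and Windows:

```ruby
require 'concurrent'

pool = Concurrent::FixedThreadPool.new(2)
pool.post { :warm_up_in_parent }   # spin up workers in the parent process

child = fork do
  # In 1.0.1 the pool detects the changed pid and resets itself,
  # so posting from the child works instead of hanging on dead workers.
  latch = Concurrent::CountDownLatch.new(1)
  pool.post { latch.count_down }
  exit!(latch.wait(5) ? 0 : 1)
end

Process.wait(child)
puts $?.exitstatus   # => 0 when the child's task ran
```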
data/lib/concurrent/executor/thread_pool_executor.rb
CHANGED
@@ -18,16 +18,22 @@ module Concurrent
 # @!macro [attach] thread_pool_executor
 #
 # An abstraction composed of one or more threads and a task queue. Tasks
-# (blocks or `proc` objects) are
+# (blocks or `proc` objects) are submitted to the pool and added to the queue.
 # The threads in the pool remove the tasks and execute them in the order
-# they were received.
-#
-#
-#
-#
+# they were received.
+#
+# A `ThreadPoolExecutor` will automatically adjust the pool size according
+# to the bounds set by `min-threads` and `max-threads`. When a new task is
+# submitted and fewer than `min-threads` threads are running, a new thread
+# is created to handle the request, even if other worker threads are idle.
+# If there are more than `min-threads` but less than `max-threads` threads
+# running, a new thread will be created only if the queue is full.
+#
+# Threads that are idle for too long will be garbage collected, down to the
+# configured minimum options. Should a thread crash it, too, will be garbage collected.
 #
 # `ThreadPoolExecutor` is based on the Java class of the same name. From
-# the official Java
+# the official Java documentation;
 #
 # > Thread pools address two different problems: they usually provide
 # > improved performance when executing large numbers of asynchronous tasks,
@@ -57,8 +63,8 @@ module Concurrent
 #
 # @option opts [Integer] :max_threads (DEFAULT_MAX_POOL_SIZE) the maximum
 # number of threads to be created
-# @option opts [Integer] :min_threads (DEFAULT_MIN_POOL_SIZE)
-#
+# @option opts [Integer] :min_threads (DEFAULT_MIN_POOL_SIZE) When a new task is submitted
+# and fewer than `min_threads` are running, a new thread is created
 # @option opts [Integer] :idletime (DEFAULT_THREAD_IDLETIMEOUT) the maximum
 # number of seconds a thread may be idle before being reclaimed
 # @option opts [Integer] :max_queue (DEFAULT_MAX_QUEUE_SIZE) the maximum
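A sketch that exercises the options documented above; the values are arbitrary:

```ruby
require 'concurrent'

pool = Concurrent::ThreadPoolExecutor.new(
  min_threads:     2,    # a new thread is created while fewer than this are running
  max_threads:     5,    # grown beyond min_threads only when the queue is full
  max_queue:       100,  # queued tasks beyond this trigger the fallback policy
  fallback_policy: :caller_runs
)

100.times { |i| pool.post { i * i } }

pool.shutdown
pool.wait_for_termination
```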
data/lib/concurrent/executor/timer_set.rb
CHANGED
@@ -4,9 +4,9 @@ require 'concurrent/collection/non_concurrent_priority_queue'
 require 'concurrent/executor/executor_service'
 require 'concurrent/executor/single_thread_executor'
 
-
+require 'concurrent/options'
 
-
+module Concurrent
 
 # Executes a collection of tasks, each after a given delay. A master task
 # monitors the set and schedules each task for execution at the appropriate
@@ -21,7 +21,7 @@ module Concurrent
 # Create a new set of timed tasks.
 #
 # @!macro [attach] executor_options
-#
+#
 # @param [Hash] opts the options used to specify the executor on which to perform actions
 # @option opts [Executor] :executor when set use the given `Executor` instance.
 # Three special values are also supported: `:task` returns the global task pool,
data/lib/concurrent/future.rb
CHANGED
@@ -4,9 +4,9 @@ require 'concurrent/errors'
 require 'concurrent/ivar'
 require 'concurrent/executor/safe_task_executor'
 
-
+require 'concurrent/options'
 
-
+module Concurrent
 
 # {include:file:doc/future.md}
 #
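The explicit `require 'concurrent/options'` matters because `Future` resolves its `:executor` option through `Concurrent::Options`. A small usage sketch:

```ruby
require 'concurrent'

# Run the block on the global I/O executor rather than the default pool.
future = Concurrent::Future.execute(executor: Concurrent.global_io_executor) do
  :result
end

puts future.value        # => :result
puts future.fulfilled?   # => true
```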
data/lib/concurrent/hash.rb
CHANGED
data/lib/concurrent/map.rb
CHANGED
@@ -18,6 +18,9 @@ module Concurrent
 when 'rbx'
 require 'concurrent/collection/map/atomic_reference_map_backend'
 AtomicReferenceMapBackend
+when 'jruby+truffle'
+require 'concurrent/collection/map/atomic_reference_map_backend'
+AtomicReferenceMapBackend
 else
 warn 'Concurrent::Map: unsupported Ruby engine, using a fully synchronized Concurrent::Map implementation' if $VERBOSE
 require 'concurrent/collection/map/synchronized_map_backend'
@@ -38,8 +41,43 @@ module Concurrent
 # > require 'concurrent'
 # >
 # > map = Concurrent::Map.new
-
 class Map < Collection::MapImplementation
+
+# @!macro [new] map_method_is_atomic
+# This method is atomic. Atomic methods of `Map` which accept a block
+# do not allow the `self` instance to be used within the block. Doing
+# so will cause a deadlock.
+
+# @!method put_if_absent
+# @!macro map_method_is_atomic
+
+# @!method compute_if_absent
+# @!macro map_method_is_atomic
+
+# @!method compute_if_present
+# @!macro map_method_is_atomic
+
+# @!method compute
+# @!macro map_method_is_atomic
+
+# @!method merge_pair
+# @!macro map_method_is_atomic
+
+# @!method replace_pair
+# @!macro map_method_is_atomic
+
+# @!method replace_if_exists
+# @!macro map_method_is_atomic
+
+# @!method get_and_set
+# @!macro map_method_is_atomic
+
+# @!method delete
+# @!macro map_method_is_atomic
+
+# @!method delete_pair
+# @!macro map_method_is_atomic
+
 def initialize(options = nil, &block)
 if options.kind_of?(::Hash)
 validate_options_hash!(options)
@@ -68,6 +106,15 @@ module Concurrent
 alias_method :get, :[]
 alias_method :put, :[]=
 
+# @!macro [attach] map_method_not_atomic
+# The "fetch-then-act" methods of `Map` are not atomic. `Map` is intended
+# to be use as a concurrency primitive with strong happens-before
+# guarantees. It is not intended to be used as a high-level abstraction
+# supporting complex operations. All read and write operations are
+# thread safe, but no guarantees are made regarding race conditions
+# between the fetch operation and yielding to the block. Additionally,
+# this method does not support recursion. This is due to internal
+# constraints that are very unlikely to change in the near future.
 def fetch(key, default_value = NULL)
 if NULL != (value = get_or_default(key, NULL))
 value
@@ -80,12 +127,14 @@ module Concurrent
 end
 end
 
+# @!macro map_method_not_atomic
 def fetch_or_store(key, default_value = NULL)
 fetch(key) do
 put(key, block_given? ? yield(key) : (NULL == default_value ? raise_fetch_no_key : default_value))
 end
 end
 
+# @!macro map_method_is_atomic
 def put_if_absent(key, value)
 computed = false
 result = compute_if_absent(key) do
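A sketch of the distinction documented above between the atomic methods and the "fetch-then-act" convenience methods:

```ruby
require 'concurrent'

map = Concurrent::Map.new

# Atomic: safe under concurrent access; the block must not touch the map itself.
map.compute_if_absent(:visits) { 0 }
map.put_if_absent(:visits, 100)        # no-op, the key already exists

# Fetch-then-act: thread safe, but not atomic between the fetch and the store.
map.fetch_or_store(:name, 'concurrent-ruby')

puts map[:visits]   # => 0
puts map[:name]     # => "concurrent-ruby"
```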
data/lib/concurrent/options.rb
CHANGED
data/lib/concurrent/promise.rb
CHANGED
@@ -3,9 +3,9 @@ require 'concurrent/constants'
 require 'concurrent/errors'
 require 'concurrent/ivar'
 
-
+require 'concurrent/options'
 
-
+module Concurrent
 
 PromiseExecutionError = Class.new(StandardError)
 
data/lib/concurrent/scheduled_task.rb
CHANGED
@@ -5,9 +5,9 @@ require 'concurrent/ivar'
 require 'concurrent/collection/copy_on_notify_observer_set'
 require 'concurrent/utility/monotonic_time'
 
-
+require 'concurrent/options'
 
-
+module Concurrent
 
 # `ScheduledTask` is a close relative of `Concurrent::Future` but with one
 # important difference: A `Future` is set to execute as soon as possible
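`ScheduledTask` itself is unchanged here beyond the require cleanup; for context, a minimal usage sketch:

```ruby
require 'concurrent'

# Runs the block roughly two seconds from now on the global executor.
task = Concurrent::ScheduledTask.execute(2) { Time.now }

puts task.pending?   # => true, most likely, right after scheduling
puts task.value      # blocks until the delay elapses and the block has run
```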
data/lib/concurrent/synchronization.rb
CHANGED
@@ -7,6 +7,7 @@ Concurrent.load_native_extensions
 require 'concurrent/synchronization/mri_object'
 require 'concurrent/synchronization/jruby_object'
 require 'concurrent/synchronization/rbx_object'
+require 'concurrent/synchronization/truffle_object'
 require 'concurrent/synchronization/object'
 require 'concurrent/synchronization/volatile'
 
@@ -14,6 +15,7 @@ require 'concurrent/synchronization/abstract_lockable_object'
 require 'concurrent/synchronization/mri_lockable_object'
 require 'concurrent/synchronization/jruby_lockable_object'
 require 'concurrent/synchronization/rbx_lockable_object'
+require 'concurrent/synchronization/truffle_lockable_object'
 
 require 'concurrent/synchronization/lockable_object'
 
data/lib/concurrent/synchronization/lockable_object.rb
CHANGED
@@ -10,8 +10,10 @@ module Concurrent
 MriMutexLockableObject
 when Concurrent.on_jruby?
 JRubyLockableObject
-when Concurrent.on_rbx?
+when Concurrent.on_rbx?
 RbxLockableObject
+when Concurrent.on_truffle?
+MriMutexLockableObject
 else
 warn 'Possibly unsupported Ruby implementation'
 MriMonitorLockableObject
data/lib/concurrent/synchronization/object.rb
CHANGED
@@ -8,8 +8,10 @@ module Concurrent
 MriObject
 when Concurrent.on_jruby?
 JRubyObject
-when Concurrent.on_rbx?
+when Concurrent.on_rbx?
 RbxObject
+when Concurrent.on_truffle?
+TruffleObject
 else
 MriObject
 end
@@ -49,8 +51,8 @@ module Concurrent
 # define only once, and not again in children
 return if safe_initialization?
 
-def self.new(*)
-object = super
+def self.new(*args, &block)
+object = super(*args, &block)
 ensure
 object.full_memory_barrier if object
 end
data/lib/concurrent/synchronization/rbx_object.rb
CHANGED
@@ -7,6 +7,7 @@ module Concurrent
 end
 
 module ClassMethods
+
 def attr_volatile(*names)
 names.each do |name|
 ivar = :"@volatile_#{name}"
@@ -24,6 +25,7 @@ module Concurrent
 end
 names.map { |n| [n, :"#{n}="] }.flatten
 end
+
 end
 
 def full_memory_barrier
data/lib/concurrent/synchronization/truffle_object.rb
ADDED
@@ -0,0 +1,32 @@
+module Concurrent
+module Synchronization
+
+module TruffleAttrVolatile
+def self.included(base)
+base.extend(ClassMethods)
+end
+
+module ClassMethods
+def attr_volatile(*names)
+# TODO may not always be available
+attr_atomic(*names)
+end
+end
+
+def full_memory_barrier
+# Rubinius instance variables are not volatile so we need to insert barrier
+Rubinius.memory_barrier
+end
+end
+
+# @!visibility private
+# @!macro internal_implementation_note
+class TruffleObject < AbstractObject
+include TruffleAttrVolatile
+
+def initialize
+# nothing to do
+end
+end
+end
+end
data/lib/concurrent/timer_task.rb
CHANGED
@@ -3,7 +3,9 @@ require 'concurrent/concern/dereferenceable'
 require 'concurrent/concern/observable'
 require 'concurrent/atomic/atomic_boolean'
 require 'concurrent/executor/executor_service'
+require 'concurrent/executor/ruby_executor_service'
 require 'concurrent/executor/safe_task_executor'
+require 'concurrent/scheduled_task'
 
 module Concurrent
 
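These added requires pin down the dependencies `TimerTask` actually uses (the changelog's "did not correctly specify all its dependencies" fix); usage itself is unchanged:

```ruby
require 'concurrent'

task = Concurrent::TimerTask.new(execution_interval: 1) do
  puts "heartbeat at #{Time.now}"
end

task.execute    # start the repeating task
sleep(3)
task.shutdown   # stop it
```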
data/lib/concurrent/utility/engine.rb
CHANGED
@@ -27,6 +27,14 @@ module Concurrent
 !(RbConfig::CONFIG['host_os'] =~ /mswin|mingw|cygwin/).nil?
 end
 
+def on_osx?
+!(RbConfig::CONFIG['host_os'] =~ /darwin|mac os/).nil?
+end
+
+def on_linux?
+!(RbConfig::CONFIG['host_os'] =~ /linux/).nil?
+end
+
 def ruby_engine
 defined?(RUBY_ENGINE) ? RUBY_ENGINE : 'ruby'
 end
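The new predicates are exposed on the `Concurrent` module alongside the existing engine checks; a quick sketch (output naturally depends on the host):

```ruby
require 'concurrent'

puts Concurrent.on_windows?
puts Concurrent.on_osx?     # added in 1.0.1
puts Concurrent.on_linux?   # added in 1.0.1
```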
data/lib/concurrent/utility/processor_counter.rb
CHANGED
@@ -75,6 +75,8 @@ module Concurrent
 def compute_processor_count
 if Concurrent.on_jruby?
 java.lang.Runtime.getRuntime.availableProcessors
+elsif Concurrent.on_truffle?
+Truffle::Primitive.logical_processors
 else
 os_name = RbConfig::CONFIG["target_os"]
 if os_name =~ /mingw|mswin/
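`compute_processor_count` backs the public helpers below; on JRuby it asks the JVM, on JRuby+Truffle it now uses `Truffle::Primitive`, and elsewhere it inspects the operating system:

```ruby
require 'concurrent'

puts Concurrent.processor_count            # logical processors
puts Concurrent.physical_processor_count   # physical cores, where detectable
```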
data/lib/concurrent/version.rb
CHANGED
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: concurrent-ruby
 version: !ruby/object:Gem::Version
-version: 1.0.
+version: 1.0.1
 platform: ruby
 authors:
 - Jerry D'Antonio
@@ -9,7 +9,7 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date:
+date: 2016-02-27 00:00:00.000000000 Z
 dependencies: []
 description: |
 Modern concurrency tools including agents, futures, promises, thread pools, actors, supervisors, and more.
@@ -128,6 +128,8 @@ files:
 - lib/concurrent/synchronization/object.rb
 - lib/concurrent/synchronization/rbx_lockable_object.rb
 - lib/concurrent/synchronization/rbx_object.rb
+- lib/concurrent/synchronization/truffle_lockable_object.rb
+- lib/concurrent/synchronization/truffle_object.rb
 - lib/concurrent/synchronization/volatile.rb
 - lib/concurrent/thread_safe/synchronized_delegator.rb
 - lib/concurrent/thread_safe/util.rb
@@ -167,7 +169,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
 version: '0'
 requirements: []
 rubyforge_project:
-rubygems_version: 2.
+rubygems_version: 2.6.0
 signing_key:
 specification_version: 4
 summary: Modern concurrency tools for Ruby. Inspired by Erlang, Clojure, Scala, Haskell,