async 2.32.0 → 2.32.1
This diff shows the changes between publicly released package versions as they appear in their respective public registries, and is provided for informational purposes only.
- checksums.yaml +4 -4
- checksums.yaml.gz.sig +0 -0
- data/context/best-practices.md +0 -2
- data/context/tasks.md +11 -6
- data/context/thread-safety.md +249 -206
- data/lib/async/version.rb +1 -1
- data/readme.md +4 -4
- data/releases.md +4 -0
- data.tar.gz.sig +0 -0
- metadata +1 -1
- metadata.gz.sig +0 -0
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 59ef83bbeaff186a176dea0301c0b1772188f21053715a8c3123de07beb0d4a2
+  data.tar.gz: e03bfe837ec4030ec88a02b4568166fcc78d6b85d2c788592dbe65ec60ffefa6
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 603f5de4548b6d03aa4f8ca15f1e920e3c172c6985fdfbb4f23da9355ad07dc77cffa15a25b1272c14edddb991a90449bd76ca6036930539bd58ea83d0dd3a79
+  data.tar.gz: 99d9d6129853a7cbb50a41f0778d7225fb02f435e8f06001a18c2acbccb63c1cbd4f1e7145292388a7763ded53a85de548d98c3d5d9f0bfc894f8431414c6d39
checksums.yaml.gz.sig
CHANGED
Binary file
data/context/best-practices.md
CHANGED
data/context/tasks.md
CHANGED
@@ -189,14 +189,19 @@ end
 barrier = Async::Barrier.new
 semaphore = Async::Semaphore.new(2, parent: barrier)
 
-jobs.each do |job|
-	semaphore.async do
-		# ... process job ...
+begin
+	jobs.each do |job|
+		semaphore.async do
+			# ... process job ...
+		end
 	end
-end
 
-# Wait until all jobs are done:
-barrier.wait
+	# Wait until all jobs are done:
+	barrier.wait
+ensure
+	# Stop any remaining jobs:
+	barrier.stop
+end
 ~~~
 
 ## Stopping a Task
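The `begin`/`ensure` shape introduced in this hunk — bound concurrency with a semaphore, wait for all jobs, and stop any stragglers on the way out — can be sketched with plain threads for readers without the async gem at hand. The `SizedQueue`-as-semaphore trick, the job list, and the doubling "work" below are illustrative stand-ins, not code from this package:

```ruby
jobs = (1..6).to_a
results = Queue.new
semaphore = SizedQueue.new(2) # At most 2 jobs in flight, like Async::Semaphore.new(2).
threads = []

begin
	jobs.each do |job|
		semaphore.push(true) # Acquire a slot before starting the job.
		threads << Thread.new(job) do |j|
			results << j * 2 # ... process job ...
		ensure
			semaphore.pop # Release the slot.
		end
	end

	# Wait until all jobs are done (the role barrier.wait plays above):
	threads.each(&:join)
ensure
	# Stop any remaining jobs (the role barrier.stop plays above):
	threads.each(&:kill)
end
```

The `ensure` guarantees cleanup even if job submission raises midway, which is exactly why the documentation moved `barrier.wait` inside a `begin`/`ensure` block.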
data/context/thread-safety.md
CHANGED
@@ -14,14 +14,14 @@ When analyzing existing projects, you should check files one by one, looking for
 
 - **Data corruption is the primary concern** - prevention is absolutely critical.
 - **Isolation should be the default** - operations should not share mutable state.
-
+- **Shared mutable state should be avoided**. Prefer pure functions, immutable objects, and dependency injection.
 - **Assume that code will be executed concurrently** by multiple fibers, threads and processes.
 - **Assume that code may context switch at any time**, but especially during I/O operations.
-
-
+	- I/O operations include network calls, file I/O, database queries, etc.
+	- Other context switch points include `Fiber.yield`, `sleep`, waiting on child processes, DNS queries, and interrupts (signal handling).
 - **Fibers and threads are NOT the same thing**, however they do share similar safety requirements.
 - **C extensions e.g. C/Rust etc. can block the fiber scheduler entirely**.
-
+	- Native code, when implemented correctly, is usually okay, but bugs can exist anywhere, even in mature code.
 
 ## Quick Reference
 
@@ -71,23 +71,23 @@ Therefore, the best practice is to avoid shared mutable state whenever possible.
 ### Shared mutable state
 
 Shared mutable state, including class instance variables accessed by multiple threads or fibers, is problematic and should be avoided. This includes class instance variables, module variables, and any mutable objects that are shared across threads or fibers.
-
+
 ```ruby
 class CurrencyConverter
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+	def initialize
+		@exchange_rates = {} # Issue: Shared mutable state
+	end
+
+	def update_rate(currency, rate)
+		# Issue: Multiple threads can modify @exchange_rates concurrently
+		@exchange_rates[currency] = rate
+	end
+
+	def convert(amount, from_currency, to_currency)
+		# Issue: If @exchange_rates is modified while this method runs, it can lead to incorrect conversions
+		rate = @exchange_rates[from_currency] / @exchange_rates[to_currency]
+		amount * rate
+	end
 end
 ```
 
@@ -105,15 +105,15 @@ Class variables (`@@variable`) and class attributes (`class_attribute`) represen
 
 ```ruby
 class GlobalConfig
-
+	@@settings = {} # Issue: Class variables are shared across inheritance
 
-
-
-
+	def set(key, value)
+		@@settings[key] = value
+	end
 
-
-
-
+	def get(key)
+		@@settings[key]
+	end
 end
 
 class UserConfig < GlobalConfig
@@ -137,9 +137,9 @@ Lazy initialization is a common pattern in Ruby, but the `||=` operator is not a
 
 ```ruby
 class Loader
-
-
-
+	def self.data
+		@data ||= JSON.load_file('data.json')
+	end
 end
 ```
 
@@ -151,21 +151,21 @@ This could cause situations where `self.data != self.data` for example, or modif
 
 ```ruby
 class Loader
-
+	@mutex = Mutex.new
 
-
-
-
+	def self.data
+		# Double-checked locking pattern:
+		return @data if @data
 
-
-
+		@mutex.synchronize do
+			return @data if @data
 
-
-
-
+			# Now we are sure that @data is nil, we can safely fetch it:
+			@data = JSON.load_file('data.json')
+		end
 
-
-
+		return @data
+	end
 end
 ```
 
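The same double-checked pattern can be exercised with a self-contained class (the `Config` name and the frozen-hash payload are invented for this sketch); every concurrent caller ends up with the identical lazily-built object:

```ruby
class Config
	@mutex = Mutex.new

	def self.data
		# Fast path: no lock needed once @data is set.
		return @data if @data

		@mutex.synchronize do
			# Check again inside the lock, in case another thread initialized it first:
			return @data if @data

			@data = {"loaded" => true}.freeze
		end

		@data
	end
end

values = 8.times.map {Thread.new {Config.data}}.map(&:value)
values.all? {|v| v.equal?(values.first)} # => true, every thread sees one object
```

The second `return @data if @data` inside the lock is what prevents two threads from both running the expensive load.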
@@ -173,19 +173,19 @@ In addition, it should be noted that lazy initialization of a `Mutex` (and other
 
 ```ruby
 class Loader
-
-
+	def self.data
+		@mutex ||= Mutex.new # Issue: Not thread-safe
 
-
-
-
+		@mutex.synchronize do
+			# Double-checked locking pattern:
+			return @data if @data
 
-
-
-
+			# Now we are sure that @data is nil, we can safely fetch it:
+			@data = JSON.load_file('data.json')
+		end
 
-
-
+		return @data
+	end
 end
 ```
 
@@ -195,15 +195,15 @@ In the case that each instance is only accessed by a single thread or fiber, mem
 
 ```ruby
 class Loader
-
-
-
-
+	def things
+		# Safe: each instance has its own @things
+		@things ||= compute_things
+	end
 end
 
 def do_something
-
-
+	loader = Loader.new
+	loader.things # Safe: only accessed by this thread/fiber
 end
 ```
 
@@ -213,11 +213,11 @@ Like lazy initialization, memoization using `Hash` caches can lead to race condi
 
 ```ruby
 class ExpensiveComputation
-
+	@cache = {}
 
-
-
-
+	def self.compute(key)
+		@cache[key] ||= expensive_operation(key) # Issue: Not thread-safe
+	end
 end
 ```
 
@@ -229,14 +229,14 @@ Note that this mutex creates contention on all calls to `compute`, which can be
 
 ```ruby
 class ExpensiveComputation
-
-
+	@cache = {}
+	@mutex = Mutex.new
 
-
-
-
-
-
+	def self.compute(key)
+		@mutex.synchronize do
+			@cache[key] ||= expensive_operation(key)
+		end
+	end
end
 ```
 
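To see the coarse-lock cache behave as described, here is a runnable variant that also counts invocations (the call counter and the toy `expensive_operation` are additions for the demonstration, not part of the documented class):

```ruby
class ExpensiveComputation
	@cache = {}
	@mutex = Mutex.new
	@calls = Hash.new(0)

	class << self
		attr_reader :calls

		def compute(key)
			@mutex.synchronize do
				@cache[key] ||= expensive_operation(key)
			end
		end

		private

		# Stand-in for real expensive work; counts how often it actually runs:
		def expensive_operation(key)
			@calls[key] += 1
			key.to_s * 2
		end
	end
end

threads = 8.times.map {Thread.new {ExpensiveComputation.compute(:a)}}
threads.map(&:value).uniq # => ["aa"]
ExpensiveComputation.calls[:a] # => 1, the expensive work ran exactly once
```

Because every `compute` call serializes on one mutex, the `||=` inside it is safe — at the cost of the contention the surrounding text warns about.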
@@ -244,13 +244,13 @@ end
 
 ```ruby
 class ExpensiveComputation
-
+	@cache = Concurrent::Map.new
 
-
-
-
-
+	def self.compute(key)
+		@cache.compute_if_absent(key) do
+			expensive_operation(key)
+		end
+	end
 end
 ```
 
@@ -305,11 +305,11 @@ Sharing network connections, database connections, or other resources across thr
 client = Database.connect
 
 Thread.new do
-
+	results = client.query("SELECT * FROM users")
 end
 
 Thread.new do
-
+	results = client.query("SELECT * FROM products")
 end
 ```
 
@@ -322,19 +322,19 @@ Using a connection pool can help manage shared connections safely:
 ```ruby
 require 'connection_pool'
 pool = ConnectionPool.new(size: 5, timeout: 5) do
-
+	Database.connect
 end
 
 Thread.new do
-
-
-
+	pool.with do |client|
+		results = client.query("SELECT * FROM users")
+	end
 end
 
 Thread.new do
-
-
-
+	pool.with do |client|
+		results = client.query("SELECT * FROM products")
+	end
 end
 ```
 
@@ -344,18 +344,18 @@ Enumerating shared mutable container (e.g. `Array` or `Hash`) can cause consiste
 
 ```ruby
 class SharedList
-
-
-
+	def initialize
+		@list = []
+	end
 
-
-
-
+	def add(item)
+		@list << item
+	end
 
-
-
-
-
+	def each(&block)
+		# Issue: Modifications during enumeration can lead to inconsistent state
+		@list.each(&block)
+	end
 end
 ```
 
@@ -369,22 +369,22 @@ To ensure that the enumeration is safe, you can use a `Mutex` to synchronize acc
 
 ```ruby
 class SharedList
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+	def initialize
+		@list = []
+		@mutex = Mutex.new
+	end
+
+	def add(item)
+		@mutex.synchronize do
+			@list << item
+		end
+	end
+
+	def each(&block)
+		@mutex.synchronize do
+			@list.each(&block)
+		end
+	end
 end
 ```
 
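Run against multiple writers, the synchronized version yields a consistent count; this stand-alone sketch (the writer and item counts are chosen arbitrarily for the demo) shows the enumeration observing every addition:

```ruby
class SharedList
	def initialize
		@list = []
		@mutex = Mutex.new
	end

	def add(item)
		@mutex.synchronize {@list << item}
	end

	def each(&block)
		# Holding the lock gives the enumeration a consistent view of @list:
		@mutex.synchronize {@list.each(&block)}
	end
end

list = SharedList.new
writers = 4.times.map do |n|
	Thread.new {25.times {|i| list.add([n, i])}}
end
writers.each(&:join)

count = 0
list.each {count += 1}
count # => 100
```

Note that Ruby's `Mutex` is not reentrant, so calling `add` from inside the `each` block here would deadlock — the hazard the counter example later in this file illustrates.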
@@ -395,13 +395,13 @@ Alternatively, you can defer operations that modify the shared state until after
 ```ruby
 stale = []
 shared_list.each do |item|
-
-
-
+	if item.stale?
+		stale << item
+	end
 end
 
 stale.each do |item|
-
+	shared_list.remove(item)
 end
 ```
 
@@ -410,7 +410,7 @@ Or better yet, use immutable data structures or pure functions that do not rely
 ```ruby
 fresh = []
 shared_list.each do |item|
-
+	fresh << item unless item.stale?
 end
 
 shared_list.replace(fresh) # Replace the entire list with a new one
@@ -422,7 +422,7 @@ Race conditions occur when state changes in an unpredictable way due to concurre
 
 ```ruby
 while system.busy?
-
+	system.wait
 end
 ```
 
@@ -434,26 +434,26 @@ If you are able to modify the state transition logic of the shared resource, you
 
 ```ruby
 class System
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+	def initialize
+		@mutex = Mutex.new
+		@condition = ConditionVariable.new
+		@usage = 0
+	end
+
+	def release
+		@mutex.synchronize do
+			@usage -= 1
+			@condition.signal if @usage == 0
+		end
+	end
+
+	def wait_until_free
+		@mutex.synchronize do
+			while @usage > 0
+				@condition.wait(@mutex)
+			end
+		end
+	end
 end
 ```
 
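The `System` excerpt above can be completed into a runnable sketch by adding an `acquire` method (an addition for this demonstration — the documented excerpt only shows `release` and `wait_until_free`):

```ruby
class System
	def initialize
		@mutex = Mutex.new
		@condition = ConditionVariable.new
		@usage = 0
	end

	# Added for the demonstration; the documented excerpt starts at #release.
	def acquire
		@mutex.synchronize {@usage += 1}
	end

	def release
		@mutex.synchronize do
			@usage -= 1
			@condition.signal if @usage == 0
		end
	end

	def wait_until_free
		@mutex.synchronize do
			while @usage > 0
				@condition.wait(@mutex)
			end
		end
	end
end

system = System.new
3.times {system.acquire}

workers = 3.times.map do
	Thread.new do
		sleep(rand * 0.01)
		system.release
	end
end

system.wait_until_free # Returns only once every worker has released.
workers.each(&:join)
```

Because the check and the `wait` both happen while holding the mutex, a `signal` can never be lost between them — which is what makes this safer than the polling loop shown earlier.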
@@ -463,10 +463,10 @@ External resources can also lead to "time of check to time of use" issues, where
 
 ```ruby
 if File.exist?('cache.json')
-
+	@data = File.read('cache.json')
 else
-
-
+	@data = fetch_data_from_api
+	File.write('cache.json', @data)
 end
 ```
 
@@ -480,12 +480,12 @@ Using content-addressable storage and atomic file operations can help avoid race
 
 ```ruby
 begin
-
+	File.read('cache.json')
 rescue Errno::ENOENT
-
-
-
-
+	File.open('cache.json', 'w') do |file|
+		file.flock(File::LOCK_EX)
+		file.write(fetch_data_from_api)
+	end
 end
 ```
 
@@ -497,10 +497,10 @@ Using actual thread-local storage for "per-request" state can be problematic in
 
 ```ruby
 class RequestContext
-
-
-
-
+	def self.current
+		Thread.current.thread_variable_get(:request_context) ||
+			Thread.current.thread_variable_set(:request_context, Hash.new)
+	end
 end
 ```
 
@@ -510,55 +510,98 @@ In addition, some libraries may use `Thread.current` as a key in a hash or other
 
 ```ruby
 class Pool
-
-
-
-
-
-
-
-
-
-
+	def initialize
+		@connections = {}
+		@mutex = Mutex.new
+	end
+
+	def current_connection
+		@mutex.synchronize do
+			@connections[Thread.current] ||= create_new_connection
+		end
+	end
 end
 ```
 
-#### Use `
+#### Use `Fiber.attr` for per-request state
 
-
+Use `Fiber.attr :my_attribute` for storing per-request state. `Fiber.current.my_attribute` provides a clean, readable way to define fiber-scoped attributes with excellent performance characteristics.
 
 ```ruby
-
-
+# Use prefixed names to avoid namespace conflicts:
+Fiber.attr :falcon_request_id
 
-
+# Set per-request state:
+Fiber.current.falcon_request_id = SecureRandom.uuid
 
-
+# Access anywhere in the same fiber:
+def process_data
+	puts "Request ID: #{Fiber.current.falcon_request_id}"
+end
+```
 
-
+#### Use `Fiber[key]` for inheritable per-request state
+
+Use `Fiber[key]` when you need per-request state that **inherits across concurrency boundaries** (to child fibers and threads). This is particularly useful for request tracing, user context, or other state that should flow through your entire request processing pipeline.
 
 ```ruby
-Fiber[:user_id] = request.session[:user_id]
+Fiber[:user_id] = request.session[:user_id]
+Fiber[:trace_id] = request.headers['X-Trace-ID']
 
 jobs.each do |job|
-
-
-
-
+	Thread.new do
+		# Child threads inherit the fiber storage:
+		puts "Processing job for user #{Fiber[:user_id]} (trace: #{Fiber[:trace_id]})"
+		process_job(job)
+	end
+end
+
+Async do |task|
+	# Child fibers also inherit the storage:
+	task.async do
+		puts "Background task for user: #{Fiber[:user_id]}"
+	end
 end
 ```
 
-
+Note that since this state is inherited across concurrency boundaries, any mutable objects stored here must be thread-safe or you risk data corruption. Therefore you should prefer immutable values including strings, numbers, frozen objects, or other immutable data types, and avoid mutable objects like arrays, hashes, unless they're thread-safe.
+
+```ruby
+# Safe - immutable values:
+Fiber[:user_id] = "user_123"
+Fiber[:account_type] = :premium
+
+# Risky - mutable objects:
+Fiber[:user_preferences] = {} # Could be modified concurrently
+Fiber[:request_data] = SomeObject.new # Unless thread-safe
+```
 
-
+#### Use `Thread.current` for per-request state
+
+While `Thread.current[key]` is technically safe for per-request state in Ruby, **it has significant readability and comprehension issues** that make it a poor choice:
+
+```ruby
+Thread.current[:connection] ||= create_new_connection
+```
+
+Despite the look, this is actually stored per-fiber and thus acceptable for storing per-request state. While it works, the confusion it creates makes `Fiber.attr` or `Fiber[key]` much better options.
+
+#### Use `Thread.attr` for actual thread-local storage
+
+When you specifically need **actual per-thread storage** (not per-fiber), use `Thread.attr`. This is scoped to the OS thread and is not inherited by child fibers.
 
 ```ruby
-
+Thread.attr :connection_pool
 
-
+# Each thread gets its own connection pool
+Thread.current.connection_pool = ConnectionPool.new(size: 5)
+
+def get_connection
+	Thread.current.connection_pool.checkout
+end
 ```
 
-This
+This approach is rarely needed in fiber-based applications and should be used sparingly, however it can be useful for state or caches that should persist across multiple fibers but is not generally thread safe. Storing an instance per thread can help mitigate thread safety issues.
 
 ### C extensions that block the scheduler
 
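The distinction this hunk documents is easy to verify in plain Ruby: `Thread.current[key]` is fiber-local despite its name, while `thread_variable_get`/`thread_variable_set` are the true thread-locals shared by every fiber on the thread (the variable names below are illustrative):

```ruby
Thread.current[:request_id] = "abc123" # Fiber-local, despite the Thread.current receiver.
Thread.current.thread_variable_set(:pool_name, "primary") # Genuinely thread-local.

fiber_local, thread_local = Fiber.new {
	[Thread.current[:request_id], Thread.current.thread_variable_get(:pool_name)]
}.resume

fiber_local  # => nil - a new fiber on the same thread does not see it
thread_local # => "primary" - the thread-local is visible to every fiber on the thread
```

This is exactly why `Thread.current[key]` is acceptable for per-request state in fiber-per-request servers, and why true thread-locals are not.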
@@ -570,29 +613,29 @@ Synchronization primitives like `Mutex`, `ConditionVariable`, and `Queue` are es
 
 ```ruby
 class Counter
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+	def initialize(count = 0)
+		@count = count
+		@mutex = Mutex.new
+	end
+
+	def increment
+		@mutex.synchronize do
+			@count += 1
+		end
+	end
+
+	def times
+		@mutex.synchronize do
+			@count.times do |i|
+				yield i
+			end
+		end
+	end
 end
 
 counter = Counter.new
 counter.times do |i|
-
+	counter.increment # deadlock
 end
 ```
 
@@ -606,33 +649,33 @@ As an alternative to the above, reducing the scope of the lock can help avoid de
 
 ```ruby
 class Counter
-
+	# ...
 
-
-
+	def times
+		count = @mutex.synchronize{@count}
 
-
-
-
-
-
+		# Avoid holding the lock while yielding to user code:
+		count.times do |i|
+			yield i
+		end
+	end
 end
 ```
 
 ## Best Practices for Concurrency in Ruby
 
 1. **Favor pure, isolated, and immutable objects and functions.**
-
+	The safest and easiest way to write concurrent code is to avoid shared mutable state entirely. Isolated objects and pure functions eliminate the risk of race conditions and make reasoning about code much simpler.
 
 2. **Use per-request (or per-fiber) state correctly.**
-
+	When you need to associate state with a request, job, or fiber, prefer explicit context passing, or use fiber-local variables (e.g. `Fiber[:key]`). Avoid using thread-local storage in fiber-based code, as fibers may share threads and this can lead to subtle bugs.
 
 3. **Use synchronization primitives only when sharing is truly necessary.**
-
+	If you must share mutable state (for performance, memory efficiency, or correctness), protect it with the appropriate synchronization primitives:
 
-
-
-
+	* Prefer high-level, lock-free data structures (e.g. `Concurrent::Map`) when possible.
+	* If locks are necessary, use fine-grained locking to minimize contention and reduce deadlock risk.
+	* Avoid coarse-grained locks except as a last resort, as they can severely limit concurrency and hurt performance.
 
 ### Hierarchy of Concurrency Safety
 
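The reduced-scope `times` composes with the earlier `increment` without deadlocking; here is a complete sketch combining the two excerpts (the initial count and the doubling scenario are for the demo):

```ruby
class Counter
	def initialize(count = 0)
		@count = count
		@mutex = Mutex.new
	end

	def increment
		@mutex.synchronize {@count += 1}
	end

	def times
		# Snapshot the count under the lock, then yield with the lock released:
		count = @mutex.synchronize {@count}

		count.times {|i| yield i}
	end
end

counter = Counter.new(3)
counter.times do |i|
	counter.increment # No deadlock: times is not holding the mutex here.
end
```

The trade-off is that the enumeration works on a snapshot: increments made during iteration are counted, but only seen by a subsequent call to `times`.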
data/lib/async/version.rb
CHANGED
data/readme.md
CHANGED
@@ -35,6 +35,10 @@ Please see the [project documentation](https://socketry.github.io/async/) for mo
 
 Please see the [project releases](https://socketry.github.io/async/releases/index) for all releases.
 
+### v2.32.1
+
+- Fix typo in documentation.
+
 ### v2.32.0
 
 - Introduce `Queue#waiting_count` and `PriorityQueue#waiting_count`. Generally for statistics/testing purposes only.
@@ -80,10 +84,6 @@ This release introduces thread-safety as a core concept of Async. Many core clas
 
 - Fix `context/index.yaml` schema.
 
-### v2.27.1
-
-- Updated documentation and agent context.
-
 ## See Also
 
 - [async-http](https://github.com/socketry/async-http) — Asynchronous HTTP client/server.
data/releases.md
CHANGED
data.tar.gz.sig
CHANGED
Binary file
metadata
CHANGED
metadata.gz.sig
CHANGED
Binary file