async 2.32.0 → 2.32.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 5a816b65e2cdf3f1f9ee99d421b32f13ed00c858bfa2b1ff0f814cf33c9dd470
- data.tar.gz: 85dc9dfad110e4ae2f0d4db45db9b1f941b797891f945da1caaa302df233f813
+ metadata.gz: 59ef83bbeaff186a176dea0301c0b1772188f21053715a8c3123de07beb0d4a2
+ data.tar.gz: e03bfe837ec4030ec88a02b4568166fcc78d6b85d2c788592dbe65ec60ffefa6
  SHA512:
- metadata.gz: e6cf114ad81e9eb511504ed44b719f9838ad84623fea14aab43acd531a9d8493b92eb0a24a49bb410670c0be82c8de5617280f39e984f1d9477c4828d96d2fc5
- data.tar.gz: 2acb3e1516b2670a57cb717713d882f9d75807864924bf7aa691aa82cd5406a2b273c7aa813a30037a3dabce338095eb7faa5efff8f5f5b56290a07b3b0927b5
+ metadata.gz: 603f5de4548b6d03aa4f8ca15f1e920e3c172c6985fdfbb4f23da9355ad07dc77cffa15a25b1272c14edddb991a90449bd76ca6036930539bd58ea83d0dd3a79
+ data.tar.gz: 99d9d6129853a7cbb50a41f0778d7225fb02f435e8f06001a18c2acbccb63c1cbd4f1e7145292388a7763ded53a85de548d98c3d5d9f0bfc894f8431414c6d39
checksums.yaml.gz.sig CHANGED
Binary file
@@ -184,5 +184,3 @@ end
  ```
 
  It can be especially important to impose timeouts when processing user-provided data.
-
- ##
data/context/tasks.md CHANGED
@@ -189,14 +189,19 @@ end
  barrier = Async::Barrier.new
  semaphore = Async::Semaphore.new(2, parent: barrier)
 
- jobs.each do |job|
- semaphore.async(parent: barrier) do
- # ... process job ...
+ begin
+ jobs.each do |job|
+ semaphore.async do
+ # ... process job ...
+ end
  end
- end
 
- # Wait until all jobs are done:
- barrier.wait
+ # Wait until all jobs are done:
+ barrier.wait
+ ensure
+ # Stop any remaining jobs:
+ barrier.stop
+ end
 
  ~~~
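The revised pattern above bounds concurrency with `Async::Semaphore` and cleans up via `ensure`/`Async::Barrier#stop`. A rough stdlib analogue of the same shape (a sketch only; `jobs` and the job body are placeholders) can be built from threads and a `SizedQueue` used as a pool of permits:

```ruby
# At most 2 jobs run at once; `ensure` cleans up any remaining workers.
jobs = (1..6).to_a
results = Queue.new
semaphore = SizedQueue.new(2) # Holds up to 2 permits.

threads = []
begin
  jobs.each do |job|
    semaphore.push(true) # Block until a permit is available.
    threads << Thread.new(job) do |j|
      results.push(j * 10) # ... process job ...
    ensure
      semaphore.pop # Release the permit.
    end
  end

  # Wait until all jobs are done (the "barrier"):
  threads.each(&:join)
ensure
  # Stop any remaining jobs:
  threads.each(&:kill)
end
```

Here `semaphore.push` blocks once two jobs are in flight, mirroring `Async::Semaphore.new(2)`, and the `ensure` clause plays the role of `barrier.stop`.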
 
  ## Stopping a Task
@@ -14,14 +14,14 @@ When analyzing existing projects, you should check files one by one, looking for
 
  - **Data corruption is the primary concern** - prevention is absolutely critical.
  - **Isolation should be the default** - operations should not share mutable state.
- - **Shared mutable state should be avoided**. Prefer pure functions, immutable objects, and dependency injection.
+ - **Shared mutable state should be avoided**. Prefer pure functions, immutable objects, and dependency injection.
  - **Assume that code will be executed concurrently** by multiple fibers, threads and processes.
  - **Assume that code may context switch at any time**, but especially during I/O operations.
- - I/O operations include network calls, file I/O, database queries, etc.
- - Other context switch points include `Fiber.yield`, `sleep`, waiting on child processes, DNS queries, and interrupts (signal handling).
+ - I/O operations include network calls, file I/O, database queries, etc.
+ - Other context switch points include `Fiber.yield`, `sleep`, waiting on child processes, DNS queries, and interrupts (signal handling).
  - **Fibers and threads are NOT the same thing**, however they do share similar safety requirements.
  - **C extensions e.g. C/Rust etc. can block the fiber scheduler entirely**.
- - Native code, when implemented correctly, is usually okay, but bugs can exist anywhere, even in mature code.
+ - Native code, when implemented correctly, is usually okay, but bugs can exist anywhere, even in mature code.
 
  ## Quick Reference
 
@@ -71,23 +71,23 @@ Therefore, the best practice is to avoid shared mutable state whenever possible.
  ### Shared mutable state
 
  Shared mutable state, including class instance variables accessed by multiple threads or fibers, is problematic and should be avoided. This includes class instance variables, module variables, and any mutable objects that are shared across threads or fibers.
-
+
  ```ruby
  class CurrencyConverter
- def initialize
- @exchange_rates = {} # Issue: Shared mutable state
- end
-
- def update_rate(currency, rate)
- # Issue: Multiple threads can modify @exchange_rates concurrently
- @exchange_rates[currency] = rate
- end
-
- def convert(amount, from_currency, to_currency)
- # Issue: If @exchange_rates is modified while this method runs, it can lead to incorrect conversions
- rate = @exchange_rates[from_currency] / @exchange_rates[to_currency]
- amount * rate
- end
+ def initialize
+ @exchange_rates = {} # Issue: Shared mutable state
+ end
+
+ def update_rate(currency, rate)
+ # Issue: Multiple threads can modify @exchange_rates concurrently
+ @exchange_rates[currency] = rate
+ end
+
+ def convert(amount, from_currency, to_currency)
+ # Issue: If @exchange_rates is modified while this method runs, it can lead to incorrect conversions
+ rate = @exchange_rates[from_currency] / @exchange_rates[to_currency]
+ amount * rate
+ end
  end
  ```
 
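One possible repair for the converter above (a sketch, not the only option) is to guard every access to `@exchange_rates` with a single `Mutex`, so updates and reads can no longer interleave:

```ruby
class SafeCurrencyConverter
  def initialize
    @exchange_rates = {}
    @mutex = Mutex.new
  end

  def update_rate(currency, rate)
    # All mutation happens while holding the lock:
    @mutex.synchronize { @exchange_rates[currency] = rate }
  end

  def convert(amount, from_currency, to_currency)
    # Both rates are read under the same lock, so they are consistent:
    @mutex.synchronize do
      rate = @exchange_rates[from_currency] / @exchange_rates[to_currency]
      amount * rate
    end
  end
end
```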
@@ -105,15 +105,15 @@ Class variables (`@@variable`) and class attributes (`class_attribute`) represen
 
  ```ruby
  class GlobalConfig
- @@settings = {} # Issue: Class variables are shared across inheritance
+ @@settings = {} # Issue: Class variables are shared across inheritance
 
- def set(key, value)
- @@settings[key] = value
- end
+ def set(key, value)
+ @@settings[key] = value
+ end
 
- def get(key)
- @@settings[key]
- end
+ def get(key)
+ @@settings[key]
+ end
  end
 
  class UserConfig < GlobalConfig
@@ -137,9 +137,9 @@ Lazy initialization is a common pattern in Ruby, but the `||=` operator is not a
 
  ```ruby
  class Loader
- def self.data
- @data ||= JSON.load_file('data.json')
- end
+ def self.data
+ @data ||= JSON.load_file('data.json')
+ end
  end
  ```
 
@@ -151,21 +151,21 @@ This could cause situations where `self.data != self.data` for example, or modif
 
  ```ruby
  class Loader
- @mutex = Mutex.new
+ @mutex = Mutex.new
 
- def self.data
- # Double-checked locking pattern:
- return @data if @data
+ def self.data
+ # Double-checked locking pattern:
+ return @data if @data
 
- @mutex.synchronize do
- return @data if @data
+ @mutex.synchronize do
+ return @data if @data
 
- # Now we are sure that @data is nil, we can safely fetch it:
- @data = JSON.load_file('data.json')
- end
+ # Now we are sure that @data is nil, we can safely fetch it:
+ @data = JSON.load_file('data.json')
+ end
 
- return @data
- end
+ return @data
+ end
  end
  ```
 
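The same double-checked locking pattern can be exercised end to end. This sketch replaces `JSON.load_file` with a counter so it is self-contained, and shows that the initialization runs exactly once even under contention:

```ruby
class OnceLoader
  @mutex = Mutex.new
  @load_count = 0

  def self.load_count
    @load_count
  end

  def self.data
    # Fast path: no lock once initialized.
    return @data if @data

    @mutex.synchronize do
      # Re-check inside the lock; another thread may have won the race.
      return @data if @data

      @load_count += 1 # Stands in for the expensive load.
      @data = {"loaded" => true}.freeze
    end
  end
end
```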
@@ -173,19 +173,19 @@ In addition, it should be noted that lazy initialization of a `Mutex` (and other
 
  ```ruby
  class Loader
- def self.data
- @mutex ||= Mutex.new # Issue: Not thread-safe
+ def self.data
+ @mutex ||= Mutex.new # Issue: Not thread-safe
 
- @mutex.synchronize do
- # Double-checked locking pattern:
- return @data if @data
+ @mutex.synchronize do
+ # Double-checked locking pattern:
+ return @data if @data
 
- # Now we are sure that @data is nil, we can safely fetch it:
- @data = JSON.load_file('data.json')
- end
+ # Now we are sure that @data is nil, we can safely fetch it:
+ @data = JSON.load_file('data.json')
+ end
 
- return @data
- end
+ return @data
+ end
  end
  ```
 
@@ -195,15 +195,15 @@ In the case that each instance is only accessed by a single thread or fiber, mem
 
  ```ruby
  class Loader
- def things
- # Safe: each instance has its own @things
- @things ||= compute_things
- end
+ def things
+ # Safe: each instance has its own @things
+ @things ||= compute_things
+ end
  end
 
  def do_something
- loader = Loader.new
- loader.things # Safe: only accessed by this thread/fiber
+ loader = Loader.new
+ loader.things # Safe: only accessed by this thread/fiber
  end
  ```
 
@@ -213,11 +213,11 @@ Like lazy initialization, memoization using `Hash` caches can lead to race condi
 
  ```ruby
  class ExpensiveComputation
- @cache = {}
+ @cache = {}
 
- def self.compute(key)
- @cache[key] ||= expensive_operation(key) # Issue: Not thread-safe
- end
+ def self.compute(key)
+ @cache[key] ||= expensive_operation(key) # Issue: Not thread-safe
+ end
  end
  ```
 
@@ -229,14 +229,14 @@ Note that this mutex creates contention on all calls to `compute`, which can be 
 
  ```ruby
  class ExpensiveComputation
- @cache = {}
- @mutex = Mutex.new
+ @cache = {}
+ @mutex = Mutex.new
 
- def self.compute(key)
- @mutex.synchronize do
- @cache[key] ||= expensive_operation(key)
- end
- end
+ def self.compute(key)
+ @mutex.synchronize do
+ @cache[key] ||= expensive_operation(key)
+ end
+ end
  end
  ```
 
@@ -244,13 +244,13 @@ end
 
  ```ruby
  class ExpensiveComputation
- @cache = Concurrent::Map.new
+ @cache = Concurrent::Map.new
 
- def self.compute(key)
- @cache.compute_if_absent(key) do
- expensive_operation(key)
- end
- end
+ def self.compute(key)
+ @cache.compute_if_absent(key) do
+ expensive_operation(key)
+ end
+ end
  end
  ```
 
@@ -305,11 +305,11 @@ Sharing network connections, database connections, or other resources across thr
  client = Database.connect
 
  Thread.new do
- results = client.query("SELECT * FROM users")
+ results = client.query("SELECT * FROM users")
  end
 
  Thread.new do
- results = client.query("SELECT * FROM products")
+ results = client.query("SELECT * FROM products")
  end
  ```
 
@@ -322,19 +322,19 @@ Using a connection pool can help manage shared connections safely:
  ```ruby
  require 'connection_pool'
  pool = ConnectionPool.new(size: 5, timeout: 5) do
- Database.connect
+ Database.connect
  end
 
  Thread.new do
- pool.with do |client|
- results = client.query("SELECT * FROM users")
- end
+ pool.with do |client|
+ results = client.query("SELECT * FROM users")
+ end
  end
 
  Thread.new do
- pool.with do |client|
- results = client.query("SELECT * FROM products")
- end
+ pool.with do |client|
+ results = client.query("SELECT * FROM products")
+ end
  end
  ```
 
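The blocking checkout that `ConnectionPool#with` provides can be approximated with a `SizedQueue` from the standard library (a sketch; `MiniPool` and `FakeConnection` are illustrative names, with the latter standing in for a real client):

```ruby
class FakeConnection
  def query(sql)
    "ok: #{sql}"
  end
end

class MiniPool
  def initialize(size)
    @connections = SizedQueue.new(size)
    size.times { @connections.push(FakeConnection.new) }
  end

  def with
    connection = @connections.pop # Blocks when the pool is exhausted.
    yield connection
  ensure
    # Always return the connection, even if the block raised:
    @connections.push(connection) if connection
  end

  def available
    @connections.size
  end
end
```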
@@ -344,18 +344,18 @@ Enumerating shared mutable container (e.g. `Array` or `Hash`) can cause consiste
 
  ```ruby
  class SharedList
- def initialize
- @list = []
- end
+ def initialize
+ @list = []
+ end
 
- def add(item)
- @list << item
- end
+ def add(item)
+ @list << item
+ end
 
- def each(&block)
- # Issue: Modifications during enumeration can lead to inconsistent state
- @list.each(&block)
- end
+ def each(&block)
+ # Issue: Modifications during enumeration can lead to inconsistent state
+ @list.each(&block)
+ end
  end
  ```
 
@@ -369,22 +369,22 @@ To ensure that the enumeration is safe, you can use a `Mutex` to synchronize acc
 
  ```ruby
  class SharedList
- def initialize
- @list = []
- @mutex = Mutex.new
- end
-
- def add(item)
- @mutex.synchronize do
- @list << item
- end
- end
-
- def each(&block)
- @mutex.synchronize do
- @list.each(&block)
- end
- end
+ def initialize
+ @list = []
+ @mutex = Mutex.new
+ end
+
+ def add(item)
+ @mutex.synchronize do
+ @list << item
+ end
+ end
+
+ def each(&block)
+ @mutex.synchronize do
+ @list.each(&block)
+ end
+ end
  end
  ```
 
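Another option is to copy the list under the lock and enumerate the snapshot outside it, so user code in the block never runs while the lock is held (a sketch; the block then sees a point-in-time copy, and may even call back into the list without deadlocking):

```ruby
class SnapshotList
  def initialize
    @list = []
    @mutex = Mutex.new
  end

  def add(item)
    @mutex.synchronize { @list << item }
  end

  def each(&block)
    # Duplicate under the lock, enumerate outside it:
    snapshot = @mutex.synchronize { @list.dup }
    snapshot.each(&block)
  end
end
```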
@@ -395,13 +395,13 @@ Alternatively, you can defer operations that modify the shared state until after
 
  ```ruby
  stale = []
  shared_list.each do |item|
- if item.stale?
- stale << item
- end
+ if item.stale?
+ stale << item
+ end
  end
 
  stale.each do |item|
- shared_list.remove(item)
+ shared_list.remove(item)
  end
  ```
 
@@ -410,7 +410,7 @@ Or better yet, use immutable data structures or pure functions that do not rely 
  ```ruby
  fresh = []
  shared_list.each do |item|
- fresh << item unless item.stale?
+ fresh << item unless item.stale?
  end
 
  shared_list.replace(fresh) # Replace the entire list with a new one
@@ -422,7 +422,7 @@ Race conditions occur when state changes in an unpredictable way due to concurre
 
  ```ruby
  while system.busy?
- system.wait
+ system.wait
  end
  ```
 
@@ -434,26 +434,26 @@ If you are able to modify the state transition logic of the shared resource, you
 
  ```ruby
  class System
- def initialize
- @mutex = Mutex.new
- @condition = ConditionVariable.new
- @usage = 0
- end
-
- def release
- @mutex.synchronize do
- @usage -= 1
- @condition.signal if @usage == 0
- end
- end
-
- def wait_until_free
- @mutex.synchronize do
- while @usage > 0
- @condition.wait(@mutex)
- end
- end
- end
+ def initialize
+ @mutex = Mutex.new
+ @condition = ConditionVariable.new
+ @usage = 0
+ end
+
+ def release
+ @mutex.synchronize do
+ @usage -= 1
+ @condition.signal if @usage == 0
+ end
+ end
+
+ def wait_until_free
+ @mutex.synchronize do
+ while @usage > 0
+ @condition.wait(@mutex)
+ end
+ end
+ end
  end
  ```
 
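The `System` class above can be exercised with a small script. The `acquire` method and `usage` reader are assumptions added here for completeness; the hunk only shows `release` and `wait_until_free`:

```ruby
class System
  def initialize
    @mutex = Mutex.new
    @condition = ConditionVariable.new
    @usage = 0
  end

  # Assumed helper: increment the usage count.
  def acquire
    @mutex.synchronize { @usage += 1 }
  end

  def release
    @mutex.synchronize do
      @usage -= 1
      @condition.signal if @usage == 0
    end
  end

  def wait_until_free
    @mutex.synchronize do
      while @usage > 0
        @condition.wait(@mutex)
      end
    end
  end

  # Assumed helper: read the usage count safely.
  def usage
    @mutex.synchronize { @usage }
  end
end
```

Because the waiter re-checks `@usage > 0` in a loop while holding the mutex, a signal delivered before the wait begins is never lost.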
@@ -463,10 +463,10 @@ External resources can also lead to "time of check to time of use" issues, where
 
  ```ruby
  if File.exist?('cache.json')
- @data = File.read('cache.json')
+ @data = File.read('cache.json')
  else
- @data = fetch_data_from_api
- File.write('cache.json', @data)
+ @data = fetch_data_from_api
+ File.write('cache.json', @data)
  end
  ```
 
@@ -480,12 +480,12 @@ Using content-addressable storage and atomic file operations can help avoid race
 
  ```ruby
  begin
- File.read('cache.json')
+ File.read('cache.json')
  rescue Errno::ENOENT
- File.open('cache.json', 'w') do |file|
- file.flock(File::LOCK_EX)
- file.write(fetch_data_from_api)
- end
+ File.open('cache.json', 'w') do |file|
+ file.flock(File::LOCK_EX)
+ file.write(fetch_data_from_api)
+ end
  end
  ```
 
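Another atomic-file-operation sketch: write to a temporary file and rename it into place. On POSIX filesystems `rename(2)` is atomic, so readers observe either the old file or the complete new one, never a half-written cache. The `write_atomically` helper is a hypothetical name, not part of any library:

```ruby
require "tmpdir"

def write_atomically(path, data)
  # Write somewhere private first, then swap into place:
  temporary_path = "#{path}.tmp.#{Process.pid}"
  File.write(temporary_path, data)
  File.rename(temporary_path, path) # Atomic on POSIX filesystems.
end

Dir.mktmpdir do |root|
  cache_path = File.join(root, "cache.json")
  write_atomically(cache_path, '{"status": "fresh"}')
end
```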
@@ -497,10 +497,10 @@ Using actual thread-local storage for "per-request" state can be problematic in 
 
  ```ruby
  class RequestContext
- def self.current
- Thread.current.thread_variable_get(:request_context) ||
- Thread.current.thread_variable_set(:request_context, Hash.new)
- end
+ def self.current
+ Thread.current.thread_variable_get(:request_context) ||
+ Thread.current.thread_variable_set(:request_context, Hash.new)
+ end
  end
  ```
 
@@ -510,55 +510,98 @@ In addition, some libraries may use `Thread.current` as a key in a hash or other
 
  ```ruby
  class Pool
- def initialize
- @connections = {}
- @mutex = Mutex.new
- end
-
- def current_connection
- @mutex.synchronize do
- @connections[Thread.current] ||= create_new_connection
- end
- end
+ def initialize
+ @connections = {}
+ @mutex = Mutex.new
+ end
+
+ def current_connection
+ @mutex.synchronize do
+ @connections[Thread.current] ||= create_new_connection
+ end
+ end
  end
  ```
 
- #### Use `Thread.current` for per-request state
+ #### Use `Fiber.attr` for per-request state
 
- Despite the look, this is actually fiber-local and thus scoped to the smallest unit of concurrency in Ruby, which is the fiber. This means that it is safe to use `Thread.current` for per-request state, as long as you are aware that it is actually fiber-local storage.
+ Use `Fiber.attr :my_attribute` for storing per-request state. `Fiber.current.my_attribute` provides a clean, readable way to define fiber-scoped attributes with excellent performance characteristics.
 
  ```ruby
- Thread.current[:connection] ||= create_new_connection
- ```
+ # Use prefixed names to avoid namespace conflicts:
+ Fiber.attr :falcon_request_id
 
- As a counter point, it not a good idea to use fiber-local storage for a cache, since it will never be shared.
+ # Set per-request state:
+ Fiber.current.falcon_request_id = SecureRandom.uuid
 
- #### Use `Fiber[key]` for per-request state
+ # Access anywhere in the same fiber:
+ def process_data
+ puts "Request ID: #{Fiber.current.falcon_request_id}"
+ end
+ ```
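The per-fiber behaviour described above can be demonstrated with `attr_accessor` on `Fiber` (an assumption of this sketch: Ruby 3.0+, where the `Module#attr*` methods are public). The attribute is backed by an instance variable on each `Fiber` object, so every fiber sees its own value:

```ruby
# Assumed: Ruby 3.0+, where Module#attr_accessor may be called publicly.
Fiber.attr_accessor :demo_request_id

Fiber.current.demo_request_id = "request-1"

# A different fiber has its own, initially unset, attribute:
$other_value = Fiber.new { Fiber.current.demo_request_id }.resume
```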
 
- Using `Fiber[key]` can be a better alternative for per-request state as it is scoped to the fiber and is also inherited to child contexts.
+ #### Use `Fiber[key]` for inheritable per-request state
+
+ Use `Fiber[key]` when you need per-request state that **inherits across concurrency boundaries** (to child fibers and threads). This is particularly useful for request tracing, user context, or other state that should flow through your entire request processing pipeline.
 
  ```ruby
- Fiber[:user_id] = request.session[:user_id] # Set per-request state
+ Fiber[:user_id] = request.session[:user_id]
+ Fiber[:trace_id] = request.headers['X-Trace-ID']
 
  jobs.each do |job|
- Thread.new do
- puts "Processing job for user #{Fiber[:user_id]}"
- # Do something with the job...
- end
+ Thread.new do
+ # Child threads inherit the fiber storage:
+ puts "Processing job for user #{Fiber[:user_id]} (trace: #{Fiber[:trace_id]})"
+ process_job(job)
+ end
+ end
+
+ Async do |task|
+ # Child fibers also inherit the storage:
+ task.async do
+ puts "Background task for user: #{Fiber[:user_id]}"
+ end
  end
  ```
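The inheritance described above can be checked with a short script (an assumption of this sketch: Ruby 3.2+, which introduced the `Fiber.[]`/`Fiber.[]=` fiber storage):

```ruby
Fiber[:demo_user_id] = "user_123"

# Child threads and child fibers receive the storage at creation time:
$thread_value = Thread.new { Fiber[:demo_user_id] }.value
$fiber_value = Fiber.new { Fiber[:demo_user_id] }.resume
```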
 
- #### Use `Fiber.attr` for per-request state
+ Note that since this state is inherited across concurrency boundaries, any mutable objects stored here must be thread-safe or you risk data corruption. Therefore you should prefer immutable values including strings, numbers, frozen objects, or other immutable data types, and avoid mutable objects like arrays, hashes, unless they're thread-safe.
+
+ ```ruby
+ # Safe - immutable values:
+ Fiber[:user_id] = "user_123"
+ Fiber[:account_type] = :premium
+
+ # Risky - mutable objects:
+ Fiber[:user_preferences] = {} # Could be modified concurrently
+ Fiber[:request_data] = SomeObject.new # Unless thread-safe
+ ```
 
- As a direct alternative to `Thread.current`, with a slight performance advantage and readability improvement, you can use `Fiber.attr` to store per-request state. This is scoped to the fiber and is also inherited to child contexts.
+ #### Use `Thread.current` for per-request state
+
+ While `Thread.current[key]` is technically safe for per-request state in Ruby, **it has significant readability and comprehension issues** that make it a poor choice:
+
+ ```ruby
+ Thread.current[:connection] ||= create_new_connection
+ ```
+
+ Despite the look, this is actually stored per-fiber and thus acceptable for storing per-request state. While it works, the confusion it creates makes `Fiber.attr` or `Fiber[key]` much better options.
+
+ #### Use `Thread.attr` for actual thread-local storage
+
+ When you specifically need **actual per-thread storage** (not per-fiber), use `Thread.attr`. This is scoped to the OS thread and is not inherited by child fibers.
 
  ```ruby
- Fiber.attr :my_application_user_id
+ Thread.attr :connection_pool
 
- Fiber.current.my_application_user_id = request.session[:user_id] # Set per-request state
+ # Each thread gets its own connection pool
+ Thread.current.connection_pool = ConnectionPool.new(size: 5)
+
+ def get_connection
+ Thread.current.connection_pool.checkout
+ end
  ```
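The per-thread scoping can likewise be demonstrated with `attr_accessor` on `Thread` (a sketch; the attribute lives on each `Thread` object, so a child thread starts with `nil` and inherits nothing):

```ruby
Thread.attr_accessor :demo_cache

Thread.current.demo_cache = {"a" => 1}

# A child thread gets its own, unset, attribute:
$child_cache = Thread.new { Thread.current.demo_cache }.value
```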
 
- This state is not inherited to child fibers (or threads), so it's use is limited to the current fiber context. It should also be noted that the same technique can be used for threads, e.g. `Thread.attr`, but this has the same issues as `Thread.current.thread_variable_get/set`, since it is scoped to the thread and not the fiber.
+ This approach is rarely needed in fiber-based applications and should be used sparingly, however it can be useful for state or caches that should persist across multiple fibers but is not generally thread safe. Storing an instance per thread can help mitigate thread safety issues.
 
  ### C extensions that block the scheduler
 
@@ -570,29 +613,29 @@ Synchronization primitives like `Mutex`, `ConditionVariable`, and `Queue` are es
 
  ```ruby
  class Counter
- def initialize(count = 0)
- @count = count
- @mutex = Mutex.new
- end
-
- def increment
- @mutex.synchronize do
- @count += 1
- end
- end
-
- def times
- @mutex.synchronize do
- @count.times do |i|
- yield i
- end
- end
- end
+ def initialize(count = 0)
+ @count = count
+ @mutex = Mutex.new
+ end
+
+ def increment
+ @mutex.synchronize do
+ @count += 1
+ end
+ end
+
+ def times
+ @mutex.synchronize do
+ @count.times do |i|
+ yield i
+ end
+ end
+ end
  end
 
  counter = Counter.new
  counter.times do |i|
- counter.increment # deadlock
+ counter.increment # deadlock
  end
  ```
 
@@ -606,33 +649,33 @@ As an alternative to the above, reducing the scope of the lock can help avoid de
 
  ```ruby
  class Counter
- # ...
+ # ...
 
- def times
- count = @mutex.synchronize{@count}
+ def times
+ count = @mutex.synchronize{@count}
 
- # Avoid holding the lock while yielding to user code:
- count.times do |i|
- yield i
- end
- end
+ # Avoid holding the lock while yielding to user code:
+ count.times do |i|
+ yield i
+ end
+ end
  end
  ```
 
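The reduced-scope version runs end to end. A sketch of the complete class (with a `count` reader added here for inspection): calling `increment` from inside `times` no longer deadlocks, because the lock is released before yielding to user code.

```ruby
class ScopedCounter
  def initialize(count = 0)
    @count = count
    @mutex = Mutex.new
  end

  def increment
    @mutex.synchronize { @count += 1 }
  end

  # Assumed helper: read the count safely.
  def count
    @mutex.synchronize { @count }
  end

  def times
    count = @mutex.synchronize { @count }

    # Avoid holding the lock while yielding to user code:
    count.times { |i| yield i }
  end
end
```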
665
  ## Best Practices for Concurrency in Ruby
623
666
 
624
667
  1. **Favor pure, isolated, and immutable objects and functions.**
625
- The safest and easiest way to write concurrent code is to avoid shared mutable state entirely. Isolated objects and pure functions eliminate the risk of race conditions and make reasoning about code much simpler.
668
+ The safest and easiest way to write concurrent code is to avoid shared mutable state entirely. Isolated objects and pure functions eliminate the risk of race conditions and make reasoning about code much simpler.
626
669
 
627
670
  2. **Use per-request (or per-fiber) state correctly.**
628
- When you need to associate state with a request, job, or fiber, prefer explicit context passing, or use fiber-local variables (e.g. `Fiber[:key]`). Avoid using thread-local storage in fiber-based code, as fibers may share threads and this can lead to subtle bugs.
671
+ When you need to associate state with a request, job, or fiber, prefer explicit context passing, or use fiber-local variables (e.g. `Fiber[:key]`). Avoid using thread-local storage in fiber-based code, as fibers may share threads and this can lead to subtle bugs.
629
672
 
630
673
  3. **Use synchronization primitives only when sharing is truly necessary.**
631
- If you must share mutable state (for performance, memory efficiency, or correctness), protect it with the appropriate synchronization primitives:
674
+ If you must share mutable state (for performance, memory efficiency, or correctness), protect it with the appropriate synchronization primitives:
632
675
 
633
- * Prefer high-level, lock-free data structures (e.g. `Concurrent::Map`) when possible.
634
- * If locks are necessary, use fine-grained locking to minimize contention and reduce deadlock risk.
635
- * Avoid coarse-grained locks except as a last resort, as they can severely limit concurrency and hurt performance.
676
+ * Prefer high-level, lock-free data structures (e.g. `Concurrent::Map`) when possible.
677
+ * If locks are necessary, use fine-grained locking to minimize contention and reduce deadlock risk.
678
+ * Avoid coarse-grained locks except as a last resort, as they can severely limit concurrency and hurt performance.
636
679
 
637
680
  ### Hierarchy of Concurrency Safety
638
681
 
data/lib/async/version.rb CHANGED
@@ -4,5 +4,5 @@
  # Copyright, 2017-2025, by Samuel Williams.
 
  module Async
- VERSION = "2.32.0"
+ VERSION = "2.32.1"
  end
data/readme.md CHANGED
@@ -35,6 +35,10 @@ Please see the [project documentation](https://socketry.github.io/async/) for mo
 
  Please see the [project releases](https://socketry.github.io/async/releases/index) for all releases.
 
+ ### v2.32.1
+
+ - Fix typo in documentation.
+
  ### v2.32.0
 
  - Introduce `Queue#waiting_count` and `PriorityQueue#waiting_count`. Generally for statistics/testing purposes only.
@@ -80,10 +84,6 @@ This release introduces thread-safety as a core concept of Async. Many core clas
 
  - Fix `context/index.yaml` schema.
 
- ### v2.27.1
-
- - Updated documentation and agent context.
-
  ## See Also
 
  - [async-http](https://github.com/socketry/async-http) — Asynchronous HTTP client/server.
data/releases.md CHANGED
@@ -1,5 +1,9 @@
  # Releases
 
+ ## v2.32.1
+
+ - Fix typo in documentation.
+
  ## v2.32.0
 
  - Introduce `Queue#waiting_count` and `PriorityQueue#waiting_count`. Generally for statistics/testing purposes only.
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: async
  version: !ruby/object:Gem::Version
- version: 2.32.0
+ version: 2.32.1
  platform: ruby
  authors:
  - Samuel Williams
metadata.gz.sig CHANGED
Binary file