async 2.34.0 → 2.35.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 05c05a5ba6d2dc436a12de9ff3cb562b3376892fd8c6615be8c4eb654d570fef
-  data.tar.gz: 2cb04cbecc3237b4476e5f42e4e130e8116ea10890c1b766140b238451c13f81
+  metadata.gz: ad14103d1146e967a38187e2d6d3d64b745b50f2f833b164c151ef53d270673e
+  data.tar.gz: b34db7f89b4054ed33dfbd22d979060b90d939180cf3014c4aa7a5b8c3f070eb
 SHA512:
-  metadata.gz: 16d4488b9d5fa9702bba6fead00c9c772402f792817daba79596cdf573e9f7b111ab77b85166d9b7b93c92babd9a717bf8fe11aa56ae67beded351fda99eaa7b
-  data.tar.gz: 289f3a854fcda83b35e92a1ebc14425722fdc856f8dfaa60e29f630ba79b3d72269816f7f01f940aa8926685701cd5ef1ba285a0576f91354107083cfaf96905
+  metadata.gz: 9b73a930aa37a5cb7695ca5deb66e181d80bd1b1157fbcbf711f6ff2f288c8b1294f05e32e02b68289608b1948315509c66bc541250cd14a19ed40461e283e99
+  data.tar.gz: 82cef6a6f824e38cf4cdd24a0a6447394e5c8cb37f1c6ec606433e34aa9bd0d6495f92d3fcc290cf298ed497b3967ab9beac5f60d18849f5bcc04df7bee34a39
checksums.yaml.gz.sig CHANGED
Binary file
@@ -7,7 +7,7 @@ This guide gives an overview of best practices for using Async.
 `Async{}` has two uses: it creates an event loop if one doesn't exist, and it creates a task which runs asynchronously with respect to the parent scope. However, the top level `Async{}` block will be synchronous because it creates the event loop. In some programs, you do not care about executing asynchronously, but you still want your code to run in an event loop. `Sync{}` exists to do this efficiently.
 
 ```ruby
-require 'async'
+require "async"
 
 class Packages
 	def initialize(urls)
@@ -55,12 +55,24 @@ This expresses the intent to the caller that this method should only be invoked
 
 ## Use barriers to manage unbounded concurrency
 
-Barriers provide a way to manage an unbounded number of tasks.
+Barriers provide a way to manage an unbounded number of tasks. The top-level `Barrier` method creates a barrier with built-in load management using an `Async::Idler`.
 
 ```ruby
-Async do
-	barrier = Async::Barrier.new
-	
+Barrier do |barrier|
+	items.each do |item|
+		barrier.async do
+			process(item)
+		end
+	end
+end
+```
+
+The barrier will automatically wait for all tasks to complete and stop any outstanding tasks when the block exits. By default, it uses an `Async::Idler` to prevent system overload by scheduling tasks when the system load is below 80%.
+
+If you want to process tasks in order of completion, you can explicitly call `wait` with a block:
+
+```ruby
+Barrier do |barrier|
 	items.each do |item|
 		barrier.async do
 			process(item)
@@ -71,50 +83,61 @@ Async do
 	barrier.wait do |task|
 		result = task.wait
 		# Do something with result.
-		
+
 		# If you don't want to wait for any more tasks you can break:
 		break
 	end
-	
-	# Or just wait for all tasks to finish:
-	barrier.wait # May raise an exception if a task failed.
-ensure
-	# Stop all outstanding tasks in the barrier:
-	barrier&.stop
+end
+```
+
+To disable load management (not recommended for unbounded concurrency), you can pass `parent: nil`:
+
+```ruby
+Barrier(parent: nil) do |barrier|
+	# No load management - creates tasks as fast as possible
+	items.each do |item|
+		barrier.async do
+			process(item)
+		end
+	end
 end
 ```
 
 ## Use a semaphore to limit the number of concurrent tasks
 
-Semaphores allow you to limit the level of concurrency to a fixed number of tasks:
+Semaphores allow you to limit the level of concurrency to a fixed number of tasks. When using semaphores with barriers, the barrier should be the root of your task hierarchy, and the semaphore should be a child of the barrier:
 
 ```ruby
-Async do |task|
-	barrier = Async::Barrier.new
+Barrier(parent: nil) do |barrier|
 	semaphore = Async::Semaphore.new(4, parent: barrier)
 
-	# Since the semaphore.async may block, we need to run the work scheduling in a child task:
-	task.async do
-		items.each do |item|
-			semaphore.async do
-				process(item)
-			end
+	items.each do |item|
+		semaphore.async do
+			process(item)
 		end
 	end
-	
-	# Wait for all the work to complete:
-	barrier.wait
-ensure
-	# Stop all outstanding tasks in the barrier:
-	barrier&.stop
 end
 ```
 
-In general, the barrier should be the root of your task hierarchy, and the semaphore should be a child of the barrier. This allows you to manage the lifetime of all tasks created by the semaphore, and ensures that all tasks are stopped when the barrier is stopped.
+In this example, we use `parent: nil` for the barrier to disable load management, since the semaphore already provides concurrency control. The semaphore limits execution to 4 concurrent tasks, and the barrier ensures all tasks are stopped when the block exits.
 
 ### Idler
 
-Idlers are like semaphores but with a limit defined by current processor utilization. In other words, an idler will do work up to a specific ratio of idle/busy time in the scheduler, and try to maintain that.
+Idlers are like semaphores but with a limit defined by current processor utilization. In other words, an idler will schedule work up to a specific ratio of idle/busy time in the scheduler.
+
+The top-level `Barrier` method uses an idler by default, making it safe for unbounded concurrency:
+
+```ruby
+Barrier do |barrier| # Uses Async::Idler.new(0.8) by default
+	work.each do |work|
+		barrier.async do
+			work.call
+		end
+	end
+end
+```
+
+You can also use an idler directly without a barrier:
 
 ```ruby
 Async do
@@ -145,7 +168,7 @@ Async do |task|
 		while chunk = socket.gets
 			queue.push(chunk)
 		end
-	end
+	
 	# After this point, we won't be able to add items to the queue, and popping items will eventually result in nil once all items are dequeued:
 	queue.close
 end
data/context/debugging.md CHANGED
@@ -9,7 +9,7 @@ This guide explains how to debug issues with programs that use Async.
 The simplest way to debug an Async program is to use `puts` to print messages to the console. This is useful for understanding the flow of your program and the values of variables. However, it can be difficult to use `puts` to debug programs that use asynchronous code, as the output may be interleaved. To prevent this, wrap it in `Fiber.blocking{}`:
 
 ```ruby
-require 'async'
+require "async"
 
 Async do
 	3.times do |i|
@@ -42,8 +42,8 @@ If you don't use `Fiber.blocking{}`, the event loop will continue to run and you
 The `async-debug` gem provides a visual debugger for Async programs. It is a powerful tool that allows you to inspect the state of your program and see the hierarchy of your program:
 
 ```ruby
-require 'async'
-require 'async/debug'
+require "async"
+require "async/debug"
 
 Sync do
 	debugger = Async::Debug.serve
@@ -65,7 +65,7 @@ You should consider the boundary around your program and the request handling. F
 Similar to a promise, {ruby Async::Task} produces results. In order to wait for these results, you must invoke {ruby Async::Task#wait}:
 
 ``` ruby
-require 'async'
+require "async"
 
 task = Async do
 	rand
@@ -99,7 +99,7 @@ end
 Unless you need fan-out, map-reduce style concurrency, you can actually use a slightly more efficient {ruby Kernel::Sync} execution model. This method will run your block in the current event loop if one exists, or create an event loop if not. You can use it for code which uses asynchronous primitives, but itself does not need to be asynchronous with respect to other tasks.
 
 ```ruby
-require 'async/http/internet'
+require "async/http/internet"
 
 def fetch(url)
 	Sync do
@@ -109,11 +109,11 @@ def fetch(url)
 end
 
 # At the level of your program, this method will create an event loop:
-fetch(...)
+fetch("https://example.com")
 
 Sync do
 	# The event loop already exists, and will be reused:
-	fetch(...)
+	fetch("https://example.com")
 end
 ```
 
@@ -154,13 +154,13 @@ The former allows you to inject the parent, which could be a barrier or semaphor
 The Fiber Scheduler interface is compatible with most pure Ruby code and well-behaved C code. For example, you can use {ruby Net::HTTP} for performing concurrent HTTP requests:
 
 ```ruby
-urls = [...]
+urls = ["http://example.com", "http://example.org", "http://example.net"]
 
 Async do
	# Perform several concurrent requests:
	responses = urls.map do |url|
		Async do
-			Net::HTTP.get(url)
+			Net::HTTP.get(URI(url))
		end
	end.map(&:wait)
 end
data/context/scheduler.md CHANGED
@@ -78,7 +78,7 @@ You can use this approach to embed the reactor in another event loop. For some i
 In order to integrate with native Ruby blocking operations, the {ruby Async::Scheduler} uses a {ruby Fiber::Scheduler} interface.
 
 ```ruby
-require 'async'
+require "async"
 
 scheduler = Async::Scheduler.new
 Fiber.set_scheduler(scheduler)
data/context/tasks.md CHANGED
@@ -99,8 +99,8 @@ By constructing your program correctly, it's easy to implement concurrent map-re
 ```ruby
 Async do
 	# Map (create several concurrent tasks)
-	users_size = Async{User.size}
-	posts_size = Async{Post.size}
+	users_size = Async {User.size}
+	posts_size = Async {Post.size}
 
 	# Reduce (wait for and merge the results)
 	average = posts_size.wait / users_size.wait
@@ -220,7 +220,7 @@ Async do
 			puts "Hello World #{i}"
 		end
 	end
-	
+
 	# Stop all the above tasks:
 	tasks.each(&:stop)
 end
@@ -414,8 +414,8 @@ and you could be handling 1000s of requests per second.
 The task doing the updating in the background is an implementation detail, so it is marked as `transient`.
 
 ```ruby
-require 'async'
-require 'thread/local' # thread-local gem.
+require "async"
+require "thread/local" # thread-local gem.
 
 class TimeStringCache
 	extend Thread::Local # defines `instance` class method that lazy-creates a separate instance per thread
@@ -450,4 +450,4 @@ Async do
 end
 ```
 
-Upon existing the top level async block, the {ruby @refresh} task will be set to `nil`. Bear in mind, you should not share these resources across threads; doing so would need some form of mutual exclusion.
+Upon exiting the top level async block, the {ruby @refresh} task will be set to `nil`. Bear in mind, you should not share these resources across threads; doing so would need some form of mutual exclusion.
@@ -106,11 +106,11 @@ Class variables (`@@variable`) and class attributes (`class_attribute`) represen
 ```ruby
 class GlobalConfig
 	@@settings = {} # Issue: Class variables are shared across inheritance
-	
+
 	def set(key, value)
 		@@settings[key] = value
 	end
-	
+
 	def get(key)
 		@@settings[key]
 	end
@@ -138,7 +138,7 @@ Lazy initialization is a common pattern in Ruby, but the `||=` operator is not a
 ```ruby
 class Loader
 	def self.data
-		@data ||= JSON.load_file('data.json')
+		@data ||= JSON.load_file("data.json")
 	end
 end
 ```
@@ -152,18 +152,18 @@ This could cause situations where `self.data != self.data` for example, or modif
 ```ruby
 class Loader
 	@mutex = Mutex.new
-	
+
 	def self.data
 		# Double-checked locking pattern:
 		return @data if @data
-		
+
 		@mutex.synchronize do
 			return @data if @data
-			
+
 			# Now we are sure that @data is nil, we can safely fetch it:
-			@data = JSON.load_file('data.json')
+			@data = JSON.load_file("data.json")
 		end
-		
+
 		return @data
 	end
 end
@@ -175,15 +175,15 @@ In addition, it should be noted that lazy initialization of a `Mutex` (and other
 class Loader
 	def self.data
 		@mutex ||= Mutex.new # Issue: Not thread-safe
-		
+
 		@mutex.synchronize do
 			# Double-checked locking pattern:
 			return @data if @data
-			
+
 			# Now we are sure that @data is nil, we can safely fetch it:
-			@data = JSON.load_file('data.json')
+			@data = JSON.load_file("data.json")
 		end
-		
+
 		return @data
 	end
 end
@@ -214,7 +214,7 @@ Like lazy initialization, memoization using `Hash` caches can lead to race condi
 ```ruby
 class ExpensiveComputation
 	@cache = {}
-	
+
 	def self.compute(key)
 		@cache[key] ||= expensive_operation(key) # Issue: Not thread-safe
 	end
@@ -231,7 +231,7 @@ Note that this mutex creates contention on all calls to `compute`, which can be
 class ExpensiveComputation
 	@cache = {}
 	@mutex = Mutex.new
-	
+
 	def self.compute(key)
 		@mutex.synchronize do
 			@cache[key] ||= expensive_operation(key)
@@ -245,7 +245,7 @@ end
 ```ruby
 class ExpensiveComputation
 	@cache = Concurrent::Map.new
-	
+
 	def self.compute(key)
 		@cache.compute_if_absent(key) do
 			expensive_operation(key)
@@ -320,7 +320,7 @@ end
 Using a connection pool can help manage shared connections safely:
 
 ```ruby
-require 'connection_pool'
+require "connection_pool"
 pool = ConnectionPool.new(size: 5, timeout: 5) do
 	Database.connect
 end
@@ -347,11 +347,11 @@ class SharedList
 	def initialize
 		@list = []
 	end
-	
+
 	def add(item)
 		@list << item
 	end
-	
+
 	def each(&block)
 		# Issue: Modifications during enumeration can lead to inconsistent state
 		@list.each(&block)
@@ -373,13 +373,13 @@ class SharedList
 		@list = []
 		@mutex = Mutex.new
 	end
-	
+
 	def add(item)
 		@mutex.synchronize do
 			@list << item
 		end
 	end
-	
+
 	def each(&block)
 		@mutex.synchronize do
 			@list.each(&block)
@@ -439,14 +439,14 @@ class System
 		@condition = ConditionVariable.new
 		@usage = 0
 	end
-	
+
 	def release
 		@mutex.synchronize do
 			@usage -= 1
 			@condition.signal if @usage == 0
 		end
 	end
-	
+
 	def wait_until_free
 		@mutex.synchronize do
 			while @usage > 0
@@ -462,11 +462,11 @@ end
 External resources can also lead to "time of check to time of use" issues, where the state of the resource changes between checking its status and using it.
 
 ```ruby
-if File.exist?('cache.json')
-	@data = File.read('cache.json')
+if File.exist?("cache.json")
+	@data = File.read("cache.json")
 else
 	@data = fetch_data_from_api
-	File.write('cache.json', @data)
+	File.write("cache.json", @data)
 end
 ```
 
@@ -480,9 +480,9 @@ Using content-addressable storage and atomic file operations can help avoid race
 
 ```ruby
 begin
-	File.read('cache.json')
+	File.read("cache.json")
 rescue Errno::ENOENT
-	File.open('cache.json', 'w') do |file|
+	File.open("cache.json", "w") do |file|
 		file.flock(File::LOCK_EX)
 		file.write(fetch_data_from_api)
 	end
@@ -546,7 +546,7 @@ Use `Fiber[key]` when you need per-request state that **inherits across concurre
 
 ```ruby
 Fiber[:user_id] = request.session[:user_id]
-Fiber[:trace_id] = request.headers['X-Trace-ID']
+Fiber[:trace_id] = request.headers["X-Trace-ID"]
 
 jobs.each do |job|
 	Thread.new do
@@ -617,13 +617,13 @@ class Counter
 		@count = count
 		@mutex = Mutex.new
 	end
-	
+
 	def increment
 		@mutex.synchronize do
 			@count += 1
 		end
 	end
-	
+
 	def times
 		@mutex.synchronize do
 			@count.times do |i|
@@ -650,10 +650,10 @@ As an alternative to the above, reducing the scope of the lock can help avoid de
 ```ruby
 class Counter
 	# ...
-	
+
 	def times
 		count = @mutex.synchronize{@count}
-		
+
 		# Avoid holding the lock while yielding to user code:
 		count.times do |i|
 			yield i
@@ -1,6 +1,7 @@
 # frozen_string_literal: true
 
 # Released under the MIT License.
+# Copyright, 2025, by Shopify Inc.
 # Copyright, 2025, by Samuel Williams.
 
 require_relative "clock"
@@ -0,0 +1,32 @@
+# frozen_string_literal: true
+
+# Released under the MIT License.
+# Copyright, 2025, by Shopify Inc.
+# Copyright, 2025, by Samuel Williams.
+
+module Async
+	# Private module that hooks into Process._fork to handle fork events.
+	#
+	# If `Scheduler#process_fork` hook is adopted in Ruby 4, this code can be removed after Ruby < 4 is no longer supported.
+	module ForkHandler
+		def _fork(&block)
+			result = super
+
+			if result.zero?
+				# Child process:
+				if Fiber.scheduler.respond_to?(:process_fork)
+					Fiber.scheduler.process_fork
+				end
+			end
+
+			return result
+		end
+	end
+
+	private_constant :ForkHandler
+
+	# Hook into Process._fork to handle fork events automatically:
+	unless (Fiber.const_get(:SCHEDULER_PROCESS_FORK) rescue false)
+		::Process.singleton_class.prepend(ForkHandler)
+	end
+end
data/lib/async/idler.rb CHANGED
@@ -1,7 +1,7 @@
 # frozen_string_literal: true
 
 # Released under the MIT License.
-# Copyright, 2024, by Samuel Williams.
+# Copyright, 2024-2025, by Samuel Williams.
 
 module Async
	# A load balancing mechanism that can be used process work when the system is idle.
data/lib/async/node.rb CHANGED
@@ -214,7 +214,8 @@ module Async
 	end
 
 	protected def remove_child(child)
-		@children.remove(child)
+		# In the case of a fork, the children list may be nil:
+		@children&.remove(child)
 		child.set_parent(nil)
 	end
 
data/lib/async/promise.rb CHANGED
@@ -30,39 +30,39 @@ module Async
 
	# @returns [Boolean] Whether the promise has been resolved or rejected.
	def resolved?
-		@mutex.synchronize {!!@resolved}
+		@mutex.synchronize{!!@resolved}
	end
 
	# @returns [Symbol | Nil] The internal resolved state (:completed, :failed, :cancelled, or nil if pending).
	# @private For internal use by Task.
	def resolved
-		@mutex.synchronize {@resolved}
+		@mutex.synchronize{@resolved}
	end
 
	# @returns [Boolean] Whether the promise has been cancelled.
	def cancelled?
-		@mutex.synchronize {@resolved == :cancelled}
+		@mutex.synchronize{@resolved == :cancelled}
	end
 
	# @returns [Boolean] Whether the promise failed with an exception.
	def failed?
-		@mutex.synchronize {@resolved == :failed}
+		@mutex.synchronize{@resolved == :failed}
	end
 
	# @returns [Boolean] Whether the promise has completed successfully.
	def completed?
-		@mutex.synchronize {@resolved == :completed}
+		@mutex.synchronize{@resolved == :completed}
	end
 
	# @returns [Boolean] Whether any fibers are currently waiting for this promise.
	def waiting?
-		@mutex.synchronize {@waiting > 0}
+		@mutex.synchronize{@waiting > 0}
	end
 
	# Artificially mark that someone is waiting (useful for suppressing warnings).
	# @private Internal use only.
	def suppress_warnings!
-		@mutex.synchronize {@waiting += 1}
+		@mutex.synchronize{@waiting += 1}
	end
 
	# Non-blocking access to the current value. Returns nil if not yet resolved.
@@ -71,7 +71,7 @@ module Async
	#
	# @returns [Object | Nil] The stored value, or nil if pending.
	def value
-		@mutex.synchronize {@resolved ? @value : nil}
+		@mutex.synchronize{@resolved ? @value : nil}
	end
 
	# Wait for the promise to be resolved and return the value.
data/lib/async/queue.rb CHANGED
@@ -72,7 +72,7 @@ module Async
 
	# Add multiple items to the queue.
	def enqueue(*items)
-		items.each {|item| @delegate.push(item)}
+		items.each{|item| @delegate.push(item)}
	rescue ClosedQueueError
		raise ClosedError, "Cannot enqueue items to a closed queue!"
	end
@@ -9,6 +9,7 @@
 require_relative "clock"
 require_relative "task"
 require_relative "timeout"
+require_relative "fork_handler"
 
 require "io/event"
 
@@ -146,24 +147,26 @@ module Async
	# Terminate all child tasks and close the scheduler.
	# @public Since *Async v1*.
	def close
-		self.run_loop do
-			until self.terminate
-				self.run_once!
+		unless @children.nil?
+			self.run_loop do
+				until self.terminate
+					self.run_once!
+				end
			end
		end
 
		Kernel.raise "Closing scheduler with blocked operations!" if @blocked > 0
	ensure
		# We want `@selector = nil` to be a visible side effect from this point forward, specifically in `#interrupt` and `#unblock`. If the selector is closed, then we don't want to push any fibers to it.
-		selector = @selector
-		@selector = nil
-		
-		selector&.close
-		
-		worker_pool = @worker_pool
-		@worker_pool = nil
+		if selector = @selector
+			@selector = nil
+			selector.close
+		end
 
-		worker_pool&.close
+		if worker_pool = @worker_pool
+			@worker_pool = nil
+			worker_pool.close
+		end
 
		consume
	end
@@ -642,5 +645,24 @@ module Async
			yield duration
		end
	end
+	
+	# Handle fork in the child process. This method is called automatically when `Process.fork` is invoked.
+	#
+	# The child process starts with a clean slate - no scheduler is set. Users can create a new scheduler if needed.
+	#
+	# @public Since *Async v2.35*.
+	def process_fork
+		if profiler = @profiler
+			@profiler = nil
+			profiler.stop
+		end
+		
+		@children = nil
+		@selector = nil
+		@timers = nil
+		
+		# Close the scheduler:
+		Fiber.set_scheduler(nil)
+	end
 end
 end
data/lib/async/task.md CHANGED
@@ -21,7 +21,7 @@ Initialized --> Stopped : Stop
 ## Example
 
 ```ruby
-require 'async'
+require "async"
 
 # Create an asynchronous task that sleeps for 1 second:
 Async do |task|
data/lib/async/version.rb CHANGED
@@ -4,5 +4,5 @@
 # Copyright, 2017-2025, by Samuel Williams.
 
 module Async
-	VERSION = "2.34.0"
+	VERSION = "2.35.0"
 end
data/lib/async.rb CHANGED
@@ -1,7 +1,7 @@
 # frozen_string_literal: true
 
 # Released under the MIT License.
-# Copyright, 2017-2024, by Samuel Williams.
+# Copyright, 2017-2025, by Samuel Williams.
 # Copyright, 2020, by Salim Semaoune.
 
 require_relative "async/version"
data/lib/kernel/sync.rb CHANGED
@@ -20,7 +20,7 @@ module Kernel
	def Sync(annotation: nil, &block)
		if task = ::Async::Task.current?
			if annotation
-				task.annotate(annotation) {yield task}
+				task.annotate(annotation){yield task}
			else
				yield task
			end
@@ -12,6 +12,6 @@ Traces::Provider(Async::Barrier) do
			"size" => self.size
		}
		
-		Traces.trace("async.barrier.wait", attributes: attributes) {super}
+		Traces.trace("async.barrier.wait", attributes: attributes){super}
	end
 end
data/license.md CHANGED
@@ -34,6 +34,7 @@ Copyright, 2025, by Alan Wu.
 Copyright, 2025, by Shopify Inc.
 Copyright, 2025, by Josh Teeter.
 Copyright, 2025, by Jatin Goyal.
+Copyright, 2025, by Yuhi Sato.
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
data/readme.md CHANGED
@@ -35,6 +35,10 @@ Please see the [project documentation](https://socketry.github.io/async/) for mo
 
 Please see the [project releases](https://socketry.github.io/async/releases/index) for all releases.
 
+### v2.35.0
+
+- `Process.fork` is now properly handled by the Async fiber scheduler, ensuring that the scheduler state is correctly reset in the child process after a fork. This prevents issues where the child process inherits the scheduler state from the parent, which could lead to unexpected behavior.
+
 ### v2.34.0
 
 - [`Kernel::Barrier` Convenience Interface](https://socketry.github.io/async/releases/index#kernel::barrier-convenience-interface)
@@ -80,18 +84,13 @@ This release introduces thread-safety as a core concept of Async. Many core clas
 
 - Use `Traces.current_context` and `Traces.with_context` for better integration with OpenTelemetry.
 
-### v2.27.4
-
-- Suppress excessive warning in `Async::Scheduler#async`.
-
 ## See Also
 
 - [async-http](https://github.com/socketry/async-http) — Asynchronous HTTP client/server.
+- [falcon](https://github.com/socketry/falcon) — A rack compatible server built on top of `async-http`.
 - [async-websocket](https://github.com/socketry/async-websocket) — Asynchronous client and server websockets.
 - [async-dns](https://github.com/socketry/async-dns) — Asynchronous DNS resolver and server.
-- [falcon](https://github.com/socketry/falcon) — A rack compatible server built on top of `async-http`.
-- [rubydns](https://github.com/ioquatix/rubydns) — An easy to use Ruby DNS server.
-- [slack-ruby-bot](https://github.com/slack-ruby/slack-ruby-bot) — A client for making slack bots.
+- [toolbox](https://github.com/socketry/toolbox) — GDB & LLDB extensions for debugging Ruby applications with Fibers.
 
 ## Contributing
 
data/releases.md CHANGED
@@ -1,5 +1,9 @@
 # Releases
 
+## v2.35.0
+
+- `Process.fork` is now properly handled by the Async fiber scheduler, ensuring that the scheduler state is correctly reset in the child process after a fork. This prevents issues where the child process inherits the scheduler state from the parent, which could lead to unexpected behavior.
+
 ## v2.34.0
 
 ### `Kernel::Barrier` Convenience Interface
@@ -7,7 +11,7 @@
 Starting multiple concurrent tasks and waiting for them to finish is a common pattern. This change introduces a small ergonomic helper, `Barrier`, defined in `Kernel`, that encapsulates this behavior: it creates an `Async::Barrier`, yields it to a block, waits for completion (using `Sync` to run a reactor if needed), and ensures remaining tasks are stopped on exit.
 
 ``` ruby
-require 'async'
+require "async"
 
 Barrier do |barrier|
 	3.times do |i|
@@ -66,15 +70,15 @@ This release introduces the new `Async::Promise` class and refactors `Async::Tas
 <!-- end list -->
 
 ``` ruby
-require 'async/promise'
+require "async/promise"
 
 # Basic promise usage - works independently of Async framework
 promise = Async::Promise.new
 
 # In another thread or fiber, resolve the promise
 Thread.new do
-  sleep(1) # Simulate some work
-  promise.resolve("Hello, World!")
+	sleep(1) # Simulate some work
+	promise.resolve("Hello, World!")
 end
 
 # Wait for the result
@@ -93,34 +97,34 @@ Promises bridge Thread and Fiber concurrency models - a promise resolved in one
 The new `Async::PriorityQueue` provides a thread-safe, fiber-aware queue where consumers can specify priority levels. Higher priority consumers are served first when items become available, with FIFO ordering maintained for equal priorities. This is useful for implementing priority-based task processing systems where critical operations need to be handled before lower priority work.
 
 ``` ruby
-require 'async'
-require 'async/priority_queue'
+require "async"
+require "async/priority_queue"
 
 Async do
-  queue = Async::PriorityQueue.new
-  
-  # Start consumers with different priorities
-  low_priority = async do
-    puts "Low priority consumer got: #{queue.dequeue(priority: 1)}"
-  end
-  
-  medium_priority = async do
-    puts "Medium priority consumer got: #{queue.dequeue(priority: 5)}"
-  end
-  
-  high_priority = async do
-    puts "High priority consumer got: #{queue.dequeue(priority: 10)}"
-  end
-  
-  # Add items to the queue
-  queue.push("first item")
-  queue.push("second item")
-  queue.push("third item")
-  
-  # Output:
-  # High priority consumer got: first item
-  # Medium priority consumer got: second item
-  # Low priority consumer got: third item
+	queue = Async::PriorityQueue.new
+	
+	# Start consumers with different priorities
+	low_priority = async do
+		puts "Low priority consumer got: #{queue.dequeue(priority: 1)}"
+	end
+	
+	medium_priority = async do
+		puts "Medium priority consumer got: #{queue.dequeue(priority: 5)}"
+	end
+	
+	high_priority = async do
+		puts "High priority consumer got: #{queue.dequeue(priority: 10)}"
+	end
+	
+	# Add items to the queue
+	queue.push("first item")
+	queue.push("second item")
+	queue.push("third item")
+	
+	# Output:
+	# High priority consumer got: first item
+	# Medium priority consumer got: second item
+	# Low priority consumer got: third item
 end
 ```
 
@@ -370,10 +374,10 @@ This gives better visibility into what the scheduler is doing, and should help d
 The `async` gem depends on `console` gem, because my goal was to have good logging by default without thinking about it too much. However, some users prefer to avoid using the `console` gem for logging, so I've added an experimental set of shims which should allow you to bypass the `console` gem entirely.
 
 ``` ruby
-require 'async/console'
-require 'async'
+require "async/console"
+require "async"
 
-Async{raise "Boom"}
+Async {raise "Boom"}
 ```
 
 Will now use `Kernel#warn` to print the task failure warning:
@@ -407,7 +411,7 @@ reactor = Async::Reactor.new # internally calls Fiber.set_scheduler
 
 # This should run in the above reactor, rather than creating a new one.
 Async do
-  puts "Hello World"
+	puts "Hello World"
 end
 ```
 
data.tar.gz.sig CHANGED
Binary file
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: async
 version: !ruby/object:Gem::Version
-  version: 2.34.0
+  version: 2.35.0
 platform: ruby
 authors:
 - Samuel Williams
@@ -38,6 +38,7 @@ authors:
 - Sokolov Yura
 - Stefan Wrobel
 - Trevor Turk
+- Yuhi Sato
 bindir: bin
 cert_chain:
 - |
@@ -160,6 +161,7 @@ files:
 - lib/async/condition.rb
 - lib/async/console.rb
 - lib/async/deadline.rb
+- lib/async/fork_handler.rb
 - lib/async/idler.rb
 - lib/async/limited_queue.rb
 - lib/async/list.rb
metadata.gz.sig CHANGED
Binary file