async 2.32.0 → 2.39.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- checksums.yaml.gz.sig +0 -0
- data/context/best-practices.md +53 -32
- data/context/debugging.md +3 -3
- data/context/getting-started.md +6 -6
- data/context/scheduler.md +4 -4
- data/context/tasks.md +41 -36
- data/context/thread-safety.md +267 -224
- data/lib/async/barrier.rb +41 -10
- data/lib/async/cancel.rb +80 -0
- data/lib/async/clock.rb +22 -1
- data/lib/async/deadline.rb +1 -0
- data/lib/async/error.rb +17 -0
- data/lib/async/fork_handler.rb +32 -0
- data/lib/async/idler.rb +27 -15
- data/lib/async/loop.rb +84 -0
- data/lib/async/node.rb +28 -9
- data/lib/async/promise.rb +112 -37
- data/lib/async/queue.rb +1 -1
- data/lib/async/scheduler.rb +40 -17
- data/lib/async/stop.rb +3 -75
- data/lib/async/task.rb +160 -91
- data/lib/async/version.rb +3 -2
- data/lib/async.rb +3 -5
- data/lib/kernel/barrier.rb +31 -0
- data/lib/kernel/sync.rb +1 -1
- data/lib/traces/provider/async/barrier.rb +1 -1
- data/license.md +4 -2
- data/readme.md +26 -32
- data/releases.md +104 -33
- data.tar.gz.sig +0 -0
- metadata +9 -3
- metadata.gz.sig +0 -0
- data/lib/async/task.md +0 -30
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 1d8427e86e8ab22c81e41dd6e7df7bd79087e3fa72ce786f07ae7b65d9d94545
+  data.tar.gz: 30e9e7e88b211aa21afe844e3f9510b5af39d2b4af1d97710ce533f581b5c57b
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 337694b65547afc0d4c1568a02c0c08d03d0d70a8b2319b9b9a77660ad9f37c82ff3ab49a3856ebee477657df6b254d62e59ca6b31e9312484c02e696deab007
+  data.tar.gz: e9bc3a1a27f9ba9ea6004dee40df9b385d7959c30e7059521805e91d1f64e9ed16bbc0920b4eb3b1dae8a727ab52ed8258f95bb2605e219f311920bf3fe2b031
checksums.yaml.gz.sig
CHANGED

Binary file
data/context/best-practices.md
CHANGED

@@ -7,7 +7,7 @@ This guide gives an overview of best practices for using Async.
 `Async{}` has two uses: it creates an event loop if one doesn't exist, and it creates a task which runs asynchronously with respect to the parent scope. However, the top level `Async{}` block will be synchronous because it creates the event loop. In some programs, you do not care about executing asynchronously, but you still want your code to run in an event loop. `Sync{}` exists to do this efficiently.
 
 ```ruby
-require
+require "async"
 
 class Packages
 	def initialize(urls)
@@ -55,12 +55,24 @@ This expresses the intent to the caller that this method should only be invoked
 
 ## Use barriers to manage unbounded concurrency
 
-Barriers provide a way to manage an unbounded number of tasks.
+Barriers provide a way to manage an unbounded number of tasks. The top-level `Barrier` method creates a barrier with built-in load management using an `Async::Idler`.
 
 ```ruby
-
-
-
+Barrier do |barrier|
 	items.each do |item|
 		barrier.async do
 			process(item)
@@ -71,50 +83,61 @@ Async do
 	barrier.wait do |task|
 		result = task.wait
 		# Do something with result.
-
+
 		# If you don't want to wait for any more tasks you can break:
 		break
 	end
-
-
-
-
-
-
+end
+```
+
+To disable load management (not recommended for unbounded concurrency), you can pass `parent: nil`:
+
+```ruby
+Barrier(parent: nil) do |barrier|
+	# No load management - creates tasks as fast as possible
+	items.each do |item|
+		barrier.async do
+			process(item)
+		end
+	end
 end
 ```
 
 ## Use a semaphore to limit the number of concurrent tasks
 
-Semaphores allow you to limit the level of concurrency to a fixed number of tasks:
+Semaphores allow you to limit the level of concurrency to a fixed number of tasks. When using semaphores with barriers, the barrier should be the root of your task hierarchy, and the semaphore should be a child of the barrier:
 
 ```ruby
-
-barrier = Async::Barrier.new
+Barrier(parent: nil) do |barrier|
 	semaphore = Async::Semaphore.new(4, parent: barrier)
 
-
-
-
-		semaphore.async do
-			process(item)
-		end
+	items.each do |item|
+		semaphore.async do
+			process(item)
 		end
 	end
-
-	# Wait for all the work to complete:
-	barrier.wait
-ensure
-	# Stop all outstanding tasks in the barrier:
-	barrier&.stop
 end
 ```
 
-In
+In this example, we use `parent: nil` for the barrier to disable load management, since the semaphore already provides concurrency control. The semaphore limits execution to 4 concurrent tasks, and the barrier ensures all tasks are stopped when the block exits.
 
 ### Idler
 
-Idlers are like semaphores but with a limit defined by current processor utilization. In other words, an idler will
+Idlers are like semaphores but with a limit defined by current processor utilization. In other words, an idler will schedule work up to a specific ratio of idle/busy time in the scheduler.
+
+The top-level `Barrier` method uses an idler by default, making it safe for unbounded concurrency:
+
+```ruby
+Barrier do |barrier| # Uses Async::Idler.new(0.8) by default
+	work.each do |work|
+		barrier.async do
+			work.call
+		end
+	end
+end
+```
+
+You can also use an idler directly without a barrier:
 
 ```ruby
 Async do
@@ -145,7 +168,7 @@ Async do |task|
 	while chunk = socket.gets
 		queue.push(chunk)
 	end
-
+
 	# After this point, we won't be able to add items to the queue, and popping items will eventually result in nil once all items are dequeued:
 	queue.close
 end
@@ -184,5 +207,3 @@ end
 ```
 
 It can be especially important to impose timeouts when processing user-provided data.
-
-##
data/context/debugging.md
CHANGED

@@ -9,7 +9,7 @@ This guide explains how to debug issues with programs that use Async.
 The simplest way to debug an Async program is to use `puts` to print messages to the console. This is useful for understanding the flow of your program and the values of variables. However, it can be difficult to use `puts` to debug programs that use asynchronous code, as the output may be interleaved. To prevent this, wrap it in `Fiber.blocking{}`:
 
 ```ruby
-require
+require "async"
 
 Async do
 	3.times do |i|
@@ -42,8 +42,8 @@ If you don't use `Fiber.blocking{}`, the event loop will continue to run and you
 The `async-debug` gem provides a visual debugger for Async programs. It is a powerful tool that allows you to inspect the state of your program and see the hierarchy of your program:
 
 ```ruby
-require
-require
+require "async"
+require "async/debug"
 
 Sync do
 	debugger = Async::Debug.serve
data/context/getting-started.md
CHANGED

@@ -65,7 +65,7 @@ You should consider the boundary around your program and the request handling. F
 Similar to a promise, {ruby Async::Task} produces results. In order to wait for these results, you must invoke {ruby Async::Task#wait}:
 
 ``` ruby
-require
+require "async"
 
 task = Async do
 	rand
@@ -99,7 +99,7 @@ end
 Unless you need fan-out, map-reduce style concurrency, you can actually use a slightly more efficient {ruby Kernel::Sync} execution model. This method will run your block in the current event loop if one exists, or create an event loop if not. You can use it for code which uses asynchronous primitives, but itself does not need to be asynchronous with respect to other tasks.
 
 ```ruby
-require
+require "async/http/internet"
 
 def fetch(url)
 	Sync do
@@ -109,11 +109,11 @@ def fetch(url)
 end
 
 # At the level of your program, this method will create an event loop:
-fetch(
+fetch("https://example.com")
 
 Sync do
 	# The event loop already exists, and will be reused:
-	fetch(
+	fetch("https://example.com")
 end
 ```
 
@@ -154,13 +154,13 @@ The former allows you to inject the parent, which could be a barrier or semaphor
 The Fiber Scheduler interface is compatible with most pure Ruby code and well-behaved C code. For example, you can use {ruby Net::HTTP} for performing concurrent HTTP requests:
 
 ```ruby
-urls = [
+urls = ["http://example.com", "http://example.org", "http://example.net"]
 
 Async do
 	# Perform several concurrent requests:
 	responses = urls.map do |url|
 		Async do
-			Net::HTTP.get(url)
+			Net::HTTP.get(URI(url))
 		end
 	end.map(&:wait)
 end
data/context/scheduler.md
CHANGED

@@ -38,7 +38,7 @@ Async do |task|
 	task.print_hierarchy($stderr)
 
 	# Kill the subtask
-	subtask.
+	subtask.cancel
 end
 ~~~
 
@@ -69,16 +69,16 @@ end
 
 You can use this approach to embed the reactor in another event loop. For some integrations, you may want to specify the maximum time to wait to {ruby Async::Scheduler#run_once}.
 
-###
+### Cancelling a Scheduler
 
-{ruby Async::Scheduler#
+{ruby Async::Scheduler#cancel} will cancel the current scheduler and all children tasks.
 
 ### Fiber Scheduler Integration
 
 In order to integrate with native Ruby blocking operations, the {ruby Async::Scheduler} uses a {ruby Fiber::Scheduler} interface.
 
 ```ruby
-require
+require "async"
 
 scheduler = Async::Scheduler.new
 Fiber.set_scheduler(scheduler)
CHANGED
|
@@ -4,7 +4,7 @@ This guide explains how asynchronous tasks work and how to use them.
|
|
|
4
4
|
|
|
5
5
|
## Overview
|
|
6
6
|
|
|
7
|
-
Tasks are the smallest unit of sequential code execution in {ruby Async}. Tasks can create other tasks, and Async tracks the parent-child relationship between tasks. When a parent task is
|
|
7
|
+
Tasks are the smallest unit of sequential code execution in {ruby Async}. Tasks can create other tasks, and Async tracks the parent-child relationship between tasks. When a parent task is cancelled, it will also cancel all its children tasks. The reactor always starts with one root task.
|
|
8
8
|
|
|
9
9
|
```mermaid
|
|
10
10
|
graph LR
|
|
@@ -23,11 +23,11 @@ graph LR
|
|
|
23
23
|
|
|
24
24
|
A fiber is a lightweight unit of execution that can be suspended and resumed at specific points. After a fiber is suspended, it can be resumed later at the same point with the same execution state. Because only one fiber can execute at a time, they are often referred to as a mechanism for cooperative concurrency.
|
|
25
25
|
|
|
26
|
-
A task provides extra functionality on top of fibers. A task behaves like a promise: it either succeeds with a value or fails with an exception. Tasks keep track of their parent-child relationships, and when a parent task is
|
|
26
|
+
A task provides extra functionality on top of fibers. A task behaves like a promise: it either succeeds with a value or fails with an exception. Tasks keep track of their parent-child relationships, and when a parent task is cancelled, it will also cancel all its children tasks. This makes it easier to create complex programs with many concurrent tasks.
|
|
27
27
|
|
|
28
28
|
### Why does Async manipulate tasks and not fibers?
|
|
29
29
|
|
|
30
|
-
The {ruby Async::Scheduler} actually works directly with fibers for most operations and isn't aware of tasks. However, the reactor does maintain a tree of tasks for the purpose of managing task and reactor life-cycle. For example,
|
|
30
|
+
The {ruby Async::Scheduler} actually works directly with fibers for most operations and isn't aware of tasks. However, the reactor does maintain a tree of tasks for the purpose of managing task and reactor life-cycle. For example, cancelling a parent task will cancel all its children tasks, and the reactor will exit when all tasks are finished.
|
|
31
31
|
|
|
32
32
|
## Task Lifecycle
|
|
33
33
|
|
|
@@ -40,20 +40,20 @@ stateDiagram-v2
|
|
|
40
40
|
|
|
41
41
|
running --> failed : unhandled StandardError-derived exception
|
|
42
42
|
running --> complete : user code finished
|
|
43
|
-
running -->
|
|
43
|
+
running --> cancelled : cancel
|
|
44
44
|
|
|
45
|
-
initialized -->
|
|
45
|
+
initialized --> cancelled : cancel
|
|
46
46
|
|
|
47
47
|
failed --> [*]
|
|
48
48
|
complete --> [*]
|
|
49
|
-
|
|
49
|
+
cancelled --> [*]
|
|
50
50
|
```
|
|
51
51
|
|
|
52
|
-
Tasks are created in the `initialized` state, and are run by the reactor. During the execution, a task can either `complete` successfully, become `failed` with an unhandled `StandardError`-derived exception, or be explicitly `
|
|
52
|
+
Tasks are created in the `initialized` state, and are run by the reactor. During the execution, a task can either `complete` successfully, become `failed` with an unhandled `StandardError`-derived exception, or be explicitly `cancelled`. In all of these cases, you can wait for a task to complete by using {ruby Async::Task#wait}.
|
|
53
53
|
|
|
54
54
|
1. In the case the task successfully completed, the result will be whatever value was generated by the last expression in the task.
|
|
55
55
|
2. In the case the task failed with an unhandled `StandardError`-derived exception, waiting on the task will re-raise the exception.
|
|
56
|
-
3. In the case the task was
|
|
56
|
+
3. In the case the task was cancelled, the result will be `nil`.
|
|
57
57
|
|
|
58
58
|
## Starting A Task
|
|
59
59
|
|
|
@@ -175,8 +175,8 @@ Async do
|
|
|
175
175
|
break if done.size >= 2
|
|
176
176
|
end
|
|
177
177
|
ensure
|
|
178
|
-
# The remainder of the tasks will be
|
|
179
|
-
barrier.
|
|
178
|
+
# The remainder of the tasks will be cancelled:
|
|
179
|
+
barrier.cancel
|
|
180
180
|
end
|
|
181
181
|
end
|
|
182
182
|
```
|
|
@@ -189,23 +189,28 @@ end
|
|
|
189
189
|
barrier = Async::Barrier.new
|
|
190
190
|
semaphore = Async::Semaphore.new(2, parent: barrier)
|
|
191
191
|
|
|
192
|
-
|
|
193
|
-
|
|
194
|
-
|
|
192
|
+
begin
|
|
193
|
+
jobs.each do |job|
|
|
194
|
+
semaphore.async do
|
|
195
|
+
# ... process job ...
|
|
196
|
+
end
|
|
195
197
|
end
|
|
196
|
-
end
|
|
197
198
|
|
|
198
|
-
# Wait until all jobs are done:
|
|
199
|
-
barrier.wait
|
|
199
|
+
# Wait until all jobs are done:
|
|
200
|
+
barrier.wait
|
|
201
|
+
ensure
|
|
202
|
+
# Cancel any remaining jobs:
|
|
203
|
+
barrier.cancel
|
|
204
|
+
end
|
|
200
205
|
~~~
|
|
201
206
|
|
|
202
|
-
##
|
|
207
|
+
## Cancelling a Task
|
|
203
208
|
|
|
204
209
|
When a task completes execution, it will enter the `complete` state (or the `failed` state if it raises an unhandled exception).
|
|
205
210
|
|
|
206
|
-
There are various situations where you may want to
|
|
211
|
+
There are various situations where you may want to cancel a task ({ruby Async::Task#cancel}) before it completes. The most common case is shutting down a server. A more complex example is this: you may fan out multiple (10s, 100s) of requests, wait for a subset to complete (e.g. the first 5 or all those that complete within a given deadline), and then cancel the remaining operations.
|
|
207
212
|
|
|
208
|
-
Using the above program as an example, let's
|
|
213
|
+
Using the above program as an example, let's cancel all the tasks just after the first one completes.
|
|
209
214
|
|
|
210
215
|
```ruby
|
|
211
216
|
Async do
|
|
@@ -215,15 +220,15 @@ Async do
|
|
|
215
220
|
puts "Hello World #{i}"
|
|
216
221
|
end
|
|
217
222
|
end
|
|
218
|
-
|
|
219
|
-
#
|
|
220
|
-
tasks.each(&:
|
|
223
|
+
|
|
224
|
+
# Cancel all the above tasks:
|
|
225
|
+
tasks.each(&:cancel)
|
|
221
226
|
end
|
|
222
227
|
```
|
|
223
228
|
|
|
224
|
-
###
|
|
229
|
+
### Cancelling all Tasks held in a Barrier
|
|
225
230
|
|
|
226
|
-
To
|
|
231
|
+
To cancel all the tasks held in a barrier:
|
|
227
232
|
|
|
228
233
|
```ruby
|
|
229
234
|
barrier = Async::Barrier.new
|
|
@@ -236,11 +241,11 @@ Async do
|
|
|
236
241
|
end
|
|
237
242
|
end
|
|
238
243
|
|
|
239
|
-
barrier.
|
|
244
|
+
barrier.cancel
|
|
240
245
|
end
|
|
241
246
|
```
|
|
242
247
|
|
|
243
|
-
Unless your tasks all rescue and suppresses `StandardError`-derived exceptions, be sure to call ({ruby Async::Barrier#
|
|
248
|
+
Unless your tasks all rescue and suppresses `StandardError`-derived exceptions, be sure to call ({ruby Async::Barrier#cancel}) to cancel the remaining tasks:
|
|
244
249
|
|
|
245
250
|
```ruby
|
|
246
251
|
barrier = Async::Barrier.new
|
|
@@ -256,7 +261,7 @@ Async do
|
|
|
256
261
|
begin
|
|
257
262
|
barrier.wait
|
|
258
263
|
ensure
|
|
259
|
-
barrier.
|
|
264
|
+
barrier.cancel
|
|
260
265
|
end
|
|
261
266
|
end
|
|
262
267
|
```
|
|
@@ -268,10 +273,10 @@ In order to ensure your resources are cleaned up correctly, make sure you wrap r
|
|
|
268
273
|
~~~ ruby
|
|
269
274
|
Async do
|
|
270
275
|
begin
|
|
271
|
-
socket = connect(remote_address) # May raise Async::
|
|
276
|
+
socket = connect(remote_address) # May raise Async::Cancel
|
|
272
277
|
|
|
273
|
-
socket.write(...) # May raise Async::
|
|
274
|
-
socket.read(...) # May raise Async::
|
|
278
|
+
socket.write(...) # May raise Async::Cancel
|
|
279
|
+
socket.read(...) # May raise Async::Cancel
|
|
275
280
|
ensure
|
|
276
281
|
socket.close if socket
|
|
277
282
|
end
|
|
@@ -393,9 +398,9 @@ end
|
|
|
393
398
|
|
|
394
399
|
Transient tasks are similar to normal tasks, except for the following differences:
|
|
395
400
|
|
|
396
|
-
1. They are not considered by {ruby Async::Task#finished?}, so they will not keep the reactor alive. Instead, they are
|
|
401
|
+
1. They are not considered by {ruby Async::Task#finished?}, so they will not keep the reactor alive. Instead, they are cancelled (with a {ruby Async::Cancel} exception) when all other (non-transient) tasks are finished.
|
|
397
402
|
2. As soon as a parent task is finished, any transient child tasks will be moved up to be children of the parent's parent. This ensures that they never keep a sub-tree alive.
|
|
398
|
-
3. Similarly, if you `
|
|
403
|
+
3. Similarly, if you `cancel` a task, any transient child tasks will be moved up the tree as above rather than being cancelled.
|
|
399
404
|
|
|
400
405
|
The purpose of transient tasks is when a task is an implementation detail of an object or instance, rather than a concurrency process. Some examples of transient tasks:
|
|
401
406
|
|
|
@@ -409,8 +414,8 @@ and you could be handling 1000s of requests per second.
|
|
|
409
414
|
The task doing the updating in the background is an implementation detail, so it is marked as `transient`.
|
|
410
415
|
|
|
411
416
|
```ruby
|
|
412
|
-
require
|
|
413
|
-
require
|
|
417
|
+
require "async"
|
|
418
|
+
require "thread/local" # thread-local gem.
|
|
414
419
|
|
|
415
420
|
class TimeStringCache
|
|
416
421
|
extend Thread::Local # defines `instance` class method that lazy-creates a separate instance per thread
|
|
@@ -434,7 +439,7 @@ class TimeStringCache
|
|
|
434
439
|
sleep(1)
|
|
435
440
|
end
|
|
436
441
|
ensure
|
|
437
|
-
# When the reactor terminates all tasks, `Async::
|
|
442
|
+
# When the reactor terminates all tasks, `Async::Cancel` will be raised from `sleep` and this code will be invoked. By clearing `@refresh`, we ensure that the task will be recreated if needed again:
|
|
438
443
|
@refresh = nil
|
|
439
444
|
end
|
|
440
445
|
end
|
|
@@ -445,4 +450,4 @@ Async do
|
|
|
445
450
|
end
|
|
446
451
|
```
|
|
447
452
|
|
|
448
|
-
Upon
|
|
453
|
+
Upon exiting the top level async block, the {ruby @refresh} task will be set to `nil`. Bear in mind, you should not share these resources across threads; doing so would need some form of mutual exclusion.
|