async-limiter 1.5.4 → 2.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (38)
  1. checksums.yaml +4 -4
  2. checksums.yaml.gz.sig +0 -0
  3. data/context/generic-limiter.md +167 -0
  4. data/context/getting-started.md +226 -0
  5. data/context/index.yaml +41 -0
  6. data/context/limited-limiter.md +184 -0
  7. data/context/queued-limiter.md +109 -0
  8. data/context/timing-strategies.md +666 -0
  9. data/context/token-usage.md +85 -0
  10. data/lib/async/limiter/generic.rb +160 -0
  11. data/lib/async/limiter/limited.rb +103 -0
  12. data/lib/async/limiter/queued.rb +85 -0
  13. data/lib/async/limiter/timing/burst.rb +153 -0
  14. data/lib/async/limiter/timing/fixed_window.rb +42 -0
  15. data/lib/async/limiter/timing/leaky_bucket.rb +146 -0
  16. data/lib/async/limiter/timing/none.rb +56 -0
  17. data/lib/async/limiter/timing/ordered.rb +58 -0
  18. data/lib/async/limiter/timing/sliding_window.rb +152 -0
  19. data/lib/async/limiter/token.rb +102 -0
  20. data/lib/async/limiter/version.rb +10 -3
  21. data/lib/async/limiter.rb +21 -7
  22. data/lib/metrics/provider/async/limiter/generic.rb +74 -0
  23. data/lib/metrics/provider/async/limiter.rb +7 -0
  24. data/lib/traces/provider/async/limiter/generic.rb +41 -0
  25. data/lib/traces/provider/async/limiter.rb +7 -0
  26. data/license.md +25 -0
  27. data/readme.md +45 -0
  28. data/releases.md +50 -0
  29. data.tar.gz.sig +0 -0
  30. metadata +68 -83
  31. metadata.gz.sig +0 -0
  32. data/lib/async/limiter/concurrent.rb +0 -101
  33. data/lib/async/limiter/constants.rb +0 -6
  34. data/lib/async/limiter/unlimited.rb +0 -53
  35. data/lib/async/limiter/window/continuous.rb +0 -21
  36. data/lib/async/limiter/window/fixed.rb +0 -21
  37. data/lib/async/limiter/window/sliding.rb +0 -21
  38. data/lib/async/limiter/window.rb +0 -296
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 69bd1687c9db3ae2672a5b2fb4458d127eb3521d5fc1db244f4ec7c773f32b39
- data.tar.gz: 88b8a481dc2ad96e0b49aed1e80ddaa5871b0f8e36cd06526bf9854b32a1c23c
+ metadata.gz: 324fe64138f3bd6854bc8d5349b8404a01d6bd11ee64f5b0ad204d4154efa15b
+ data.tar.gz: 4cc2963430befada104733f236d4e59b900750c1e3eeeb73fc18f2dece26c5b2
  SHA512:
- metadata.gz: 4a656389da97b0990f90efa4dfbef3e58d01183969018a60b77b85704c014342d55eec954d00b3cf1870ccda8430843036ffcd49bc8449dba82a9b1c4f09d240
- data.tar.gz: e63c0d3882fe2c3ad08648295bfe58a861f3ebaf1e0321ab2e6685dab5ec1a25e4184773206278497b2b94e15f67223f268dfdc16e1fb69f8a3db3b99919105c
+ metadata.gz: 815670fb97ad80d90e9a5a6ec9034f51242fe793784220af2b3ce769b3afd49389a03a0d53ac042aa98638affef8636607976f6908f834e667ab31ac96fbb628
+ data.tar.gz: 5f8df98d26156a041023d05e9c758a17eeefca437cd9ffa34c9a102df820a3e45380dbda8e09b23dd14e189014a49554745349ff13de70647d59a7a5abe33d4d
checksums.yaml.gz.sig ADDED
Binary file
data/context/generic-limiter.md ADDED
@@ -0,0 +1,167 @@
+ # Generic Limiter
+
+ This guide explains the {ruby Async::Limiter::Generic} class, which provides unlimited concurrency by default and serves as the base implementation for all other limiters. It's ideal when you need timing constraints without concurrency limits, or when building custom limiter implementations.
+
+ ## Usage
+
+ The simplest case - no limits on concurrent execution:
+
+ ```ruby
+ require "async"
+ require "async/limiter"
+
+ Async do
+ 	limiter = Async::Limiter::Generic.new
+ 	
+ 	# All 100 tasks run concurrently:
+ 	100.times do |i|
+ 		limiter.async do |task|
+ 			puts "Task #{i} running"
+ 			task.sleep 1
+ 		end
+ 	end
+ end
+ ```
+
+ All tasks start immediately and run in parallel, limited only by system resources.
+
+ ### Async Execution
+
+ The primary way to use the Generic limiter is through the `async` method:
+
+ ```ruby
+ require "async"
+ require "async/limiter"
+
+ Async do
+ 	limiter = Async::Limiter::Generic.new
+ 	
+ 	# Create async tasks through the limiter:
+ 	tasks = 5.times.map do |i|
+ 		limiter.async do |task|
+ 			puts "Task #{i} started at #{Time.now}"
+ 			task.sleep 1
+ 			puts "Task #{i} completed at #{Time.now}"
+ 			"result_#{i}"
+ 		end
+ 	end
+ 	
+ 	# Wait for all tasks to complete:
+ 	results = tasks.map(&:wait)
+ 	puts "All results: #{results}"
+ end
+ ```
+
+ ### Sync Execution
+
+ For synchronous execution within an async context:
+
+ ```ruby
+ Async do
+ 	limiter = Async::Limiter::Generic.new
+ 	
+ 	# Execute synchronously within the limiter:
+ 	result = limiter.sync do |task|
+ 		puts "Executing in task: #{task}"
+ 		"sync result"
+ 	end
+ 	
+ 	puts result # => "sync result"
+ end
+ ```
+
+ ## Timing Coordination
+
+ Generic limiters excel when combined with timing strategies for pure rate limiting:
+
+ ### Rate Limiting Without Concurrency Limits
+
+ ```ruby
+ Async do
+ 	# Allow unlimited concurrency but rate limit to 10 operations per second:
+ 	timing = Async::Limiter::Timing::LeakyBucket.new(10.0, 50.0)
+ 	limiter = Async::Limiter::Generic.new(timing: timing)
+ 	
+ 	# All tasks start immediately, but the timing strategy controls the rate:
+ 	100.times do |i|
+ 		limiter.async do |task|
+ 			puts "Task #{i} executing at #{Time.now}"
+ 			# The timing strategy ensures rate limiting.
+ 		end
+ 	end
+ end
+ ```
+
+ ### Burst Handling
+
+ ```ruby
+ Async do
+ 	# Allow bursts of up to 5 operations per 1-second window:
+ 	timing = Async::Limiter::Timing::SlidingWindow.new(
+ 		# 1-second window:
+ 		1.0,
+ 		# Allow bursting:
+ 		Async::Limiter::Timing::Burst::Greedy,
+ 		# 5 operations per second:
+ 		5
+ 	)
+ 	
+ 	limiter = Async::Limiter::Generic.new(timing: timing)
+ 	
+ 	# The first 5 operations in each window execute immediately (burst).
+ 	# Subsequent operations are rate limited:
+ 	50.times do |i|
+ 		limiter.async do |task|
+ 			puts "Operation #{i} at #{Time.now}"
+ 		end
+ 	end
+ end
+ ```
+
+ ## Advanced Usage Patterns
+
+ ### Cost-Based Operations
+
+ When using timing strategies, you can specify different costs for operations:
+
+ ```ruby
+ # Create a limiter with a timing strategy that supports costs:
+ timing = Async::Limiter::Timing::LeakyBucket.new(10.0, 50.0) # 10/sec rate, 50 capacity.
+ limiter = Async::Limiter::Generic.new(timing: timing)
+
+ Async do
+ 	# Light operations:
+ 	limiter.acquire(cost: 0.5) do |resource|
+ 		puts "Light operation using #{resource}"
+ 	end
+ 	
+ 	# Standard operations (default cost: 1.0):
+ 	limiter.acquire do |resource|
+ 		puts "Standard operation using #{resource}"
+ 	end
+ 	
+ 	# Heavy operations:
+ 	limiter.acquire(cost: 5.0) do |resource|
+ 		puts "Heavy operation using #{resource}"
+ 	end
+ 	
+ 	# Operations that exceed the timing capacity will fail:
+ 	begin
+ 		limiter.acquire(cost: 100.0) # Exceeds capacity of 50.0.
+ 	rescue ArgumentError => error
+ 		Console.error(self, error)
+ 	end
+ end
+ ```
+
+ Note that, by default, lower cost operations will be admitted before higher cost operations. In other words, a steady stream of low cost operations can starve out higher cost operations unless you use {ruby Async::Limiter::Timing::Ordered} to force FIFO acquisition.
+
+ ```ruby
+ # Default behavior - potential starvation:
+ timing = Async::Limiter::Timing::LeakyBucket.new(2.0, 10.0)
+
+ # FIFO ordering - prevents starvation:
+ timing = Async::Limiter::Timing::Ordered.new(
+ 	Async::Limiter::Timing::LeakyBucket.new(2.0, 10.0)
+ )
+ ```
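
As an editorial aside (not part of the packaged guide), here is a minimal sketch of the FIFO behavior described above, assuming only the `Generic#acquire(cost:)` and `Timing::Ordered` APIs shown in this guide:

```ruby
require "async"
require "async/limiter"

Async do
	# Wrap the bucket so acquisitions are admitted strictly in arrival order:
	timing = Async::Limiter::Timing::Ordered.new(
		Async::Limiter::Timing::LeakyBucket.new(2.0, 10.0)
	)
	limiter = Async::Limiter::Generic.new(timing: timing)
	
	# Submitted first, so FIFO ordering admits it first, even though it needs 8.0 tokens:
	Async do
		limiter.acquire(cost: 8.0) {puts "Heavy operation admitted."}
	end
	
	# Submitted later; these wait behind the heavy acquisition:
	3.times do |i|
		Async do
			limiter.acquire(cost: 0.5) {puts "Light operation #{i} admitted."}
		end
	end
end
```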
data/context/getting-started.md ADDED
@@ -0,0 +1,226 @@
+ # Getting Started
+
+ This guide explains how to get started with the `async-limiter` gem for controlling concurrency and rate limiting in Ruby applications.
+
+ ## Installation
+
+ Add the gem to your project:
+
+ ```bash
+ $ bundle add async-limiter
+ ```
+
+ ## Core Concepts
+
+ `async-limiter` provides three main limiter classes that can be combined with timing strategies:
+
+ ### Limiter Classes
+
+ - **{ruby Async::Limiter::Generic}** - Unlimited concurrency (default behavior).
+ - **{ruby Async::Limiter::Limited}** - Enforces a concurrency limit (counting semaphore).
+ - **{ruby Async::Limiter::Queued}** - Queue-based limiter with priority/timeout support.
+
+ ### Timing Strategies
+
+ - **{ruby Async::Limiter::Timing::None}** - No timing constraints (default).
+ - **{ruby Async::Limiter::Timing::SlidingWindow}** - Continuous rolling time windows.
+ - **{ruby Async::Limiter::Timing::FixedWindow}** - Discrete time boundaries.
+ - **{ruby Async::Limiter::Timing::LeakyBucket}** - Token bucket with automatic leaking.
+
+ ## Usage
+
+ The simplest case - no limits:
+
+ ```ruby
+ require "async"
+ require "async/limiter"
+
+ Async do
+ 	limiter = Async::Limiter::Generic.new
+ 	
+ 	# All tasks run concurrently:
+ 	100.times do |i|
+ 		limiter.async do |task|
+ 			puts "Task #{i} running"
+ 			task.sleep 1
+ 		end
+ 	end
+ end
+ ```
+
+ ### Concurrency Limiting
+
+ Limit the number of concurrent tasks:
+
+ ```ruby
+ require "async"
+ require "async/limiter"
+
+ Async do
+ 	# Max 2 concurrent tasks:
+ 	limiter = Async::Limiter::Limited.new(2)
+ 	
+ 	4.times do |i|
+ 		limiter.async do |task|
+ 			puts "Task #{i} started"
+ 			task.sleep 1
+ 			puts "Task #{i} finished"
+ 		end
+ 	end
+ end
+ ```
+
+ This runs a maximum of 2 tasks concurrently. Total duration is ~2 seconds (tasks 0 and 1 run first, then tasks 2 and 3).
+
+ ### Timeouts
+
+ You can control how long to wait when acquiring resources using the `timeout` parameter. This is particularly useful when working with limited-capacity limiters that might block indefinitely.
+
+ ```ruby
+ require "async"
+ require "async/limiter"
+
+ Async do
+ 	# A zero limit will always block:
+ 	limiter = Async::Limiter::Limited.new(0)
+ 	
+ 	limiter.acquire(timeout: 3)
+ 	# => nil
+ 	
+ 	limiter.acquire(timeout: 3) do
+ 		puts "Acquired."
+ 	end or puts "Timed out!"
+ end
+ ```
+
+ **Key timeout behaviors:**
+
+ - `timeout: nil` (default) - Wait indefinitely until a resource becomes available.
+ - `timeout: 0` - Non-blocking operation; return immediately if no resource is available.
+ - `timeout: N` (where N > 0) - Wait up to N seconds for a resource to become available.
+
+ **Return values:**
+ - Returns `true` (or the acquired resource) when successful.
+ - Returns `nil` when the timeout is exceeded or no resource is available.
+
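
As an editorial aside (not part of the packaged guide), the behaviors listed above can be restated as a small sketch, using only the `Limited` limiter and the `acquire(timeout:)` calls shown in this guide:

```ruby
require "async"
require "async/limiter"

Async do
	limiter = Async::Limiter::Limited.new(1)
	
	# Capacity is free, so a non-blocking acquire succeeds (and the block auto-releases):
	limiter.acquire(timeout: 0) do
		puts "Acquired immediately."
	end
	
	# Hold the only slot without a block, so it stays acquired:
	limiter.acquire
	
	# With the capacity used, both the non-blocking check and the bounded wait return nil:
	puts limiter.acquire(timeout: 0).inspect   # => nil
	puts limiter.acquire(timeout: 0.1).inspect # => nil
end
```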
+ ## Rate Limiting
+
+ Timing strategies can be used to implement rate limiting, for example a continuous rolling time window:
+
+ ```ruby
+ require "async"
+ require "async/limiter"
+
+ Async do
+ 	# Max 3 tasks within any 1-second sliding window:
+ 	timing = Async::Limiter::Timing::SlidingWindow.new(
+ 		1.0, # 1-second window.
+ 		Async::Limiter::Timing::Burst::Greedy, # Allow bursting.
+ 		3 # 3 tasks per window.
+ 	)
+ 	
+ 	limiter = Async::Limiter::Limited.new(10, timing: timing)
+ 	
+ 	10.times do |i|
+ 		limiter.async do |task|
+ 			puts "Task #{i} started at #{Time.now}"
+ 			task.sleep 0.5
+ 		end
+ 	end
+ end
+ ```
+
+ ### Variable Cost Operations
+
+ Rate limiting by default works with unit costs - each acquire consumes 1 unit of capacity. However, in more complex situations, you may want to use variable costs to model different operation weights:
+
+ ```ruby
+ require "async"
+ require "async/limiter"
+
+ Async do
+ 	# Leaky bucket: 2 tokens/second, capacity 10:
+ 	timing = Async::Limiter::Timing::LeakyBucket.new(2.0, 10.0)
+ 	limiter = Async::Limiter::Limited.new(100, timing: timing)
+ 	
+ 	# Light operations consume fewer tokens:
+ 	limiter.acquire(cost: 0.5) do
+ 		puts "Light database query"
+ 	end
+ 	
+ 	# Heavy operations consume more tokens:
+ 	limiter.acquire(cost: 5.0) do
+ 		puts "Complex ML inference"
+ 	end
+ end
+ ```
+
+ **Cost represents the resource weight** of each operation:
+ - `cost: 0.5` - Light operations (quick queries, cache reads).
+ - `cost: 1.0` - Standard operations (default).
+ - `cost: 5.0` - Heavy operations (complex computations, large uploads).
+
+ #### Starvation and Head-of-Line Blocking
+
+ **Variable costs introduce two important fairness issues:**
+
+ **1. Starvation Problem:**
+ High-cost operations can be indefinitely delayed by streams of low-cost operations:
+
+ ```ruby
+ # Without ordering - starvation can occur:
+ timing = Async::Limiter::Timing::LeakyBucket.new(2.0, 10.0)
+ limiter = Async::Limiter::Limited.new(100, timing: timing)
+
+ Async do
+ 	# A high-cost task starts waiting for 8.0 tokens:
+ 	Async do
+ 		limiter.acquire(cost: 8.0) do
+ 			puts "Expensive operation" # May never execute!
+ 		end
+ 	end
+ 	
+ 	# A continuous stream of small operations consumes tokens as they become available:
+ 	100.times do |i|
+ 		Async do
+ 			limiter.acquire(cost: 0.5) do
+ 				puts "Quick operation #{i}" # These keep running.
+ 			end
+ 		end
+ 	end
+ end
+ ```
+
+ **2. Head-of-Line Blocking:**
+ When using FIFO ordering to prevent starvation, large operations can block smaller ones:
+
+ ```ruby
+ # With ordering - prevents starvation but creates head-of-line blocking:
+ ordered_timing = Async::Limiter::Timing::Ordered.new(timing)
+ fair_limiter = Async::Limiter::Limited.new(100, timing: ordered_timing)
+
+ Async do
+ 	# A large operation blocks the queue:
+ 	Async do
+ 		fair_limiter.acquire(cost: 8.0) do
+ 			puts "Expensive operation (takes time to get tokens)"
+ 		end
+ 	end
+ 	
+ 	# These must wait even though they need fewer tokens:
+ 	Async {fair_limiter.acquire(cost: 0.5) {puts "Quick op 1"}} # Blocked
+ 	Async {fair_limiter.acquire(cost: 0.5) {puts "Quick op 2"}} # Blocked
+ end
+ ```
+
+ #### Choosing the Right Strategy
+
+ **Use Unordered (default) when:**
+ - Maximum throughput is critical.
+ - Operations have similar costs.
+ - Occasional starvation is acceptable.
+
+ **Use Ordered when:**
+ - Fairness is more important than efficiency.
+ - Starvation would be unacceptable.
+ - Predictable execution order is required.
+
+ ```ruby
+ # Unordered: Higher throughput, possible starvation:
+ timing = Async::Limiter::Timing::LeakyBucket.new(2.0, 10.0)
+
+ # Ordered: Fair execution, lower throughput:
+ ordered_timing = Async::Limiter::Timing::Ordered.new(timing)
+ ```
+
+ The choice depends on whether your application prioritizes **efficiency** (unordered) or **fairness** (ordered).
data/context/index.yaml ADDED
@@ -0,0 +1,41 @@
+ # Automatically generated context index for Utopia::Project guides.
+ # Do not edit the files in this directory directly, instead edit the guides and then run `bake utopia:project:agent:context:update`.
+ ---
+ description: Execution rate limiting for Async
+ metadata:
+   documentation_uri: https://socketry.github.io/async-limiter/
+   source_code_uri: https://github.com/socketry/async-limiter.git
+ files:
+ - path: getting-started.md
+   title: Getting Started
+   description: This guide explains how to get started with the `async-limiter` gem
+     for controlling concurrency and rate limiting in Ruby applications.
+ - path: generic-limiter.md
+   title: Generic Limiter
+   description: This guide explains the <code class="language-ruby">Async::Limiter::Generic</code>
+     class, which provides unlimited concurrency by default and serves as the base
+     implementation for all other limiters. It's ideal when you need timing constraints
+     without concurrency limits, or when building custom limiter implementations.
+ - path: limited-limiter.md
+   title: Limited Limiter
+   description: This guide explains the <code class="language-ruby">Async::Limiter::Limited</code>
+     class, which provides semaphore-style concurrency control, enforcing a maximum
+     number of concurrent operations. It's perfect for controlling concurrency when
+     you have limited capacity or want to prevent system overload.
+ - path: queued-limiter.md
+   title: Queued Limiter
+   description: This guide explains the <code class="language-ruby">Async::Limiter::Queued</code>
+     class, which provides priority-based task scheduling with optional resource management.
+     Its key feature is priority-based acquisition where higher priority tasks get
+     access first, with optional support for distributing specific resources from a
+     pre-populated queue.
+ - path: timing-strategies.md
+   title: Timing Strategies
+   description: This guide explains how to use timing strategies to provide rate limiting
+     and timing constraints that can be combined with any limiter. They control *when*
+     operations can execute, while limiters control *how many* can execute concurrently.
+ - path: token-usage.md
+   title: Token Usage
+   description: This guide explains how to use tokens for advanced resource management
+     with `async-limiter`. Tokens provide sophisticated resource handling with support
+     for re-acquisition and automatic cleanup.
data/context/limited-limiter.md ADDED
@@ -0,0 +1,184 @@
+ # Limited Limiter
+
+ This guide explains the {ruby Async::Limiter::Limited} class, which provides semaphore-style concurrency control, enforcing a maximum number of concurrent operations. It's perfect for controlling concurrency when you have limited capacity or want to prevent system overload.
+
+ ## Usage
+
+ Limit the number of concurrent tasks:
+
+ ```ruby
+ require "async"
+ require "async/limiter"
+
+ Async do
+ 	# Maximum 2 concurrent tasks:
+ 	limiter = Async::Limiter::Limited.new(2)
+ 	
+ 	4.times do |i|
+ 		limiter.async do |task|
+ 			puts "Task #{i} started at #{Time.now}"
+ 			task.sleep 1
+ 			puts "Task #{i} finished at #{Time.now}"
+ 		end
+ 	end
+ end
+
+ # Output shows tasks 0,1 run first, then tasks 2,3.
+ # Total duration: ~2 seconds instead of ~1 second.
+ ```
+
+ ### Block-Based Acquisition
+
+ The recommended pattern uses a block for automatic cleanup:
+
+ ```ruby
+ limiter = Async::Limiter::Limited.new(1)
+
+ # Acquire with automatic release using blocks:
+ limiter.acquire do |acquired|
+ 	puts "I have acquired: #{acquired}"
+ 	# Automatically released when block exits.
+ end
+ ```
+
+ ## Timeouts
+
+ All acquisition methods support comprehensive timeout options:
+
+ ```ruby
+ limiter = Async::Limiter::Limited.new(1)
+
+ Async do
+ 	# Non-blocking (immediate check) - should succeed:
+ 	if limiter.acquire(timeout: 0)
+ 		puts "Got acquisition immediately"
+ 	else
+ 		puts "No capacity available"
+ 	end
+ 	
+ 	# Now the limiter is at capacity, so subsequent calls will fail or time out.
+ 	
+ 	# Non-blocking check - will fail since capacity is used:
+ 	if limiter.acquire(timeout: 0)
+ 		puts "Got second acquisition"
+ 	else
+ 		puts "No capacity available for second acquisition"
+ 	end
+ 	
+ 	# Timed acquisition - will time out since capacity is still used:
+ 	if limiter.acquire(timeout: 0.1)
+ 		puts "Got acquisition within timeout"
+ 	else
+ 		puts "Timed out waiting for capacity"
+ 	end
+ 	
+ 	# With blocks (automatic cleanup):
+ 	result = limiter.acquire(timeout: 1.0) do |acquired|
+ 		"Successfully acquired and used"
+ 	end
+ 	
+ 	puts result || "Acquisition timed out"
+ end
+ ```
+
+ ### Concurrent Timeout Behavior
+
+ The limiter prevents convoy effects: acquisitions with short timeouts aren't blocked behind those with longer ones:
+
+ ```ruby
+ limiter = Async::Limiter::Limited.new(1)
+ Async do
+ 	limiter.acquire # Fill to capacity.
+ 	
+ 	results = []
+ 	
+ 	# Start multiple tasks with different timeouts:
+ 	tasks = [
+ 		Async {limiter.acquire(timeout: 1.0); results << "Long timeout."},
+ 		Async {limiter.acquire(timeout: 0.1); results << "Short timeout."},
+ 		Async {limiter.acquire(timeout: 0); results << "Non-blocking."},
+ 	]
+ 	
+ 	# All tasks complete quickly, even with a long timeout task present:
+ 	tasks.map(&:wait)
+ 	puts results
+ 	# => ["Non-blocking.", "Short timeout.", "Long timeout."]
+ end
+ ```
+
+ ## Dynamic Limit Adjustment
+
+ Adjust limits at runtime based on changing conditions:
+
+ ```ruby
+ limiter = Async::Limiter::Limited.new(2)
+ puts "Initial limit: #{limiter.limit}" # 2
+
+ # Increase capacity during high load:
+ limiter.limit = 5
+ puts "Increased limit: #{limiter.limit}" # 5
+
+ # Decrease capacity when load subsides:
+ limiter.limit = 1
+ puts "Decreased limit: #{limiter.limit}" # 1
+ ```
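
As an editorial aside (not part of the packaged guide), here is a hypothetical sketch of why runtime adjustment is useful, assuming that raising the limit allows already-waiting acquisitions to proceed:

```ruby
require "async"
require "async/limiter"

Async do
	limiter = Async::Limiter::Limited.new(1)
	limiter.acquire # Hold the only slot.
	
	waiter = Async do
		# Blocks while the limit is 1 and the slot is held:
		limiter.acquire do
			puts "Admitted after the limit was raised."
		end
	end
	
	# Raising the limit at runtime creates a second slot for the waiting task:
	limiter.limit = 2
	waiter.wait
end
```

In practice, the new limit might come from a feedback signal such as queue depth or downstream error rates.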
+
+ ## Cost-Based Operations
+
+ Operations can consume multiple "units" based on their computational weight:
+
+ ```ruby
+ # Create a limiter with a timing strategy that has capacity limits:
+ timing = Async::Limiter::Timing::LeakyBucket.new(5.0, 10.0) # 5/sec rate, 10 capacity.
+ limiter = Async::Limiter::Limited.new(100, timing: timing)
+
+ Async do
+ 	# Light operations (consume 0.5 units):
+ 	limiter.acquire(cost: 0.5) do
+ 		perform_light_database_query()
+ 	end
+ 	
+ 	# Normal operations (default cost: 1.0):
+ 	limiter.acquire do
+ 		perform_standard_operation()
+ 	end
+ 	
+ 	# Heavy operations (consume 3.5 units):
+ 	limiter.acquire(cost: 3.5) do
+ 		perform_heavy_computation()
+ 	end
+ 	
+ 	# Operations exceeding capacity fail fast:
+ 	begin
+ 		# Exceeds timing capacity of 10.0:
+ 		limiter.acquire(cost: 15.0)
+ 	rescue ArgumentError => error
+ 		puts error.message
+ 		# => Cost 15.0 exceeds maximum supported cost 10.0
+ 	end
+ end
+ ```
+
+ ### Cost + Timeout Combinations
+
+ When using cost-based operations with timing strategies, be aware that high-cost operations can be starved by continuous low-cost operations. Use {ruby Async::Limiter::Timing::Ordered} to enforce FIFO ordering if fairness is important:
+
+ ```ruby
+ # Default behavior - potential starvation:
+ timing = Async::Limiter::Timing::LeakyBucket.new(2.0, 10.0)
+ limiter = Async::Limiter::Limited.new(100, timing: timing)
+
+ # High-cost operation might be starved by many small operations:
+ result = limiter.acquire(timeout: 30.0, cost: 8.0) do |acquired|
+ 	expensive_machine_learning_inference()
+ end
+
+ # With FIFO ordering - prevents starvation:
+ ordered_timing = Async::Limiter::Timing::Ordered.new(timing)
+ fair_limiter = Async::Limiter::Limited.new(100, timing: ordered_timing)
+
+ # High-cost operation is guaranteed to execute in arrival order:
+ result = fair_limiter.acquire(timeout: 30.0, cost: 8.0) do |acquired|
+ 	expensive_machine_learning_inference()
+ end
+ ```