async-limiter 1.5.4 → 2.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (38)
  1. checksums.yaml +4 -4
  2. checksums.yaml.gz.sig +0 -0
  3. data/context/generic-limiter.md +167 -0
  4. data/context/getting-started.md +226 -0
  5. data/context/index.yaml +41 -0
  6. data/context/limited-limiter.md +184 -0
  7. data/context/queued-limiter.md +109 -0
  8. data/context/timing-strategies.md +666 -0
  9. data/context/token-usage.md +85 -0
  10. data/lib/async/limiter/generic.rb +160 -0
  11. data/lib/async/limiter/limited.rb +103 -0
  12. data/lib/async/limiter/queued.rb +85 -0
  13. data/lib/async/limiter/timing/burst.rb +153 -0
  14. data/lib/async/limiter/timing/fixed_window.rb +42 -0
  15. data/lib/async/limiter/timing/leaky_bucket.rb +146 -0
  16. data/lib/async/limiter/timing/none.rb +56 -0
  17. data/lib/async/limiter/timing/ordered.rb +58 -0
  18. data/lib/async/limiter/timing/sliding_window.rb +152 -0
  19. data/lib/async/limiter/token.rb +102 -0
  20. data/lib/async/limiter/version.rb +10 -3
  21. data/lib/async/limiter.rb +21 -7
  22. data/lib/metrics/provider/async/limiter/generic.rb +74 -0
  23. data/lib/metrics/provider/async/limiter.rb +7 -0
  24. data/lib/traces/provider/async/limiter/generic.rb +41 -0
  25. data/lib/traces/provider/async/limiter.rb +7 -0
  26. data/license.md +25 -0
  27. data/readme.md +45 -0
  28. data/releases.md +50 -0
  29. data.tar.gz.sig +0 -0
  30. metadata +68 -83
  31. metadata.gz.sig +0 -0
  32. data/lib/async/limiter/concurrent.rb +0 -101
  33. data/lib/async/limiter/constants.rb +0 -6
  34. data/lib/async/limiter/unlimited.rb +0 -53
  35. data/lib/async/limiter/window/continuous.rb +0 -21
  36. data/lib/async/limiter/window/fixed.rb +0 -21
  37. data/lib/async/limiter/window/sliding.rb +0 -21
  38. data/lib/async/limiter/window.rb +0 -296
@@ -0,0 +1,666 @@

# Timing Strategies

This guide explains how to use timing strategies, which provide rate limiting and timing constraints that can be combined with any limiter. Timing strategies control *when* operations can execute, while limiters control *how many* can execute concurrently.

## Available Strategies

- **{ruby Async::Limiter::Timing::None}** - No timing constraints (default)
- **{ruby Async::Limiter::Timing::SlidingWindow}** - Continuous rolling time windows
- **{ruby Async::Limiter::Timing::FixedWindow}** - Discrete time boundaries
- **{ruby Async::Limiter::Timing::LeakyBucket}** - Token bucket with automatic leaking

## None Strategy

The default strategy imposes no timing constraints:

```ruby
require "async"
require "async/limiter"

# Default - no timing constraints
limiter = Async::Limiter::Limited.new(5) # Only the concurrency limit applies

# Explicit None strategy
timing = Async::Limiter::Timing::None.new
limiter = Async::Limiter::Limited.new(5, timing: timing)

# The first 5 tasks start immediately; the rest wait for a free slot
# (limited by concurrency only):
10.times do |i|
	limiter.async do |task|
		puts "Task #{i} started at #{Time.now}"
		task.sleep 1
	end
end
```

## Sliding Window Strategy

Provides smooth rate limiting with continuous rolling time windows:

### Basic Usage

```ruby
# Allow 3 operations within any 1-second sliding window:
timing = Async::Limiter::Timing::SlidingWindow.new(
	1.0, # 1-second window
	Async::Limiter::Timing::Burst::Greedy, # Burst behavior
	3 # 3 operations per window
)

limiter = Async::Limiter::Limited.new(10, timing: timing)

# First 3 operations execute immediately;
# subsequent operations are rate limited to maintain 3/second:
10.times do |i|
	limiter.async do |task|
		puts "Operation #{i} at #{Time.now}"
		task.sleep 0.1
	end
end
```

### Burst Strategies

Different burst behaviors affect how operations are scheduled:

```ruby
# Greedy: Allow immediate bursts up to the limit
greedy_timing = Async::Limiter::Timing::SlidingWindow.new(
	2.0, # 2-second window
	Async::Limiter::Timing::Burst::Greedy, # Allow bursts
	6 # 6 operations per 2 seconds
)

# Conservative: Spread operations evenly over time
conservative_timing = Async::Limiter::Timing::SlidingWindow.new(
	2.0, # 2-second window
	Async::Limiter::Timing::Burst::Conservative, # Even distribution
	6 # 6 operations per 2 seconds
)

# Compare behaviors
puts "=== Greedy Strategy ==="
greedy_limiter = Async::Limiter::Limited.new(10, timing: greedy_timing)

10.times do |i|
	greedy_limiter.async do |task|
		puts "Greedy #{i} at #{Time.now}"
	end
end

sleep(3) # Wait for completion

puts "=== Conservative Strategy ==="
conservative_limiter = Async::Limiter::Limited.new(10, timing: conservative_timing)

10.times do |i|
	conservative_limiter.async do |task|
		puts "Conservative #{i} at #{Time.now}"
	end
end
```

### Cost-Based Rate Limiting

Operations can consume different amounts of the rate limit:

```ruby
timing = Async::Limiter::Timing::SlidingWindow.new(
	1.0, # 1-second window
	Async::Limiter::Timing::Burst::Greedy,
	10.0 # 10 units per second
)

limiter = Async::Limiter::Limited.new(20, timing: timing)

Async do
	# Light operations (0.5 units each):
	5.times do |i|
		limiter.acquire(cost: 0.5) do
			puts "Light operation #{i} at #{Time.now}"
		end
	end
	
	# Heavy operations (3.0 units each):
	3.times do |i|
		limiter.acquire(cost: 3.0) do
			puts "Heavy operation #{i} at #{Time.now}"
		end
	end
	
	# Total: 5 * 0.5 + 3 * 3.0 = 11.5 units.
	# Will be rate limited to 10 units/second.
end
```

## Fixed Window Strategy

Provides rate limiting with discrete time boundaries:

### Basic Usage

```ruby
# Allow 5 operations per 2-second window with fixed boundaries:
timing = Async::Limiter::Timing::FixedWindow.new(
	2.0, # 2-second windows
	Async::Limiter::Timing::Burst::Greedy, # Allow bursting within window
	5 # 5 operations per window
)

limiter = Async::Limiter::Limited.new(10, timing: timing)

# Operations are grouped into discrete 2-second windows:
15.times do |i|
	limiter.async do |task|
		puts "Operation #{i} at #{Time.now}"
		task.sleep 0.1
	end
end

# Output shows operations grouped in batches of 5, every 2 seconds.
```

### Window Boundary Behavior

```ruby
# Demonstrate window boundaries:
timing = Async::Limiter::Timing::FixedWindow.new(
	1.0, # 1-second windows
	Async::Limiter::Timing::Burst::Greedy,
	3 # 3 operations per window
)

limiter = Async::Limiter::Limited.new(10, timing: timing)

start_time = Time.now

10.times do |i|
	limiter.async do |task|
		elapsed = Time.now - start_time
		puts "Operation #{i} at #{elapsed.round(2)}s (window #{elapsed.to_i})"
	end
end

# Operations are clearly grouped by 1-second boundaries:
# Window 0: operations 0, 1, 2 (0.00s - 0.99s)
# Window 1: operations 3, 4, 5 (1.00s - 1.99s)
# Window 2: operations 6, 7, 8 (2.00s - 2.99s)
# etc.
```

### Burst vs Conservative in Fixed Windows

```ruby
# Greedy allows all operations immediately within each window:
greedy_timing = Async::Limiter::Timing::FixedWindow.new(
	2.0, Async::Limiter::Timing::Burst::Greedy, 4
)

# Conservative spreads operations evenly within each window:
conservative_timing = Async::Limiter::Timing::FixedWindow.new(
	2.0, Async::Limiter::Timing::Burst::Conservative, 4
)

puts "=== Greedy Fixed Window ==="
greedy_limiter = Async::Limiter::Limited.new(10, timing: greedy_timing)

8.times do |i|
	greedy_limiter.async do |task|
		puts "Greedy #{i} at #{Time.now}"
	end
end

sleep(5) # Wait for completion

puts "=== Conservative Fixed Window ==="
conservative_limiter = Async::Limiter::Limited.new(10, timing: conservative_timing)

8.times do |i|
	conservative_limiter.async do |task|
		puts "Conservative #{i} at #{Time.now}"
	end
end

# Greedy: 4 operations immediately, then wait 2s, then 4 more immediately.
# Conservative: Operations spread evenly within each 2-second window.
```

## Leaky Bucket Strategy

Provides smooth token-based rate limiting with automatic token replenishment:

### Basic Usage

```ruby
# 5 tokens per second leak rate, 20 token capacity:
timing = Async::Limiter::Timing::LeakyBucket.new(
	5.0, # 5 tokens/second leak rate
	20.0 # 20 token capacity
)

limiter = Async::Limiter::Limited.new(30, timing: timing)

# Bucket starts empty and fills as operations are attempted:
30.times do |i|
	limiter.async do |task|
		puts "Operation #{i} at #{Time.now}"
		task.sleep 0.1
	end
end

# First ~20 operations may execute quickly (burst capacity),
# then operations are limited to 5/second (leak rate).
```

### Initial Token Level

```ruby
# Start with the bucket partially filled:
timing = Async::Limiter::Timing::LeakyBucket.new(
	2.0, # 2 tokens/second leak rate
	10.0, # 10 token capacity
	initial_level: 8.0 # Start with 8 tokens available
)

limiter = Async::Limiter::Limited.new(20, timing: timing)

# First 8 operations execute immediately (using initial tokens),
# then rate limited to 2/second:
15.times do |i|
	limiter.async do |task|
		puts "Operation #{i} at #{Time.now}"
	end
end
```

### Cost-Based Token Consumption

```ruby
timing = Async::Limiter::Timing::LeakyBucket.new(
	10.0, # 10 tokens/second leak rate
	50.0 # 50 token capacity
)

limiter = Async::Limiter::Limited.new(100, timing: timing)

Async do
	# Cheap operations (0.5 tokens each):
	10.times do |i|
		limiter.acquire(cost: 0.5) do
			puts "Cheap operation #{i} at #{Time.now}"
		end
	end
	
	# Expensive operations (5.0 tokens each):
	5.times do |i|
		limiter.acquire(cost: 5.0) do
			puts "Expensive operation #{i} at #{Time.now}"
		end
	end
	
	# Mixed costs will be rate limited based on total token consumption.
end
```

### Token Bucket Dynamics

```ruby
# Demonstrate token accumulation and depletion:
timing = Async::Limiter::Timing::LeakyBucket.new(
	3.0, # 3 tokens/second
	15.0 # 15 token capacity
)

limiter = Async::Limiter::Limited.new(50, timing: timing)

# Phase 1: Burst consumption (depletes bucket)
puts "=== Phase 1: Burst consumption ==="
20.times do |i|
	limiter.async do |task|
		puts "Burst #{i} at #{Time.now}"
	end
end

# Wait for the burst to complete and tokens to accumulate:
sleep(10)

# Phase 2: Another burst (uses accumulated tokens)
puts "=== Phase 2: After token accumulation ==="
10.times do |i|
	limiter.async do |task|
		puts "Second burst #{i} at #{Time.now}"
	end
end
```

## Combining Strategies with Different Limiters

### Generic Limiter + Timing

Pure rate limiting without concurrency constraints:

```ruby
# Unlimited concurrency, but rate limited:
timing = Async::Limiter::Timing::SlidingWindow.new(
	1.0, Async::Limiter::Timing::Burst::Greedy, 5
)

limiter = Async::Limiter::Generic.new(timing: timing)

# All 20 tasks are submitted immediately, but the timing strategy controls the execution rate:
20.times do |i|
	limiter.async do |task|
		puts "Task #{i} at #{Time.now}"
		task.sleep 0.1
	end
end
```

### Limited Limiter + Timing

Both concurrency and rate limiting:

```ruby
# Max 3 concurrent, and max 2 per second:
timing = Async::Limiter::Timing::LeakyBucket.new(2.0, 10.0)
limiter = Async::Limiter::Limited.new(3, timing: timing)

# Operations are constrained by both limits:
10.times do |i|
	limiter.async do |task|
		puts "Task #{i} started at #{Time.now}"
		task.sleep 2 # Longer task to show the concurrency limit
		puts "Task #{i} finished at #{Time.now}"
	end
end

# Shows the interplay between the concurrency (3 max) and rate (2/second) limits.
```

### Queued Limiter + Timing

Priority-based resource allocation with rate limiting:

```ruby
require "async/queue"

# Create resource queue:
queue = Async::Queue.new
3.times { |i| queue.push("worker_#{i}") }

# Add timing constraint:
timing = Async::Limiter::Timing::FixedWindow.new(
	2.0, Async::Limiter::Timing::Burst::Greedy, 4
)

limiter = Async::Limiter::Queued.new(queue, timing: timing)

# High and low priority tasks with timing constraints:
tasks = []

# Low priority background tasks:
5.times do |i|
	tasks << limiter.async do |task|
		limiter.acquire(priority: 1) do |worker|
			puts "Background task #{i} using #{worker} at #{Time.now}"
			task.sleep 1
		end
	end
end

# High priority user tasks:
3.times do |i|
	tasks << limiter.async do |task|
		limiter.acquire(priority: 10) do |worker|
			puts "User task #{i} using #{worker} at #{Time.now}"
			task.sleep 1
		end
	end
end

tasks.each(&:wait)
```

## Real-World Examples

### API Rate Limiting

```ruby
class RateLimitedAPIClient
	def initialize(requests_per_second: 10, burst_capacity: 50)
		# Leaky bucket allows bursts up to capacity, then steady rate:
		timing = Async::Limiter::Timing::LeakyBucket.new(
			requests_per_second.to_f,
			burst_capacity.to_f
		)
		
		@limiter = Async::Limiter::Generic.new(timing: timing)
	end
	
	def make_request(endpoint, cost: 1.0)
		@limiter.acquire(cost: cost) do
			# Make actual HTTP request:
			puts "Making request to #{endpoint} at #{Time.now}"
			simulate_http_request(endpoint)
		end
	end
	
	def make_expensive_request(endpoint)
		# Heavy requests consume more rate limit:
		make_request(endpoint, cost: 5.0)
	end
	
	private
	
	def simulate_http_request(endpoint)
		sleep(0.1) # Simulate network delay
		"Response from #{endpoint}"
	end
end

# Usage
client = RateLimitedAPIClient.new(requests_per_second: 5, burst_capacity: 20)

Async do
	# Burst of requests (uses burst capacity):
	10.times do |i|
		client.make_request("/api/data/#{i}")
	end
	
	# Mix of normal and expensive requests:
	5.times do |i|
		if i.even?
			client.make_request("/api/normal/#{i}")
		else
			client.make_expensive_request("/api/heavy/#{i}")
		end
	end
end
```

### Background Job Processing

```ruby
class JobProcessor
	def initialize
		# Process jobs in batches every 30 seconds, up to 50 jobs per batch:
		timing = Async::Limiter::Timing::FixedWindow.new(
			30.0, # 30-second windows
			Async::Limiter::Timing::Burst::Greedy,
			50 # 50 jobs per window
		)
		
		@limiter = Async::Limiter::Limited.new(10, timing: timing) # Max 10 concurrent
	end
	
	def process_job(job)
		cost = calculate_job_cost(job)
		
		@limiter.acquire(cost: cost) do
			puts "Processing #{job.type} job #{job.id} (cost: #{cost}) at #{Time.now}"
			
			case job.type
			when :quick
				sleep(0.5)
			when :normal
				sleep(2.0)
			when :heavy
				sleep(5.0)
			end
			
			puts "Completed job #{job.id}"
		end
	end
	
	private
	
	def calculate_job_cost(job)
		case job.type
		when :quick then 0.5
		when :normal then 1.0
		when :heavy then 3.0
		end
	end
end

# Mock job structure:
Job = Struct.new(:id, :type)

# Usage
processor = JobProcessor.new

jobs = [
	Job.new(1, :quick), Job.new(2, :normal), Job.new(3, :heavy),
	Job.new(4, :quick), Job.new(5, :normal), Job.new(6, :heavy),
	Job.new(7, :quick), Job.new(8, :normal), Job.new(9, :heavy),
]

Async do
	jobs.each do |job|
		processor.process_job(job)
	end
end

# Jobs are processed in batches based on the 30-second fixed window.
# Heavy jobs consume more of the batch quota due to their higher cost.
```

### Adaptive Rate Limiting

```ruby
class AdaptiveRateLimiter
	def initialize
		@current_rate = 10.0
		@current_capacity = 50.0
		@timing = create_timing
		@limiter = Async::Limiter::Generic.new(timing: @timing)
		@success_count = 0
		@error_count = 0
	end
	
	def make_request(endpoint)
		@limiter.acquire do
			begin
				result = simulate_request(endpoint)
				@success_count += 1
				adjust_rate_on_success
				result
			rescue => error
				@error_count += 1
				adjust_rate_on_error
				raise
			end
		end
	end
	
	private
	
	def create_timing
		Async::Limiter::Timing::LeakyBucket.new(@current_rate, @current_capacity)
	end
	
	def adjust_rate_on_success
		# Increase rate gradually on success:
		if @success_count % 10 == 0 && @error_count == 0
			@current_rate = [@current_rate * 1.1, 50.0].min
			@current_capacity = [@current_capacity * 1.1, 200.0].min
			update_timing
			puts "Rate increased to #{@current_rate}/sec (capacity: #{@current_capacity})"
		end
	end
	
	def adjust_rate_on_error
		# Decrease rate on errors:
		@current_rate = [@current_rate * 0.8, 1.0].max
		@current_capacity = [@current_capacity * 0.8, 10.0].max
		update_timing
		puts "Rate decreased to #{@current_rate}/sec (capacity: #{@current_capacity})"
		
		# Reset counters:
		@success_count = 0
		@error_count = 0
	end
	
	def update_timing
		new_timing = create_timing
		@limiter.instance_variable_set(:@timing, new_timing)
	end
	
	def simulate_request(endpoint)
		sleep(0.1)
		
		# Simulate occasional errors to trigger rate adjustment:
		if rand < 0.1 # 10% error rate
			raise "API Error: Rate limit exceeded"
		end
		
		"Success: #{endpoint}"
	end
end

# Usage
limiter = AdaptiveRateLimiter.new

Async do
	100.times do |i|
		begin
			result = limiter.make_request("/api/endpoint/#{i}")
			puts "Request #{i}: #{result}"
		rescue => error
			puts "Request #{i} failed: #{error.message}"
		end
		
		sleep(0.05) # Small delay between requests
	end
end

# Shows adaptive behavior: the rate increases on success and decreases on errors.
```

## Best Practices

### Choosing the Right Strategy

- **None**: Use when you only need concurrency control without rate limiting
- **SlidingWindow**: Best for smooth, continuous rate limiting
- **FixedWindow**: Good for batch processing or when you want discrete time periods
- **LeakyBucket**: Ideal for APIs with burst tolerance and smooth long-term rates

### Configuration Guidelines

- **Window size**: Smaller windows provide more responsive rate limiting but may be less efficient
- **Burst strategy**: Use Greedy for better user experience, Conservative for more predictable load
- **Capacity**: Set burst capacity based on your system's ability to handle temporary load spikes
- **Leak rate**: Should match your sustainable processing rate
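
To make the capacity and leak-rate guidance concrete, here is a back-of-envelope sizing sketch. It is plain Ruby arithmetic (no limiter API involved), and the numbers are illustrative assumptions, not recommendations:

```ruby
# Back-of-envelope leaky bucket sizing (illustrative numbers only):
leak_rate = 5.0  # sustainable operations per second
capacity = 20.0  # burst tolerance, in operations

# After a full burst, idle time needed to regain the entire burst headroom:
recovery_time = capacity / leak_rate

puts "Steady rate: #{leak_rate} ops/sec"
puts "Largest burst: #{capacity.to_i} ops"
puts "Full burst headroom recovers after #{recovery_time}s of idle time"
```

The useful ratio is `capacity / leak_rate`: it tells you how long a fully spent burst budget takes to come back, which should be shorter than the interval between expected load spikes.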

### Performance Considerations

- **Memory usage**: Timing strategies maintain internal state; size depends on configuration
- **CPU overhead**: More complex strategies (SlidingWindow) have higher computational cost
- **Accuracy**: Shorter time windows provide more accurate rate limiting but use more resources

### Error Handling

- **Cost validation**: Always handle `ArgumentError` when costs exceed capacity
- **Timeout handling**: Set appropriate timeouts based on your timing strategy's behavior
- **Graceful degradation**: Have fallback strategies when rate limits are exceeded
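
The cost-validation point boils down to a rescue pattern around acquisition. The sketch below uses a minimal stand-in (`acquire_with_cost` is a hypothetical helper, not part of the gem) so the pattern is runnable on its own; with a real limiter, the same `rescue ArgumentError` would wrap the `acquire(cost: ...)` call:

```ruby
# Hypothetical stand-in for a timing strategy with a fixed capacity;
# a real strategy performs the equivalent check internally.
CAPACITY = 10.0

def acquire_with_cost(cost)
	raise ArgumentError, "cost #{cost} exceeds capacity #{CAPACITY}" if cost > CAPACITY
	yield
end

def guarded_operation(cost)
	acquire_with_cost(cost) { "ok (cost=#{cost})" }
rescue ArgumentError => error
	# Graceful degradation: report and fall back instead of crashing.
	"rejected: #{error.message}"
end

puts guarded_operation(2.0)  # within capacity
puts guarded_operation(50.0) # exceeds capacity, rescued
```

Validating costs up front (or rescuing at the call site, as here) keeps an oversized request from taking down the whole task tree.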

Timing strategies provide powerful tools for controlling the rate and timing of operations in your async applications. Choose the strategy that best matches your specific rate limiting requirements and system constraints.