@julr/tenace 1.0.0-next.0 → 1.0.0-next.1

package/README.md CHANGED
@@ -1,6 +1,6 @@
  # Tenace
 
- A fluent resilience library for Node.js. Make any async operation resilient with timeout, retry, circuit breaker, bulkhead, and more.
+ Resilience library for Node.js. Timeout, retry, circuit breaker, bulkhead, caching, rate limiting, and more.
 
  ## Installation
 
@@ -30,10 +30,6 @@ import type { CacheAdapter, RateLimiterAdapter, LockAdapter } from '@julr/tenace
  ```ts
  import { Tenace } from '@julr/tenace'
 
- // Without Tenace: hope for the best
- const user = await fetch('/api/users/1')
-
- // With Tenace: be resilient
  const user = await Tenace.call(() => fetch('/api/users/1'))
  .withTimeout('5s')
  .withRetry({ times: 3 })
@@ -41,70 +37,37 @@ const user = await Tenace.call(() => fetch('/api/users/1'))
  .execute()
  ```
 
- ## Understanding the Pipeline
-
- Tenace uses a **pipeline** system for policies. **The order you add policies = the order errors flow through them.**
-
- ### The Golden Rule
-
- > **First added = Innermost layer (handles errors first)**
- > **Last added = Outermost layer (catches errors last)**
+ ## Pipeline
 
- Think of it like a pipeline: `fn → timeout → retry → fallback`
-
- ```
- ┌─────────────────────────────────────────────────────────────┐
- │ .withFallback() ← 4th: Catches ALL errors (outermost) │
- │ ┌───────────────────────────────────────────────────────┐ │
- │ │ .withRetry() ← 3rd: Retries timeout errors │ │
- │ │ ┌─────────────────────────────────────────────────┐ │ │
- │ │ │ .withTimeout() ← 2nd: Timeout per attempt │ │ │
- │ │ │ ┌───────────────────────────────────────────┐ │ │ │
- │ │ │ │ .withCircuitBreaker() ← 1st: Tracks │ │ │ │
- │ │ │ │ ┌─────────────────────────────────────┐ │ │ │ │
- │ │ │ │ │ │ │ │ │ │
- │ │ │ │ │ YOUR FUNCTION │ │ │ │ │
- │ │ │ │ │ │ │ │ │ │
- │ │ │ │ └─────────────────────────────────────┘ │ │ │ │
- │ │ │ └───────────────────────────────────────────┘ │ │ │
- │ │ └─────────────────────────────────────────────────┘ │ │
- │ └───────────────────────────────────────────────────────┘ │
- └─────────────────────────────────────────────────────────────┘
- ```
+ The order you add policies matters. First added = innermost layer (closest to your function). Last added = outermost layer.
 
- ### Order Matters: Common Patterns
-
- #### Pattern 1: Timeout per attempt vs total timeout
+ ### Timeout per attempt vs total timeout
 
  ```ts
- // 5s timeout PER attempt
- // timeout is inner (first), retry is outer (second)
+ // 5s timeout PER attempt
  Tenace.call(fn)
  .withTimeout('5s')
  .withRetry({ times: 5 })
  .execute()
 
- // ⚠️ TOTAL timeout of 5s for ALL retries combined
- // retry is inner (first), timeout is outer (second)
+ // TOTAL timeout of 5s for ALL retries combined
  Tenace.call(fn)
  .withRetry({ times: 5 })
  .withTimeout('5s')
  .execute()
  ```
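The first-added = innermost rule behind these two variants can be illustrated outside Tenace with plain function wrappers. This is an illustrative sketch, not the library's implementation; `layer` is a hypothetical helper that only records nesting order:

```typescript
type Fn = () => string

// Record the order in which hypothetical policy layers run.
const ran: string[] = []
const layer = (name: string) => (fn: Fn): Fn => () => {
  ran.push(name)
  return fn()
}

// Mirror the fluent API: each .withX() wraps the current pipeline,
// so the first policy added ends up closest to the function.
let pipeline: Fn = () => 'result'
pipeline = layer('timeout')(pipeline) // added first → innermost
pipeline = layer('retry')(pipeline) // added last → outermost

pipeline()
// ran is ['retry', 'timeout']: retry runs first and wraps timeout,
// so each retry attempt gets its own timeout — exactly the
// "timeout per attempt" variant above
```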
 
- #### Pattern 2: Where to place fallback
+ ### Where to place fallback
 
  ```ts
- // Fallback catches EVERYTHING (recommended)
- // retry first (inner) → fallback second (outer)
+ // Fallback catches everything (recommended)
  Tenace.call(fn)
  .withRetry({ times: 3 })
  .withTimeout('5s')
  .withFallback(() => defaultValue)
  .execute()
 
- // ⚠️ Fallback only catches fn errors, not retry/timeout
- // fallback is inner, retry is outer
+ // Fallback only catches fn errors, not retry/timeout errors
  Tenace.call(fn)
  .withFallback(() => defaultValue)
  .withRetry({ times: 3 })
@@ -112,56 +75,41 @@ Tenace.call(fn)
  .execute()
  ```
 
- #### Pattern 3: Circuit breaker placement
+ ### Circuit breaker placement
 
  ```ts
- // ✅ RECOMMENDED: Circuit breaker inside retry
- // CB first (inner) → retry second (outer)
- // Each retry attempt is tracked separately
+ // Circuit breaker inside retry = each attempt is tracked separately
  Tenace.call(fn)
  .withCircuitBreaker({ failureThreshold: 5, halfOpenAfter: '30s' })
  .withRetry({ times: 3 })
  .execute()
 
- // ⚠️ Circuit breaker outside retry
- // retry is inner, CB is outer
- // Only the final result (after all retries) is tracked
+ // Circuit breaker outside retry = only the final result is tracked
  Tenace.call(fn)
  .withRetry({ times: 3 })
  .withCircuitBreaker({ failureThreshold: 5, halfOpenAfter: '30s' })
  .execute()
  ```
 
- ### Recommended Order
-
- For most use cases (reading order = error handling order):
+ ### Recommended order
 
  ```ts
  Tenace.call(fn)
  .withCircuitBreaker(...) // 1. Failure tracking (innermost)
  .withTimeout('5s') // 2. Timeout per attempt
  .withRetry({ times: 3 }) // 3. Retry on timeout/error
- .withFallback(() => defaultValue) // 4. Catch-all safety net (outermost)
+ .withFallback(() => defaultValue) // 4. Catch-all (outermost)
  .execute()
  ```
 
- ### Errors That Stop Retries
-
- The retry policy will **NOT** retry these errors (fail-fast behavior):
+ ### Errors that stop retries
 
- | Error | Reason |
- | ------------------- | --------------------------------------------- |
- | `CircuitOpenError` | Circuit breaker is open, no point retrying |
- | `BulkheadFullError` | System is overloaded, retrying makes it worse |
-
- All other errors (including `TimeoutError`) **will** be retried.
+ `CircuitOpenError` and `BulkheadFullError` are never retried. All other errors (including `TimeoutError`) will be retried.
 
  ## Policies
 
  ### Timeout
 
- Set a maximum execution time. Supports two strategies:
-
  ```ts
  // Cooperative (default): Passes AbortSignal, function should respect it
  Tenace.call(({ signal }) => fetch('/api', { signal }))
@@ -176,10 +124,7 @@ Tenace.call(() => slowOperation())
 
  ### Retry
 
- Retry failed operations with configurable backoff:
-
  ```ts
- // Simple retry
  Tenace.call(fn)
  .withRetry({ times: 3 })
  .execute()
@@ -218,9 +163,7 @@ Tenace.call(fn)
  .execute()
  ```
 
- #### Backoff Presets
-
- Use the `backoff` helper for common backoff patterns:
+ #### Backoff presets
 
  ```ts
  import { Tenace, backoff } from '@julr/tenace'
@@ -248,20 +191,16 @@ Tenace.call(fn)
 
  Available presets:
 
- | Preset | Description | Options |
- | ------------------------------------- | --------------------------- | ---------------------------------------- |
- | `backoff.constant(delay)` | Fixed delay between retries | `Duration` |
- | `backoff.exponential(opts)` | Doubles each time (default) | `{ initial?, max?, exponent? }` |
- | `backoff.exponentialWithJitter(opts)` | Exponential + randomization | `{ initial?, max?, exponent?, jitter? }` |
- | `backoff.linear(opts)` | Increases by fixed step | `{ initial?, step?, max? }` |
+ - `backoff.constant(delay)` - Fixed delay between retries
+ - `backoff.exponential({ initial?, max?, exponent? })` - Doubles each time
+ - `backoff.exponentialWithJitter({ initial?, max?, exponent?, jitter? })` - Exponential + randomization
+ - `backoff.linear({ initial?, step?, max? })` - Increases by fixed step
 
- Jitter strategies for `exponentialWithJitter`:
- - `'full'` (default): Random delay between 0 and calculated delay
- - `'decorrelated'`: AWS-style decorrelated jitter ([recommended](https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/))
+ Jitter strategies: `'full'` (default) or `'decorrelated'` (AWS-style)
 
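The presets above boil down to simple delay formulas. A sketch of what `exponential` plus full jitter compute, with option names mirroring the presets; the default values (100ms initial, 30s max, exponent 2) are assumptions, not Tenace's documented defaults:

```typescript
// Exponential backoff: initial * exponent^attempt, capped at max.
// Defaults here are illustrative assumptions.
function exponentialDelay(
  attempt: number,
  opts: { initial?: number; max?: number; exponent?: number } = {},
): number {
  const { initial = 100, max = 30_000, exponent = 2 } = opts
  return Math.min(initial * exponent ** attempt, max)
}

// 'full' jitter: uniform random delay in [0, computed delay).
function fullJitter(delay: number, random: () => number = Math.random): number {
  return random() * delay
}

exponentialDelay(0) // 100
exponentialDelay(3) // 800
exponentialDelay(20) // 30000 (capped)
```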
  ### Circuit Breaker
 
- Stop calling a failing service. After N failures, the circuit "opens" and all calls fail immediately with `CircuitOpenError`.
+ After N consecutive failures, the circuit opens and all calls fail immediately with `CircuitOpenError`.
 
  ```ts
  const policy = Tenace.policy()
@@ -288,20 +227,17 @@ handle?.dispose() // Release isolation
 
  ### Bulkhead
 
- Limit concurrent executions to protect downstream services:
+ Limit concurrent executions. Throws `BulkheadFullError` if the queue is full.
 
  ```ts
  const policy = Tenace.policy()
  .withBulkhead(10, 100) // max 10 concurrent, 100 in queue
 
  await policy.call(() => heavyOperation()).execute()
- // Throws BulkheadFullError if queue is also full
  ```
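A bulkhead is essentially a counting semaphore with a bounded queue. A minimal sketch of the admission logic — illustrative only, not Tenace's implementation:

```typescript
// Minimal bulkhead admission state: run while execution slots remain,
// queue while queue slots remain, reject once both are full.
class Bulkhead {
  #active = 0
  #queued = 0

  constructor(private limit: number, private queueSize: number) {}

  tryAdmit(): 'run' | 'queue' | 'reject' {
    if (this.#active < this.limit) {
      this.#active++
      return 'run'
    }
    if (this.#queued < this.queueSize) {
      this.#queued++
      return 'queue'
    }
    return 'reject' // Tenace would throw BulkheadFullError here
  }

  release(): void {
    // A finished task either promotes a queued task or frees a slot.
    if (this.#queued > 0) this.#queued--
    else this.#active--
  }
}
```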
 
  ### Fallback
 
- Return a default value when everything fails:
-
  ```ts
  const user = await Tenace.call(() => fetchUser(id))
  .withFallback(() => ({ id, name: 'Unknown', cached: true }))
@@ -317,38 +253,32 @@ const user = await Tenace.call(() => fetchUser(id))
 
  ## Adapters
 
- Cache, Rate Limiter, and Distributed Lock are **integrated into the pipeline**. Order matters - just like other policies!
+ Cache, Rate Limiter, and Distributed Lock follow the same pipeline rules.
 
  ### Cache
 
- Cache successful results to avoid redundant calls:
-
  ```ts
- // Basic caching
  const user = await Tenace.call(() => fetchUser(id))
- .withCache({ key: `user:${id}`, ttl: 60_000 }) // 1 minute TTL
+ .withCache({ key: `user:${id}`, ttl: 60_000 })
  .execute()
  ```
 
- #### Order patterns for cache
+ #### Order patterns
 
  ```ts
- // Fallback BEFORE cache = fallback values ARE cached
- // fallback (inner) → cache (outer) = cache stores fallback result
+ // Fallback BEFORE cache = fallback values are cached
  Tenace.call(fn)
  .withFallback(() => defaultValue)
  .withCache({ key: 'x', ttl: 60_000 })
  .execute()
 
- // ⚠️ Cache BEFORE fallback = fallback values NOT cached
- // cache (inner) → fallback (outer) = cache doesn't see fallback result
+ // Cache BEFORE fallback = fallback values not cached
  Tenace.call(fn)
  .withCache({ key: 'x', ttl: 60_000 })
  .withFallback(() => defaultValue)
  .execute()
 
- // RateLimit BEFORE cache = no token consumed on cache hit
- // rateLimit (inner) → cache (outer) = cache checks first
+ // RateLimit BEFORE cache = no token consumed on cache hit
  Tenace.call(fn)
  .withRateLimit({ key: 'api', maxCalls: 100, windowMs: 60_000 })
  .withCache({ key: 'x', ttl: 60_000 })
@@ -358,13 +288,13 @@ Tenace.call(fn)
  #### Graceful degradation
 
  ```ts
- // If Redis is down, continue without cache (don't fail the request)
+ // If the adapter is down, continue without cache
  await Tenace.call(fn)
  .withCache({ key: 'x', ttl: 60_000, optional: true })
  .execute()
  ```
 
- **Built-in adapter**: `MemoryCacheAdapter` (uses Bentocache under the hood)
+ **Built-in adapter**: `MemoryCacheAdapter` (uses Bentocache)
 
  ```ts
  import { MemoryCacheAdapter } from '@julr/tenace/adapters'
@@ -382,7 +312,7 @@ await Tenace.call(fn)
  .execute()
  ```
 
- **Custom adapter**: Implement `CacheAdapter` interface
+ **Custom adapter**: Implement `CacheAdapter`
 
  ```ts
  import type { CacheAdapter } from '@julr/tenace/adapters'
@@ -397,10 +327,9 @@ class RedisCacheAdapter implements CacheAdapter {
 
  ### Rate Limiter
 
- Limit call frequency to protect APIs or respect external rate limits:
+ Throws `RateLimitError` when limit is exceeded (includes `retryAfterMs`).
 
  ```ts
- // 100 calls per minute
  const result = await Tenace.call(() => callExternalApi())
  .withRateLimit({
  key: 'external-api',
@@ -410,28 +339,51 @@ const result = await Tenace.call(() => callExternalApi())
  .execute()
  ```
 
- Throws `RateLimitError` when limit is exceeded. The error includes `retryAfterMs`.
+ #### Queue mode
+
+ By default, requests exceeding the rate limit are rejected immediately. Enable queue mode to wait for tokens to become available instead:
+
+ ```ts
+ // Requests wait in queue instead of being rejected
+ const result = await Tenace.call(() => callExternalApi())
+ .withRateLimit({
+ key: 'external-api',
+ maxCalls: 10,
+ windowMs: 1000,
+ queue: {}, // Enable queue with default settings
+ })
+ .execute()
+
+ // Limit queue size to prevent unbounded memory usage
+ const result = await Tenace.call(() => callExternalApi())
+ .withRateLimit({
+ key: 'external-api',
+ maxCalls: 10,
+ windowMs: 1000,
+ queue: { maxSize: 100 }, // Max 100 requests in queue
+ })
+ .execute()
+ ```
+
+ Throws `RateLimitQueueFullError` when the queue is full.
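Conceptually, queue mode sits on top of an ordinary rate-limit check. A minimal fixed-window sketch of the acquire decision — illustrative only; Tenace's built-in adapter delegates to rate-limiter-flexible, and names here are not the library's API:

```typescript
interface WindowState { windowStart: number; count: number }
interface WindowConfig { maxCalls: number; windowMs: number }

// Fixed-window check: reset the counter when the window rolls over,
// otherwise either consume a call or report how long to wait.
function acquire(state: WindowState, config: WindowConfig, now: number) {
  if (now - state.windowStart >= config.windowMs) {
    state.windowStart = now
    state.count = 0
  }
  if (state.count < config.maxCalls) {
    state.count++
    return { allowed: true as const, remaining: config.maxCalls - state.count }
  }
  // In queue mode this wait would be awaited; otherwise it becomes
  // a RateLimitError carrying retryAfterMs.
  return {
    allowed: false as const,
    retryAfterMs: config.windowMs - (now - state.windowStart),
  }
}
```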
 
- #### Order patterns for rate limit
+ #### Order patterns
 
  ```ts
- // RateLimit BEFORE fallback = fallback catches RateLimitError
- // rateLimit (inner) → fallback (outer)
+ // RateLimit BEFORE fallback = fallback catches RateLimitError
  Tenace.call(fn)
  .withRateLimit({ key: 'api', maxCalls: 100, windowMs: 60_000 })
  .withFallback(() => defaultValue)
  .execute()
 
- // ⚠️ Fallback BEFORE rateLimit = RateLimitError NOT caught by fallback
- // fallback (inner) → rateLimit (outer)
+ // Fallback BEFORE rateLimit = RateLimitError not caught by fallback
  Tenace.call(fn)
  .withFallback(() => defaultValue)
  .withRateLimit({ key: 'api', maxCalls: 100, windowMs: 60_000 })
  .execute()
 
- // Retry with error-based delay for rate limit handling
+ // Retry with error-based delay for rate limit handling
  await Tenace.call(fn)
- .withFallback(() => defaultValue)
  .withRetry({
  times: 3,
  delay: (attempt, error) => {
@@ -446,13 +398,13 @@ await Tenace.call(fn)
  #### Graceful degradation
 
  ```ts
- // If Redis is down, allow the call through (don't enforce rate limit)
+ // If the adapter is down, allow the call through
  await Tenace.call(fn)
  .withRateLimit({ key: 'api', maxCalls: 100, windowMs: 60_000, optional: true })
  .execute()
  ```
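The error-based delay snippet two hunks up is truncated by the hunk boundary. A self-contained sketch of what such a delay function can look like — the `retryAfterMs` field mirrors the documented `RateLimitError`, but `RateLimitLike` and the type guard are hypothetical stand-ins, not Tenace exports:

```typescript
// Stand-in shape for Tenace's RateLimitError, which carries retryAfterMs.
interface RateLimitLike { retryAfterMs: number }

const isRateLimit = (e: unknown): e is RateLimitLike =>
  typeof e === 'object' && e !== null && 'retryAfterMs' in e

// Wait as long as the limiter asked for; otherwise back off exponentially.
const delayFor = (attempt: number, error: unknown): number =>
  isRateLimit(error) ? error.retryAfterMs : 100 * 2 ** attempt

delayFor(2, new Error('boom')) // 400 (exponential fallback)
delayFor(0, { retryAfterMs: 1500 }) // 1500 (respects the limiter's hint)
```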
 
- **Built-in adapter**: `MemoryRateLimiterAdapter` (uses rate-limiter-flexible)
+ **Built-in adapter**: `MemoryRateLimiterAdapter`
 
  ```ts
  import { MemoryRateLimiterAdapter } from '@julr/tenace/adapters'
@@ -463,13 +415,14 @@ configStore.configure({
  })
  ```
 
- **Custom adapter**: Implement `RateLimiterAdapter` interface
+ **Custom adapter**: Implement `RateLimiterAdapter`
 
  ```ts
- import type { RateLimiterAdapter, RateLimitConfig, RateLimitResult, RateLimitState } from '@julr/tenace/adapters'
+ import type { RateLimiterAdapter, RateLimitConfig, RateLimitResult, RateLimitState, RateLimitQueueConfig } from '@julr/tenace/adapters'
 
  class RedisRateLimiterAdapter implements RateLimiterAdapter {
  async acquire(key: string, options: RateLimitConfig): Promise<RateLimitResult> { /* ... */ }
+ async removeTokens(options: { key: string; tokens: number; config: RateLimitConfig; queue?: RateLimitQueueConfig }): Promise<number> { /* ... */ }
  async getState(key: string): Promise<RateLimitState> { /* ... */ }
  async reset(key: string): Promise<void> { /* ... */ }
  }
@@ -477,46 +430,40 @@ class RedisRateLimiterAdapter implements RateLimiterAdapter {
 
  ### Distributed Lock
 
- Ensure only one process executes a critical section at a time (useful for distributed systems):
+ Throws `LockNotAcquiredError` if the lock cannot be acquired.
 
  ```ts
- // Process payment with distributed lock
  const result = await Tenace.call(() => processPayment(orderId))
  .withDistributedLock({
  key: `payment:${orderId}`,
- ttl: 30_000, // Lock expires after 30s (prevents deadlocks)
+ ttl: 30_000,
  })
  .execute()
  ```
 
- Throws `LockNotAcquiredError` if the lock cannot be acquired.
-
- #### Order patterns for lock
+ #### Order patterns
 
  ```ts
- // Retry BEFORE lock = lock held during ALL retries (atomicity)
- // retry (inner) → lock (outer) = lock acquired once, held during all retries
+ // Retry BEFORE lock = lock held during all retries
  Tenace.call(fn)
  .withRetry({ times: 3 })
  .withDistributedLock({ key: 'payment', ttl: 30_000 })
  .execute()
 
- // ⚠️ Lock BEFORE retry = each retry acquires its own lock
- // lock (inner) → retry (outer) = lock released between retries
+ // Lock BEFORE retry = each retry acquires its own lock
  Tenace.call(fn)
  .withDistributedLock({ key: 'payment', ttl: 10_000 })
  .withRetry({ times: 3 })
  .execute()
 
- // Lock BEFORE fallback = fallback catches LockNotAcquiredError
- // lock (inner) → fallback (outer)
+ // Lock BEFORE fallback = fallback catches LockNotAcquiredError
  Tenace.call(fn)
  .withDistributedLock({ key: 'payment', ttl: 30_000 })
  .withFallback(() => defaultValue)
  .execute()
  ```
 
- **No built-in adapter** - You must provide one (e.g., using [@verrou/core](https://github.com/Julien-R44/verrou)):
+ **No built-in adapter** - provide your own (e.g., using [@verrou/core](https://github.com/Julien-R44/verrou)):
 
  ```ts
  import { Verrou } from '@verrou/core'
@@ -536,7 +483,7 @@ class VerrouLockAdapter implements LockAdapter {
  configStore.configure({ lock: new VerrouLockAdapter() })
  ```
 
- **With retry** for lock acquisition:
+ With retry for lock acquisition:
 
  ```ts
  await Tenace.call(fn)
@@ -554,7 +501,7 @@ await Tenace.call(fn)
 
  ## Reusable Policies
 
- Create a policy once, reuse everywhere. **Circuit breaker and bulkhead states are shared across calls.**
+ Circuit breaker and bulkhead states are shared across calls.
 
  ```ts
  const apiPolicy = Tenace.policy()
@@ -562,14 +509,11 @@ const apiPolicy = Tenace.policy()
  .withRetry({ times: 3, delay: (attempt) => 100 * 2 ** attempt })
  .withCircuitBreaker({ failureThreshold: 5, halfOpenAfter: '30s' })
 
- // All calls share the same circuit breaker state!
  await apiPolicy.call(() => fetch('/api/users')).execute()
  await apiPolicy.call(() => fetch('/api/posts')).execute()
-
- // If 5 total failures occur, ALL calls will fail with CircuitOpenError
  ```
 
- ### Health Check Example
+ ### Health check example
 
  ```ts
  class UserService {
@@ -592,15 +536,13 @@ class UserService {
 
  ## Batch Operations
 
- ### Process items with concurrency control
-
  ```ts
  const userIds = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
 
  const users = await Tenace.map(userIds, (id) => fetchUser(id))
- .withConcurrency(3) // max 3 parallel requests
- .withRetryPerTask(2) // retry each task up to 2 times
- .withTimeoutPerTask('5s') // 5s timeout per task
+ .withConcurrency(3)
+ .withRetryPerTask(2)
+ .withTimeoutPerTask('5s')
  .execute()
  ```
 
@@ -621,7 +563,7 @@ const [users, posts, config] = await Tenace.all([
  ```ts
  const results = await Tenace.map(urls, (url) => fetch(url))
  .withConcurrency(5)
- .settle() // Never throws
+ .settle()
 
  for (const result of results) {
  if (result.status === 'fulfilled') {
@@ -651,21 +593,16 @@ await Tenace.map(files, (file) => uploadFile(file))
 
  ## Semaphore
 
- Low-level concurrency control:
-
  ```ts
  import { Semaphore } from '@julr/tenace'
 
- const sem = new Semaphore(5) // max 5 concurrent
+ const sem = new Semaphore(5)
 
- // Run with automatic acquire/release
  await sem.run(() => doWork())
 
- // Wrap a function
  const limitedFetch = sem.wrap(fetch)
  await limitedFetch('/api/data')
 
- // Map with concurrency
  const results = await sem.map(items, (item) => process(item))
 
  // Manual acquire/release
@@ -679,16 +616,14 @@ try {
 
  ## Waiting for Conditions
 
- Poll until a condition becomes true. Useful for waiting on eventual consistency or service readiness.
+ Poll until a condition becomes true.
 
  ```ts
- // Wait for a service to be healthy
  await Tenace.waitFor(() => isServiceHealthy(), {
  interval: '1s',
  timeout: '30s',
  })
 
- // Wait for a database connection
  await Tenace.waitFor(async () => {
  try {
  await db.ping()
@@ -697,13 +632,6 @@ await Tenace.waitFor(async () => {
  return false
  }
  }, { interval: '500ms', timeout: '1m' })
-
- // Wait for eventual consistency
- await orderApi.create(order)
- await Tenace.waitFor(
- () => orderApi.exists(order.id),
- { interval: '100ms', timeout: '5s' }
- )
  ```
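Under the hood this is just a poll loop with a deadline. A minimal sketch — illustrative, not Tenace's implementation, which builds on p-wait-for:

```typescript
// Poll `condition` every `intervalMs` until it returns true or the
// deadline passes, in which case we throw (Tenace throws
// WaitForTimeoutError here).
async function waitFor(
  condition: () => boolean | Promise<boolean>,
  opts: { intervalMs: number; timeoutMs: number },
): Promise<void> {
  const deadline = Date.now() + opts.timeoutMs
  while (!(await condition())) {
    if (Date.now() >= deadline) throw new Error('waitFor timed out')
    await new Promise((resolve) => setTimeout(resolve, opts.intervalMs))
  }
}
```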
 
  Options:
@@ -715,35 +643,24 @@ Options:
 
  ## Chaos Engineering
 
- Test your resilience patterns by injecting failures and latency.
-
- ### Global Chaos
+ Inject failures and latency for testing.
 
- Enable chaos globally to affect all Tenace calls:
+ ### Global chaos
 
  ```ts
- import { Tenace } from '@julr/tenace'
+ Tenace.chaos.enable({ fault: 1 }) // 100% failure rate
+ Tenace.chaos.enable({ latency: 500 }) // 500ms latency
 
- // Enable 100% failure rate
- Tenace.chaos.enable({ fault: 1 })
-
- // Enable 500ms latency
- Tenace.chaos.enable({ latency: 500 })
-
- // Combined with configuration
  Tenace.chaos.enable({
  fault: { rate: 0.1, error: new Error('Random failure') },
  latency: { rate: 0.2, delay: { min: 100, max: 2000 } },
  })
 
- // Disable
  Tenace.chaos.disable()
-
- // Check status
  Tenace.chaos.isEnabled()
  ```
 
- ### Testing Example
+ ### Testing example
 
  ```ts
  import { test } from 'vitest'
@@ -900,16 +817,17 @@ All errors extend `TenaceError`:
 
  ```ts
  import {
- TenaceError, // Base class
- TimeoutError, // Operation timed out
- CancelledError, // Operation cancelled via AbortSignal
- CircuitOpenError, // Circuit breaker is open
- CircuitIsolatedError, // Circuit breaker manually isolated
- BulkheadFullError, // Bulkhead capacity exceeded
- AbortError, // AbortSignal triggered
- RateLimitError, // Rate limit exceeded (has retryAfterMs)
- LockNotAcquiredError, // Distributed lock not acquired
- WaitForTimeoutError, // waitFor() timed out
+ TenaceError, // Base class
+ TimeoutError, // Operation timed out
+ CancelledError, // Operation cancelled via AbortSignal
+ CircuitOpenError, // Circuit breaker is open
+ CircuitIsolatedError, // Circuit breaker manually isolated
+ BulkheadFullError, // Bulkhead capacity exceeded
+ AbortError, // AbortSignal triggered
+ RateLimitError, // Rate limit exceeded (has retryAfterMs)
+ RateLimitQueueFullError, // Rate limit queue is full
+ LockNotAcquiredError, // Distributed lock not acquired
+ WaitForTimeoutError, // waitFor() timed out
  } from '@julr/tenace/errors'
  ```
 
@@ -958,7 +876,7 @@ Tenace.call(fn)
  .withBulkhead(limit, queue?) // Concurrency limiter
  .withFallback(fn) // Default value on failure
  .withCache(options) // { key, ttl, adapter? }
- .withRateLimit(options) // { key, maxCalls, windowMs, adapter? }
+ .withRateLimit(options) // { key, maxCalls, windowMs, queue?, adapter? }
  .withDistributedLock(options) // { key, timeout?, adapter? }
  .withSpan(name, attributes?) // OpenTelemetry tracing
  .withChaosFault(options) // { rate, error?, errors? }
@@ -1,9 +1,15 @@
- import type { RateLimiterAdapter, RateLimitConfig, RateLimitResult, RateLimitState } from './types.ts';
+ import type { RateLimiterAdapter, RateLimitConfig, RateLimitQueueConfig, RateLimitResult, RateLimitState } from './types.ts';
  /**
  * In-memory rate limiter adapter using rate-limiter-flexible.
  */
  export declare class MemoryRateLimiterAdapter implements RateLimiterAdapter {
  #private;
+ removeTokens(options: {
+ key: string;
+ tokens: number;
+ config: RateLimitConfig;
+ queue?: RateLimitQueueConfig;
+ }): Promise<number>;
  acquire(key: string, options: RateLimitConfig): Promise<RateLimitResult>;
  getState(key: string): Promise<RateLimitState>;
  reset(key: string): Promise<void>;
@@ -1,2 +1,3 @@
- import { t as MemoryRateLimiterAdapter } from "../../memory-DXkg8s6y.js";
+ import "../../errors-TCLFVbwO.js";
+ import { t as MemoryRateLimiterAdapter } from "../../memory-BKGDbMrk.js";
  export { MemoryRateLimiterAdapter };
@@ -7,6 +7,17 @@ export interface RateLimiterAdapter {
  * Try to acquire a permit. Returns true if allowed, false if rate limited.
  */
  acquire(key: string, options: RateLimitConfig): Promise<RateLimitResult>;
+ /**
+ * Remove tokens from the rate limiter, waiting in queue if necessary.
+ * Returns the number of remaining tokens after removal.
+ * Throws RateLimiterQueueError if queue is full or tokens requested exceed limit.
+ */
+ removeTokens(options: {
+ key: string;
+ tokens: number;
+ config: RateLimitConfig;
+ queue?: RateLimitQueueConfig;
+ }): Promise<number>;
  /**
  * Get current state for a key
  */
@@ -74,6 +85,16 @@ export interface RateLimitState {
  */
  resetInMs: number;
  }
+ /**
+ * Queue configuration for rate limiting
+ */
+ export interface RateLimitQueueConfig {
+ /**
+ * Maximum number of requests that can be queued.
+ * @default 4294967295 (2^32 - 1)
+ */
+ maxSize?: number;
+ }
  /**
  * Options for rate limiting behavior
  */
@@ -92,7 +113,12 @@ export interface RateLimitOptions extends RateLimitConfig {
  */
  optional?: boolean;
  /**
- * Called when rate limit is exceeded
+ * Queue configuration. When enabled, requests that exceed the rate limit
+ * will be queued instead of rejected, and executed when tokens become available.
+ */
+ queue?: RateLimitQueueConfig;
+ /**
+ * Called when rate limit is exceeded (only when queue is not enabled)
  */
  onRejected?: (event: {
  key: string;
@@ -77,3 +77,13 @@ export declare class LockNotAcquiredError extends TenaceError {
  export declare class WaitForTimeoutError extends TenaceError {
  constructor(message?: string);
  }
+ /**
+ * Thrown when rate limit queue is full and cannot accept more requests
+ */
+ export declare class RateLimitQueueFullError extends TenaceError {
+ key: string;
+ constructor(options: {
+ key: string;
+ message?: string;
+ });
+ }
@@ -1 +1 @@
- export { TenaceError, TimeoutError, CancelledError, CircuitOpenError, CircuitIsolatedError, BulkheadFullError, AbortError, RateLimitError, LockNotAcquiredError, WaitForTimeoutError, } from './errors.ts';
+ export { TenaceError, TimeoutError, CancelledError, CircuitOpenError, CircuitIsolatedError, BulkheadFullError, AbortError, RateLimitError, RateLimitQueueFullError, LockNotAcquiredError, WaitForTimeoutError, } from './errors.ts';
@@ -1,2 +1,2 @@
- import { a as CircuitOpenError, c as TenaceError, i as CircuitIsolatedError, l as TimeoutError, n as BulkheadFullError, o as LockNotAcquiredError, r as CancelledError, s as RateLimitError, t as AbortError, u as WaitForTimeoutError } from "../errors-BODHnryv.js";
- export { AbortError, BulkheadFullError, CancelledError, CircuitIsolatedError, CircuitOpenError, LockNotAcquiredError, RateLimitError, TenaceError, TimeoutError, WaitForTimeoutError };
+ import { a as CircuitOpenError, c as RateLimitQueueFullError, d as WaitForTimeoutError, i as CircuitIsolatedError, l as TenaceError, n as BulkheadFullError, o as LockNotAcquiredError, r as CancelledError, s as RateLimitError, t as AbortError, u as TimeoutError } from "../errors-TCLFVbwO.js";
+ export { AbortError, BulkheadFullError, CancelledError, CircuitIsolatedError, CircuitOpenError, LockNotAcquiredError, RateLimitError, RateLimitQueueFullError, TenaceError, TimeoutError, WaitForTimeoutError };
@@ -64,4 +64,12 @@ var WaitForTimeoutError = class extends TenaceError {
  this.name = "WaitForTimeoutError";
  }
  };
- export { CircuitOpenError as a, TenaceError as c, CircuitIsolatedError as i, TimeoutError as l, BulkheadFullError as n, LockNotAcquiredError as o, CancelledError as r, RateLimitError as s, AbortError as t, WaitForTimeoutError as u };
+ var RateLimitQueueFullError = class extends TenaceError {
+ key;
+ constructor(options) {
+ super(options.message ?? `Rate limit queue is full for key "${options.key}"`);
+ this.name = "RateLimitQueueFullError";
+ this.key = options.key;
+ }
+ };
+ export { CircuitOpenError as a, RateLimitQueueFullError as c, WaitForTimeoutError as d, CircuitIsolatedError as i, TenaceError as l, BulkheadFullError as n, LockNotAcquiredError as o, CancelledError as r, RateLimitError as s, AbortError as t, TimeoutError as u };
@@ -20,7 +20,7 @@ export declare function createCachePolicy<T>(options: CachePolicyOptions): IPoli
  /**
  * Creates a rate limit policy that integrates into the cockatiel pipeline.
  * - Check rate limit before execution
- * - Throw RateLimitError if exceeded
+ * - Throw RateLimitError if exceeded (or queue if configured)
  */
  export declare function createRateLimitPolicy<T>(options: RateLimitPolicyOptions): IPolicy;
  /**
package/build/src/main.js CHANGED
@@ -1,5 +1,5 @@
- import { a as CircuitOpenError, i as CircuitIsolatedError, l as TimeoutError$1, n as BulkheadFullError, o as LockNotAcquiredError, r as CancelledError, s as RateLimitError, u as WaitForTimeoutError } from "./errors-BODHnryv.js";
- import { t as MemoryRateLimiterAdapter } from "./memory-DXkg8s6y.js";
+ import { a as CircuitOpenError, c as RateLimitQueueFullError, d as WaitForTimeoutError, i as CircuitIsolatedError, n as BulkheadFullError, o as LockNotAcquiredError, r as CancelledError, s as RateLimitError, u as TimeoutError$1 } from "./errors-TCLFVbwO.js";
+ import { t as MemoryRateLimiterAdapter } from "./memory-BKGDbMrk.js";
  import { t as MemoryCacheAdapter } from "./memory-DWyezb1O.js";
  import pWaitFor, { TimeoutError } from "p-wait-for";
  import { ms } from "@julr/utils/string/ms";
@@ -422,33 +422,46 @@ function createRateLimitPolicy(options) {
  const adapter = options.adapter ?? configStore.getRateLimiter();
  return { execute: async (fn, signal) => {
  try {
- const result = await adapter.acquire(options.key, {
- maxCalls: options.maxCalls,
- windowMs: options.windowMs,
- ...options.strategy !== void 0 && { strategy: options.strategy }
+ if (options.queue) await adapter.removeTokens({
+ key: options.key,
+ tokens: 1,
+ config: {
+ maxCalls: options.maxCalls,
+ windowMs: options.windowMs,
+ ...options.strategy !== void 0 && { strategy: options.strategy }
+ },
+ queue: options.queue
  });
- if (!result.allowed) {
- emitEvent(TelemetryEvents.RATE_LIMIT_REJECTED, {
- key: options.key,
- retry_after_ms: result.retryAfterMs ?? 0
- });
- notifyPlugins("onRateLimitRejected", {
- key: options.key,
- retryAfterMs: result.retryAfterMs
- });
- if (result.retryAfterMs !== void 0) options.onRejected?.({
- key: options.key,
- retryAfterMs: result.retryAfterMs
- });
- else options.onRejected?.({ key: options.key });
- throw new RateLimitError({
- message: `Rate limit exceeded for key "${options.key}". Retry after ${result.retryAfterMs}ms`,
- retryAfterMs: result.retryAfterMs ?? 0,
- remaining: result.remaining
+ else {
+ const result = await adapter.acquire(options.key, {
+ maxCalls: options.maxCalls,
+ windowMs: options.windowMs,
+ ...options.strategy !== void 0 && { strategy: options.strategy }
  });
+ if (!result.allowed) {
+ emitEvent(TelemetryEvents.RATE_LIMIT_REJECTED, {
+ key: options.key,
+ retry_after_ms: result.retryAfterMs ?? 0
+ });
+ notifyPlugins("onRateLimitRejected", {
+ key: options.key,
+ retryAfterMs: result.retryAfterMs
+ });
+ if (result.retryAfterMs !== void 0) options.onRejected?.({
+ key: options.key,
+ retryAfterMs: result.retryAfterMs
+ });
+ else options.onRejected?.({ key: options.key });
+ throw new RateLimitError({
+ message: `Rate limit exceeded for key "${options.key}". Retry after ${result.retryAfterMs}ms`,
+ retryAfterMs: result.retryAfterMs ?? 0,
+ remaining: result.remaining
+ });
+ }
  }
  } catch (error) {
  if (error instanceof RateLimitError) throw error;
+ if (error instanceof RateLimitQueueFullError) throw error;
  if (!options.optional) throw error;
  }
  return fn({
@@ -1,6 +1,8 @@
- import { RateLimiterMemory } from "rate-limiter-flexible";
+ import { c as RateLimitQueueFullError } from "./errors-TCLFVbwO.js";
+ import { RateLimiterMemory, RateLimiterQueue, RateLimiterQueueError } from "rate-limiter-flexible";
  var MemoryRateLimiterAdapter = class {
  #limiters = /* @__PURE__ */ new Map();
+ #queues = /* @__PURE__ */ new Map();
  #getLimiter(options) {
  const cacheKey = `${options.key}:${options.config.maxCalls}:${options.config.windowMs}`;
  let limiter = this.#limiters.get(cacheKey);
@@ -13,6 +15,35 @@ var MemoryRateLimiterAdapter = class {
  }
  return limiter;
  }
+ #getQueue(options) {
+ const maxSize = options.queue.maxSize ?? 4294967295;
+ const cacheKey = `${options.key}:${options.config.maxCalls}:${options.config.windowMs}:${maxSize}`;
+ let queue = this.#queues.get(cacheKey);
+ if (!queue) {
+ queue = new RateLimiterQueue(this.#getLimiter({
+ key: options.key,
+ config: options.config
+ }), { maxQueueSize: maxSize });
+ this.#queues.set(cacheKey, queue);
+ }
+ return queue;
+ }
+ async removeTokens(options) {
+ const queue = this.#getQueue({
+ key: options.key,
+ config: options.config,
+ queue: options.queue ?? {}
+ });
+ try {
+ return await queue.removeTokens(options.tokens, options.key);
+ } catch (error) {
+ if (error instanceof RateLimiterQueueError) throw new RateLimitQueueFullError({
+ key: options.key,
+ message: error.message
+ });
+ throw error;
+ }
+ }
  async acquire(key, options) {
  const limiter = this.#getLimiter({
  key,
@@ -55,6 +86,7 @@ var MemoryRateLimiterAdapter = class {
  }
  clear() {
  this.#limiters.clear();
+ this.#queues.clear();
  }
  };
  export { MemoryRateLimiterAdapter as t };
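The control flow this release adds to `createRateLimitPolicy` can be summarized outside the package: when a `queue` option is set, the policy waits for a token via the adapter's `removeTokens` (surfacing `RateLimitQueueFullError` if the queue overflows) instead of failing fast with `RateLimitError`. A minimal sketch with a hand-rolled stub adapter, not the package's `MemoryRateLimiterAdapter` (which is backed by rate-limiter-flexible's `RateLimiterQueue`):

```javascript
// Sketch of the queue-or-reject branch added to createRateLimitPolicy.
// Error classes and the adapter are stubs standing in for the real ones.
class RateLimitError extends Error {}
class RateLimitQueueFullError extends Error {}

async function rateLimit(adapter, options, fn) {
  if (options.queue) {
    // Queue mode: block until a token frees up; a full queue throws.
    await adapter.removeTokens({ key: options.key, tokens: 1, queue: options.queue });
  } else {
    // Reject mode: fail fast when the window is exhausted.
    const result = await adapter.acquire(options.key);
    if (!result.allowed) {
      throw new RateLimitError(`Rate limit exceeded for key "${options.key}"`);
    }
  }
  return fn();
}

// Stub adapter: allows one call per key then rejects, and always
// reports a full queue in queue mode.
function makeStub() {
  return {
    used: new Set(),
    async acquire(key) {
      const allowed = !this.used.has(key);
      this.used.add(key);
      return { allowed };
    },
    async removeTokens({ key }) {
      throw new RateLimitQueueFullError(`Rate limit queue is full for key "${key}"`);
    },
  };
}

async function demo() {
  const stub = makeStub();
  const outcomes = [];
  outcomes.push(await rateLimit(stub, { key: 'api' }, async () => 'done'));
  await rateLimit(stub, { key: 'api' }, async () => 'done')
    .catch((e) => outcomes.push(e.constructor.name));
  await rateLimit(stub, { key: 'api', queue: { maxSize: 10 } }, async () => 'done')
    .catch((e) => outcomes.push(e.constructor.name));
  return outcomes;
}

demo().then(console.log); // [ 'done', 'RateLimitError', 'RateLimitQueueFullError' ]
```

Note that both errors are rethrown even when `optional` is set, matching the `catch` block in the built `main.js` above.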
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "@julr/tenace",
  "type": "module",
- "version": "1.0.0-next.0",
+ "version": "1.0.0-next.1",
  "packageManager": "pnpm@10.24.0",
  "description": "A Node.js library to make any call resilient with a fluent and simple API",
  "author": "Julien Ripouteau <julien@ripouteau.com>",