ai-retry 0.2.0 → 0.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -171,6 +171,39 @@ const retryable = createRetryable({
 });
 ```
 
+#### Retry After Delay
+
+Handle retryable errors with delays and respect `retry-after` headers from rate-limited responses. This is useful for handling 429 (Too Many Requests) and 503 (Service Unavailable) errors.
+
+> [!NOTE]
+> If the response contains a [`retry-after`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Retry-After) header, it will be prioritized over the configured delay.
+
+
+```typescript
+import { retryAfterDelay } from 'ai-retry/retryables';
+
+const retryableModel = createRetryable({
+  model: openai('gpt-4'), // Base model
+  retries: [
+    // Retry base model 3 times with fixed 2s delay
+    retryAfterDelay({ delay: 2000, maxAttempts: 3 }),
+
+    // Or retry with exponential backoff (2s, 4s, 8s)
+    retryAfterDelay({ delay: 2000, backoffFactor: 2, maxAttempts: 3 }),
+
+    // Or switch to a different model after delay
+    retryAfterDelay(openai('gpt-4-mini'), { delay: 1000 }),
+  ],
+});
+```
+
+**Options:**
+- `delay` (required): Delay in milliseconds before retrying
+- `backoffFactor` (optional): Multiplier for exponential backoff (delay × backoffFactor^attempt). If not provided, uses fixed delay.
+- `maxAttempts` (optional): Maximum number of retry attempts for this model
+
+By default, if a [`retry-after-ms`](https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/provisioned-get-started#what-should--i-do-when-i-receive-a-429-response) or `retry-after` header is present in the response, it will be prioritized over the configured delay. The delay from the header will be capped at 60 seconds for safety. If no headers are present, the configured delay or exponential backoff will be used.
+
 #### Fallbacks
 
 If you always want to fallback to a different model on any error, you can simply provide a list of models.
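The backoff formula in the new Options list (`delay × backoffFactor^attempt`) can be sketched as a standalone helper; `backoffDelay` is a hypothetical name for illustration, not part of the package's API:

```typescript
// Hypothetical helper illustrating the documented formula: delay × backoffFactor^attempt.
function backoffDelay(baseDelay: number, backoffFactor: number, attempt: number): number {
  return baseDelay * backoffFactor ** attempt;
}

// With delay: 2000 and backoffFactor: 2, successive retries wait 2s, 4s, 8s, ...
console.log([0, 1, 2].map((attempt) => backoffDelay(2_000, 2, attempt))); // [2000, 4000, 8000]

// Without a backoffFactor (i.e. a factor of 1), every retry uses the fixed base delay.
console.log([0, 1, 2].map((attempt) => backoffDelay(2_000, 1, attempt))); // [2000, 2000, 2000]
```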
@@ -247,6 +280,71 @@ try {
 }
 ```
 
+### Options
+
+#### Retry Delays
+
+You can add delays before retrying to handle rate limiting or give services time to recover. The delay respects abort signals, so requests can still be cancelled during the delay period.
+
+```typescript
+const retryableModel = createRetryable({
+  model: openai('gpt-4'),
+  retries: [
+    // Wait 1 second before retrying
+    () => ({
+      model: openai('gpt-4'),
+      delay: 1_000
+    }),
+    // Wait 2 seconds before trying a different provider
+    () => ({
+      model: anthropic('claude-3-haiku-20240307'),
+      delay: 2_000
+    }),
+  ],
+});
+
+const result = await generateText({
+  model: retryableModel,
+  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
+  // Will be respected during delays
+  abortSignal: AbortSignal.timeout(60_000),
+});
+```
+
+You can also use delays with built-in retryables:
+
+```typescript
+import { serviceOverloaded } from 'ai-retry/retryables';
+
+const retryableModel = createRetryable({
+  model: openai('gpt-4'),
+  retries: [
+    // Wait 5 seconds before retrying on service overload
+    serviceOverloaded(openai('gpt-4'), { maxAttempts: 3, delay: 5_000 }),
+  ],
+});
+```
+
+#### Max Attempts
+
+By default, each retryable will only attempt to retry once per model to avoid infinite loops. You can customize this behavior by returning a `maxAttempts` value from your retryable function. Note that the initial request with the base model is counted as the first attempt.
+
+```typescript
+const retryableModel = createRetryable({
+  model: openai('gpt-4'),
+  retries: [
+    // Try this once
+    anthropic('claude-3-haiku-20240307'),
+    // Try this one more time (initial + 1 retry)
+    () => ({ model: openai('gpt-4'), maxAttempts: 2 }),
+    // Already tried, won't be retried again
+    anthropic('claude-3-haiku-20240307')
+  ],
+});
+```
+
+The attempts are counted per unique model (provider + modelId). That means if multiple retryables return the same model, it won't be retried again once the `maxAttempts` is reached.
+
 #### Logging
 
 You can use the following callbacks to log retry attempts and errors:
@@ -285,11 +383,10 @@ There are several built-in retryables:
 - [`contentFilterTriggered`](./src/retryables/content-filter-triggered.ts): Content filter was triggered based on the prompt or completion.
 - [`requestTimeout`](./src/retryables/request-timeout.ts): Request timeout occurred.
 - [`requestNotRetryable`](./src/retryables/request-not-retryable.ts): Request failed with a non-retryable error.
+- [`retryAfterDelay`](./src/retryables/retry-after-delay.ts): Retry with exponential backoff and respect `retry-after` headers for rate limiting.
 - [`serviceOverloaded`](./src/retryables/service-overloaded.ts): Response with status code 529 (service overloaded).
   - Use this retryable to handle Anthropic's overloaded errors.
 
-By default, each retryable will only attempt to retry once per model to avoid infinite loops. You can customize this behavior by returning a `maxAttempts` value from your retryable function.
-
 ### API Reference
 
 #### `createRetryable(options: RetryableModelOptions): LanguageModelV2 | EmbeddingModelV2`
@@ -318,16 +415,20 @@ type Retryable = (
 
 #### `RetryModel`
 
-A `RetryModel` specifies the model to retry and an optional `maxAttempts` to limit how many times this model can be retried.
-By default, each retryable will only attempt to retry once per model. This can be customized by setting the `maxAttempts` property.
+A `RetryModel` specifies the model to retry and optional settings like `maxAttempts` and `delay`.
 
 ```typescript
 interface RetryModel {
   model: LanguageModelV2 | EmbeddingModelV2;
-  maxAttempts?: number;
+  maxAttempts?: number; // Maximum retry attempts per model (default: 1)
+  delay?: number; // Delay in milliseconds before retrying
 }
 ```
 
+**Options:**
+- `maxAttempts`: Maximum number of times this model can be retried. Default is 1.
+- `delay`: Delay in milliseconds to wait before retrying. Useful for rate limiting or giving services time to recover. The delay respects abort signals from the request.
+
 #### `RetryContext`
 
 The `RetryContext` object contains information about the current attempt and all previous attempts.
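To make the documented `RetryModel` shape concrete, here is a sketch of a custom retryable that returns a model together with `delay` and `maxAttempts`. The interfaces below are simplified local stand-ins for illustration, not imports from `ai-retry`:

```typescript
// Simplified stand-ins for the documented types (illustration only).
interface Model { provider: string; modelId: string; }
interface RetryModel { model: Model; maxAttempts?: number; delay?: number; }
interface RetryContext { attempts: Array<{ model: Model }>; }
type Retryable = (context: RetryContext) => RetryModel | undefined;

// A retryable that waits longer the more attempts have already been made.
const delayedRetry = (model: Model, baseDelay: number): Retryable => (context) => ({
  model,
  delay: baseDelay * 2 ** context.attempts.length,
  maxAttempts: 3,
});

const gpt4: Model = { provider: "openai", modelId: "gpt-4" };
const retryable = delayedRetry(gpt4, 1_000);
console.log(retryable({ attempts: [{ model: gpt4 }] })?.delay); // 2000
```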
package/dist/index.d.ts CHANGED
@@ -1,4 +1,4 @@
-import { EmbeddingModelV2, LanguageModelV2, RetryableModelOptions } from "./types-CHhEGL5x.js";
+import { EmbeddingModelV2, EmbeddingModelV2CallOptions, EmbeddingModelV2Embed, LanguageModelV2, LanguageModelV2Generate, LanguageModelV2Stream, Retries, RetryAttempt, RetryContext, RetryErrorAttempt, RetryModel, RetryResultAttempt, Retryable, RetryableModelOptions } from "./types-BrJaHkFh.js";
 
 //#region src/create-retryable-model.d.ts
 declare function createRetryable<MODEL extends LanguageModelV2>(options: RetryableModelOptions<MODEL>): LanguageModelV2;
@@ -10,4 +10,4 @@ declare function createRetryable<MODEL extends EmbeddingModelV2>(options: Retrya
 */
 declare const getModelKey: (model: LanguageModelV2 | EmbeddingModelV2) => string;
 //#endregion
-export { createRetryable, getModelKey };
+export { EmbeddingModelV2, EmbeddingModelV2CallOptions, EmbeddingModelV2Embed, LanguageModelV2, LanguageModelV2Generate, LanguageModelV2Stream, Retries, RetryAttempt, RetryContext, RetryErrorAttempt, RetryModel, RetryResultAttempt, Retryable, RetryableModelOptions, createRetryable, getModelKey };
package/dist/index.js CHANGED
@@ -1,17 +1,8 @@
-import { isErrorAttempt, isGenerateResult, isResultAttempt, isStreamContentPart } from "./utils-lRsC105f.js";
-import "@ai-sdk/provider-utils";
-import { RetryError } from "ai";
+import { getModelKey, isErrorAttempt, isGenerateResult, isResultAttempt, isStreamContentPart } from "./utils-DNoBKkQe.js";
+import { delay } from "@ai-sdk/provider-utils";
 import { getErrorMessage } from "@ai-sdk/provider";
+import { RetryError } from "ai";
 
-//#region src/get-model-key.ts
-/**
-* Generate a unique key for a LanguageModelV2 instance.
-*/
-const getModelKey = (model) => {
-	return `${model.provider}/${model.modelId}`;
-};
-
-//#endregion
 //#region src/find-retry-model.ts
 /**
 * Find the next model to retry with based on the retry context
@@ -44,7 +35,7 @@ async function findRetryModel(retries, context) {
 			/**
 			* Check if the model can still be retried based on maxAttempts
 			*/
-			if (retryAttempts.length < maxAttempts) return retryModel.model;
+			if (retryAttempts.length < maxAttempts) return retryModel;
 		}
 	}
 }
@@ -126,9 +117,10 @@ var RetryableEmbeddingModel = class {
 				attempts
 			};
 		} catch (error) {
-			const { nextModel, attempt } = await this.handleError(error, attempts);
+			const { retryModel, attempt } = await this.handleError(error, attempts);
 			attempts.push(attempt);
-			this.currentModel = nextModel;
+			if (retryModel.delay) await delay(retryModel.delay, { abortSignal: input.abortSignal });
+			this.currentModel = retryModel.model;
 		}
 	}
 }
@@ -150,17 +142,17 @@ var RetryableEmbeddingModel = class {
 			attempts: updatedAttempts
 		};
 		this.options.onError?.(context);
-		const nextModel = await findRetryModel(this.options.retries, context);
+		const retryModel = await findRetryModel(this.options.retries, context);
 		/**
 		* Handler didn't return any models to try next, rethrow the error.
 		* If we retried the request, wrap the error into a `RetryError` for better visibility.
 		*/
-		if (!nextModel) {
+		if (!retryModel) {
 			if (updatedAttempts.length > 1) throw prepareRetryError(error, updatedAttempts);
 			throw error;
 		}
 		return {
-			nextModel,
+			retryModel,
 			attempt: errorAttempt
 		};
 	}
@@ -169,7 +161,10 @@ var RetryableEmbeddingModel = class {
 	* Always start with the original model
 	*/
 	this.currentModel = this.baseModel;
-	const { result } = await this.withRetry({ fn: async () => await this.currentModel.doEmbed(options) });
+	const { result } = await this.withRetry({
+		fn: async () => await this.currentModel.doEmbed(options),
+		abortSignal: options.abortSignal
+	});
 	return result;
 }
};
@@ -236,10 +231,11 @@ var RetryableLanguageModel = class {
 			* Check if the result should trigger a retry (only for generate results, not streams)
 			*/
 			if (isGenerateResult(result)) {
-				const { nextModel, attempt } = await this.handleResult(result, attempts);
+				const { retryModel, attempt } = await this.handleResult(result, attempts);
 				attempts.push(attempt);
-				if (nextModel) {
-					this.currentModel = nextModel;
+				if (retryModel) {
+					if (retryModel.delay) await delay(retryModel.delay, { abortSignal: input.abortSignal });
+					this.currentModel = retryModel.model;
 					/**
 					* Continue to the next iteration to retry
 					*/
@@ -251,9 +247,10 @@ var RetryableLanguageModel = class {
 				attempts
 			};
 		} catch (error) {
-			const { nextModel, attempt } = await this.handleError(error, attempts);
+			const { retryModel, attempt } = await this.handleError(error, attempts);
 			attempts.push(attempt);
-			this.currentModel = nextModel;
+			if (retryModel.delay) await delay(retryModel.delay, { abortSignal: input.abortSignal });
+			this.currentModel = retryModel.model;
 		}
 	}
 }
@@ -275,7 +272,7 @@ var RetryableLanguageModel = class {
 			attempts: updatedAttempts
 		};
 		return {
-			nextModel: await findRetryModel(this.options.retries, context),
+			retryModel: await findRetryModel(this.options.retries, context),
 			attempt: resultAttempt
 		};
 	}
@@ -297,17 +294,17 @@ var RetryableLanguageModel = class {
 			attempts: updatedAttempts
 		};
 		this.options.onError?.(context);
-		const nextModel = await findRetryModel(this.options.retries, context);
+		const retryModel = await findRetryModel(this.options.retries, context);
 		/**
 		* Handler didn't return any models to try next, rethrow the error.
 		* If we retried the request, wrap the error into a `RetryError` for better visibility.
 		*/
-		if (!nextModel) {
+		if (!retryModel) {
 			if (updatedAttempts.length > 1) throw prepareRetryError(error, updatedAttempts);
 			throw error;
 		}
 		return {
-			nextModel,
+			retryModel,
 			attempt: errorAttempt
 		};
 	}
@@ -316,7 +313,10 @@ var RetryableLanguageModel = class {
 	* Always start with the original model
 	*/
 	this.currentModel = this.baseModel;
-	const { result } = await this.withRetry({ fn: async () => await this.currentModel.doGenerate(options) });
+	const { result } = await this.withRetry({
+		fn: async () => await this.currentModel.doGenerate(options),
+		abortSignal: options.abortSignal
+	});
 	return result;
 }
 async doStream(options) {
@@ -327,7 +327,10 @@ var RetryableLanguageModel = class {
 	/**
 	* Perform the initial call to doStream with retry logic to handle errors before any data is streamed.
 	*/
-	let { result, attempts } = await this.withRetry({ fn: async () => await this.currentModel.doStream(options) });
+	let { result, attempts } = await this.withRetry({
+		fn: async () => await this.currentModel.doStream(options),
+		abortSignal: options.abortSignal
+	});
 	/**
 	* Wrap the original stream to handle retries if an error occurs during streaming.
 	*/
@@ -362,19 +365,21 @@ var RetryableLanguageModel = class {
 				* Check if the error from the stream can be retried.
 				* Otherwise it will rethrow the error.
 				*/
-				const { nextModel, attempt } = await this.handleError(error, attempts);
-				this.currentModel = nextModel;
+				const { retryModel, attempt } = await this.handleError(error, attempts);
 				/**
 				* Save the attempt
 				*/
 				attempts.push(attempt);
+				if (retryModel.delay) await delay(retryModel.delay, { abortSignal: options.abortSignal });
+				this.currentModel = retryModel.model;
 				/**
 				* Retry the request by calling doStream again.
 				* This will create a new stream.
 				*/
 				const retriedResult = await this.withRetry({
 					fn: async () => await this.currentModel.doStream(options),
-					attempts
+					attempts,
+					abortSignal: options.abortSignal
 				});
 				/**
 				* Cancel the previous reader and stream if we are retrying
@@ -1,4 +1,4 @@
-import { EmbeddingModelV2, LanguageModelV2, RetryModel, Retryable } from "../types-CHhEGL5x.js";
+import { EmbeddingModelV2, LanguageModelV2, RetryModel, Retryable } from "../types-BrJaHkFh.js";
 
 //#region src/retryables/content-filter-triggered.d.ts
 
@@ -21,6 +21,19 @@ declare function requestNotRetryable<MODEL extends LanguageModelV2 | EmbeddingMo
 */
 declare function requestTimeout<MODEL extends LanguageModelV2 | EmbeddingModelV2>(model: MODEL, options?: Omit<RetryModel<MODEL>, 'model'>): Retryable<MODEL>;
 //#endregion
+//#region src/retryables/retry-after-delay.d.ts
+type RetryAfterDelayOptions<MODEL extends LanguageModelV2 | EmbeddingModelV2> = Omit<RetryModel<MODEL>, 'model' | 'delay'> & {
+	delay: number;
+	backoffFactor?: number;
+};
+/**
+* Retry with the same or a different model if the error is retryable with a delay.
+* Uses the `Retry-After` or `Retry-After-Ms` headers if present.
+* Otherwise uses the specified `delay`, or exponential backoff if `backoffFactor` is provided.
+*/
+declare function retryAfterDelay<MODEL extends LanguageModelV2 | EmbeddingModelV2>(model: MODEL, options?: RetryAfterDelayOptions<MODEL>): Retryable<MODEL>;
+declare function retryAfterDelay<MODEL extends LanguageModelV2 | EmbeddingModelV2>(options: RetryAfterDelayOptions<MODEL>): Retryable<MODEL>;
+//#endregion
 //#region src/retryables/service-overloaded.d.ts
 /**
 * Fallback to a different model if the provider returns an overloaded error.
@@ -31,4 +44,4 @@ declare function requestTimeout<MODEL extends LanguageModelV2 | EmbeddingModelV2
 */
 declare function serviceOverloaded<MODEL extends LanguageModelV2 | EmbeddingModelV2>(model: MODEL, options?: Omit<RetryModel<MODEL>, 'model'>): Retryable<MODEL>;
 //#endregion
-export { contentFilterTriggered, requestNotRetryable, requestTimeout, serviceOverloaded };
+export { contentFilterTriggered, requestNotRetryable, requestTimeout, retryAfterDelay, serviceOverloaded };
@@ -1,4 +1,4 @@
-import { isErrorAttempt, isObject, isResultAttempt, isString } from "../utils-lRsC105f.js";
+import { getModelKey, isErrorAttempt, isObject, isResultAttempt, isString } from "../utils-DNoBKkQe.js";
 import { isAbortError } from "@ai-sdk/provider-utils";
 import { APICallError } from "ai";
 
@@ -69,6 +69,68 @@ function requestTimeout(model, options) {
 	};
 }
 
+//#endregion
+//#region src/calculate-exponential-backoff.ts
+/**
+* Calculates the exponential backoff delay.
+*/
+function calculateExponentialBackoff(baseDelay, backoffFactor, attempts) {
+	return baseDelay * backoffFactor ** attempts;
+}
+
+//#endregion
+//#region src/parse-retry-headers.ts
+function parseRetryHeaders(headers) {
+	if (!headers) return null;
+	const retryAfterMs = headers["retry-after-ms"];
+	if (retryAfterMs) {
+		const delayMs = Number.parseFloat(retryAfterMs);
+		if (!Number.isNaN(delayMs) && delayMs >= 0) return delayMs;
+	}
+	const retryAfter = headers["retry-after"];
+	if (retryAfter) {
+		const seconds = Number.parseFloat(retryAfter);
+		if (!Number.isNaN(seconds)) return seconds * 1e3;
+		const date = Date.parse(retryAfter);
+		if (!Number.isNaN(date)) return Math.max(0, date - Date.now());
+	}
+	return null;
+}
+
+//#endregion
+//#region src/retryables/retry-after-delay.ts
+const MAX_RETRY_AFTER_MS = 6e4;
+function retryAfterDelay(modelOrOptions, options) {
+	const model = modelOrOptions && "delay" in modelOrOptions ? void 0 : modelOrOptions;
+	const opts = modelOrOptions && "delay" in modelOrOptions ? modelOrOptions : options;
+	if (!opts?.delay) throw new Error("retryAfterDelay: delay is required");
+	const delay$1 = opts.delay;
+	const backoffFactor = Math.max(opts.backoffFactor ?? 1, 1);
+	return (context) => {
+		const { current, attempts } = context;
+		if (isErrorAttempt(current)) {
+			const { error } = current;
+			if (APICallError.isInstance(error) && error.isRetryable === true) {
+				const targetModel = model ?? current.model;
+				const modelKey = getModelKey(targetModel);
+				const modelAttempts = attempts.filter((a) => getModelKey(a.model) === modelKey);
+				const headerDelay = parseRetryHeaders(error.responseHeaders);
+				if (headerDelay !== null) return {
+					model: targetModel,
+					delay: Math.min(headerDelay, MAX_RETRY_AFTER_MS),
+					maxAttempts: opts.maxAttempts
+				};
+				const calculatedDelay = calculateExponentialBackoff(delay$1, backoffFactor, modelAttempts.length);
+				return {
+					model: targetModel,
+					delay: calculatedDelay,
+					maxAttempts: opts.maxAttempts
+				};
+			}
+		}
+	};
+}
+
 //#endregion
 //#region src/retryables/service-overloaded.ts
 /**
@@ -100,4 +162,4 @@ function serviceOverloaded(model, options) {
 }
 
 //#endregion
-export { contentFilterTriggered, requestNotRetryable, requestTimeout, serviceOverloaded };
+export { contentFilterTriggered, requestNotRetryable, requestTimeout, retryAfterDelay, serviceOverloaded };
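The `parseRetryHeaders` helper added in the bundle above prefers `retry-after-ms` (already in milliseconds) over the standard `retry-after` header (seconds or an HTTP date). A typed restatement of that logic; the signature's types are assumptions for illustration:

```typescript
// Mirrors the bundled parseRetryHeaders logic; the types are assumptions for illustration.
function parseRetryHeaders(headers?: Record<string, string>): number | null {
  if (!headers) return null;
  // Azure-style retry-after-ms header takes precedence (already in milliseconds)
  const retryAfterMs = headers["retry-after-ms"];
  if (retryAfterMs) {
    const delayMs = Number.parseFloat(retryAfterMs);
    if (!Number.isNaN(delayMs) && delayMs >= 0) return delayMs;
  }
  // Standard retry-after header: either a number of seconds or an HTTP date
  const retryAfter = headers["retry-after"];
  if (retryAfter) {
    const seconds = Number.parseFloat(retryAfter);
    if (!Number.isNaN(seconds)) return seconds * 1000;
    const date = Date.parse(retryAfter);
    if (!Number.isNaN(date)) return Math.max(0, date - Date.now());
  }
  return null;
}

console.log(parseRetryHeaders({ "retry-after": "2" })); // 2000
console.log(parseRetryHeaders({ "retry-after-ms": "1500" })); // 1500
console.log(parseRetryHeaders({})); // null
```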
@@ -52,6 +52,7 @@ type RetryAttempt<MODEL extends LanguageModelV2 | EmbeddingModelV2$1> = RetryErr
 type RetryModel<MODEL extends LanguageModelV2 | EmbeddingModelV2$1> = {
 	model: MODEL;
 	maxAttempts?: number;
+	delay?: number;
 };
 /**
 * A function that determines whether to retry with a different model based on the current attempt and all previous attempts.
@@ -59,5 +60,8 @@ type RetryModel<MODEL extends LanguageModelV2 | EmbeddingModelV2$1> = {
 type Retryable<MODEL extends LanguageModelV2 | EmbeddingModelV2$1> = (context: RetryContext<MODEL>) => RetryModel<MODEL> | Promise<RetryModel<MODEL>> | undefined;
 type Retries<MODEL extends LanguageModelV2 | EmbeddingModelV2$1> = Array<Retryable<MODEL> | MODEL>;
 type LanguageModelV2Generate = Awaited<ReturnType<LanguageModelV2['doGenerate']>>;
+type LanguageModelV2Stream = Awaited<ReturnType<LanguageModelV2['doStream']>>;
+type EmbeddingModelV2CallOptions<VALUE> = Parameters<EmbeddingModelV2$1<VALUE>['doEmbed']>[0];
+type EmbeddingModelV2Embed<VALUE> = Awaited<ReturnType<EmbeddingModelV2$1<VALUE>['doEmbed']>>;
 //#endregion
-export { EmbeddingModelV2$1 as EmbeddingModelV2, type LanguageModelV2, RetryModel, Retryable, RetryableModelOptions };
+export { EmbeddingModelV2$1 as EmbeddingModelV2, EmbeddingModelV2CallOptions, EmbeddingModelV2Embed, type LanguageModelV2, LanguageModelV2Generate, LanguageModelV2Stream, Retries, RetryAttempt, RetryContext, RetryErrorAttempt, RetryModel, RetryResultAttempt, Retryable, RetryableModelOptions };
@@ -1,3 +1,12 @@
+//#region src/get-model-key.ts
+/**
+* Generate a unique key for a LanguageModelV2 instance.
+*/
+const getModelKey = (model) => {
+	return `${model.provider}/${model.modelId}`;
+};
+
+//#endregion
 //#region src/utils.ts
 const isObject = (value) => typeof value === "object" && value !== null;
 const isString = (value) => typeof value === "string";
@@ -24,4 +33,4 @@ const isStreamContentPart = (part) => {
 };
 
 //#endregion
-export { isErrorAttempt, isGenerateResult, isObject, isResultAttempt, isStreamContentPart, isString };
+export { getModelKey, isErrorAttempt, isGenerateResult, isObject, isResultAttempt, isStreamContentPart, isString };
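`getModelKey`, now emitted in the shared utils chunk, is what the per-model attempt counting described in the README keys on (`provider/modelId`). A minimal sketch of that counting; the attempt shape is simplified for illustration:

```typescript
// Simplified sketch: counts attempts per unique provider/modelId key,
// as described in the README's Max Attempts section.
interface ModelRef { provider: string; modelId: string; }

const getModelKey = (model: ModelRef): string => `${model.provider}/${model.modelId}`;

function countAttempts(attempts: Array<{ model: ModelRef }>, model: ModelRef): number {
  const key = getModelKey(model);
  return attempts.filter((attempt) => getModelKey(attempt.model) === key).length;
}

const gpt4: ModelRef = { provider: "openai", modelId: "gpt-4" };
const haiku: ModelRef = { provider: "anthropic", modelId: "claude-3-haiku-20240307" };
console.log(countAttempts([{ model: gpt4 }, { model: haiku }, { model: gpt4 }], gpt4)); // 2
```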
package/package.json CHANGED
@@ -1,8 +1,7 @@
1
1
  {
2
2
  "name": "ai-retry",
3
- "version": "0.2.0",
3
+ "version": "0.4.0",
4
4
  "description": "AI SDK Retry",
5
- "packageManager": "pnpm@9.0.0",
6
5
  "main": "./dist/index.js",
7
6
  "module": "./dist/index.js",
8
7
  "types": "./dist/index.d.ts",
@@ -18,14 +17,6 @@
18
17
  "publishConfig": {
19
18
  "access": "public"
20
19
  },
21
- "scripts": {
22
- "prepublishOnly": "pnpm build",
23
- "publish:alpha": "pnpm version prerelease --preid alpha && pnpm publish --tag alpha",
24
- "build": "tsdown",
25
- "test": "vitest",
26
- "lint": "biome check . --write",
27
- "prepare": "husky"
28
- },
29
20
  "keywords": [
30
21
  "ai",
31
22
  "ai-sdk",
@@ -64,5 +55,11 @@
64
55
  "dependencies": {
65
56
  "@ai-sdk/provider": "^2.0.0",
66
57
  "@ai-sdk/provider-utils": "^3.0.9"
58
+ },
59
+ "scripts": {
60
+ "publish:alpha": "pnpm version prerelease --preid alpha && pnpm publish --tag alpha",
61
+ "build": "tsdown",
62
+ "test": "vitest",
63
+ "lint": "biome check . --write"
67
64
  }
68
- }
65
+ }