@ai-sdk/openai 3.0.19 → 3.0.21

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1331,42 +1331,42 @@ var openaiCompletionProviderOptions = lazySchema4(
   () => zodSchema4(
   z5.object({
   /**
- Echo back the prompt in addition to the completion.
- */
+ * Echo back the prompt in addition to the completion.
+ */
   echo: z5.boolean().optional(),
   /**
- Modify the likelihood of specified tokens appearing in the completion.
-
- Accepts a JSON object that maps tokens (specified by their token ID in
- the GPT tokenizer) to an associated bias value from -100 to 100. You
- can use this tokenizer tool to convert text to token IDs. Mathematically,
- the bias is added to the logits generated by the model prior to sampling.
- The exact effect will vary per model, but values between -1 and 1 should
- decrease or increase likelihood of selection; values like -100 or 100
- should result in a ban or exclusive selection of the relevant token.
-
- As an example, you can pass {"50256": -100} to prevent the <|endoftext|>
- token from being generated.
+ * Modify the likelihood of specified tokens appearing in the completion.
+ *
+ * Accepts a JSON object that maps tokens (specified by their token ID in
+ * the GPT tokenizer) to an associated bias value from -100 to 100. You
+ * can use this tokenizer tool to convert text to token IDs. Mathematically,
+ * the bias is added to the logits generated by the model prior to sampling.
+ * The exact effect will vary per model, but values between -1 and 1 should
+ * decrease or increase likelihood of selection; values like -100 or 100
+ * should result in a ban or exclusive selection of the relevant token.
+ *
+ * As an example, you can pass {"50256": -100} to prevent the <|endoftext|>
+ * token from being generated.
   */
   logitBias: z5.record(z5.string(), z5.number()).optional(),
   /**
- The suffix that comes after a completion of inserted text.
+ * The suffix that comes after a completion of inserted text.
   */
   suffix: z5.string().optional(),
   /**
- A unique identifier representing your end-user, which can help OpenAI to
- monitor and detect abuse. Learn more.
+ * A unique identifier representing your end-user, which can help OpenAI to
+ * monitor and detect abuse. Learn more.
   */
   user: z5.string().optional(),
   /**
- Return the log probabilities of the tokens. Including logprobs will increase
- the response size and can slow down response times. However, it can
- be useful to better understand how the model is behaving.
- Setting to true will return the log probabilities of the tokens that
- were generated.
- Setting to a number will return the log probabilities of the top n
- tokens that were generated.
- */
+ * Return the log probabilities of the tokens. Including logprobs will increase
+ * the response size and can slow down response times. However, it can
+ * be useful to better understand how the model is behaving.
+ * Setting to true will return the log probabilities of the tokens that
+ * were generated.
+ * Setting to a number will return the log probabilities of the top n
+ * tokens that were generated.
+ */
   logprobs: z5.union([z5.boolean(), z5.number()]).optional()
   })
   )
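The schema in the hunk above only changes JSDoc comment formatting; the accepted option shape (echo, logitBias, suffix, user, logprobs) is unchanged. As a rough sketch of that shape in plain TypeScript, with a hand-rolled check standing in for the package's actual z5 zod validator (the interface and function names here are illustrative, not part of the library):

```typescript
// Illustrative mirror of openaiCompletionProviderOptions; the real
// package validates with zod (z5.object above), not this function.
interface OpenAICompletionProviderOptions {
  echo?: boolean;
  logitBias?: Record<string, number>; // token ID -> bias in [-100, 100]
  suffix?: string;
  user?: string;
  logprobs?: boolean | number;
}

function isValidOptions(value: unknown): value is OpenAICompletionProviderOptions {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  if (v.echo !== undefined && typeof v.echo !== "boolean") return false;
  if (v.logitBias !== undefined) {
    if (typeof v.logitBias !== "object" || v.logitBias === null) return false;
    for (const bias of Object.values(v.logitBias)) {
      if (typeof bias !== "number") return false;
    }
  }
  if (v.suffix !== undefined && typeof v.suffix !== "string") return false;
  if (v.user !== undefined && typeof v.user !== "string") return false;
  if (
    v.logprobs !== undefined &&
    typeof v.logprobs !== "boolean" &&
    typeof v.logprobs !== "number"
  ) {
    return false;
  }
  return true;
}

// The example from the doc comment: ban the <|endoftext|> token (ID 50256)
// and request log probabilities for the top 5 tokens.
const options = { logitBias: { "50256": -100 }, logprobs: 5 };
console.log(isValidOptions(options)); // true
console.log(isValidOptions({ echo: "yes" })); // false: echo must be a boolean
```

All five fields are optional, matching the `.optional()` calls in the schema; `logprobs` is a union because the API accepts either a boolean toggle or a top-n count.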
@@ -1619,14 +1619,14 @@ var openaiEmbeddingProviderOptions = lazySchema5(
   () => zodSchema5(
   z6.object({
   /**
- The number of dimensions the resulting output embeddings should have.
- Only supported in text-embedding-3 and later models.
- */
+ * The number of dimensions the resulting output embeddings should have.
+ * Only supported in text-embedding-3 and later models.
+ */
   dimensions: z6.number().optional(),
   /**
- A unique identifier representing your end-user, which can help OpenAI to
- monitor and detect abuse. Learn more.
- */
+ * A unique identifier representing your end-user, which can help OpenAI to
+ * monitor and detect abuse. Learn more.
+ */
   user: z6.string().optional()
   })
   )