@ai-sdk/togetherai 0.0.0-1c33ba03-20260114162300 → 0.0.0-4115c213-20260122152721

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,365 @@
+ ---
+ title: Together.ai
+ description: Learn how to use Together.ai's models with the AI SDK.
+ ---
+
+ # Together.ai Provider
+
+ The [Together.ai](https://together.ai) provider contains support for 200+ open-source models through the [Together.ai API](https://docs.together.ai/reference).
+
+ ## Setup
+
+ The Together.ai provider is available via the `@ai-sdk/togetherai` module. You can
+ install it with:
+
+ <Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
+ <Tab>
+ <Snippet text="pnpm add @ai-sdk/togetherai" dark />
+ </Tab>
+ <Tab>
+ <Snippet text="npm install @ai-sdk/togetherai" dark />
+ </Tab>
+ <Tab>
+ <Snippet text="yarn add @ai-sdk/togetherai" dark />
+ </Tab>
+ <Tab>
+ <Snippet text="bun add @ai-sdk/togetherai" dark />
+ </Tab>
+ </Tabs>
+
+ ## Provider Instance
+
+ You can import the default provider instance `togetherai` from `@ai-sdk/togetherai`:
+
+ ```ts
+ import { togetherai } from '@ai-sdk/togetherai';
+ ```
+
+ If you need a customized setup, you can import `createTogetherAI` from `@ai-sdk/togetherai`
+ and create a provider instance with your settings:
+
+ ```ts
+ import { createTogetherAI } from '@ai-sdk/togetherai';
+
+ const togetherai = createTogetherAI({
+   apiKey: process.env.TOGETHER_AI_API_KEY ?? '',
+ });
+ ```
+
+ You can use the following optional settings to customize the Together.ai provider instance:
+
+ - **baseURL** _string_
+
+   Use a different URL prefix for API calls, e.g. to use proxy servers.
+   The default prefix is `https://api.together.xyz/v1`.
+
+ - **apiKey** _string_
+
+   API key that is sent using the `Authorization` header. It defaults to
+   the `TOGETHER_AI_API_KEY` environment variable.
+
+ - **headers** _Record&lt;string,string&gt;_
+
+   Custom headers to include in the requests.
+
+ - **fetch** _(input: RequestInfo, init?: RequestInit) => Promise&lt;Response&gt;_
+
+   Custom [fetch](https://developer.mozilla.org/en-US/docs/Web/API/fetch) implementation.
+   Defaults to the global `fetch` function.
+   You can use it as a middleware to intercept requests,
+   or to provide a custom fetch implementation for e.g. testing.
+
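+ For example, a customized instance that routes requests through a proxy and
+ attaches a tracing header might look like this (a minimal sketch; the proxy
+ URL and header name are placeholders, not Together.ai requirements):
+
+ ```ts
+ import { createTogetherAI } from '@ai-sdk/togetherai';
+
+ const togetherai = createTogetherAI({
+   // hypothetical proxy that forwards to https://api.together.xyz/v1
+   baseURL: 'https://proxy.example.com/v1',
+   headers: { 'x-trace-id': 'docs-example' },
+   // log every request before delegating to the global fetch
+   fetch: async (input, init) => {
+     console.log('together.ai request:', input);
+     return fetch(input, init);
+   },
+ });
+ ```
+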
+ ## Language Models
+
+ You can create [Together.ai models](https://docs.together.ai/docs/serverless-models) using a provider instance. The first argument is the model id, e.g. `google/gemma-2-9b-it`.
+
+ ```ts
+ const model = togetherai('google/gemma-2-9b-it');
+ ```
+
+ ### Reasoning Models
+
+ Together.ai exposes the thinking of `deepseek-ai/DeepSeek-R1` in the generated text using the `<think>` tag.
+ You can use the `extractReasoningMiddleware` to extract this reasoning and expose it as a `reasoning` property on the result:
+
+ ```ts
+ import { togetherai } from '@ai-sdk/togetherai';
+ import { wrapLanguageModel, extractReasoningMiddleware } from 'ai';
+
+ const enhancedModel = wrapLanguageModel({
+   model: togetherai('deepseek-ai/DeepSeek-R1'),
+   middleware: extractReasoningMiddleware({ tagName: 'think' }),
+ });
+ ```
+
+ You can then use that enhanced model in functions like `generateText` and `streamText`.
+
+ ### Example
+
+ You can use Together.ai language models to generate text with the `generateText` function:
+
+ ```ts
+ import { togetherai } from '@ai-sdk/togetherai';
+ import { generateText } from 'ai';
+
+ const { text } = await generateText({
+   model: togetherai('meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo'),
+   prompt: 'Write a vegetarian lasagna recipe for 4 people.',
+ });
+ ```
+
+ Together.ai language models can also be used in the `streamText` function
+ (see [AI SDK Core](/docs/ai-sdk-core)).
+
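+ For example, you can stream a response token by token (a minimal sketch
+ following the `generateText` example above):
+
+ ```ts
+ import { togetherai } from '@ai-sdk/togetherai';
+ import { streamText } from 'ai';
+
+ const result = streamText({
+   model: togetherai('meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo'),
+   prompt: 'Write a vegetarian lasagna recipe for 4 people.',
+ });
+
+ for await (const textPart of result.textStream) {
+   process.stdout.write(textPart);
+ }
+ ```
+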
+ The Together.ai provider also supports [completion models](https://docs.together.ai/docs/serverless-models#language-models) via `togetherai.completion()` and [embedding models](https://docs.together.ai/docs/serverless-models#embedding-models) via `togetherai.embedding()`.
+
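+ For instance, a completion (non-chat) model can be used with the same
+ `generateText` call (a sketch; `mistralai/Mistral-7B-v0.1` is one of the
+ completion model ids supported by the provider):
+
+ ```ts
+ import { togetherai } from '@ai-sdk/togetherai';
+ import { generateText } from 'ai';
+
+ const { text } = await generateText({
+   model: togetherai.completion('mistralai/Mistral-7B-v0.1'),
+   prompt: 'Once upon a time,',
+ });
+ ```
+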
+ ## Model Capabilities
+
+ | Model                                          | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
+ | ---------------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
+ | `meta-llama/Llama-3.3-70B-Instruct-Turbo`      | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
+ | `meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo`  | <Cross size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `mistralai/Mixtral-8x22B-Instruct-v0.1`        | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `mistralai/Mistral-7B-Instruct-v0.3`           | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `deepseek-ai/DeepSeek-V3`                      | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
+ | `google/gemma-2b-it`                           | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
+ | `Qwen/Qwen2.5-72B-Instruct-Turbo`              | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
+ | `databricks/dbrx-instruct`                     | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
+
+ <Note>
+   The table above lists popular models. Please see the [Together.ai
+   docs](https://docs.together.ai/docs/serverless-models) for a full list of
+   available models. You can also pass any available provider model ID as a
+   string if needed.
+ </Note>
+
+ ## Image Models
+
+ You can create Together.ai image models using the `.image()` factory method.
+ For more on image generation with the AI SDK see [generateImage()](/docs/reference/ai-sdk-core/generate-image).
+
+ ```ts
+ import { togetherai } from '@ai-sdk/togetherai';
+ import { generateImage } from 'ai';
+
+ const { images } = await generateImage({
+   model: togetherai.image('black-forest-labs/FLUX.1-dev'),
+   prompt: 'A delighted resplendent quetzal mid flight amidst raindrops',
+ });
+ ```
+
+ You can pass optional provider-specific request parameters using the `providerOptions` argument.
+
+ ```ts
+ import { togetherai } from '@ai-sdk/togetherai';
+ import { generateImage } from 'ai';
+
+ const { images } = await generateImage({
+   model: togetherai.image('black-forest-labs/FLUX.1-dev'),
+   prompt: 'A delighted resplendent quetzal mid flight amidst raindrops',
+   size: '512x512',
+   // Optional additional provider-specific request parameters
+   providerOptions: {
+     togetherai: {
+       steps: 40,
+     },
+   },
+ });
+ ```
+
+ For a complete list of available provider-specific options, see the [Together.ai Image Generation API Reference](https://docs.together.ai/reference/post_images-generations).
+
+ ### Image Editing
+
+ Together.ai supports image editing through FLUX Kontext models. Pass input images via `prompt.images` to transform or edit existing images.
+
+ <Note>
+   Together.ai does not support mask-based inpainting. Instead, use descriptive
+   prompts to specify what you want to change in the image.
+ </Note>
+
+ #### Basic Image Editing
+
+ Transform an existing image using text prompts:
+
+ ```ts
+ import { readFileSync } from 'node:fs';
+ import { togetherai } from '@ai-sdk/togetherai';
+ import { generateImage } from 'ai';
+
+ const imageBuffer = readFileSync('./input-image.png');
+
+ const { images } = await generateImage({
+   model: togetherai.image('black-forest-labs/FLUX.1-kontext-pro'),
+   prompt: {
+     text: 'Turn the cat into a golden retriever dog',
+     images: [imageBuffer],
+   },
+   size: '1024x1024',
+   providerOptions: {
+     togetherai: {
+       steps: 28,
+     },
+   },
+ });
+ ```
+
+ #### Editing with URL Reference
+
+ You can also pass image URLs directly:
+
+ ```ts
+ const { images } = await generateImage({
+   model: togetherai.image('black-forest-labs/FLUX.1-kontext-pro'),
+   prompt: {
+     text: 'Make the background a lush rainforest',
+     images: ['https://example.com/photo.png'],
+   },
+   size: '1024x1024',
+   providerOptions: {
+     togetherai: {
+       steps: 28,
+     },
+   },
+ });
+ ```
+
+ <Note>
+   Input images can be provided as `Buffer`, `ArrayBuffer`, `Uint8Array`,
+   base64-encoded strings, or URLs. Together.ai only supports a single input
+   image per request.
+ </Note>
+
+ #### Supported Image Editing Models
+
+ | Model                                  | Description                        |
+ | -------------------------------------- | ---------------------------------- |
+ | `black-forest-labs/FLUX.1-kontext-pro` | Production quality, balanced speed |
+ | `black-forest-labs/FLUX.1-kontext-max` | Maximum image fidelity             |
+ | `black-forest-labs/FLUX.1-kontext-dev` | Development and experimentation    |
+
+ ### Model Capabilities
+
+ Supported image dimensions vary by model. Common sizes include 512x512, 768x768, and 1024x1024, with some models supporting up to 1792x1792. The default size is 1024x1024.
+
+ | Available Models                           |
+ | ------------------------------------------ |
+ | `stabilityai/stable-diffusion-xl-base-1.0` |
+ | `black-forest-labs/FLUX.1-dev`             |
+ | `black-forest-labs/FLUX.1-dev-lora`        |
+ | `black-forest-labs/FLUX.1-schnell`         |
+ | `black-forest-labs/FLUX.1-canny`           |
+ | `black-forest-labs/FLUX.1-depth`           |
+ | `black-forest-labs/FLUX.1-redux`           |
+ | `black-forest-labs/FLUX.1.1-pro`           |
+ | `black-forest-labs/FLUX.1-pro`             |
+ | `black-forest-labs/FLUX.1-schnell-Free`    |
+
+ <Note>
+   Please see the [Together.ai models
+   page](https://docs.together.ai/docs/serverless-models#image-models) for a full
+   list of available image models and their capabilities.
+ </Note>
+
+ ## Embedding Models
+
+ You can create Together.ai embedding models using the `.embedding()` factory method.
+ For more on embedding models with the AI SDK see [embed()](/docs/reference/ai-sdk-core/embed).
+
+ ```ts
+ import { togetherai } from '@ai-sdk/togetherai';
+ import { embed } from 'ai';
+
+ const { embedding } = await embed({
+   model: togetherai.embedding('togethercomputer/m2-bert-80M-2k-retrieval'),
+   value: 'sunny day at the beach',
+ });
+ ```
+
+ ### Model Capabilities
+
+ | Model                                            | Dimensions | Max Tokens |
+ | ------------------------------------------------ | ---------- | ---------- |
+ | `togethercomputer/m2-bert-80M-2k-retrieval`      | 768        | 2048       |
+ | `togethercomputer/m2-bert-80M-8k-retrieval`      | 768        | 8192       |
+ | `togethercomputer/m2-bert-80M-32k-retrieval`     | 768        | 32768      |
+ | `WhereIsAI/UAE-Large-V1`                         | 1024       | 512        |
+ | `BAAI/bge-large-en-v1.5`                         | 1024       | 512        |
+ | `BAAI/bge-base-en-v1.5`                          | 768        | 512        |
+ | `sentence-transformers/msmarco-bert-base-dot-v5` | 768        | 512        |
+ | `bert-base-uncased`                              | 768        | 512        |
+
+ <Note>
+   For a complete list of available embedding models, see the [Together.ai models
+   page](https://docs.together.ai/docs/serverless-models#embedding-models).
+ </Note>
+
+ ## Reranking Models
+
+ You can create Together.ai reranking models using the `.reranking()` factory method.
+ For more on reranking with the AI SDK see [rerank()](/docs/reference/ai-sdk-core/rerank).
+
+ ```ts
+ import { togetherai } from '@ai-sdk/togetherai';
+ import { rerank } from 'ai';
+
+ const documents = [
+   'sunny day at the beach',
+   'rainy afternoon in the city',
+   'snowy night in the mountains',
+ ];
+
+ const { ranking } = await rerank({
+   model: togetherai.reranking('Salesforce/Llama-Rank-v1'),
+   documents,
+   query: 'talk about rain',
+   topN: 2,
+ });
+
+ console.log(ranking);
+ // [
+ //   { originalIndex: 1, score: 0.9, document: 'rainy afternoon in the city' },
+ //   { originalIndex: 0, score: 0.3, document: 'sunny day at the beach' }
+ // ]
+ ```
+
+ Together.ai reranking models support additional provider options for object documents. You can specify which fields to use for ranking:
+
+ ```ts
+ import { togetherai } from '@ai-sdk/togetherai';
+ import { rerank } from 'ai';
+
+ const documents = [
+   {
+     from: 'Paul Doe',
+     subject: 'Follow-up',
+     text: 'We are happy to give you a discount of 20%.',
+   },
+   {
+     from: 'John McGill',
+     subject: 'Missing Info',
+     text: 'Here is the pricing from Oracle: $5000/month',
+   },
+ ];
+
+ const { ranking } = await rerank({
+   model: togetherai.reranking('Salesforce/Llama-Rank-v1'),
+   documents,
+   query: 'Which pricing did we get from Oracle?',
+   providerOptions: {
+     togetherai: {
+       rankFields: ['from', 'subject', 'text'], // Specify which fields to rank by
+     },
+   },
+ });
+ ```
+
+ The following provider options are available:
+
+ - **rankFields** _string[]_
+
+   Array of field names to use for ranking when documents are JSON objects. If not specified, all fields are used.
+
+ ### Model Capabilities
+
+ | Model                                 |
+ | ------------------------------------- |
+ | `Salesforce/Llama-Rank-v1`            |
+ | `mixedbread-ai/Mxbai-Rerank-Large-V2` |
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@ai-sdk/togetherai",
- "version": "0.0.0-1c33ba03-20260114162300",
+ "version": "0.0.0-4115c213-20260122152721",
  "license": "Apache-2.0",
  "sideEffects": false,
  "main": "./dist/index.js",
@@ -8,9 +8,18 @@
  "types": "./dist/index.d.ts",
  "files": [
  "dist/**/*",
+ "docs/**/*",
+ "src",
+ "!src/**/*.test.ts",
+ "!src/**/*.test-d.ts",
+ "!src/**/__snapshots__",
+ "!src/**/__fixtures__",
  "CHANGELOG.md",
  "README.md"
  ],
+ "directories": {
+ "doc": "./docs"
+ },
  "exports": {
  "./package.json": "./package.json",
  ".": {
@@ -20,17 +29,17 @@
  }
  },
  "dependencies": {
- "@ai-sdk/openai-compatible": "0.0.0-1c33ba03-20260114162300",
- "@ai-sdk/provider": "3.0.3",
- "@ai-sdk/provider-utils": "0.0.0-1c33ba03-20260114162300"
+ "@ai-sdk/openai-compatible": "0.0.0-4115c213-20260122152721",
+ "@ai-sdk/provider": "0.0.0-4115c213-20260122152721",
+ "@ai-sdk/provider-utils": "0.0.0-4115c213-20260122152721"
  },
  "devDependencies": {
  "@types/node": "20.17.24",
  "tsup": "^8",
  "typescript": "5.8.3",
  "zod": "3.25.76",
- "@vercel/ai-tsconfig": "0.0.0",
- "@ai-sdk/test-server": "1.0.1"
+ "@ai-sdk/test-server": "0.0.0-4115c213-20260122152721",
+ "@vercel/ai-tsconfig": "0.0.0"
  },
  "peerDependencies": {
  "zod": "^3.25.76 || ^4.1.8"
@@ -55,7 +64,7 @@
  "scripts": {
  "build": "pnpm clean && tsup --tsconfig tsconfig.build.json",
  "build:watch": "pnpm clean && tsup --watch",
- "clean": "del-cli dist *.tsbuildinfo",
+ "clean": "del-cli dist docs *.tsbuildinfo",
  "lint": "eslint \"./**/*.ts*\"",
  "type-check": "tsc --build",
  "prettier-check": "prettier --check \"./**/*.ts*\"",
package/src/index.ts ADDED
@@ -0,0 +1,9 @@
+ export type { OpenAICompatibleErrorData as TogetherAIErrorData } from '@ai-sdk/openai-compatible';
+ export type { TogetherAIRerankingOptions } from './reranking/togetherai-reranking-options';
+ export { createTogetherAI, togetherai } from './togetherai-provider';
+ export type {
+   TogetherAIProvider,
+   TogetherAIProviderSettings,
+ } from './togetherai-provider';
+ export type { TogetherAIImageProviderOptions } from './togetherai-image-model';
+ export { VERSION } from './version';
@@ -0,0 +1,43 @@
+ import { JSONObject } from '@ai-sdk/provider';
+ import { lazySchema, zodSchema } from '@ai-sdk/provider-utils';
+ import { z } from 'zod/v4';
+
+ // https://docs.together.ai/reference/rerank-1
+ export type TogetherAIRerankingInput = {
+   model: string;
+   query: string;
+   documents: JSONObject[] | string[];
+   top_n: number | undefined;
+   return_documents: boolean | undefined;
+   rank_fields: string[] | undefined;
+ };
+
+ export const togetheraiErrorSchema = lazySchema(() =>
+   zodSchema(
+     z.object({
+       error: z.object({
+         message: z.string(),
+       }),
+     }),
+   ),
+ );
+
+ export const togetheraiRerankingResponseSchema = lazySchema(() =>
+   zodSchema(
+     z.object({
+       id: z.string().nullish(),
+       model: z.string().nullish(),
+       results: z.array(
+         z.object({
+           index: z.number(),
+           relevance_score: z.number(),
+         }),
+       ),
+       usage: z.object({
+         prompt_tokens: z.number(),
+         completion_tokens: z.number(),
+         total_tokens: z.number(),
+       }),
+     }),
+   ),
+ );
@@ -0,0 +1,101 @@
+ import { RerankingModelV3 } from '@ai-sdk/provider';
+ import {
+   combineHeaders,
+   createJsonErrorResponseHandler,
+   createJsonResponseHandler,
+   FetchFunction,
+   parseProviderOptions,
+   postJsonToApi,
+ } from '@ai-sdk/provider-utils';
+ import {
+   togetheraiErrorSchema,
+   TogetherAIRerankingInput,
+   togetheraiRerankingResponseSchema,
+ } from './togetherai-reranking-api';
+ import {
+   TogetherAIRerankingModelId,
+   togetheraiRerankingOptionsSchema,
+ } from './togetherai-reranking-options';
+
+ type TogetherAIRerankingConfig = {
+   provider: string;
+   baseURL: string;
+   headers: () => Record<string, string | undefined>;
+   fetch?: FetchFunction;
+ };
+
+ export class TogetherAIRerankingModel implements RerankingModelV3 {
+   readonly specificationVersion = 'v3';
+   readonly modelId: TogetherAIRerankingModelId;
+
+   private readonly config: TogetherAIRerankingConfig;
+
+   constructor(
+     modelId: TogetherAIRerankingModelId,
+     config: TogetherAIRerankingConfig,
+   ) {
+     this.modelId = modelId;
+     this.config = config;
+   }
+
+   get provider(): string {
+     return this.config.provider;
+   }
+
+   // see https://docs.together.ai/reference/rerank-1
+   async doRerank({
+     documents,
+     headers,
+     query,
+     topN,
+     abortSignal,
+     providerOptions,
+   }: Parameters<RerankingModelV3['doRerank']>[0]): Promise<
+     Awaited<ReturnType<RerankingModelV3['doRerank']>>
+   > {
+     const rerankingOptions = await parseProviderOptions({
+       provider: 'togetherai',
+       providerOptions,
+       schema: togetheraiRerankingOptionsSchema,
+     });
+
+     const {
+       responseHeaders,
+       value: response,
+       rawValue,
+     } = await postJsonToApi({
+       url: `${this.config.baseURL}/rerank`,
+       headers: combineHeaders(this.config.headers(), headers),
+       body: {
+         model: this.modelId,
+         documents: documents.values,
+         query,
+         top_n: topN,
+         rank_fields: rerankingOptions?.rankFields,
+         return_documents: false, // reduce response size
+       } satisfies TogetherAIRerankingInput,
+       failedResponseHandler: createJsonErrorResponseHandler({
+         errorSchema: togetheraiErrorSchema,
+         errorToMessage: data => data.error.message,
+       }),
+       successfulResponseHandler: createJsonResponseHandler(
+         togetheraiRerankingResponseSchema,
+       ),
+       abortSignal,
+       fetch: this.config.fetch,
+     });
+
+     return {
+       ranking: response.results.map(result => ({
+         index: result.index,
+         relevanceScore: result.relevance_score,
+       })),
+       response: {
+         id: response.id ?? undefined,
+         modelId: response.model ?? undefined,
+         headers: responseHeaders,
+         body: rawValue,
+       },
+     };
+   }
+ }
@@ -0,0 +1,27 @@
+ import { FlexibleSchema, lazySchema, zodSchema } from '@ai-sdk/provider-utils';
+ import { z } from 'zod/v4';
+
+ // see https://docs.together.ai/docs/serverless-models#rerank-models
+ export type TogetherAIRerankingModelId =
+   | 'Salesforce/Llama-Rank-v1'
+   | 'mixedbread-ai/Mxbai-Rerank-Large-V2'
+   | (string & {});
+
+ export type TogetherAIRerankingOptions = {
+   /**
+    * List of keys in the JSON Object document to rank by.
+    * Defaults to use all supplied keys for ranking.
+    *
+    * @example ["title", "text"]
+    */
+   rankFields?: string[];
+ };
+
+ export const togetheraiRerankingOptionsSchema: FlexibleSchema<TogetherAIRerankingOptions> =
+   lazySchema(() =>
+     zodSchema(
+       z.object({
+         rankFields: z.array(z.string()).optional(),
+       }),
+     ),
+   );
@@ -0,0 +1,36 @@
+ // https://docs.together.ai/docs/serverless-models#chat-models
+ export type TogetherAIChatModelId =
+   | 'meta-llama/Llama-3.3-70B-Instruct-Turbo'
+   | 'meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo'
+   | 'meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo'
+   | 'meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo'
+   | 'meta-llama/Meta-Llama-3-8B-Instruct-Turbo'
+   | 'meta-llama/Meta-Llama-3-70B-Instruct-Turbo'
+   | 'meta-llama/Llama-3.2-3B-Instruct-Turbo'
+   | 'meta-llama/Meta-Llama-3-8B-Instruct-Lite'
+   | 'meta-llama/Meta-Llama-3-70B-Instruct-Lite'
+   | 'meta-llama/Llama-3-8b-chat-hf'
+   | 'meta-llama/Llama-3-70b-chat-hf'
+   | 'nvidia/Llama-3.1-Nemotron-70B-Instruct-HF'
+   | 'Qwen/Qwen2.5-Coder-32B-Instruct'
+   | 'Qwen/QwQ-32B-Preview'
+   | 'microsoft/WizardLM-2-8x22B'
+   | 'google/gemma-2-27b-it'
+   | 'google/gemma-2-9b-it'
+   | 'databricks/dbrx-instruct'
+   | 'deepseek-ai/deepseek-llm-67b-chat'
+   | 'deepseek-ai/DeepSeek-V3'
+   | 'google/gemma-2b-it'
+   | 'Gryphe/MythoMax-L2-13b'
+   | 'meta-llama/Llama-2-13b-chat-hf'
+   | 'mistralai/Mistral-7B-Instruct-v0.1'
+   | 'mistralai/Mistral-7B-Instruct-v0.2'
+   | 'mistralai/Mistral-7B-Instruct-v0.3'
+   | 'mistralai/Mixtral-8x7B-Instruct-v0.1'
+   | 'mistralai/Mixtral-8x22B-Instruct-v0.1'
+   | 'NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO'
+   | 'Qwen/Qwen2.5-7B-Instruct-Turbo'
+   | 'Qwen/Qwen2.5-72B-Instruct-Turbo'
+   | 'Qwen/Qwen2-72B-Instruct'
+   | 'upstage/SOLAR-10.7B-Instruct-v1.0'
+   | (string & {});
@@ -0,0 +1,9 @@
+ // https://docs.together.ai/docs/serverless-models#language-models
+ export type TogetherAICompletionModelId =
+   | 'meta-llama/Llama-2-70b-hf'
+   | 'mistralai/Mistral-7B-v0.1'
+   | 'mistralai/Mixtral-8x7B-v0.1'
+   | 'Meta-Llama/Llama-Guard-7b'
+   | 'codellama/CodeLlama-34b-Instruct-hf'
+   | 'Qwen/Qwen2.5-Coder-32B-Instruct'
+   | (string & {});
@@ -0,0 +1,11 @@
+ // https://docs.together.ai/docs/serverless-models#embedding-models
+ export type TogetherAIEmbeddingModelId =
+   | 'togethercomputer/m2-bert-80M-2k-retrieval'
+   | 'togethercomputer/m2-bert-80M-32k-retrieval'
+   | 'togethercomputer/m2-bert-80M-8k-retrieval'
+   | 'WhereIsAI/UAE-Large-V1'
+   | 'BAAI/bge-large-en-v1.5'
+   | 'BAAI/bge-base-en-v1.5'
+   | 'sentence-transformers/msmarco-bert-base-dot-v5'
+   | 'bert-base-uncased'
+   | (string & {});