@mastra/mcp-docs-server 1.1.26-alpha.6 → 1.1.26
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/docs/agents/structured-output.md +22 -0
- package/.docs/docs/agents/supervisor-agents.md +18 -0
- package/.docs/docs/editor/overview.md +69 -0
- package/.docs/docs/memory/observational-memory.md +4 -0
- package/.docs/docs/memory/storage.md +1 -0
- package/.docs/docs/observability/tracing/exporters/langfuse.md +31 -0
- package/.docs/guides/deployment/netlify.md +16 -1
- package/.docs/guides/getting-started/next-js.md +0 -4
- package/.docs/guides/migrations/mastra-cloud.md +128 -3
- package/.docs/models/gateways/netlify.md +2 -2
- package/.docs/models/gateways/openrouter.md +3 -1
- package/.docs/models/gateways/vercel.md +4 -1
- package/.docs/models/index.md +36 -1
- package/.docs/models/providers/302ai.md +32 -1
- package/.docs/models/providers/alibaba-cn.md +2 -1
- package/.docs/models/providers/anthropic.md +2 -1
- package/.docs/models/providers/berget.md +9 -12
- package/.docs/models/providers/cloudflare-workers-ai.md +2 -1
- package/.docs/models/providers/cortecs.md +4 -1
- package/.docs/models/providers/deepinfra.md +4 -1
- package/.docs/models/providers/digitalocean.md +116 -0
- package/.docs/models/providers/fireworks-ai.md +2 -1
- package/.docs/models/providers/firmware.md +2 -3
- package/.docs/models/providers/helicone.md +1 -2
- package/.docs/models/providers/huggingface.md +2 -1
- package/.docs/models/providers/kilo.md +2 -1
- package/.docs/models/providers/kimi-for-coding.md +2 -1
- package/.docs/models/providers/llmgateway.md +59 -77
- package/.docs/models/providers/moonshotai-cn.md +3 -2
- package/.docs/models/providers/moonshotai.md +3 -2
- package/.docs/models/providers/nano-gpt.md +8 -1
- package/.docs/models/providers/nvidia.md +2 -1
- package/.docs/models/providers/ollama-cloud.md +2 -1
- package/.docs/models/providers/openai.md +1 -2
- package/.docs/models/providers/opencode-go.md +2 -1
- package/.docs/models/providers/opencode.md +5 -1
- package/.docs/models/providers/ovhcloud.md +4 -7
- package/.docs/models/providers/poe.md +2 -1
- package/.docs/models/providers/tencent-token-plan.md +71 -0
- package/.docs/models/providers/tencent-tokenhub.md +71 -0
- package/.docs/models/providers/wafer.ai.md +72 -0
- package/.docs/models/providers/zenmux.md +3 -1
- package/.docs/models/providers.md +4 -0
- package/.docs/reference/agents/generate.md +8 -0
- package/.docs/reference/client-js/workflows.md +12 -0
- package/.docs/reference/core/mastra-class.md +9 -1
- package/.docs/reference/deployer/cloudflare.md +14 -1
- package/.docs/reference/deployer/netlify.md +50 -2
- package/.docs/reference/harness/harness-class.md +72 -49
- package/.docs/reference/index.md +3 -0
- package/.docs/reference/observability/tracing/exporters/langfuse.md +2 -0
- package/.docs/reference/processors/prefill-error-handler.md +5 -5
- package/.docs/reference/storage/cloudflare-d1.md +42 -42
- package/.docs/reference/storage/redis.md +266 -0
- package/.docs/reference/streaming/agents/stream.md +8 -0
- package/.docs/reference/streaming/workflows/resumeStream.md +2 -0
- package/.docs/reference/tools/tavily.md +307 -0
- package/.docs/reference/workflows/run-methods/resume.md +24 -0
- package/.docs/reference/workflows/workflow-methods/foreach.md +14 -1
- package/.docs/reference/workspace/docker-sandbox.md +196 -0
- package/CHANGELOG.md +78 -0
- package/package.json +10 -10
package/.docs/docs/agents/structured-output.md
CHANGED
@@ -210,6 +210,28 @@ const response = await testAgent.generate('Tell me about TypeScript.', {
 })
 ```
 
+If you want that structuring model to also see the current conversation history, set `useAgent: true` alongside `model`. Mastra will reuse the parent agent with the separate structuring model and attach read-only memory context when a thread is available.
+
+```typescript
+const response = await testAgent.generate('Return my profile as structured data.', {
+  memory: {
+    thread: 'thread-123',
+    resource: 'user-123',
+  },
+  structuredOutput: {
+    schema: z.object({
+      favoriteColor: z.string(),
+      hometown: z.string(),
+      petName: z.string(),
+    }),
+    model: 'openai/gpt-5.4',
+    useAgent: true,
+  },
+})
+```
+
+Leave `useAgent` unset if you want the separate structuring model to work only from the current response and not inherit prior conversation memory.
+
 ### Multi-step approach with `prepareStep`
 
 For models that don't support tools and structured outputs together, you can use `prepareStep` to handle them in separate steps.
package/.docs/docs/agents/supervisor-agents.md
CHANGED
@@ -300,8 +300,26 @@ Success criteria:
 })
 ```
 
+## Sub-agent versioning
+
+When using the [editor](https://mastra.ai/docs/editor/overview), you can control which stored version of each sub-agent the supervisor uses at runtime. Set version overrides on the Mastra instance or per invocation:
+
+```typescript
+const result = await supervisor.generate('Research and write about AI safety', {
+  versions: {
+    agents: {
+      'research-agent': { status: 'published' },
+      'writing-agent': { versionId: 'draft-456' },
+    },
+  },
+})
+```
+
+Version overrides propagate automatically through delegation. See [Sub-agent versioning](https://mastra.ai/docs/editor/overview) for details on resolution order and server API usage.
+
 ## Related
 
+- [Sub-agent versioning](https://mastra.ai/docs/editor/overview)
 - [Guide: Research coordinator](https://mastra.ai/guides/guide/research-coordinator)
 - [Agent.stream() reference](https://mastra.ai/reference/streaming/agents/stream)
 - [Agent.generate() reference](https://mastra.ai/reference/agents/generate)
package/.docs/docs/editor/overview.md
CHANGED
@@ -214,6 +214,75 @@ curl http://localhost:4111/agents/support-agent?versionId=abc-123
 
 See the [Client SDK agents reference](https://mastra.ai/reference/client-js/agents) for API methods.
 
+### Sub-agent versioning
+
+When a [supervisor agent](https://mastra.ai/docs/agents/supervisor-agents) delegates to sub-agents, version overrides determine which stored version of each sub-agent to use instead of the code-defined default. This lets you iterate on sub-agent prompts and tools through the editor without redeploying the supervisor.
+
+Set version overrides at any of three levels:
+
+1. **Mastra instance config** — global defaults that apply to every `generate()` and `stream()` call.
+2. **Per-invocation options** — overrides passed directly to `generate()` or `stream()`.
+3. **Server request body** — overrides sent in the `versions` field of an API request.
+
+Resolution order: **per-invocation > request body > Mastra instance defaults > code-defined agent**.
+
+#### Mastra instance config
+
+Set global defaults when creating the `Mastra` instance. Every supervisor call inherits these overrides:
+
+```typescript
+import { Mastra } from '@mastra/core'
+import { MastraEditor } from '@mastra/editor'
+
+export const mastra = new Mastra({
+  agents: { supervisor, researchAgent, writerAgent },
+  editor: new MastraEditor(),
+  versions: {
+    agents: {
+      'research-agent': { status: 'published' },
+      'writer-agent': { versionId: 'abc-123' },
+    },
+  },
+})
+```
+
+#### Per-invocation overrides
+
+Override versions for a single call to `generate()` or `stream()`. These take priority over Mastra instance defaults:
+
+```typescript
+const result = await supervisor.generate('Research and write an article about AI safety', {
+  versions: {
+    agents: {
+      'research-agent': { versionId: 'draft-456' },
+    },
+  },
+})
+```
+
+#### Server request body
+
+When calling agents through the Mastra server, pass version overrides in the request body:
+
+```bash
+curl -X POST http://localhost:4111/agents/supervisor/generate \
+  -H "Content-Type: application/json" \
+  -d '{
+    "messages": [{ "role": "user", "content": "Research AI safety" }],
+    "versions": {
+      "agents": {
+        "research-agent": { "versionId": "draft-456" }
+      }
+    }
+  }'
+```
+
+#### How propagation works
+
+Version overrides propagate automatically through sub-agent delegation via `requestContext`. When a supervisor delegates to a sub-agent, the framework checks whether a version override exists for that sub-agent's ID. If one is found, it resolves the stored version from the editor and uses it instead of the code-defined default.
+
+If version resolution fails (for example, when the editor is not configured or the version ID doesn't exist), the framework logs a warning and falls back to the code-defined agent.
+
 ## Next steps
 
 - Set up [prompts](https://mastra.ai/docs/editor/prompts) to build reusable instruction templates.
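The resolution order above is straightforward to express directly. Below is a minimal, hypothetical sketch of that precedence; the type and function names are illustrative, not part of the Mastra API:

```typescript
// Illustrative sketch of the documented resolution order; these names are
// hypothetical and not part of the Mastra API.
type VersionOverride = { status: 'published' } | { versionId: string }

interface VersionSources {
  perInvocation?: Record<string, VersionOverride> // generate()/stream() options
  requestBody?: Record<string, VersionOverride> // server "versions" field
  instanceDefaults?: Record<string, VersionOverride> // Mastra instance config
}

// Per-invocation > request body > instance defaults > code-defined agent
// (represented here as `undefined`, i.e. no override found).
function resolveVersionOverride(
  agentId: string,
  sources: VersionSources,
): VersionOverride | undefined {
  return (
    sources.perInvocation?.[agentId] ??
    sources.requestBody?.[agentId] ??
    sources.instanceDefaults?.[agentId]
  )
}

// Example: the request body wins over instance defaults for this agent.
resolveVersionOverride('research-agent', {
  requestBody: { 'research-agent': { versionId: 'draft-456' } },
  instanceDefaults: { 'research-agent': { status: 'published' } },
}) // -> { versionId: 'draft-456' }
```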
package/.docs/docs/memory/observational-memory.md
CHANGED
@@ -339,6 +339,7 @@ Reflection works similarly — the Reflector runs in the background when observa
 | `observation.bufferActivation` | `0.8` | How aggressively to clear the message window on activation. `0.8` means remove enough messages to keep only 20% of `messageTokens` remaining. Lower values keep more message history. |
 | `observation.blockAfter` | `1.2` | Safety threshold as a multiplier of `messageTokens`. At `1.2`, synchronous observation is forced at 36k tokens (1.2 × 30k). Only matters if buffering can't keep up. |
 | `activateAfterIdle` | none | Forces buffered observations and buffered reflections to activate after a period of inactivity, even if their token thresholds have not been reached yet. Accepts milliseconds or duration strings like `300_000`, `"5m"`, or `"1hr"`. Set this to your prompt cache TTL if you want activation to happen before the next cold prompt. |
+| `activateOnProviderChange` | `false` | Forces buffered observations and reflections to activate when the next step uses a different `provider/model` than the one that produced the latest assistant step. Use this when switching providers or models would invalidate prompt cache reuse. |
 | `reflection.bufferActivation` | `0.5` | When to start background reflection. `0.5` means reflection begins when observations reach 50% of the `observationTokens` threshold. |
 | `reflection.blockAfter` | `1.2` | Safety threshold for reflection, same logic as observation. |
@@ -350,6 +351,7 @@ const memory = new Memory({
     observationalMemory: {
       model: 'google/gemini-2.5-flash',
       activateAfterIdle: '5m',
+      activateOnProviderChange: true,
     },
   },
 })
@@ -357,6 +359,8 @@ const memory = new Memory({
 
 With a 5-minute prompt cache TTL, this activates buffered context after 5 minutes of inactivity so the next uncached prompt uses observations and reflections instead of a larger raw message window. If you prefer, `300_000` works the same way.
 
+Changing models or providers mid-thread invalidates the prompt cache. If your agent can switch between providers or models mid-thread, `activateOnProviderChange: true` forces buffered context to activate before the new provider runs. That avoids sending a large raw window to a provider that cannot reuse the previous prompt cache.
+
 ### Disabling
 
 To disable async buffering and use synchronous observation/reflection instead:
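The check that `activateOnProviderChange` implies is a simple ID comparison. A minimal sketch, assuming model IDs use the `provider/model` router format; the helper name is hypothetical, not a Mastra export:

```typescript
// Hypothetical helper: activation fires when the upcoming step's
// `provider/model` ID differs from the one that produced the latest
// assistant step, since a new provider cannot reuse the old prompt cache.
function shouldActivateOnProviderChange(
  lastAssistantModelId: string | undefined, // e.g. 'anthropic/claude-sonnet-4-5'
  nextModelId: string, // e.g. 'openai/gpt-5-mini'
): boolean {
  // First step of a thread: nothing buffered to invalidate yet.
  if (!lastAssistantModelId) return false
  return lastAssistantModelId !== nextModelId
}
```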
package/.docs/docs/memory/storage.md
CHANGED
@@ -34,6 +34,7 @@ Each provider page includes installation instructions, configuration parameters,
 - [PostgreSQL](https://mastra.ai/reference/storage/postgresql)
 - [MongoDB](https://mastra.ai/reference/storage/mongodb)
 - [Upstash](https://mastra.ai/reference/storage/upstash)
+- [Redis](https://mastra.ai/reference/storage/redis)
 - [Cloudflare D1](https://mastra.ai/reference/storage/cloudflare-d1)
 - [Cloudflare KV & Durable Objects](https://mastra.ai/reference/storage/cloudflare)
 - [Convex](https://mastra.ai/reference/storage/convex)
package/.docs/docs/observability/tracing/exporters/langfuse.md
CHANGED
@@ -122,6 +122,35 @@ new LangfuseExporter({
 })
 ```
 
+#### Batch Tuning for High-Volume Traces
+
+For self-hosted Langfuse deployments or streamed runs that produce many spans per second, you can tune the OTEL batch size and flush interval to reduce request pressure on the Langfuse ingestion endpoint:
+
+```typescript
+new LangfuseExporter({
+  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
+  secretKey: process.env.LANGFUSE_SECRET_KEY!,
+  flushAt: 500, // Maximum spans per OTEL export batch
+  flushInterval: 20, // Maximum seconds between flushes
+})
+```
+
+To suppress high-volume span types entirely (for example `MODEL_CHUNK` spans from streamed responses), use the observability-level [`excludeSpanTypes` option](https://mastra.ai/reference/observability/tracing/span-filtering) rather than configuring the exporter:
+
+```typescript
+import { SpanType } from '@mastra/core/observability'
+
+new Observability({
+  configs: {
+    langfuse: {
+      serviceName: 'my-service',
+      exporters: [new LangfuseExporter()],
+      excludeSpanTypes: [SpanType.MODEL_CHUNK],
+    },
+  },
+})
+```
+
 ### Complete Configuration
 
 ```typescript
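As a rough sizing check for the batch settings above (assuming a steady span rate): at 100 spans per second, a `flushAt: 500` batch fills every 5 seconds, so exports run at roughly 0.2 requests per second, and the 20-second `flushInterval` only comes into play once traffic drops below 25 spans per second (500 / 20).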
@@ -133,6 +162,8 @@ new LangfuseExporter({
   // Optional settings
   baseUrl: process.env.LANGFUSE_BASE_URL, // Default: https://cloud.langfuse.com
   realtime: process.env.NODE_ENV === 'development', // Dynamic mode selection
+  flushAt: 500, // Maximum spans per OTEL export batch
+  flushInterval: 20, // Maximum seconds between flushes
   logLevel: 'info', // Diagnostic logging: debug | info | warn | error
 
   // Langfuse-specific settings
package/.docs/guides/deployment/netlify.md
CHANGED
@@ -1,6 +1,6 @@
 # Deploy Mastra to Netlify
 
-Use `@mastra/deployer-netlify` to deploy your Mastra server
+Use `@mastra/deployer-netlify` to deploy your Mastra server on Netlify. The deployer bundles your code and generates a `.netlify` directory conforming to Netlify's [Frameworks API](https://docs.netlify.com/build/frameworks/frameworks-api/), ready to deploy. You can deploy as serverless functions (default) or as [edge functions](https://docs.netlify.com/build/edge-functions/overview/) for lower latency and longer execution times.
 
 > **Info:** This guide covers deploying the [Mastra server](https://mastra.ai/docs/server/mastra-server). If you're using a [server adapter](https://mastra.ai/docs/server/server-adapters) or [web framework](https://mastra.ai/docs/deployment/web-framework), deploy the way you normally would for that framework.
@@ -49,6 +49,21 @@ export const mastra = new Mastra({
 })
 ```
 
+To deploy as edge functions instead, pass `{ target: 'edge' }`:
+
+```typescript
+import { Mastra } from '@mastra/core'
+import { NetlifyDeployer } from '@mastra/deployer-netlify'
+
+export const mastra = new Mastra({
+  deployer: new NetlifyDeployer({
+    target: 'edge',
+  }),
+})
+```
+
+Edge functions run on Deno at the edge closest to your users. They have no hard execution timeout (only a CPU time limit), making them a better fit for longer-running AI workflows. See the [constructor options](https://mastra.ai/reference/deployer/netlify) for details.
+
 Create a `netlify.toml` file with the following contents in your project root:
 
 ```toml
package/.docs/guides/getting-started/next-js.md
CHANGED
@@ -4,10 +4,6 @@ In this guide, you'll build a tool-calling AI agent using Mastra, then connect i
 
 You'll use [AI SDK UI](https://ai-sdk.dev/docs/ai-sdk-ui/overview) and [AI Elements](https://ai-sdk.dev/elements) to create a beautiful, interactive chat experience.
 
-
-
-What you'll build: an agent that can call a weather tool, display the JSON result, stream a weather summary in the chat UI, and persist conversation history across reloads.
-
 ## Before you begin
 
 - You'll need an API key from a supported [model provider](https://mastra.ai/models). If you don't have a preference, use [OpenAI](https://mastra.ai/models/providers/openai).
package/.docs/guides/migrations/mastra-cloud.md
CHANGED
@@ -56,11 +56,136 @@ The Mastra platform replaces Mastra Cloud with separate Studio and Server produc
 
 ## Replace Mastra Cloud Store with a hosted database
 
-Mastra Cloud provided a managed
+Mastra Cloud provided a managed libSQL database, backed by [Turso](https://turso.tech). The Mastra platform does not host a database for you, so you need to point your storage at an externally hosted instance.
 
-If you were already using a hosted database ("bring your own"), no changes are needed.
+If you were already using a hosted database ("bring your own"), no changes are needed. Ensure the connection string is set as an environment variable in the dashboard rather than hardcoded.
 
-If you were using
+If you were using Cloud Store, follow the steps below to export your data and load it into a new libSQL database that you control.
+
+### Export your Cloud Store data
+
+There are two ways to export your Cloud Store data: a one-click download from the dashboard, or a manual dump via the Turso CLI.
+
+#### Option A — Export from the dashboard (recommended)
+
+Open your project in the [Mastra dashboard](https://projects.mastra.ai) and navigate to **Runtime → Settings → Storage**. Click the **Export Database** button. The dashboard generates a full `.sql` dump of your Cloud Store and downloads it directly to your Downloads folder.
+
+Once the download completes, convert the dump into a SQLite database file:
+
+```bash
+sqlite3 mydb.db < ~/Downloads/mastra-cloud-dump.sql
+```
+
+You now have a portable `mydb.db` file you can inspect locally, back up, or use as the source for the new database in the steps that follow.
+
+#### Option B — Export via the Turso CLI
+
+If you prefer to work from the command line, or need to script the export, you can dump the database directly using the [Turso CLI](https://docs.turso.tech/cli). This approach requires the database URL and an auth token, both surfaced in the dashboard.
+
+1. Retrieve your Cloud Store credentials from the dashboard.
+
+   Open your project in the [Mastra dashboard](https://projects.mastra.ai) and navigate to **Runtime → Settings → Env Variables**. For Cloud Store–backed projects, two variables are injected alongside your own:
+
+   - `MASTRA_STORAGE_URL`: A libSQL connection string (e.g. `libsql://<db-name>-<org>.turso.io`).
+   - `MASTRA_STORAGE_AUTH_TOKEN`: A read-capable auth token scoped to that database.
+
+   Each row supports the standard environment variable actions — show/hide via the eye toggle, Edit, Delete, and Copy Value. Use **Copy Value** to grab both values for the dump command below.
+
+   > **Note:** These variables only appear for projects that were provisioned with Cloud Store. If you brought your own database to Mastra Cloud, you already have these credentials and can skip ahead to [Point your Mastra app at the new database](#point-your-mastra-app-at-the-new-database).
+
+   > **Info:** If the variables are missing, the values do not decrypt, or the Turso CLI rejects the token, email <support@mastra.ai> from the address associated with your Mastra Cloud account and ask for the libSQL URL and auth token for the project you want to export. Include the project name/ID. Support can also run the dump on your behalf if CLI access is blocked on your network.
+
+2. Install the Turso CLI.
+
+   ```bash
+   brew install tursodatabase/tap/turso
+   ```
+
+   ```bash
+   curl -sSfL https://get.tur.so/install.sh | bash
+   ```
+
+   See the [Turso CLI introduction](https://docs.turso.tech/cli/introduction) for Windows and headless-install options.
+
+3. Export the database to a SQL dump.
+
+   Set the credentials provided by support (or use the dashboard values if you already copied them earlier) as environment variables, then dump the database to a local file. If you copied the URL from the dashboard, swap the `libsql://` scheme for `https://` — the Turso CLI expects the HTTPS form when passing the URL with an auth token.
+
+   ```bash
+   export MASTRA_STORAGE_URL="https://<db-name>-<org>.turso.io"
+   export MASTRA_STORAGE_AUTH_TOKEN="<token-from-dashboard-or-support>"
+
+   turso db shell "$MASTRA_STORAGE_URL?authToken=$MASTRA_STORAGE_AUTH_TOKEN" ".dump" > mastra-cloud-dump.sql
+   ```
+
+   > **Warning:** Embedding the auth token in the connection string is less secure than Turso's recommended pattern — the full URL (with token) can end up in shell history, process listings, and terminal logs. Turso officially recommends running `turso auth login` and then dumping by database name only: `turso db shell <database-name> ".dump" > mastra-cloud-dump.sql`. That flow requires the database to live in a Turso account you own, which is not the case for Cloud Store, so the env-var example above is provided as an alternative for this one-time export. If you prefer to avoid token interpolation entirely, ask support to run the dump on your behalf and send you the resulting SQL file.
+
+The resulting `mastra-cloud-dump.sql` contains the full schema and data: thread and message history, workflow snapshots, traces, and eval scores. Store it somewhere safe before continuing.
+
+### Load the dump into a new libSQL database
+
+The dump is a standard SQL file and can be loaded into any libSQL-compatible database. The example below uses a new Turso-hosted database, which keeps the migration like-for-like and avoids schema translation.
+
+1. Authenticate the Turso CLI against your own Turso account.
+
+   ```bash
+   turso auth login
+   ```
+
+   If you do not have a Turso account, the CLI will prompt you to create one. See [Turso pricing](https://turso.tech/pricing) for plan details.
+
+2. Create a new database and load the dump in one step.
+
+   ```bash
+   turso db create mastra-migrated --from-dump ./mastra-cloud-dump.sql
+   ```
+
+   `--from-dump` restores a local SQLite/libSQL dump at create time, which is faster and safer than piping statements through `turso db shell` after the fact. Pick a region close to where your Mastra Server runs to minimize latency — list available regions with `turso db locations` and pass `--group <group-name>` if you manage multiple groups.
+
+   For multi-gigabyte dumps, add `--wait` so the CLI blocks until the database is fully available.
+
+3. Generate connection credentials for the new database.
+
+   ```bash
+   turso db show mastra-migrated --url
+   turso db tokens create mastra-migrated
+   ```
+
+   The first command prints the libSQL URL. The second prints an auth token. Both are needed by `LibSQLStore`.
+
+### Point your Mastra app at the new database
+
+Set the new credentials as environment variables, either locally in `.env` or in the Mastra platform dashboard:
+
+```bash
+TURSO_DATABASE_URL="libsql://mastra-migrated-<org>.turso.io"
+TURSO_AUTH_TOKEN="<token-from-turso-db-tokens-create>"
+```
+
+Configure `LibSQLStore` to read from those variables:
+
+```ts
+import { Mastra } from '@mastra/core/mastra'
+import { LibSQLStore } from '@mastra/libsql'
+
+export const mastra = new Mastra({
+  storage: new LibSQLStore({
+    id: 'libsql-storage',
+    url: process.env.TURSO_DATABASE_URL!,
+    authToken: process.env.TURSO_AUTH_TOKEN,
+  }),
+})
+```
+
+See the [libSQL storage reference](https://mastra.ai/reference/storage/libsql) for the full set of options.
+
+### Verify the migration
+
+Before decommissioning your Cloud project, confirm the new database serves the data your app expects.
+
+- Run `turso db shell mastra-migrated "SELECT name FROM sqlite_master WHERE type='table';"` to list tables. The output should include the Mastra-managed tables (e.g. `mastra_threads`, `mastra_messages`, `mastra_workflow_snapshot`, `mastra_traces`).
+- Run a row count against a known-populated table, for example `turso db shell mastra-migrated "SELECT COUNT(*) FROM mastra_messages;"`, and compare it to the same query against the Cloud Store URL.
+- Start your Mastra app against the new credentials and confirm that an existing thread or workflow run loads as expected in [Studio](https://mastra.ai/docs/studio/observability).
 
 ## Update observability configuration
 
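To script the row-count comparison from the verification list above, here is a minimal sketch using `@libsql/client`; the table name and environment variable names follow the examples in this guide, and the script itself is illustrative, not part of the migration tooling:

```ts
import { createClient } from '@libsql/client'

// Compare a row count between the old Cloud Store and the migrated database.
const oldDb = createClient({
  url: process.env.MASTRA_STORAGE_URL!, // Cloud Store URL from the dashboard
  authToken: process.env.MASTRA_STORAGE_AUTH_TOKEN,
})
const newDb = createClient({
  url: process.env.TURSO_DATABASE_URL!, // new database from `turso db show`
  authToken: process.env.TURSO_AUTH_TOKEN,
})

const sql = 'SELECT COUNT(*) AS n FROM mastra_messages'
const [oldCount, newCount] = await Promise.all([
  oldDb.execute(sql),
  newDb.execute(sql),
])
console.log('old:', oldCount.rows[0].n, 'new:', newCount.rows[0].n)
```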
package/.docs/models/gateways/netlify.md
CHANGED
@@ -13,7 +13,7 @@ const agent = new Agent({
   id: "my-agent",
   name: "My Agent",
   instructions: "You are a helpful assistant",
-  model: "netlify/anthropic/claude-
+  model: "netlify/anthropic/claude-haiku-4-5"
 });
 ```
 
@@ -35,7 +35,6 @@ ANTHROPIC_API_KEY=ant-...
 
 | Model |
 | ------------------------------------------- |
-| `anthropic/claude-3-haiku-20240307` |
 | `anthropic/claude-haiku-4-5` |
 | `anthropic/claude-haiku-4-5-20251001` |
 | `anthropic/claude-opus-4-1-20250805` |
@@ -43,6 +42,7 @@ ANTHROPIC_API_KEY=ant-...
 | `anthropic/claude-opus-4-5` |
 | `anthropic/claude-opus-4-5-20251101` |
 | `anthropic/claude-opus-4-6` |
+| `anthropic/claude-opus-4-7` |
 | `anthropic/claude-sonnet-4-0` |
 | `anthropic/claude-sonnet-4-20250514` |
 | `anthropic/claude-sonnet-4-5` |
package/.docs/models/gateways/openrouter.md
CHANGED
@@ -1,6 +1,6 @@
 # OpenRouter
 
-OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access
+OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 172 models through Mastra's model router.
 
 Learn more in the [OpenRouter documentation](https://openrouter.ai/models).
 
@@ -41,6 +41,7 @@ ANTHROPIC_API_KEY=ant-...
 | `anthropic/claude-opus-4.1` |
 | `anthropic/claude-opus-4.5` |
 | `anthropic/claude-opus-4.6` |
+| `anthropic/claude-opus-4.7` |
 | `anthropic/claude-sonnet-4` |
 | `anthropic/claude-sonnet-4.5` |
 | `anthropic/claude-sonnet-4.6` |
@@ -116,6 +117,7 @@ ANTHROPIC_API_KEY=ant-...
 | `moonshotai/kimi-k2-0905:exacto` |
 | `moonshotai/kimi-k2-thinking` |
 | `moonshotai/kimi-k2.5` |
+| `moonshotai/kimi-k2.6` |
 | `nousresearch/hermes-3-llama-3.1-405b:free` |
 | `nousresearch/hermes-4-405b` |
 | `nousresearch/hermes-4-70b` |
package/.docs/models/gateways/vercel.md
CHANGED
@@ -1,6 +1,6 @@
 # Vercel
 
-Vercel aggregates models from multiple providers with enhanced features like rate limiting and failover. Access
+Vercel aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 234 models through Mastra's model router.
 
 Learn more in the [Vercel documentation](https://ai-sdk.dev/providers/ai-sdk-providers).
 
@@ -72,6 +72,7 @@ ANTHROPIC_API_KEY=ant-...
 | `anthropic/claude-opus-4.1` |
 | `anthropic/claude-opus-4.5` |
 | `anthropic/claude-opus-4.6` |
+| `anthropic/claude-opus-4.7` |
 | `anthropic/claude-sonnet-4` |
 | `anthropic/claude-sonnet-4.5` |
 | `anthropic/claude-sonnet-4.6` |
@@ -119,6 +120,7 @@ ANTHROPIC_API_KEY=ant-...
 | `google/text-embedding-005` |
 | `google/text-multilingual-embedding-002` |
 | `inception/mercury-2` |
+| `inception/mercury-coder-small` |
 | `inception/mercury-edit-2` |
 | `kwaipilot/kat-coder-pro-v1` |
 | `kwaipilot/kat-coder-pro-v2` |
@@ -264,4 +266,5 @@ ANTHROPIC_API_KEY=ant-...
 | `zai/glm-4.7-flashx` |
 | `zai/glm-5` |
 | `zai/glm-5-turbo` |
+| `zai/glm-5.1` |
 | `zai/glm-5v-turbo` |
package/.docs/models/index.md
CHANGED
@@ -1,6 +1,6 @@
 # Model Providers
 
-Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to
+Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3690 models from 104 providers through a single API.
 
 ## Features
 
@@ -228,6 +228,41 @@ Mastra tries your primary model first. If it encounters a 500 error, rate limit,
 
 Your users never experience the disruption - the response comes back with the same format, just from a different model. The error context is preserved as the system moves through your fallback chain, ensuring clean error propagation while maintaining streaming compatibility.
 
+### Per-model settings
+
+Each fallback entry can carry its own `modelSettings`, `providerOptions`, and `headers` — useful when models in the chain need different temperatures or provider-specific knobs to produce comparable output.
+
+```typescript
+import { Agent } from '@mastra/core/agent';
+
+const agent = new Agent({
+  id: 'tuned-resilient',
+  name: 'Tuned Resilient Agent',
+  instructions: 'You are a helpful assistant.',
+  model: [
+    {
+      model: 'google/gemini-2.5-flash',
+      maxRetries: 2,
+      modelSettings: { temperature: 0.3 },
+      providerOptions: { google: { thinkingConfig: { thinkingBudget: 0 } } },
+    },
+    {
+      model: 'openai/gpt-5-mini',
+      maxRetries: 2,
+      modelSettings: { temperature: 0.7 },
+      providerOptions: { openai: { reasoningEffort: 'low' } },
+    },
+  ],
+});
+```
+
+**Precedence:**
+
+- `modelSettings` and `providerOptions`: per-fallback entry overrides call-time options, which override agent `defaultOptions`. `modelSettings` shallow-merges by key. `providerOptions` deep-merges recursively, so nested provider config (e.g. `google.thinkingConfig`) preserves sibling keys across layers.
+- `headers`: call-time `modelSettings.headers` overrides per-fallback `headers`, which overrides headers extracted from model-router models. Runtime headers (tracing, auth, tenancy) intentionally take precedence over model-level headers.
+
+Each field also accepts a function of `requestContext`, matching how dynamic models are resolved.
+
 ## Use local models with Mastra
 
 Mastra also supports local models like `gpt-oss`, `Qwen3`, `DeepSeek` and many more that you run on your own hardware. The application running your local model needs to provide an OpenAI-compatible API server for Mastra to connect to. We recommend using [LMStudio](https://lmstudio.ai/) (see [Running the LMStudio server](https://lmstudio.ai/docs/developer/core/server)).
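The difference between the two merge behaviors above is easiest to see in code. A minimal sketch, assuming a generic recursive merge; `deepMerge` is a hypothetical helper, not Mastra's actual implementation:

```typescript
// Hypothetical illustration of the documented merge semantics; Mastra's
// internal implementation may differ in details.

// Shallow merge (modelSettings): a later layer replaces whole top-level keys.
const agentDefaults = { temperature: 0.5, maxOutputTokens: 1024 }
const perEntry = { temperature: 0.3 }
const mergedSettings = { ...agentDefaults, ...perEntry }
// -> { temperature: 0.3, maxOutputTokens: 1024 }

// Deep merge (providerOptions): nested sibling keys survive across layers.
function deepMerge(base: unknown, override: unknown): unknown {
  if (
    typeof base !== 'object' || base === null || Array.isArray(base) ||
    typeof override !== 'object' || override === null || Array.isArray(override)
  ) {
    return override
  }
  const out: Record<string, unknown> = { ...(base as Record<string, unknown>) }
  for (const [key, value] of Object.entries(override)) {
    out[key] = key in out ? deepMerge(out[key], value) : value
  }
  return out
}

const callTime = { google: { thinkingConfig: { includeThoughts: true } } }
const perEntryOptions = { google: { thinkingConfig: { thinkingBudget: 0 } } }
console.log(deepMerge(callTime, perEntryOptions))
// -> { google: { thinkingConfig: { includeThoughts: true, thinkingBudget: 0 } } }
```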
package/.docs/models/providers/302ai.md
CHANGED
@@ -1,6 +1,6 @@
 # 302.AI
 
-Access
+Access 95 302.AI models through Mastra's model router. Authentication is handled automatically using the `302AI_API_KEY` environment variable.
 
 Learn more in the [302.AI documentation](https://doc.302.ai).
 
@@ -35,13 +35,25 @@ for await (const chunk of stream) {
 | Model                                         | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
 | --------------------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
 | `302ai/chatgpt-4o-latest` | 128K | | | | | | $5 | $15 |
+| `302ai/claude-3-5-haiku-20241022` | 200K | | | | | | $0.80 | $4 |
+| `302ai/claude-3-5-haiku-latest` | 200K | | | | | | $0.80 | $4 |
+| `302ai/claude-haiku-4-5` | 200K | | | | | | $1 | $5 |
 | `302ai/claude-haiku-4-5-20251001` | 200K | | | | | | $1 | $5 |
 | `302ai/claude-opus-4-1-20250805` | 200K | | | | | | $15 | $75 |
 | `302ai/claude-opus-4-1-20250805-thinking` | 200K | | | | | | $15 | $75 |
+| `302ai/claude-opus-4-20250514` | 200K | | | | | | $15 | $75 |
+| `302ai/claude-opus-4-5` | 200K | | | | | | $5 | $25 |
 | `302ai/claude-opus-4-5-20251101` | 200K | | | | | | $5 | $25 |
 | `302ai/claude-opus-4-5-20251101-thinking` | 200K | | | | | | $5 | $25 |
+| `302ai/claude-opus-4-6` | 1.0M | | | | | | $5 | $25 |
+| `302ai/claude-opus-4-6-thinking` | 1.0M | | | | | | $5 | $25 |
+| `302ai/claude-opus-4-7` | 200K | | | | | | $5 | $25 |
+| `302ai/claude-sonnet-4-20250514` | 200K | | | | | | $3 | $15 |
+| `302ai/claude-sonnet-4-5` | 200K | | | | | | $3 | $15 |
 | `302ai/claude-sonnet-4-5-20250929` | 200K | | | | | | $3 | $15 |
 | `302ai/claude-sonnet-4-5-20250929-thinking` | 200K | | | | | | $3 | $15 |
+| `302ai/claude-sonnet-4-6` | 1.0M | | | | | | $3 | $15 |
+| `302ai/claude-sonnet-4-6-thinking` | 1.0M | | | | | | $3 | $15 |
 | `302ai/deepseek-chat` | 128K | | | | | | $0.29 | $0.43 |
 | `302ai/deepseek-reasoner` | 128K | | | | | | $0.29 | $0.43 |
 | `302ai/deepseek-v3.2` | 128K | | | | | | $0.29 | $0.43 |
@@ -60,11 +72,21 @@ for await (const chunk of stream) {
 | `302ai/gemini-3-flash-preview` | 1.0M | | | | | | $0.50 | $3 |
 | `302ai/gemini-3-pro-image-preview` | 33K | | | | | | $2 | $120 |
 | `302ai/gemini-3-pro-preview` | 1.0M | | | | | | $2 | $12 |
+| `302ai/gemini-3.1-flash-image-preview` | 131K | | | | | | $0.50 | $60 |
 | `302ai/glm-4.5` | 128K | | | | | | $0.29 | $1 |
+| `302ai/glm-4.5-air` | 128K | | | | | | $0.11 | $0.29 |
+| `302ai/glm-4.5-airx` | 128K | | | | | | $0.57 | $2 |
+| `302ai/glm-4.5-x` | 128K | | | | | | $1 | $2 |
 | `302ai/glm-4.5v` | 64K | | | | | | $0.29 | $0.86 |
 | `302ai/glm-4.6` | 200K | | | | | | $0.29 | $1 |
 | `302ai/glm-4.6v` | 128K | | | | | | $0.14 | $0.43 |
 | `302ai/glm-4.7` | 200K | | | | | | $0.29 | $1 |
+| `302ai/glm-4.7-flashx` | 200K | | | | | | $0.07 | $0.43 |
+| `302ai/glm-5` | 200K | | | | | | $0.60 | $3 |
+| `302ai/glm-5-turbo` | 200K | | | | | | $0.72 | $3 |
+| `302ai/glm-5.1` | 200K | | | | | | $0.86 | $4 |
+| `302ai/glm-5v-turbo` | 200K | | | | | | $0.72 | $3 |
+| `302ai/glm-for-coding` | 200K | | | | | | $0.09 | $0.34 |
 | `302ai/gpt-4.1` | 1.0M | | | | | | $2 | $8 |
 | `302ai/gpt-4.1-mini` | 1.0M | | | | | | $0.40 | $2 |
 | `302ai/gpt-4.1-nano` | 1.0M | | | | | | $0.10 | $0.40 |
@@ -77,17 +99,26 @@ for await (const chunk of stream) {
 | `302ai/gpt-5.1-chat-latest` | 128K | | | | | | $1 | $10 |
 | `302ai/gpt-5.2` | 400K | | | | | | $2 | $14 |
 | `302ai/gpt-5.2-chat-latest` | 128K | | | | | | $2 | $14 |
+| `302ai/gpt-5.4-mini` | 400K | | | | | | $0.75 | $5 |
+| `302ai/gpt-5.4-mini-2026-03-17` | 400K | | | | | | $0.75 | $5 |
+| `302ai/gpt-5.4-nano` | 400K | | | | | | $0.20 | $1 |
+| `302ai/gpt-5.4-nano-2026-03-17` | 400K | | | | | | $0.20 | $1 |
 | `302ai/grok-4-1-fast-non-reasoning` | 2.0M | | | | | | $0.20 | $0.50 |
 | `302ai/grok-4-1-fast-reasoning` | 2.0M | | | | | | $0.20 | $0.50 |
 | `302ai/grok-4-fast-non-reasoning` | 2.0M | | | | | | $0.20 | $0.50 |
 | `302ai/grok-4-fast-reasoning` | 2.0M | | | | | | $0.20 | $0.50 |
 | `302ai/grok-4.1` | 200K | | | | | | $2 | $10 |
+| `302ai/grok-4.20-beta-0309-non-reasoning` | 2.0M | | | | | | $2 | $6 |
+| `302ai/grok-4.20-beta-0309-reasoning` | 2.0M | | | | | | $2 | $6 |
+| `302ai/grok-4.20-multi-agent-beta-0309` | 2.0M | | | | | | $2 | $6 |
 | `302ai/kimi-k2-0905-preview` | 262K | | | | | | $0.63 | $3 |
 | `302ai/kimi-k2-thinking` | 262K | | | | | | $0.57 | $2 |
 | `302ai/kimi-k2-thinking-turbo` | 262K | | | | | | $1 | $9 |
 | `302ai/MiniMax-M1` | 1.0M | | | | | | $0.13 | $1 |
 | `302ai/MiniMax-M2` | 1.0M | | | | | | $0.33 | $1 |
 | `302ai/MiniMax-M2.1` | 1.0M | | | | | | $0.30 | $1 |
+| `302ai/MiniMax-M2.7` | 205K | | | | | | $0.30 | $1 |
+| `302ai/MiniMax-M2.7-highspeed` | 205K | | | | | | $0.60 | $5 |
 | `302ai/ministral-14b-2512` | 128K | | | | | | $0.33 | $0.33 |
 | `302ai/mistral-large-2512` | 128K | | | | | | $1 | $3 |
 | `302ai/qwen-flash` | 1.0M | | | | | | $0.02 | $0.22 |
package/.docs/models/providers/alibaba-cn.md
CHANGED
@@ -1,6 +1,6 @@
 # Alibaba (China)
 
-Access
+Access 76 Alibaba (China) models through Mastra's model router. Authentication is handled automatically using the `DASHSCOPE_API_KEY` environment variable.
 
 Learn more in the [Alibaba (China) documentation](https://www.alibabacloud.com/help/en/model-studio/models).
 
@@ -46,6 +46,7 @@ for await (const chunk of stream) {
 | `alibaba-cn/deepseek-v3-1` | 131K | | | | | | $0.57 | $2 |
 | `alibaba-cn/deepseek-v3-2-exp` | 131K | | | | | | $0.29 | $0.43 |
 | `alibaba-cn/glm-5` | 203K | | | | | | $0.86 | $3 |
+| `alibaba-cn/glm-5.1` | 203K | | | | | | $0.87 | $3 |
 | `alibaba-cn/kimi-k2-thinking` | 262K | | | | | | $0.57 | $2 |
 | `alibaba-cn/kimi-k2.5` | 262K | | | | | | $0.57 | $2 |
 | `alibaba-cn/kimi/kimi-k2.5` | 262K | | | | | | $0.60 | $3 |