ai 6.0.86 → 6.0.88

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,19 @@
  # ai
 
+ ## 6.0.88
+
+ ### Patch Changes
+
+ - Updated dependencies [2a1c664]
+   - @ai-sdk/gateway@3.0.48
+
+ ## 6.0.87
+
+ ### Patch Changes
+
+ - Updated dependencies [6bbd05b]
+   - @ai-sdk/gateway@3.0.47
+
  ## 6.0.86
 
  ### Patch Changes
package/dist/index.js CHANGED
@@ -1211,7 +1211,7 @@ var import_provider_utils3 = require("@ai-sdk/provider-utils");
  var import_provider_utils4 = require("@ai-sdk/provider-utils");
 
  // src/version.ts
- var VERSION = true ? "6.0.86" : "0.0.0-test";
+ var VERSION = true ? "6.0.88" : "0.0.0-test";
 
  // src/util/download/download.ts
  var download = async ({
package/dist/index.mjs CHANGED
@@ -1104,7 +1104,7 @@ import {
  } from "@ai-sdk/provider-utils";
 
  // src/version.ts
- var VERSION = true ? "6.0.86" : "0.0.0-test";
+ var VERSION = true ? "6.0.88" : "0.0.0-test";
 
  // src/util/download/download.ts
  var download = async ({
@@ -153,7 +153,7 @@ var import_provider_utils2 = require("@ai-sdk/provider-utils");
  var import_provider_utils3 = require("@ai-sdk/provider-utils");
 
  // src/version.ts
- var VERSION = true ? "6.0.86" : "0.0.0-test";
+ var VERSION = true ? "6.0.88" : "0.0.0-test";
 
  // src/util/download/download.ts
  var download = async ({
@@ -132,7 +132,7 @@ import {
  } from "@ai-sdk/provider-utils";
 
  // src/version.ts
- var VERSION = true ? "6.0.86" : "0.0.0-test";
+ var VERSION = true ? "6.0.88" : "0.0.0-test";
 
  // src/util/download/download.ts
  var download = async ({
@@ -132,6 +132,7 @@ Here are the capabilities of popular models:
  | [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-5-codex` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
  | [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-5-chat-latest` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
  | [Anthropic](/providers/ai-sdk-providers/anthropic) | `claude-opus-4-6` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | [Anthropic](/providers/ai-sdk-providers/anthropic) | `claude-sonnet-4-6` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
  | [Anthropic](/providers/ai-sdk-providers/anthropic) | `claude-opus-4-5` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
  | [Anthropic](/providers/ai-sdk-providers/anthropic) | `claude-opus-4-1` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
  | [Anthropic](/providers/ai-sdk-providers/anthropic) | `claude-opus-4-0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
@@ -0,0 +1,189 @@
+ ---
+ title: Memory
+ description: Add persistent memory to your agent using provider-defined tools, memory providers, or a custom tool.
+ ---
+
+ # Memory
+
+ Memory lets your agent save information and recall it later. Without memory, every conversation starts fresh. With memory, your agent builds context over time, recalls previous interactions, and adapts to the user.
+
+ ## Three Approaches
+
+ You can add memory to your agent with the AI SDK in three ways, each with different tradeoffs:
+
+ | Approach                                          | Effort | Flexibility | Provider Lock-in           |
+ | ------------------------------------------------- | ------ | ----------- | -------------------------- |
+ | [Provider-Defined Tools](#provider-defined-tools) | Low    | Medium      | Yes                        |
+ | [Memory Providers](#memory-providers)             | Low    | Low         | Depends on memory provider |
+ | [Custom Tool](#custom-tool)                       | High   | High        | No                         |
+
+ ## Provider-Defined Tools
+
+ [Provider-defined tools](/docs/foundations/tools#types-of-tools) are tools where the provider specifies the tool's `inputSchema` and `description`, but you provide the `execute` function. The model has been trained to use these tools, which can result in better performance compared to custom tools.
+
+ ### Anthropic Memory Tool
+
+ The [Anthropic Memory Tool](https://platform.claude.com/docs/en/agents-and-tools/tool-use/memory-tool) gives Claude a structured interface for managing a `/memories` directory. Claude reads its memory before starting tasks, creates and updates files as it works, and references them in future conversations.
+
+ ```ts
+ import { anthropic } from '@ai-sdk/anthropic';
+ import { ToolLoopAgent } from 'ai';
+
+ const memory = anthropic.tools.memory_20250818({
+   execute: async action => {
+     // `action` contains `command`, `path`, and other fields
+     // depending on the command (view, create, str_replace,
+     // insert, delete, rename).
+     // Implement your storage backend here.
+     // Return the result as a string.
+   },
+ });
+
+ const agent = new ToolLoopAgent({
+   model: 'anthropic/claude-haiku-4.5',
+   tools: { memory },
+ });
+
+ const result = await agent.generate({
+   prompt: 'Remember that my favorite editor is Neovim',
+ });
+ ```
+
+ The tool receives structured commands (`view`, `create`, `str_replace`, `insert`, `delete`, `rename`), each with a `path` scoped to `/memories`. Your `execute` function maps these to your storage backend (the filesystem, a database, or any other persistence layer).
+
+ **When to use this**: you want memory with minimal implementation effort and are already using Anthropic models. The tradeoff is provider lock-in, since this tool only works with Claude.
+
+ ## Memory Providers
+
+ Another approach is to use a provider that has memory built in. These providers wrap an external memory service and expose it through the AI SDK's standard interface. Memory storage, retrieval, and injection happen transparently, and you do not define any tools yourself.
+
+ ### Letta
+
+ [Letta](https://letta.com) provides agents with persistent long-term memory. You create an agent on Letta's platform (cloud or self-hosted), configure its memory there, and use the AI SDK provider to interact with it. Letta's agent runtime handles memory management (core memory, archival memory, recall).
+
+ ```bash
+ pnpm add @letta-ai/vercel-ai-sdk-provider
+ ```
+
+ ```ts
+ import { lettaCloud } from '@letta-ai/vercel-ai-sdk-provider';
+ import { ToolLoopAgent } from 'ai';
+
+ const agent = new ToolLoopAgent({
+   model: lettaCloud(),
+   providerOptions: {
+     letta: {
+       agent: { id: 'your-agent-id' },
+     },
+   },
+ });
+
+ const result = await agent.generate({
+   prompt: 'Remember that my favorite editor is Neovim',
+ });
+ ```
+
+ You can also use Letta's built-in memory tools alongside custom tools:
+
+ ```ts
+ import { lettaCloud } from '@letta-ai/vercel-ai-sdk-provider';
+ import { ToolLoopAgent } from 'ai';
+
+ const agent = new ToolLoopAgent({
+   model: lettaCloud(),
+   tools: {
+     core_memory_append: lettaCloud.tool('core_memory_append'),
+     memory_insert: lettaCloud.tool('memory_insert'),
+     memory_replace: lettaCloud.tool('memory_replace'),
+   },
+   providerOptions: {
+     letta: {
+       agent: { id: 'your-agent-id' },
+     },
+   },
+ });
+
+ const stream = agent.stream({
+   prompt: 'What do you remember about me?',
+ });
+ ```
+
+ See the [Letta provider documentation](/providers/community-providers/letta) for full setup and configuration.
+
+ ### Mem0
+
+ [Mem0](https://mem0.ai) adds a memory layer on top of any supported LLM provider. It automatically extracts memories from conversations, stores them, and retrieves relevant ones for future prompts.
+
+ ```bash
+ pnpm add @mem0/vercel-ai-provider
+ ```
+
+ ```ts
+ import { createMem0 } from '@mem0/vercel-ai-provider';
+ import { ToolLoopAgent } from 'ai';
+
+ const mem0 = createMem0({
+   provider: 'openai',
+   mem0ApiKey: process.env.MEM0_API_KEY,
+   apiKey: process.env.OPENAI_API_KEY,
+ });
+
+ const agent = new ToolLoopAgent({
+   model: mem0('gpt-4.1', { user_id: 'user-123' }),
+ });
+
+ const { text } = await agent.generate({
+   prompt: 'Remember that my favorite editor is Neovim',
+ });
+ ```
+
+ Mem0 works across multiple LLM providers (OpenAI, Anthropic, Google, Groq, Cohere). You can also manage memories explicitly:
+
+ ```ts
+ import { addMemories, retrieveMemories } from '@mem0/vercel-ai-provider';
+
+ await addMemories(messages, { user_id: 'user-123' });
+ const context = await retrieveMemories(prompt, { user_id: 'user-123' });
+ ```
+
+ See the [Mem0 provider documentation](/providers/community-providers/mem0) for full setup and configuration.
+
+ ### Supermemory
+
+ [Supermemory](https://supermemory.ai) is a long-term memory platform that adds persistent, self-growing memory to your AI applications. It provides tools that handle saving and retrieving memories automatically through semantic search.
+
+ ```bash
+ pnpm add @supermemory/tools
+ ```
+
+ ```ts
+ __PROVIDER_IMPORT__;
+ import { supermemoryTools } from '@supermemory/tools/ai-sdk';
+ import { ToolLoopAgent } from 'ai';
+
+ const agent = new ToolLoopAgent({
+   model: __MODEL__,
+   tools: supermemoryTools(process.env.SUPERMEMORY_API_KEY!),
+ });
+
+ const result = await agent.generate({
+   prompt: 'Remember that my favorite editor is Neovim',
+ });
+ ```
+
+ Supermemory works with any AI SDK provider. The tools give the model `addMemory` and `searchMemories` operations that handle storage and retrieval.
+
+ See the [Supermemory provider documentation](/providers/community-providers/supermemory) for full setup and configuration.
+
+ **When to use memory providers**: these providers are a good fit when you want memory without building any storage infrastructure. The tradeoff is that the provider controls memory behavior, so you have less visibility into what gets stored and how it is retrieved. You also take on a dependency on an external service.
+
+ ## Custom Tool
+
+ Building your own memory tool from scratch is the most flexible approach. You control the storage format, the interface, and the retrieval logic. This requires the most upfront work but gives you full ownership of how memory works, with no provider lock-in and no external dependencies.
+
+ There are two common patterns:
+
+ - **Structured actions**: you define explicit operations (`view`, `create`, `update`, `search`) and handle structured input yourself. Safe by design since you control every operation.
+ - **Bash-backed**: you give the model a sandboxed bash environment to compose shell commands (`cat`, `grep`, `sed`, `echo`) for flexible memory access. More powerful but requires command validation for safety.
+
+ For a full walkthrough of implementing a custom memory tool with a bash-backed interface, AST-based command validation, and filesystem persistence, see the **[Build a Custom Memory Tool](/cookbook/guides/custom-memory-tool)** recipe.
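The structured-actions pattern can be sketched as a plain storage backend with one method per allowed operation. The `MemoryStore` class below is illustrative only (it is not part of the AI SDK or the cookbook recipe); in a real agent you would wrap each operation in a custom tool with an input schema and persist to disk or a database instead of a `Map`.

```typescript
// Illustrative structured-actions memory backend (not an AI SDK API).
// Each method is one explicit operation a custom tool could expose.
class MemoryStore {
  private files = new Map<string, string>();

  // view: return a memory file's contents, or null if it does not exist.
  view(path: string): string | null {
    return this.files.get(path) ?? null;
  }

  // create: write a memory file (overwrites any existing content).
  create(path: string, content: string): void {
    this.files.set(path, content);
  }

  // update: append a line to an existing file, or create it if missing.
  update(path: string, content: string): void {
    const existing = this.files.get(path);
    this.files.set(path, existing ? `${existing}\n${content}` : content);
  }

  // search: naive substring match returning the paths of matching files.
  search(query: string): string[] {
    return [...this.files.entries()]
      .filter(([, content]) => content.includes(query))
      .map(([path]) => path);
  }
}
```

Because the model can only invoke these explicit, typed operations, there is no command string to validate, which is what makes the structured-actions pattern safe by design compared to a bash-backed interface.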
@@ -347,13 +347,20 @@ try {
 
  ## Video Models
 
- | Provider | Model | Features |
- | ----------------------------------------------------------------------- | -------------------------- | -------------------------------------- |
- | [FAL](/providers/ai-sdk-providers/fal#video-models) | `luma-dream-machine/ray-2` | Text-to-video, image-to-video |
- | [FAL](/providers/ai-sdk-providers/fal#video-models) | `minimax-video` | Text-to-video |
- | [Google](/providers/ai-sdk-providers/google#video-models) | `veo-2.0-generate-001` | Text-to-video, up to 4 videos per call |
- | [Google Vertex](/providers/ai-sdk-providers/google-vertex#video-models) | `veo-3.1-generate-001` | Text-to-video, audio generation |
- | [Google Vertex](/providers/ai-sdk-providers/google-vertex#video-models) | `veo-2.0-generate-001` | Text-to-video, up to 4 videos per call |
- | [Replicate](/providers/ai-sdk-providers/replicate#video-models) | `minimax/video-01` | Text-to-video |
+ | Provider | Model | Features |
+ | ----------------------------------------------------------------------- | --------------------------- | -------------------------------------- |
+ | [FAL](/providers/ai-sdk-providers/fal#video-models) | `luma-dream-machine/ray-2` | Text-to-video, image-to-video |
+ | [FAL](/providers/ai-sdk-providers/fal#video-models) | `minimax-video` | Text-to-video |
+ | [Google](/providers/ai-sdk-providers/google#video-models) | `veo-2.0-generate-001` | Text-to-video, up to 4 videos per call |
+ | [Google Vertex](/providers/ai-sdk-providers/google-vertex#video-models) | `veo-3.1-generate-001` | Text-to-video, audio generation |
+ | [Google Vertex](/providers/ai-sdk-providers/google-vertex#video-models) | `veo-3.1-fast-generate-001` | Text-to-video, audio generation |
+ | [Google Vertex](/providers/ai-sdk-providers/google-vertex#video-models) | `veo-3.0-generate-001` | Text-to-video, audio generation |
+ | [Google Vertex](/providers/ai-sdk-providers/google-vertex#video-models) | `veo-3.0-fast-generate-001` | Text-to-video, audio generation |
+ | [Google Vertex](/providers/ai-sdk-providers/google-vertex#video-models) | `veo-2.0-generate-001` | Text-to-video, up to 4 videos per call |
+ | [Kling AI](/providers/ai-sdk-providers/klingai#video-models) | `kling-v2.6-t2v` | Text-to-video |
+ | [Kling AI](/providers/ai-sdk-providers/klingai#video-models) | `kling-v2.6-i2v` | Image-to-video |
+ | [Kling AI](/providers/ai-sdk-providers/klingai#video-models) | `kling-v2.6-motion-control` | Motion control |
+ | [Replicate](/providers/ai-sdk-providers/replicate#video-models) | `minimax/video-01` | Text-to-video |
+ | [xAI](/providers/ai-sdk-providers/xai#video-models) | `grok-imagine-video` | Text-to-video, image-to-video, editing |
 
  Above are a small subset of the video models supported by the AI SDK providers. For more, see the respective provider documentation.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "ai",
-   "version": "6.0.86",
+   "version": "6.0.88",
    "description": "AI SDK by Vercel - The AI Toolkit for TypeScript and JavaScript",
    "license": "Apache-2.0",
    "sideEffects": false,
@@ -45,7 +45,7 @@
    },
    "dependencies": {
      "@opentelemetry/api": "1.9.0",
-     "@ai-sdk/gateway": "3.0.46",
+     "@ai-sdk/gateway": "3.0.48",
      "@ai-sdk/provider": "3.0.8",
      "@ai-sdk/provider-utils": "4.0.15"
    },