@ai-sdk/cerebras 2.0.24 → 2.0.26

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,19 @@
  # @ai-sdk/cerebras
 
+ ## 2.0.26
+
+ ### Patch Changes
+
+ - 1524271: chore: add skill information to README files
+ - Updated dependencies [1524271]
+   - @ai-sdk/openai-compatible@2.0.23
+
+ ## 2.0.25
+
+ ### Patch Changes
+
+ - 3988c08: docs: fix incorrect and outdated provider docs
+
  ## 2.0.24
 
  ### Patch Changes
package/README.md CHANGED
@@ -10,6 +10,14 @@ The Cerebras provider is available in the `@ai-sdk/cerebras` module. You can ins
  npm i @ai-sdk/cerebras
  ```
 
+ ## Skill for Coding Agents
+
+ If you use coding agents such as Claude Code or Cursor, we highly recommend adding the AI SDK skill to your repository:
+
+ ```shell
+ npx skills add vercel/ai
+ ```
+
  ## Provider Instance
 
  You can import the default provider instance `cerebras` from `@ai-sdk/cerebras`:
@@ -97,23 +97,70 @@ const model = cerebras.languageModel('llama-3.3-70b');
  const model = cerebras.chat('llama-3.3-70b');
  ```
 
+ ### Reasoning Models
+
+ Cerebras offers several reasoning models, including `gpt-oss-120b`, `qwen-3-32b`, and `zai-glm-4.7`, that generate intermediate thinking tokens before their final response. The reasoning output is streamed through the standard AI SDK reasoning parts.
+
+ For `gpt-oss-120b`, you can control the reasoning depth using the `reasoningEffort` provider option:
+
+ ```ts
+ import { cerebras } from '@ai-sdk/cerebras';
+ import { streamText } from 'ai';
+
+ const result = streamText({
+   model: cerebras('gpt-oss-120b'),
+   providerOptions: {
+     cerebras: {
+       reasoningEffort: 'medium',
+     },
+   },
+   prompt: 'How many "r"s are in the word "strawberry"?',
+ });
+
+ for await (const part of result.fullStream) {
+   if (part.type === 'reasoning-delta') {
+     console.log('Reasoning:', part.text);
+   } else if (part.type === 'text-delta') {
+     process.stdout.write(part.text);
+   }
+ }
+ ```
+
+ See [AI SDK UI: Chatbot](/docs/ai-sdk-ui/chatbot#reasoning) for more details on how to integrate reasoning into your chatbot.
+
+ ### Provider Options
+
+ The following optional provider options are available for Cerebras language models:
+
+ - **reasoningEffort** _'low' | 'medium' | 'high'_
+
+   Controls the depth of reasoning for GPT-OSS models. Defaults to `'medium'`.
+
+ - **user** _string_
+
+   A unique identifier representing your end user, which can help with monitoring and abuse detection.
+
+ - **strictJsonSchema** _boolean_
+
+   Whether to use strict JSON schema validation. When `true`, the model uses constrained decoding to guarantee schema compliance. Defaults to `true`.
+
  ## Model Capabilities
 
- | Model                            | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
- | -------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
- | `llama3.1-8b`                    | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `llama-3.3-70b`                  | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `gpt-oss-120b`                   | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `qwen-3-32b`                     | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `qwen-3-235b-a22b-instruct-2507` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `qwen-3-235b-a22b-thinking-2507` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `zai-glm-4.6`                    | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
- | `zai-glm-4.7`                    | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | Model                            | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      | Reasoning           |
+ | -------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
+ | `llama3.1-8b`                    | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
+ | `llama-3.3-70b`                  | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
+ | `gpt-oss-120b`                   | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `qwen-3-32b`                     | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+ | `qwen-3-235b-a22b-instruct-2507` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
+ | `qwen-3-235b-a22b-thinking-2507` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
+ | `zai-glm-4.6`                    | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
+ | `zai-glm-4.7`                    | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 
  <Note>
- Please see the [Cerebras
+ The models `qwen-3-32b` and `llama-3.3-70b` are scheduled for deprecation on
+ February 16, 2026. Please see the [Cerebras
  docs](https://inference-docs.cerebras.ai/introduction) for more details about
- the available models. Note that context windows are temporarily limited to
- 8192 tokens in the Free Tier. You can also pass any available provider model
- ID as a string if needed.
+ the available models and migration guidance. You can also pass any available
+ provider model ID as a string if needed.
  </Note>
package/dist/index.js CHANGED
@@ -33,7 +33,7 @@ var import_provider_utils = require("@ai-sdk/provider-utils");
  var import_v4 = require("zod/v4");
 
  // src/version.ts
- var VERSION = true ? "2.0.24" : "0.0.0-test";
+ var VERSION = true ? "2.0.26" : "0.0.0-test";
 
  // src/cerebras-provider.ts
  var cerebrasErrorSchema = import_v4.z.object({
package/dist/index.mjs CHANGED
@@ -11,7 +11,7 @@ import {
  import { z } from "zod/v4";
 
  // src/version.ts
- var VERSION = true ? "2.0.24" : "0.0.0-test";
+ var VERSION = true ? "2.0.26" : "0.0.0-test";
 
  // src/cerebras-provider.ts
  var cerebrasErrorSchema = z.object({
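For reference, the three provider options documented in the README diff above can be collected into a single options object. The sketch below only illustrates the documented shape; the `CerebrasOptions` type alias is a hypothetical name for this illustration, not a type exported by the SDK:

```typescript
// Hypothetical type alias mirroring the provider options documented in the
// README diff above; the SDK itself does not export this name.
type CerebrasOptions = {
  reasoningEffort?: 'low' | 'medium' | 'high'; // GPT-OSS reasoning depth, default 'medium'
  user?: string; // end-user identifier for monitoring and abuse detection
  strictJsonSchema?: boolean; // constrained decoding, default true
};

// Example object as it would appear under providerOptions.cerebras:
const cerebrasOptions: CerebrasOptions = {
  reasoningEffort: 'high',
  user: 'user-1234',
  strictJsonSchema: true,
};

console.log(JSON.stringify(cerebrasOptions));
```

In actual calls this object is passed as `providerOptions: { cerebras: { ... } }` to `streamText` or `generateText`, as the README's `gpt-oss-120b` example shows for `reasoningEffort`.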
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@ai-sdk/cerebras",
3
- "version": "2.0.24",
3
+ "version": "2.0.26",
4
4
  "license": "Apache-2.0",
5
5
  "sideEffects": false,
6
6
  "main": "./dist/index.js",
@@ -29,7 +29,7 @@
29
29
  }
30
30
  },
31
31
  "dependencies": {
32
- "@ai-sdk/openai-compatible": "2.0.22",
32
+ "@ai-sdk/openai-compatible": "2.0.23",
33
33
  "@ai-sdk/provider": "3.0.5",
34
34
  "@ai-sdk/provider-utils": "4.0.10"
35
35
  },
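Assuming npm 7 or later and access to the public registry, a diff like the one above can be regenerated locally with npm's built-in `diff` command:

```shell
# Compare the two published versions of @ai-sdk/cerebras directly from the registry
npm diff --diff=@ai-sdk/cerebras@2.0.24 --diff=@ai-sdk/cerebras@2.0.26
```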