ai-sdk-provider-codex-cli 0.2.0 → 0.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -15,6 +15,7 @@ A community provider for Vercel AI SDK v5 that uses OpenAI’s Codex CLI (non‑
  - Works with `generateText`, `streamText`, and `generateObject` (native JSON Schema support via `--output-schema`)
  - Uses ChatGPT OAuth from `codex login` (tokens in `~/.codex/auth.json`) or `OPENAI_API_KEY`
  - Node-only (spawns a local process); supports CI and local dev
+ - **v0.3.0**: Adds comprehensive tool streaming support for monitoring autonomous tool execution
  - **v0.2.0 Breaking Changes**: Switched to `--experimental-json` and native schema enforcement (see [CHANGELOG](CHANGELOG.md))

  ## Installation
@@ -26,7 +27,7 @@ npm i -g @openai/codex
  codex login # or set OPENAI_API_KEY
  ```

- > **⚠️ Version Requirement**: Requires Codex CLI **>= 0.42.0** for `--experimental-json` and `--output-schema` support. Check your version with `codex --version` and upgrade if needed:
+ > **⚠️ Version Requirement**: Requires Codex CLI **>= 0.42.0** for `--experimental-json` and `--output-schema` support. **>= 0.44.0 recommended** for full usage tracking and tool streaming support. Check your version with `codex --version` and upgrade if needed:
  >
  > ```bash
  > npm i -g @openai/codex@latest
@@ -95,19 +96,53 @@ console.log(object);

  - AI SDK v5 compatible (LanguageModelV2)
  - Streaming and non‑streaming
+ - **Tool streaming support** (v0.3.0+) - Monitor autonomous tool execution in real-time
  - **Native JSON Schema support** via `--output-schema` (API-enforced with `strict: true`)
  - JSON object generation with Zod schemas (100-200 fewer tokens per request vs prompt engineering)
  - Safe defaults for non‑interactive automation (`on-failure`, `workspace-write`, `--skip-git-repo-check`)
  - Fallback to `npx @openai/codex` when not on PATH (`allowNpx`)
  - Usage tracking from experimental JSON event format

- ### Streaming behavior
+ ### Tool Streaming (v0.3.0+)
+
+ The provider supports comprehensive tool streaming, enabling real-time monitoring of Codex CLI's autonomous tool execution:
+
+ ```js
+ import { streamText } from 'ai';
+ import { codexCli } from 'ai-sdk-provider-codex-cli';
+
+ const result = await streamText({
+   model: codexCli('gpt-5-codex', { allowNpx: true, skipGitRepoCheck: true }),
+   prompt: 'List files and count lines in the largest one',
+ });
+
+ for await (const part of result.fullStream) {
+   if (part.type === 'tool-call') {
+     console.log('🔧 Tool:', part.toolName);
+   }
+   if (part.type === 'tool-result') {
+     console.log('✅ Result:', part.result);
+   }
+ }
+ ```
+
+ **What you get:**
+
+ - Tool invocation events when Codex starts executing tools (exec, patch, web_search, mcp_tool_call)
+ - Tool input tracking with full parameter visibility
+ - Tool result events with complete output payloads
+ - `providerExecuted: true` on all tool calls (Codex executes autonomously; your app doesn't need to run them)
+
+ **Limitation:** Real-time output streaming (`output-delta` events) is not yet available; tool outputs are delivered in the final `tool-result` event. See `examples/streaming-tool-calls.mjs` and `examples/streaming-multiple-tools.mjs` for usage patterns.
+
+ ### Text streaming behavior

  **Status:** Incremental streaming is not currently supported with the `--experimental-json` format (expected in future Codex CLI releases)

  The `--experimental-json` output format (introduced Sept 25, 2025) currently only emits `item.completed` events with full text content. Incremental streaming via `item.updated` or delta events is not yet implemented by OpenAI.

  **What this means:**
+
  - `streamText()` works functionally but delivers the entire response in a single chunk after generation completes
  - No incremental text deltas—you wait for the full response, then receive it all at once
  - The AI SDK's streaming interface is supported, but actual incremental streaming is not available
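The single-chunk behavior described above can be illustrated with a small event filter. This is a hypothetical sketch, not the provider's actual parser; the event payload shape (`item.text`) is an assumption for illustration only:

```typescript
// Hypothetical sketch: collect the full response text from `item.completed`
// events in JSON-lines output. The payload shape is assumed, not taken from
// the Codex CLI spec.
function collectCompletedText(jsonLines: string): string {
  const chunks: string[] = [];
  for (const line of jsonLines.split('\n')) {
    if (!line.trim()) continue;
    let event: any;
    try {
      event = JSON.parse(line);
    } catch {
      continue; // skip non-JSON diagnostic lines
    }
    if (event.type === 'item.completed' && typeof event.item?.text === 'string') {
      chunks.push(event.item.text);
    }
  }
  return chunks.join('');
}

// Synthetic events: the whole response arrives in a single completed event,
// which is why streamText() yields one chunk at the end.
const fakeOutput = [
  JSON.stringify({ type: 'turn.started' }),
  JSON.stringify({ type: 'item.completed', item: { text: 'Hello from Codex' } }),
].join('\n');
console.log(collectCompletedText(fakeOutput)); // → 'Hello from Codex'
```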
@@ -144,6 +179,73 @@ When OpenAI adds streaming support, this provider will be updated to handle thos

  See [docs/ai-sdk-v5/configuration.md](docs/ai-sdk-v5/configuration.md) for the full list and examples.

+ ## Model Parameters & Advanced Options (v0.4.0+)
+
+ Control reasoning effort, verbosity, and advanced Codex features at model creation time:
+
+ ```ts
+ import { codexCli } from 'ai-sdk-provider-codex-cli';
+
+ const model = codexCli('gpt-5-codex', {
+   allowNpx: true,
+   skipGitRepoCheck: true,
+
+   // Reasoning & verbosity
+   reasoningEffort: 'medium', // minimal | low | medium | high
+   reasoningSummary: 'auto', // auto | detailed (Note: 'concise' and 'none' are rejected by API)
+   reasoningSummaryFormat: 'none', // none | experimental
+   modelVerbosity: 'high', // low | medium | high
+
+   // Advanced features
+   includePlanTool: true, // adds --include-plan-tool
+   profile: 'production', // adds --profile production
+   oss: false, // adds --oss when true
+   webSearch: true, // maps to -c tools.web_search=true
+
+   // Generic overrides (maps to -c key=value)
+   configOverrides: {
+     experimental_resume: '/tmp/session.jsonl',
+     sandbox_workspace_write: { network_access: true },
+   },
+ });
+ ```
+
+ Nested override objects are flattened to dotted keys (e.g., the example above emits
+ `-c sandbox_workspace_write.network_access=true`). Arrays are serialized to JSON strings.
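The flattening rule can be sketched as follows. This is an illustrative re-implementation based on the behavior described above, not the provider's actual code; it assumes overrides are plain objects with primitive or array leaves:

```typescript
// Illustrative sketch: flatten nested configOverrides into `-c key=value`
// CLI arguments. Nested objects become dotted keys; arrays become JSON strings.
type Overrides = { [key: string]: unknown };

function flattenOverrides(overrides: Overrides, prefix = ''): string[] {
  const args: string[] = [];
  for (const [key, value] of Object.entries(overrides)) {
    const dotted = prefix ? `${prefix}.${key}` : key;
    if (Array.isArray(value)) {
      args.push('-c', `${dotted}=${JSON.stringify(value)}`);
    } else if (typeof value === 'object' && value !== null) {
      args.push(...flattenOverrides(value as Overrides, dotted));
    } else {
      args.push('-c', `${dotted}=${String(value)}`);
    }
  }
  return args;
}

// The README's example override produces:
console.log(flattenOverrides({
  experimental_resume: '/tmp/session.jsonl',
  sandbox_workspace_write: { network_access: true },
}));
// → ['-c', 'experimental_resume=/tmp/session.jsonl',
//    '-c', 'sandbox_workspace_write.network_access=true']
```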
+
+ ### Per-call overrides via `providerOptions` (v0.4.0+)
+
+ Override these parameters for individual AI SDK calls using the `providerOptions` map. Per-call
+ values take precedence over constructor defaults while leaving other settings intact.
+
+ ```ts
+ import { generateText } from 'ai';
+ import { codexCli } from 'ai-sdk-provider-codex-cli';
+
+ const model = codexCli('gpt-5-codex', {
+   allowNpx: true,
+   reasoningEffort: 'medium',
+   modelVerbosity: 'medium',
+ });
+
+ const response = await generateText({
+   model,
+   prompt: 'Summarize the latest release notes.',
+   providerOptions: {
+     'codex-cli': {
+       reasoningEffort: 'high',
+       reasoningSummary: 'detailed',
+       textVerbosity: 'high', // AI SDK naming; maps to model_verbosity
+       configOverrides: {
+         experimental_resume: '/tmp/resume.jsonl',
+       },
+     },
+   },
+ });
+ ```
+
+ **Precedence:** `providerOptions['codex-cli']` > constructor `CodexCliSettings` > Codex CLI defaults.
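That precedence can be modeled as a shallow merge. This is an illustrative sketch, not the provider's actual resolution logic, and the default values shown are hypothetical:

```typescript
// Illustrative sketch of the documented precedence: per-call providerOptions
// win over constructor settings, which win over built-in defaults.
type Settings = { [key: string]: unknown };

function resolveSettings(
  defaults: Settings,
  constructorSettings: Settings,
  perCall: Settings,
): Settings {
  // Later spreads override earlier ones, matching the precedence order.
  return { ...defaults, ...constructorSettings, ...perCall };
}

const resolved = resolveSettings(
  { reasoningEffort: 'low' },                    // Codex CLI default (hypothetical)
  { reasoningEffort: 'medium', allowNpx: true }, // constructor CodexCliSettings
  { reasoningEffort: 'high' },                   // providerOptions['codex-cli']
);
console.log(resolved.reasoningEffort); // → 'high'
```

Settings not overridden per call (like `allowNpx` above) keep their constructor values.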
+
  ## Zod Compatibility

  - Peer dependency supports `zod@^3 || ^4`