ai-sdk-provider-codex-cli 0.1.0-ai-sdk-v4 → 0.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -4,80 +4,18 @@
4
4
  [![npm downloads](https://img.shields.io/npm/dm/ai-sdk-provider-codex-cli.svg)](https://www.npmjs.com/package/ai-sdk-provider-codex-cli)
5
5
  [![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)
6
6
  ![Node >= 18](https://img.shields.io/badge/node-%3E%3D18-43853d?logo=node.js&logoColor=white)
7
- ![AI SDK v4](https://img.shields.io/badge/AI%20SDK-v4-000?logo=vercel&logoColor=white)
7
+ ![AI SDK v5](https://img.shields.io/badge/AI%20SDK-v5-000?logo=vercel&logoColor=white)
8
8
  ![Modules: ESM + CJS](https://img.shields.io/badge/modules-ESM%20%2B%20CJS-3178c6)
9
9
  ![TypeScript](https://img.shields.io/badge/TypeScript-blue)
10
10
  [![PRs welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/ben-vargas/ai-sdk-provider-codex-cli/issues)
11
11
  [![Latest Release](https://img.shields.io/github/v/release/ben-vargas/ai-sdk-provider-codex-cli?display_name=tag)](https://github.com/ben-vargas/ai-sdk-provider-codex-cli/releases/latest)
12
12
 
13
- A community provider for Vercel AI SDK v4 that uses OpenAI’s Codex CLI (non‑interactive `codex exec`) to talk to GPT‑5 class models with your ChatGPT Plus/Pro subscription. The provider spawns the Codex CLI process, parses its JSONL output, and adapts it to the AI SDK LanguageModelV1 interface.
13
+ A community provider for Vercel AI SDK v5 that uses OpenAI’s Codex CLI (non‑interactive `codex exec`) to talk to GPT‑5 class models (`gpt-5` and the Codex-specific `gpt-5-codex` slug) with your ChatGPT Plus/Pro subscription. The provider spawns the Codex CLI process, parses its JSONL output, and adapts it to the AI SDK LanguageModelV2 interface.
14
14
 
15
- Note: For AI SDK v5 support, see the `main` branch and install the `latest` tag from npm. This branch targets AI SDK v4 and is published under the `ai-sdk-v4` dist-tag.
16
-
17
- - Works with `generateText`, `streamText`, and `generateObject` (JSON schemas via prompt engineering)
15
+ - Works with `generateText`, `streamText`, and `generateObject` (native JSON Schema support via `--output-schema`)
18
16
  - Uses ChatGPT OAuth from `codex login` (tokens in `~/.codex/auth.json`) or `OPENAI_API_KEY`
19
17
  - Node-only (spawns a local process); supports CI and local dev
20
-
21
- ## Version Compatibility
22
-
23
- - Provider (v5 line, main branch): published as `latest`, compatible with AI SDK v5.
24
- - Provider (v4 line, this branch): published as `ai-sdk-v4`, compatible with AI SDK v4.
25
-
26
- | Provider Version | AI SDK Version | NPM Tag | Status | Branch |
27
- |------------------|----------------|-------------|--------------|------------|
28
- | 0.1.x | v5 | `latest` | Active | `main` |
29
- | 0.1.0-ai-sdk-v4 | v4 | `ai-sdk-v4` | Maintenance | `ai-sdk-v4`|
30
-
31
- ### Installing the Right Version
32
-
33
- - AI SDK v5 (recommended): `npm i ai-sdk-provider-codex-cli ai` (or `@latest`)
34
- - AI SDK v4 (this branch): `npm i ai-sdk-provider-codex-cli@ai-sdk-v4 ai@^4.3.16`
35
-
36
- ## AI SDK v4 vs v5
37
-
38
- - For AI SDK v4 (this branch):
39
- - Install: `npm i ai-sdk-provider-codex-cli@ai-sdk-v4 ai@^4.3.16`
40
- - Usage (non-streaming):
41
-
42
- ```ts
43
- import { generateText } from 'ai';
44
- import { codexCli } from 'ai-sdk-provider-codex-cli';
45
-
46
- const { text } = await generateText({
47
- model: codexCli('gpt-5', { allowNpx: true, skipGitRepoCheck: true }),
48
- prompt: 'Hello from v4',
49
- });
50
- ```
51
-
52
- - Usage (streaming):
53
-
54
- ```ts
55
- import { streamText } from 'ai';
56
- import { codexCli } from 'ai-sdk-provider-codex-cli';
57
-
58
- const { textStream } = await streamText({
59
- model: codexCli('gpt-5', { allowNpx: true, skipGitRepoCheck: true }),
60
- prompt: 'Stream a short reply',
61
- });
62
- for await (const chunk of textStream) process.stdout.write(chunk);
63
- ```
64
-
65
- - For AI SDK v5 (main branch):
66
- - Install: `npm i ai-sdk-provider-codex-cli ai` (or `@latest`)
67
- - Usage (streaming):
68
-
69
- ```ts
70
- import { streamText } from 'ai';
71
- import { codexCli } from 'ai-sdk-provider-codex-cli';
72
-
73
- const result = streamText({
74
- model: codexCli('gpt-5'),
75
- prompt: 'Hello from v5',
76
- });
77
- const text = await result.text;
78
- ```
79
-
80
- - See the `main` branch README for full v5 docs and examples.
18
+ - **v0.2.0 Breaking Changes**: Switched to `--experimental-json` and native schema enforcement (see [CHANGELOG](CHANGELOG.md))
81
19
 
82
20
  ## Installation
83
21
 
@@ -88,14 +26,16 @@ npm i -g @openai/codex
88
26
  codex login # or set OPENAI_API_KEY
89
27
  ```
90
28
 
29
+ > **⚠️ Version Requirement**: Requires Codex CLI **>= 0.42.0** for `--experimental-json` and `--output-schema` support. Check your version with `codex --version` and upgrade if needed:
30
+ >
31
+ > ```bash
32
+ > npm i -g @openai/codex@latest
33
+ > ```
34
+
91
35
  2. Install provider and AI SDK
92
36
 
93
37
  ```bash
94
- # AI SDK v4 users (this branch)
95
- npm i ai@^4.3.16 ai-sdk-provider-codex-cli@ai-sdk-v4
96
-
97
- # AI SDK v5 users (use main branch/latest tag)
98
- # npm i ai ai-sdk-provider-codex-cli
38
+ npm i ai ai-sdk-provider-codex-cli
99
39
  ```
100
40
 
101
41
  ## Quick Start
@@ -106,7 +46,7 @@ Text generation
106
46
  import { generateText } from 'ai';
107
47
  import { codexCli } from 'ai-sdk-provider-codex-cli';
108
48
 
109
- const model = codexCli('gpt-5', {
49
+ const model = codexCli('gpt-5-codex', {
110
50
  allowNpx: true,
111
51
  skipGitRepoCheck: true,
112
52
  approvalMode: 'on-failure',
@@ -126,8 +66,10 @@ Streaming
126
66
  import { streamText } from 'ai';
127
67
  import { codexCli } from 'ai-sdk-provider-codex-cli';
128
68
 
69
+ // The provider works with both `gpt-5` and `gpt-5-codex`; use the latter for
70
+ // the Codex-specific slug.
129
71
  const { textStream } = await streamText({
130
- model: codexCli('gpt-5', { allowNpx: true, skipGitRepoCheck: true }),
72
+ model: codexCli('gpt-5-codex', { allowNpx: true, skipGitRepoCheck: true }),
131
73
  prompt: 'Write two short lines of encouragement.',
132
74
  });
133
75
  for await (const chunk of textStream) process.stdout.write(chunk);
@@ -142,7 +84,7 @@ import { codexCli } from 'ai-sdk-provider-codex-cli';
142
84
 
143
85
  const schema = z.object({ name: z.string(), age: z.number().int() });
144
86
  const { object } = await generateObject({
145
- model: codexCli('gpt-5', { allowNpx: true, skipGitRepoCheck: true }),
87
+ model: codexCli('gpt-5-codex', { allowNpx: true, skipGitRepoCheck: true }),
146
88
  schema,
147
89
  prompt: 'Generate a small user profile.',
148
90
  });
@@ -153,22 +95,34 @@ console.log(object);
153
95
 
154
96
  - AI SDK v5 compatible (LanguageModelV2)
155
97
  - Streaming and non‑streaming
156
- - JSON object generation with Zod schemas (prompt‑engineered)
98
+ - **Native JSON Schema support** via `--output-schema` (API-enforced with `strict: true`)
99
+ - JSON object generation with Zod schemas (100–200 fewer tokens per request than prompt engineering)
157
100
  - Safe defaults for non‑interactive automation (`on-failure`, `workspace-write`, `--skip-git-repo-check`)
158
101
  - Fallback to `npx @openai/codex` when not on PATH (`allowNpx`)
102
+ - Usage tracking from the experimental JSON event format
159
103
 
160
104
  ### Streaming behavior
161
105
 
162
- When using `codex exec --json`, the Codex CLI intentionally suppresses token/assistant deltas in its JSON event stream. Instead, it writes the final assistant message to the file you pass via `--output-last-message` and then signals completion. This provider:
106
+ **Status:** Incremental streaming is not currently supported with the `--experimental-json` format (expected in a future Codex CLI release)
163
107
 
164
- - emits `response-metadata` early (as soon as the session is configured), and
165
- - returns the final text as a single `text-delta` right before `finish`.
108
+ The `--experimental-json` output format (introduced Sept 25, 2025) currently only emits `item.completed` events with full text content. Incremental streaming via `item.updated` or delta events is not yet implemented by OpenAI.
166
109
 
167
- This is expected behavior for JSON mode in Codex exec, so streaming typically “feels” like a final chunk rather than a gradual trickle.
110
+ **What this means:**
111
+ - `streamText()` works functionally but delivers the entire response in a single chunk after generation completes
112
+ - No incremental text deltas: you wait for the full response, then receive it all at once
113
+ - The AI SDK's streaming interface is supported, but actual incremental streaming is not available
114
+
115
+ **Future support:** The Codex CLI commit (344d4a1d) that introduced experimental JSON explicitly notes "or other item types like `item.output_delta` when we need streaming" and adds that there are "more event types and item types to come."
116
+
117
+ When OpenAI adds streaming support, this provider will be updated to handle those events and enable true incremental streaming.
168
118
 
169
119
  ## Documentation
170
120
 
171
- - Getting started, configuration, and troubleshooting live in `docs/` (v4 docs coming soon in this branch).
121
+ - Getting started, configuration, and troubleshooting live in `docs/`:
122
+ - [docs/ai-sdk-v5/guide.md](docs/ai-sdk-v5/guide.md) – full usage guide and examples
123
+ - [docs/ai-sdk-v5/configuration.md](docs/ai-sdk-v5/configuration.md) – all settings and how they map to CLI flags
124
+ - [docs/ai-sdk-v5/troubleshooting.md](docs/ai-sdk-v5/troubleshooting.md) – common issues and fixes
125
+ - [docs/ai-sdk-v5/limitations.md](docs/ai-sdk-v5/limitations.md) – known constraints and behavior differences
172
126
  - See [examples/](examples/) for runnable scripts covering core usage, streaming, permissions/sandboxing, and object generation.
173
127
 
174
128
  ## Authentication
@@ -192,15 +146,25 @@ See [docs/ai-sdk-v5/configuration.md](docs/ai-sdk-v5/configuration.md) for the f
192
146
 
193
147
  ## Zod Compatibility
194
148
 
195
- - Peer supports `zod@^3`
196
- - Validation logic normalizes zod v3/v4 error shapes
149
+ - Peer supports `zod@^3 || ^4`
150
+ - Validation logic normalizes v3/v4 error shapes
197
151
 
198
152
  ## Limitations
199
153
 
200
154
  - Node ≥ 18, local process only (no Edge)
201
- - Codex JSON mode (`codex exec --json`) suppresses mid‑response deltas; streaming typically yields a final chunk. The CLI writes the final assistant text via `--output-last-message`, which this provider reads and emits at the end.
155
+ - Codex `--experimental-json` mode emits events rather than streaming deltas; streaming typically yields a final chunk. The CLI provides the final assistant text in the `item.completed` event, which this provider reads and emits at the end.
202
156
  - Some AI SDK parameters are unsupported by Codex CLI (e.g., temperature/topP/penalties); the provider surfaces warnings and ignores them
203
157
 
158
+ ### JSON Schema Limitations (v0.2.0+)
159
+
160
+ **⚠️ Important:** OpenAI strict mode has limitations:
161
+
162
+ - **Optional fields NOT supported**: All fields must be required (no `.optional()`)
163
+ - **Format validators stripped**: `.email()`, `.url()`, `.uuid()` are removed (use descriptions instead)
164
+ - **Pattern validators stripped**: `.regex()` is removed (use descriptions instead)
165
+
166
+ See [LIMITATIONS.md](LIMITATIONS.md) for comprehensive details and migration guidance.
167
+
204
168
  ## Disclaimer
205
169
 
206
170
  This is a community provider and not an official OpenAI or Vercel product. You are responsible for complying with all applicable terms and ensuring safe usage.
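The strict-mode constraints listed in the README above (all fields required, `format`/`pattern` validators stripped) can be checked mechanically before handing a schema to the provider. A minimal sketch follows; `findStrictModeIssues` is a hypothetical helper for illustration, not part of this package:

```typescript
// Illustrative only: flags JSON Schema constructs that OpenAI strict mode
// rejects or strips, per the limitations described in the README above.
type JsonSchema = { [key: string]: unknown };

function findStrictModeIssues(schema: JsonSchema): string[] {
  const issues: string[] = [];
  const walk = (node: unknown, path: string): void => {
    if (typeof node !== "object" || node === null) return;
    if (Array.isArray(node)) {
      node.forEach((item, i) => walk(item, `${path}[${i}]`));
      return;
    }
    const obj = node as JsonSchema;
    if ("format" in obj) issues.push(`${path}: "format" is stripped in strict mode`);
    if ("pattern" in obj) issues.push(`${path}: "pattern" is stripped in strict mode`);
    const props = obj.properties as JsonSchema | undefined;
    if (props) {
      const required = new Set((obj.required as string[] | undefined) ?? []);
      for (const name of Object.keys(props)) {
        // Strict mode requires every declared property to be required.
        if (!required.has(name)) issues.push(`${path}.${name}: optional fields are not supported`);
        walk(props[name], `${path}.${name}`);
      }
    }
  };
  walk(schema, "$");
  return issues;
}

const issues = findStrictModeIssues({
  type: "object",
  properties: {
    name: { type: "string" },
    email: { type: "string", format: "email" }, // format keyword will be stripped
  },
  required: ["name"], // "email" is optional, which strict mode disallows
});
```

Running such a check before `generateObject` makes the silent stripping described above visible at development time rather than at request time.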
package/dist/index.cjs CHANGED
@@ -13,22 +13,6 @@ var zod = require('zod');
13
13
  var _documentCurrentScript = typeof document !== 'undefined' ? document.currentScript : null;
14
14
  // src/codex-cli-provider.ts
15
15
 
16
- // src/extract-json.ts
17
- function extractJson(text) {
18
- const start = text.indexOf("{");
19
- if (start === -1) return text;
20
- let depth = 0;
21
- for (let i = start; i < text.length; i++) {
22
- const ch = text[i];
23
- if (ch === "{") depth++;
24
- else if (ch === "}") {
25
- depth--;
26
- if (depth === 0) return text.slice(start, i + 1);
27
- }
28
- }
29
- return text;
30
- }
31
-
32
16
  // src/logger.ts
33
17
  var defaultLogger = {
34
18
  warn: (m) => console.warn(m),
@@ -108,7 +92,7 @@ function isToolItem(p) {
108
92
  if (out.type === "text" && typeof out.value !== "string") return false;
109
93
  return true;
110
94
  }
111
- function mapMessagesToPrompt(prompt, mode = { type: "regular" }, jsonSchema) {
95
+ function mapMessagesToPrompt(prompt) {
112
96
  const warnings = [];
113
97
  const parts = [];
114
98
  let systemText;
@@ -151,17 +135,6 @@ function mapMessagesToPrompt(prompt, mode = { type: "regular" }, jsonSchema) {
151
135
  let promptText = "";
152
136
  if (systemText) promptText += systemText + "\n\n";
153
137
  promptText += parts.join("\n\n");
154
- if (mode.type === "object-json" && jsonSchema) {
155
- const schemaStr = JSON.stringify(jsonSchema, null, 2);
156
- promptText = `CRITICAL: You MUST respond with ONLY a JSON object. NO other text.
157
- Your response MUST start with { and end with }
158
- The JSON MUST match this EXACT schema:
159
- ${schemaStr}
160
-
161
- Now, based on the following conversation, generate ONLY the JSON object:
162
-
163
- ${promptText}`;
164
- }
165
138
  return { promptText, ...warnings.length ? { warnings } : {} };
166
139
  }
167
140
  function createAPICallError({
@@ -209,11 +182,12 @@ function resolveCodexPath(explicitPath, allowNpx) {
209
182
  }
210
183
  }
211
184
  var CodexCliLanguageModel = class {
212
- specificationVersion = "v1";
185
+ specificationVersion = "v2";
213
186
  provider = "codex-cli";
214
187
  defaultObjectGenerationMode = "json";
215
188
  supportsImageUrls = false;
216
- supportsStructuredOutputs = false;
189
+ supportedUrls = {};
190
+ supportsStructuredOutputs = true;
217
191
  modelId;
218
192
  settings;
219
193
  logger;
@@ -228,9 +202,9 @@ var CodexCliLanguageModel = class {
228
202
  const warn = validateModelId(this.modelId);
229
203
  if (warn) this.logger.warn(`Codex CLI model: ${warn}`);
230
204
  }
231
- buildArgs(promptText) {
205
+ buildArgs(promptText, responseFormat) {
232
206
  const base = resolveCodexPath(this.settings.codexPath, this.settings.allowNpx);
233
- const args = [...base.args, "exec", "--json"];
207
+ const args = [...base.args, "exec", "--experimental-json"];
234
208
  if (this.settings.fullAuto) {
235
209
  args.push("--full-auto");
236
210
  } else if (this.settings.dangerouslyBypassApprovalsAndSandbox) {
@@ -250,6 +224,20 @@ var CodexCliLanguageModel = class {
250
224
  if (this.modelId) {
251
225
  args.push("-m", this.modelId);
252
226
  }
227
+ let schemaPath;
228
+ if (responseFormat?.type === "json" && responseFormat.schema) {
229
+ const dir = fs.mkdtempSync(path.join(os.tmpdir(), "codex-schema-"));
230
+ schemaPath = path.join(dir, "schema.json");
231
+ const schema = typeof responseFormat.schema === "object" ? responseFormat.schema : {};
232
+ const sanitizedSchema = this.sanitizeJsonSchema(schema);
233
+ const schemaWithAdditional = {
234
+ ...sanitizedSchema,
235
+ // OpenAI strict mode requires additionalProperties=false even if user requested otherwise.
236
+ additionalProperties: false
237
+ };
238
+ fs.writeFileSync(schemaPath, JSON.stringify(schemaWithAdditional, null, 2));
239
+ args.push("--output-schema", schemaPath);
240
+ }
253
241
  args.push(promptText);
254
242
  const env = {
255
243
  ...process.env,
@@ -262,7 +250,34 @@ var CodexCliLanguageModel = class {
262
250
  lastMessagePath = path.join(dir, "last-message.txt");
263
251
  }
264
252
  args.push("--output-last-message", lastMessagePath);
265
- return { cmd: base.cmd, args, env, cwd: this.settings.cwd, lastMessagePath };
253
+ return { cmd: base.cmd, args, env, cwd: this.settings.cwd, lastMessagePath, schemaPath };
254
+ }
255
+ sanitizeJsonSchema(value) {
256
+ if (typeof value !== "object" || value === null) {
257
+ return value;
258
+ }
259
+ if (Array.isArray(value)) {
260
+ return value.map((item) => this.sanitizeJsonSchema(item));
261
+ }
262
+ const obj = value;
263
+ const result = {};
264
+ for (const [key, val] of Object.entries(obj)) {
265
+ if (key === "properties" && typeof val === "object" && val !== null && !Array.isArray(val)) {
266
+ const props = val;
267
+ const sanitizedProps = {};
268
+ for (const [propName, propSchema] of Object.entries(props)) {
269
+ sanitizedProps[propName] = this.sanitizeJsonSchema(propSchema);
270
+ }
271
+ result[key] = sanitizedProps;
272
+ continue;
273
+ }
274
+ if (key === "$schema" || key === "$id" || key === "$ref" || key === "$defs" || key === "definitions" || key === "title" || key === "examples" || key === "default" || key === "format" || // OpenAI strict mode doesn't support format
275
+ key === "pattern") {
276
+ continue;
277
+ }
278
+ result[key] = this.sanitizeJsonSchema(val);
279
+ }
280
+ return result;
266
281
  }
267
282
  mapWarnings(options) {
268
283
  const unsupported = [];
@@ -275,7 +290,6 @@ var CodexCliLanguageModel = class {
275
290
  });
276
291
  };
277
292
  add(options.temperature, "temperature");
278
- add(options.maxTokens, "maxTokens");
279
293
  add(options.topP, "topP");
280
294
  add(options.topK, "topK");
281
295
  add(options.presencePenalty, "presencePenalty");
@@ -284,11 +298,34 @@ var CodexCliLanguageModel = class {
284
298
  add(options.seed, "seed");
285
299
  return unsupported;
286
300
  }
287
- parseJsonLine(line) {
301
+ parseExperimentalJsonEvent(line) {
288
302
  try {
289
- return JSON.parse(line);
303
+ const evt = JSON.parse(line);
304
+ const result = {};
305
+ switch (evt.type) {
306
+ case "session.created":
307
+ result.sessionId = evt.session_id;
308
+ break;
309
+ case "turn.completed":
310
+ if (evt.usage) {
311
+ result.usage = {
312
+ inputTokens: evt.usage.input_tokens || 0,
313
+ outputTokens: evt.usage.output_tokens || 0
314
+ };
315
+ }
316
+ break;
317
+ case "item.completed":
318
+ if (evt.item?.item_type === "assistant_message" || evt.item?.item_type === "reasoning") {
319
+ result.text = evt.item.text;
320
+ }
321
+ break;
322
+ case "error":
323
+ result.error = evt.message;
324
+ break;
325
+ }
326
+ return result;
290
327
  } catch {
291
- return void 0;
328
+ return {};
292
329
  }
293
330
  }
294
331
  handleSpawnError(err, promptExcerpt) {
@@ -306,20 +343,19 @@ var CodexCliLanguageModel = class {
306
343
  });
307
344
  }
308
345
  async doGenerate(options) {
309
- const mode = options.mode?.type === "object-json" ? { type: "object-json" } : { type: "regular" };
310
- const { promptText, warnings: mappingWarnings } = mapMessagesToPrompt(
311
- options.prompt,
312
- mode,
313
- options.mode?.type === "object-json" ? options.mode.schema : void 0
314
- );
346
+ const { promptText, warnings: mappingWarnings } = mapMessagesToPrompt(options.prompt);
315
347
  const promptExcerpt = promptText.slice(0, 200);
316
348
  const warnings = [
317
349
  ...this.mapWarnings(options),
318
350
  ...mappingWarnings?.map((m) => ({ type: "other", message: m })) || []
319
351
  ];
320
- const { cmd, args, env, cwd, lastMessagePath } = this.buildArgs(promptText);
352
+ const responseFormat = options.responseFormat?.type === "json" ? { type: "json", schema: options.responseFormat.schema } : void 0;
353
+ const { cmd, args, env, cwd, lastMessagePath, schemaPath } = this.buildArgs(
354
+ promptText,
355
+ responseFormat
356
+ );
321
357
  let text = "";
322
- const usage = { promptTokens: 0, completionTokens: 0 };
358
+ const usage = { inputTokens: 0, outputTokens: 0, totalTokens: 0 };
323
359
  const finishReason = "stop";
324
360
  const child = child_process.spawn(cmd, args, { env, cwd, stdio: ["ignore", "pipe", "pipe"] });
325
361
  let onAbort;
@@ -339,15 +375,13 @@ var CodexCliLanguageModel = class {
339
375
  child.stdout.on("data", (chunk) => {
340
376
  const lines = chunk.split(/\r?\n/).filter(Boolean);
341
377
  for (const line of lines) {
342
- const evt = this.parseJsonLine(line);
343
- if (!evt) continue;
344
- const msg = evt.msg;
345
- const type = msg?.type;
346
- if (type === "session_configured" && msg) {
347
- this.sessionId = msg.session_id;
348
- } else if (type === "task_complete" && msg) {
349
- const last = msg.last_agent_message;
350
- if (typeof last === "string") text = last;
378
+ const parsed = this.parseExperimentalJsonEvent(line);
379
+ if (parsed.sessionId) this.sessionId = parsed.sessionId;
380
+ if (parsed.text) text = parsed.text;
381
+ if (parsed.usage) {
382
+ usage.inputTokens = parsed.usage.inputTokens;
383
+ usage.outputTokens = parsed.usage.outputTokens;
384
+ usage.totalTokens = usage.inputTokens + usage.outputTokens;
351
385
  }
352
386
  }
353
387
  });
@@ -367,6 +401,13 @@ var CodexCliLanguageModel = class {
367
401
  });
368
402
  } finally {
369
403
  if (options.abortSignal && onAbort) options.abortSignal.removeEventListener("abort", onAbort);
404
+ if (schemaPath) {
405
+ try {
406
+ const schemaDir = path.dirname(schemaPath);
407
+ fs.rmSync(schemaDir, { recursive: true, force: true });
408
+ } catch {
409
+ }
410
+ }
370
411
  }
371
412
  if (!text && lastMessagePath) {
372
413
  try {
@@ -381,38 +422,35 @@ var CodexCliLanguageModel = class {
381
422
  } catch {
382
423
  }
383
424
  }
384
- if (options.mode?.type === "object-json" && text) {
385
- text = extractJson(text);
386
- }
425
+ const content = [{ type: "text", text }];
387
426
  return {
388
- text: text || void 0,
427
+ content,
389
428
  usage,
390
429
  finishReason,
391
- warnings: warnings.length ? warnings : void 0,
430
+ warnings,
392
431
  response: { id: providerUtils.generateId(), timestamp: /* @__PURE__ */ new Date(), modelId: this.modelId },
393
432
  request: { body: promptText },
394
433
  providerMetadata: {
395
434
  "codex-cli": { ...this.sessionId ? { sessionId: this.sessionId } : {} }
396
- },
397
- rawCall: { rawPrompt: promptText, rawSettings: { model: this.modelId, settings: this.settings } }
435
+ }
398
436
  };
399
437
  }
400
438
  async doStream(options) {
401
- const mode = options.mode?.type === "object-json" ? { type: "object-json" } : { type: "regular" };
402
- const { promptText, warnings: mappingWarnings } = mapMessagesToPrompt(
403
- options.prompt,
404
- mode,
405
- options.mode?.type === "object-json" ? options.mode.schema : void 0
406
- );
439
+ const { promptText, warnings: mappingWarnings } = mapMessagesToPrompt(options.prompt);
407
440
  const promptExcerpt = promptText.slice(0, 200);
408
441
  const warnings = [
409
442
  ...this.mapWarnings(options),
410
443
  ...mappingWarnings?.map((m) => ({ type: "other", message: m })) || []
411
444
  ];
412
- const { cmd, args, env, cwd, lastMessagePath } = this.buildArgs(promptText);
445
+ const responseFormat = options.responseFormat?.type === "json" ? { type: "json", schema: options.responseFormat.schema } : void 0;
446
+ const { cmd, args, env, cwd, lastMessagePath, schemaPath } = this.buildArgs(
447
+ promptText,
448
+ responseFormat
449
+ );
413
450
  const stream = new ReadableStream({
414
451
  start: (controller) => {
415
452
  const child = child_process.spawn(cmd, args, { env, cwd, stdio: ["ignore", "pipe", "pipe"] });
453
+ controller.enqueue({ type: "stream-start", warnings });
416
454
  let stderr = "";
417
455
  let accumulatedText = "";
418
456
  const onAbort = () => {
@@ -431,32 +469,37 @@ var CodexCliLanguageModel = class {
431
469
  child.stdout.on("data", (chunk) => {
432
470
  const lines = chunk.split(/\r?\n/).filter(Boolean);
433
471
  for (const line of lines) {
434
- const evt = this.parseJsonLine(line);
435
- if (!evt) continue;
436
- const msg = evt.msg;
437
- const type = msg?.type;
438
- if (type === "session_configured" && msg) {
439
- this.sessionId = msg.session_id;
472
+ const parsed = this.parseExperimentalJsonEvent(line);
473
+ if (parsed.sessionId) {
474
+ this.sessionId = parsed.sessionId;
440
475
  controller.enqueue({
441
476
  type: "response-metadata",
442
477
  id: crypto.randomUUID(),
443
478
  timestamp: /* @__PURE__ */ new Date(),
444
479
  modelId: this.modelId
445
480
  });
446
- } else if (type === "task_complete" && msg) {
447
- const last = msg.last_agent_message;
448
- if (typeof last === "string") {
449
- accumulatedText = last;
450
- }
481
+ }
482
+ if (parsed.text) {
483
+ accumulatedText = parsed.text;
451
484
  }
452
485
  }
453
486
  });
487
+ const cleanupSchema = () => {
488
+ if (!schemaPath) return;
489
+ try {
490
+ const schemaDir = path.dirname(schemaPath);
491
+ fs.rmSync(schemaDir, { recursive: true, force: true });
492
+ } catch {
493
+ }
494
+ };
454
495
  child.on("error", (e) => {
455
496
  if (options.abortSignal) options.abortSignal.removeEventListener("abort", onAbort);
497
+ cleanupSchema();
456
498
  controller.error(this.handleSpawnError(e, promptExcerpt));
457
499
  });
458
500
  child.on("close", (code) => {
459
501
  if (options.abortSignal) options.abortSignal.removeEventListener("abort", onAbort);
502
+ cleanupSchema();
460
503
  if (code !== 0) {
461
504
  controller.error(
462
505
  createAPICallError({
@@ -481,18 +524,12 @@ var CodexCliLanguageModel = class {
481
524
  }
482
525
  }
483
526
  if (finalText) {
484
- if (options.mode?.type === "object-json") {
485
- finalText = extractJson(finalText);
486
- }
487
- controller.enqueue({ type: "text-delta", textDelta: finalText });
527
+ controller.enqueue({ type: "text-delta", id: crypto.randomUUID(), delta: finalText });
488
528
  }
489
529
  controller.enqueue({
490
530
  type: "finish",
491
531
  finishReason: "stop",
492
- usage: { promptTokens: 0, completionTokens: 0 },
493
- providerMetadata: {
494
- "codex-cli": { ...this.sessionId ? { sessionId: this.sessionId } : {} }
495
- }
532
+ usage: { inputTokens: 0, outputTokens: 0, totalTokens: 0 }
496
533
  });
497
534
  controller.close();
498
535
  });
@@ -500,11 +537,7 @@ var CodexCliLanguageModel = class {
500
537
  cancel: () => {
501
538
  }
502
539
  });
503
- return {
504
- stream,
505
- rawCall: { rawPrompt: promptText, rawSettings: { model: this.modelId, settings: this.settings } },
506
- warnings: warnings.length ? warnings : void 0
507
- };
540
+ return { stream, request: { body: promptText } };
508
541
  }
509
542
  };
510
543
 
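For reference, the schema sanitization performed by the built output above can be sketched as a standalone function. This mirrors the logic shown in the diff (same stripped keywords, property names preserved, recursive descent) but is an illustration, not the package's exported API:

```typescript
// Keywords dropped before the schema is written to the --output-schema file,
// matching the list in the built output above.
const STRIPPED_KEYS = new Set([
  "$schema", "$id", "$ref", "$defs", "definitions",
  "title", "examples", "default", "format", "pattern",
]);

function sanitizeJsonSchema(value: unknown): unknown {
  if (typeof value !== "object" || value === null) return value;
  if (Array.isArray(value)) return value.map(sanitizeJsonSchema);
  const result: Record<string, unknown> = {};
  for (const [key, val] of Object.entries(value as Record<string, unknown>)) {
    if (key === "properties" && typeof val === "object" && val !== null && !Array.isArray(val)) {
      // Property names are preserved verbatim, even ones like "title" that
      // would be stripped if they appeared as schema keywords.
      const props: Record<string, unknown> = {};
      for (const [name, schema] of Object.entries(val as Record<string, unknown>)) {
        props[name] = sanitizeJsonSchema(schema);
      }
      result[key] = props;
      continue;
    }
    if (STRIPPED_KEYS.has(key)) continue;
    result[key] = sanitizeJsonSchema(val);
  }
  return result;
}

const sanitized = sanitizeJsonSchema({
  $schema: "https://json-schema.org/draft-07/schema",
  type: "object",
  properties: { email: { type: "string", format: "email" } },
  required: ["email"],
}) as Record<string, unknown>;
// $schema and the nested "format" keyword are gone; structure is otherwise intact
```

The built code additionally forces `additionalProperties: false` on the top-level object after sanitization, since OpenAI strict mode requires it.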
package/dist/index.d.cts CHANGED
@@ -1,4 +1,4 @@
1
- import { ProviderV1, LanguageModelV1 } from '@ai-sdk/provider';
1
+ import { ProviderV2, LanguageModelV2 } from '@ai-sdk/provider';
2
2
 
3
3
  interface Logger {
4
4
  warn: (message: string) => void;
@@ -25,10 +25,10 @@ interface CodexCliProviderSettings {
25
25
  defaultSettings?: CodexCliSettings;
26
26
  }
27
27
 
28
- interface CodexCliProvider extends ProviderV1 {
29
- (modelId: string, settings?: CodexCliSettings): LanguageModelV1;
30
- languageModel(modelId: string, settings?: CodexCliSettings): LanguageModelV1;
31
- chat(modelId: string, settings?: CodexCliSettings): LanguageModelV1;
28
+ interface CodexCliProvider extends ProviderV2 {
29
+ (modelId: string, settings?: CodexCliSettings): LanguageModelV2;
30
+ languageModel(modelId: string, settings?: CodexCliSettings): LanguageModelV2;
31
+ chat(modelId: string, settings?: CodexCliSettings): LanguageModelV2;
32
32
  textEmbeddingModel(modelId: string): never;
33
33
  imageModel(modelId: string): never;
34
34
  }
@@ -39,23 +39,25 @@ interface CodexLanguageModelOptions {
39
39
  id: string;
40
40
  settings?: CodexCliSettings;
41
41
  }
42
- declare class CodexCliLanguageModel implements LanguageModelV1 {
43
- readonly specificationVersion: "v1";
42
+ declare class CodexCliLanguageModel implements LanguageModelV2 {
43
+ readonly specificationVersion: "v2";
44
44
  readonly provider = "codex-cli";
45
45
  readonly defaultObjectGenerationMode: "json";
46
46
  readonly supportsImageUrls = false;
47
- readonly supportsStructuredOutputs = false;
47
+ readonly supportedUrls: {};
48
+ readonly supportsStructuredOutputs = true;
48
49
  readonly modelId: string;
49
50
  readonly settings: CodexCliSettings;
50
51
  private logger;
51
52
  private sessionId?;
52
53
  constructor(options: CodexLanguageModelOptions);
53
54
  private buildArgs;
55
+ private sanitizeJsonSchema;
54
56
  private mapWarnings;
55
- private parseJsonLine;
57
+ private parseExperimentalJsonEvent;
56
58
  private handleSpawnError;
57
- doGenerate(options: Parameters<LanguageModelV1['doGenerate']>[0]): Promise<Awaited<ReturnType<LanguageModelV1['doGenerate']>>>;
58
- doStream(options: Parameters<LanguageModelV1['doStream']>[0]): Promise<Awaited<ReturnType<LanguageModelV1['doStream']>>>;
59
+ doGenerate(options: Parameters<LanguageModelV2['doGenerate']>[0]): Promise<Awaited<ReturnType<LanguageModelV2['doGenerate']>>>;
60
+ doStream(options: Parameters<LanguageModelV2['doStream']>[0]): Promise<Awaited<ReturnType<LanguageModelV2['doStream']>>>;
59
61
  }
60
62
 
61
63
  declare function isAuthenticationError(err: unknown): boolean;
package/dist/index.d.ts CHANGED
@@ -1,4 +1,4 @@
1
- import { ProviderV1, LanguageModelV1 } from '@ai-sdk/provider';
1
+ import { ProviderV2, LanguageModelV2 } from '@ai-sdk/provider';
2
2
 
3
3
  interface Logger {
4
4
  warn: (message: string) => void;
@@ -25,10 +25,10 @@ interface CodexCliProviderSettings {
25
25
  defaultSettings?: CodexCliSettings;
26
26
  }
27
27
 
28
- interface CodexCliProvider extends ProviderV1 {
29
- (modelId: string, settings?: CodexCliSettings): LanguageModelV1;
30
- languageModel(modelId: string, settings?: CodexCliSettings): LanguageModelV1;
31
- chat(modelId: string, settings?: CodexCliSettings): LanguageModelV1;
28
+ interface CodexCliProvider extends ProviderV2 {
29
+ (modelId: string, settings?: CodexCliSettings): LanguageModelV2;
30
+ languageModel(modelId: string, settings?: CodexCliSettings): LanguageModelV2;
31
+ chat(modelId: string, settings?: CodexCliSettings): LanguageModelV2;
32
32
  textEmbeddingModel(modelId: string): never;
33
33
  imageModel(modelId: string): never;
34
34
  }
@@ -39,23 +39,25 @@ interface CodexLanguageModelOptions {
39
39
  id: string;
40
40
  settings?: CodexCliSettings;
41
41
  }
42
- declare class CodexCliLanguageModel implements LanguageModelV1 {
43
- readonly specificationVersion: "v1";
42
+ declare class CodexCliLanguageModel implements LanguageModelV2 {
43
+ readonly specificationVersion: "v2";
44
44
  readonly provider = "codex-cli";
45
45
  readonly defaultObjectGenerationMode: "json";
46
46
  readonly supportsImageUrls = false;
47
- readonly supportsStructuredOutputs = false;
47
+ readonly supportedUrls: {};
48
+ readonly supportsStructuredOutputs = true;
48
49
  readonly modelId: string;
49
50
  readonly settings: CodexCliSettings;
50
51
  private logger;
51
52
  private sessionId?;
52
53
  constructor(options: CodexLanguageModelOptions);
53
54
  private buildArgs;
55
+ private sanitizeJsonSchema;
54
56
  private mapWarnings;
55
- private parseJsonLine;
57
+ private parseExperimentalJsonEvent;
56
58
  private handleSpawnError;
57
- doGenerate(options: Parameters<LanguageModelV1['doGenerate']>[0]): Promise<Awaited<ReturnType<LanguageModelV1['doGenerate']>>>;
58
- doStream(options: Parameters<LanguageModelV1['doStream']>[0]): Promise<Awaited<ReturnType<LanguageModelV1['doStream']>>>;
59
+ doGenerate(options: Parameters<LanguageModelV2['doGenerate']>[0]): Promise<Awaited<ReturnType<LanguageModelV2['doGenerate']>>>;
60
+ doStream(options: Parameters<LanguageModelV2['doStream']>[0]): Promise<Awaited<ReturnType<LanguageModelV2['doStream']>>>;
59
61
  }
60
62
 
61
63
  declare function isAuthenticationError(err: unknown): boolean;
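The `parseExperimentalJsonEvent` method declared in the typings above can be illustrated with a standalone sketch. The event shapes follow the handling shown in the `dist/index.cjs` diff; the sample JSONL lines are assumptions for illustration, and for brevity this version maps only `assistant_message` items (the built code also accepts `reasoning` items):

```typescript
interface ParsedEvent {
  sessionId?: string;
  text?: string;
  usage?: { inputTokens: number; outputTokens: number };
  error?: string;
}

function parseExperimentalJsonEvent(line: string): ParsedEvent {
  try {
    const evt = JSON.parse(line);
    switch (evt.type) {
      case "session.created":
        return { sessionId: evt.session_id };
      case "turn.completed":
        return evt.usage
          ? {
              usage: {
                inputTokens: evt.usage.input_tokens || 0,
                outputTokens: evt.usage.output_tokens || 0,
              },
            }
          : {};
      case "item.completed":
        return evt.item?.item_type === "assistant_message" ? { text: evt.item.text } : {};
      case "error":
        return { error: evt.message };
      default:
        return {};
    }
  } catch {
    // Malformed lines are ignored, matching the built output's behavior.
    return {};
  }
}

const lines = [
  '{"type":"session.created","session_id":"sess_123"}',
  '{"type":"item.completed","item":{"item_type":"assistant_message","text":"hello"}}',
  '{"type":"turn.completed","usage":{"input_tokens":10,"output_tokens":3}}',
  "not json",
];
const parsed = lines.map(parseExperimentalJsonEvent);
```

Each stdout chunk from the spawned CLI is split on newlines and fed through this parser; only the last `item.completed` text and the final `turn.completed` usage survive into the AI SDK result.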
package/dist/index.js CHANGED
@@ -2,30 +2,14 @@ import { NoSuchModelError, LoadAPIKeyError, APICallError } from '@ai-sdk/provide
  import { spawn } from 'child_process';
  import { randomUUID } from 'crypto';
  import { createRequire } from 'module';
- import { mkdtempSync, readFileSync, rmSync } from 'fs';
+ import { mkdtempSync, writeFileSync, rmSync, readFileSync } from 'fs';
  import { tmpdir } from 'os';
- import { join } from 'path';
+ import { join, dirname } from 'path';
  import { generateId } from '@ai-sdk/provider-utils';
  import { z } from 'zod';

  // src/codex-cli-provider.ts

- // src/extract-json.ts
- function extractJson(text) {
- const start = text.indexOf("{");
- if (start === -1) return text;
- let depth = 0;
- for (let i = start; i < text.length; i++) {
- const ch = text[i];
- if (ch === "{") depth++;
- else if (ch === "}") {
- depth--;
- if (depth === 0) return text.slice(start, i + 1);
- }
- }
- return text;
- }
-
  // src/logger.ts
  var defaultLogger = {
  warn: (m) => console.warn(m),
@@ -105,7 +89,7 @@ function isToolItem(p) {
  if (out.type === "text" && typeof out.value !== "string") return false;
  return true;
  }
- function mapMessagesToPrompt(prompt, mode = { type: "regular" }, jsonSchema) {
+ function mapMessagesToPrompt(prompt) {
  const warnings = [];
  const parts = [];
  let systemText;
@@ -148,17 +132,6 @@ function mapMessagesToPrompt(prompt, mode = { type: "regular" }, jsonSchema) {
  let promptText = "";
  if (systemText) promptText += systemText + "\n\n";
  promptText += parts.join("\n\n");
- if (mode.type === "object-json" && jsonSchema) {
- const schemaStr = JSON.stringify(jsonSchema, null, 2);
- promptText = `CRITICAL: You MUST respond with ONLY a JSON object. NO other text.
- Your response MUST start with { and end with }
- The JSON MUST match this EXACT schema:
- ${schemaStr}
-
- Now, based on the following conversation, generate ONLY the JSON object:
-
- ${promptText}`
- }
  return { promptText, ...warnings.length ? { warnings } : {} };
  }
  function createAPICallError({
@@ -206,11 +179,12 @@ function resolveCodexPath(explicitPath, allowNpx) {
  }
  }
  var CodexCliLanguageModel = class {
- specificationVersion = "v1";
+ specificationVersion = "v2";
  provider = "codex-cli";
  defaultObjectGenerationMode = "json";
  supportsImageUrls = false;
- supportsStructuredOutputs = false;
+ supportedUrls = {};
+ supportsStructuredOutputs = true;
  modelId;
  settings;
  logger;
@@ -225,9 +199,9 @@ var CodexCliLanguageModel = class {
  const warn = validateModelId(this.modelId);
  if (warn) this.logger.warn(`Codex CLI model: ${warn}`);
  }
- buildArgs(promptText) {
+ buildArgs(promptText, responseFormat) {
  const base = resolveCodexPath(this.settings.codexPath, this.settings.allowNpx);
- const args = [...base.args, "exec", "--json"];
+ const args = [...base.args, "exec", "--experimental-json"];
  if (this.settings.fullAuto) {
  args.push("--full-auto");
  } else if (this.settings.dangerouslyBypassApprovalsAndSandbox) {
@@ -247,6 +221,20 @@ var CodexCliLanguageModel = class {
  if (this.modelId) {
  args.push("-m", this.modelId);
  }
+ let schemaPath;
+ if (responseFormat?.type === "json" && responseFormat.schema) {
+ const dir = mkdtempSync(join(tmpdir(), "codex-schema-"));
+ schemaPath = join(dir, "schema.json");
+ const schema = typeof responseFormat.schema === "object" ? responseFormat.schema : {};
+ const sanitizedSchema = this.sanitizeJsonSchema(schema);
+ const schemaWithAdditional = {
+ ...sanitizedSchema,
+ // OpenAI strict mode requires additionalProperties=false even if user requested otherwise.
+ additionalProperties: false
+ };
+ writeFileSync(schemaPath, JSON.stringify(schemaWithAdditional, null, 2));
+ args.push("--output-schema", schemaPath);
+ }
  args.push(promptText);
  const env = {
  ...process.env,
@@ -259,7 +247,34 @@ var CodexCliLanguageModel = class {
  lastMessagePath = join(dir, "last-message.txt");
  }
  args.push("--output-last-message", lastMessagePath);
- return { cmd: base.cmd, args, env, cwd: this.settings.cwd, lastMessagePath };
+ return { cmd: base.cmd, args, env, cwd: this.settings.cwd, lastMessagePath, schemaPath };
+ }
+ sanitizeJsonSchema(value) {
+ if (typeof value !== "object" || value === null) {
+ return value;
+ }
+ if (Array.isArray(value)) {
+ return value.map((item) => this.sanitizeJsonSchema(item));
+ }
+ const obj = value;
+ const result = {};
+ for (const [key, val] of Object.entries(obj)) {
+ if (key === "properties" && typeof val === "object" && val !== null && !Array.isArray(val)) {
+ const props = val;
+ const sanitizedProps = {};
+ for (const [propName, propSchema] of Object.entries(props)) {
+ sanitizedProps[propName] = this.sanitizeJsonSchema(propSchema);
+ }
+ result[key] = sanitizedProps;
+ continue;
+ }
+ if (key === "$schema" || key === "$id" || key === "$ref" || key === "$defs" || key === "definitions" || key === "title" || key === "examples" || key === "default" || key === "format" || // OpenAI strict mode doesn't support format
+ key === "pattern") {
+ continue;
+ }
+ result[key] = this.sanitizeJsonSchema(val);
+ }
+ return result;
  }
  mapWarnings(options) {
  const unsupported = [];
@@ -272,7 +287,6 @@ var CodexCliLanguageModel = class {
  });
  };
  add(options.temperature, "temperature");
- add(options.maxTokens, "maxTokens");
  add(options.topP, "topP");
  add(options.topK, "topK");
  add(options.presencePenalty, "presencePenalty");
@@ -281,11 +295,34 @@ var CodexCliLanguageModel = class {
  add(options.seed, "seed");
  return unsupported;
  }
- parseJsonLine(line) {
+ parseExperimentalJsonEvent(line) {
  try {
- return JSON.parse(line);
+ const evt = JSON.parse(line);
+ const result = {};
+ switch (evt.type) {
+ case "session.created":
+ result.sessionId = evt.session_id;
+ break;
+ case "turn.completed":
+ if (evt.usage) {
+ result.usage = {
+ inputTokens: evt.usage.input_tokens || 0,
+ outputTokens: evt.usage.output_tokens || 0
+ };
+ }
+ break;
+ case "item.completed":
+ if (evt.item?.item_type === "assistant_message" || evt.item?.item_type === "reasoning") {
+ result.text = evt.item.text;
+ }
+ break;
+ case "error":
+ result.error = evt.message;
+ break;
+ }
+ return result;
  } catch {
- return void 0;
+ return {};
  }
  }
  handleSpawnError(err, promptExcerpt) {
@@ -303,20 +340,19 @@ var CodexCliLanguageModel = class {
  });
  }
  async doGenerate(options) {
- const mode = options.mode?.type === "object-json" ? { type: "object-json" } : { type: "regular" };
- const { promptText, warnings: mappingWarnings } = mapMessagesToPrompt(
- options.prompt,
- mode,
- options.mode?.type === "object-json" ? options.mode.schema : void 0
- );
+ const { promptText, warnings: mappingWarnings } = mapMessagesToPrompt(options.prompt);
  const promptExcerpt = promptText.slice(0, 200);
  const warnings = [
  ...this.mapWarnings(options),
  ...mappingWarnings?.map((m) => ({ type: "other", message: m })) || []
  ];
- const { cmd, args, env, cwd, lastMessagePath } = this.buildArgs(promptText);
+ const responseFormat = options.responseFormat?.type === "json" ? { type: "json", schema: options.responseFormat.schema } : void 0;
+ const { cmd, args, env, cwd, lastMessagePath, schemaPath } = this.buildArgs(
+ promptText,
+ responseFormat
+ );
  let text = "";
- const usage = { promptTokens: 0, completionTokens: 0 };
+ const usage = { inputTokens: 0, outputTokens: 0, totalTokens: 0 };
  const finishReason = "stop";
  const child = spawn(cmd, args, { env, cwd, stdio: ["ignore", "pipe", "pipe"] });
  let onAbort;
@@ -336,15 +372,13 @@ var CodexCliLanguageModel = class {
  child.stdout.on("data", (chunk) => {
  const lines = chunk.split(/\r?\n/).filter(Boolean);
  for (const line of lines) {
- const evt = this.parseJsonLine(line);
- if (!evt) continue;
- const msg = evt.msg;
- const type = msg?.type;
- if (type === "session_configured" && msg) {
- this.sessionId = msg.session_id;
- } else if (type === "task_complete" && msg) {
- const last = msg.last_agent_message;
- if (typeof last === "string") text = last;
+ const parsed = this.parseExperimentalJsonEvent(line);
+ if (parsed.sessionId) this.sessionId = parsed.sessionId;
+ if (parsed.text) text = parsed.text;
+ if (parsed.usage) {
+ usage.inputTokens = parsed.usage.inputTokens;
+ usage.outputTokens = parsed.usage.outputTokens;
+ usage.totalTokens = usage.inputTokens + usage.outputTokens;
  }
  }
  });
@@ -364,6 +398,13 @@ var CodexCliLanguageModel = class {
  });
  } finally {
  if (options.abortSignal && onAbort) options.abortSignal.removeEventListener("abort", onAbort);
+ if (schemaPath) {
+ try {
+ const schemaDir = dirname(schemaPath);
+ rmSync(schemaDir, { recursive: true, force: true });
+ } catch {
+ }
+ }
  }
  if (!text && lastMessagePath) {
  try {
@@ -378,38 +419,35 @@ var CodexCliLanguageModel = class {
  } catch {
  }
  }
- if (options.mode?.type === "object-json" && text) {
- text = extractJson(text);
- }
+ const content = [{ type: "text", text }];
  return {
- text: text || void 0,
+ content,
  usage,
  finishReason,
- warnings: warnings.length ? warnings : void 0,
+ warnings,
  response: { id: generateId(), timestamp: /* @__PURE__ */ new Date(), modelId: this.modelId },
  request: { body: promptText },
  providerMetadata: {
  "codex-cli": { ...this.sessionId ? { sessionId: this.sessionId } : {} }
- },
- rawCall: { rawPrompt: promptText, rawSettings: { model: this.modelId, settings: this.settings } }
+ }
  };
  }
  async doStream(options) {
- const mode = options.mode?.type === "object-json" ? { type: "object-json" } : { type: "regular" };
- const { promptText, warnings: mappingWarnings } = mapMessagesToPrompt(
- options.prompt,
- mode,
- options.mode?.type === "object-json" ? options.mode.schema : void 0
- );
+ const { promptText, warnings: mappingWarnings } = mapMessagesToPrompt(options.prompt);
  const promptExcerpt = promptText.slice(0, 200);
  const warnings = [
  ...this.mapWarnings(options),
  ...mappingWarnings?.map((m) => ({ type: "other", message: m })) || []
  ];
- const { cmd, args, env, cwd, lastMessagePath } = this.buildArgs(promptText);
+ const responseFormat = options.responseFormat?.type === "json" ? { type: "json", schema: options.responseFormat.schema } : void 0;
+ const { cmd, args, env, cwd, lastMessagePath, schemaPath } = this.buildArgs(
+ promptText,
+ responseFormat
+ );
  const stream = new ReadableStream({
  start: (controller) => {
  const child = spawn(cmd, args, { env, cwd, stdio: ["ignore", "pipe", "pipe"] });
+ controller.enqueue({ type: "stream-start", warnings });
  let stderr = "";
  let accumulatedText = "";
  const onAbort = () => {
@@ -428,32 +466,37 @@ var CodexCliLanguageModel = class {
  child.stdout.on("data", (chunk) => {
  const lines = chunk.split(/\r?\n/).filter(Boolean);
  for (const line of lines) {
- const evt = this.parseJsonLine(line);
- if (!evt) continue;
- const msg = evt.msg;
- const type = msg?.type;
- if (type === "session_configured" && msg) {
- this.sessionId = msg.session_id;
+ const parsed = this.parseExperimentalJsonEvent(line);
+ if (parsed.sessionId) {
+ this.sessionId = parsed.sessionId;
  controller.enqueue({
  type: "response-metadata",
  id: randomUUID(),
  timestamp: /* @__PURE__ */ new Date(),
  modelId: this.modelId
  });
- } else if (type === "task_complete" && msg) {
- const last = msg.last_agent_message;
- if (typeof last === "string") {
- accumulatedText = last;
- }
+ }
+ if (parsed.text) {
+ accumulatedText = parsed.text;
  }
  }
  });
+ const cleanupSchema = () => {
+ if (!schemaPath) return;
+ try {
+ const schemaDir = dirname(schemaPath);
+ rmSync(schemaDir, { recursive: true, force: true });
+ } catch {
+ }
+ };
  child.on("error", (e) => {
  if (options.abortSignal) options.abortSignal.removeEventListener("abort", onAbort);
+ cleanupSchema();
  controller.error(this.handleSpawnError(e, promptExcerpt));
  });
  child.on("close", (code) => {
  if (options.abortSignal) options.abortSignal.removeEventListener("abort", onAbort);
+ cleanupSchema();
  if (code !== 0) {
  controller.error(
  createAPICallError({
478
521
  }
479
522
  }
480
523
  if (finalText) {
481
- if (options.mode?.type === "object-json") {
482
- finalText = extractJson(finalText);
483
- }
484
- controller.enqueue({ type: "text-delta", textDelta: finalText });
524
+ controller.enqueue({ type: "text-delta", id: randomUUID(), delta: finalText });
485
525
  }
486
526
  controller.enqueue({
487
527
  type: "finish",
488
528
  finishReason: "stop",
489
- usage: { promptTokens: 0, completionTokens: 0 },
490
- providerMetadata: {
491
- "codex-cli": { ...this.sessionId ? { sessionId: this.sessionId } : {} }
492
- }
529
+ usage: { inputTokens: 0, outputTokens: 0, totalTokens: 0 }
493
530
  });
494
531
  controller.close();
495
532
  });
@@ -497,11 +534,7 @@ var CodexCliLanguageModel = class {
497
  cancel: () => {
  }
  });
- return {
- stream,
- rawCall: { rawPrompt: promptText, rawSettings: { model: this.modelId, settings: this.settings } },
- warnings: warnings.length ? warnings : void 0
- };
+ return { stream, request: { body: promptText } };
  }
  };
 
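Before the schema file is handed to `--output-schema`, the new `sanitizeJsonSchema` pass above strips keywords that OpenAI strict mode rejects while leaving property names intact. Restated as a standalone sketch (the key list is taken from the bundled source in this diff; the `STRIPPED_KEYS` name is introduced here for illustration):

```typescript
// Keywords removed for OpenAI strict mode, per the key list in the bundled source.
const STRIPPED_KEYS = new Set([
  "$schema", "$id", "$ref", "$defs", "definitions",
  "title", "examples", "default", "format", "pattern",
]);

function sanitizeJsonSchema(value: unknown): unknown {
  if (typeof value !== "object" || value === null) return value;
  if (Array.isArray(value)) return value.map(sanitizeJsonSchema);
  const result: Record<string, unknown> = {};
  for (const [key, val] of Object.entries(value as Record<string, unknown>)) {
    if (key === "properties" && typeof val === "object" && val !== null && !Array.isArray(val)) {
      // Property *names* are preserved verbatim; only their schemas are sanitized.
      const props: Record<string, unknown> = {};
      for (const [name, schema] of Object.entries(val as Record<string, unknown>)) {
        props[name] = sanitizeJsonSchema(schema);
      }
      result[key] = props;
      continue;
    }
    if (STRIPPED_KEYS.has(key)) continue;
    result[key] = sanitizeJsonSchema(val);
  }
  return result;
}
```

Note that a property named `title` survives the pass, because keys under `properties` are never treated as schema keywords; only a top-level `title` (or a nested `format`, `pattern`, etc.) is dropped.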
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "ai-sdk-provider-codex-cli",
- "version": "0.1.0-ai-sdk-v4",
- "description": "AI SDK v4 provider for spawning OpenAI Codex CLI (gpt-5 via ChatGPT OAuth)",
+ "version": "0.2.0",
+ "description": "AI SDK v5 provider for OpenAI Codex CLI with native JSON Schema support",
  "keywords": [
  "ai-sdk",
  "codex",
@@ -56,8 +56,8 @@
  "test:coverage": "vitest run --coverage"
  },
  "dependencies": {
- "@ai-sdk/provider": "1.1.3",
- "@ai-sdk/provider-utils": "2.2.8",
+ "@ai-sdk/provider": "2.0.0",
+ "@ai-sdk/provider-utils": "3.0.3",
  "jsonc-parser": "^3.3.1"
  },
  "optionalDependencies": {
@@ -67,17 +67,17 @@
  "@eslint/js": "^9.14.0",
  "@types/node": "20.17.24",
  "@vitest/coverage-v8": "^3.2.4",
- "ai": "4.3.16",
+ "ai": "5.0.14",
  "eslint": "^9.14.0",
  "prettier": "^3.3.3",
  "tsup": "8.5.0",
  "typescript": "5.6.3",
  "vitest": "^3.2.4",
  "typescript-eslint": "^8.6.0",
- "zod": "3.24.1"
+ "zod": "^4.0.17"
  },
  "peerDependencies": {
- "zod": "^3.0.0"
+ "zod": "^3.0.0 || ^4.0.0"
  },
  "engines": {
  "node": ">=18"
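Taken together, the dist changes mean a 0.2.0 generation request shells out roughly as follows. This is a minimal sketch of the `buildArgs` flag assembly shown in the dist diff; `sketchArgs` is a name introduced here for illustration, and codexPath resolution, the bypass-sandbox branch, env handling, and `--output-last-message` are omitted:

```typescript
// Sketch of the argv ordering implied by buildArgs in this release.
function sketchArgs(
  modelId: string,
  promptText: string,
  fullAuto: boolean,
  schemaPath?: string,
): string[] {
  const args = ["exec", "--experimental-json"]; // replaces "--json" from 0.1.x
  if (fullAuto) args.push("--full-auto");
  if (modelId) args.push("-m", modelId);
  if (schemaPath) args.push("--output-schema", schemaPath); // new structured-output path
  args.push(promptText); // the prompt stays the final positional argument
  return args;
}
```

So a structured-output call against `gpt-5-codex` produces an argv like `exec --experimental-json --full-auto -m gpt-5-codex --output-schema <tmp>/schema.json <prompt>`, with the schema temp directory removed after the process exits.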