ai-sdk-provider-codex-cli 1.0.0-beta.1 → 1.0.1

This diff shows the changes between publicly released versions of the package as they appear in their respective public registries. It is provided for informational purposes only.
package/README.md CHANGED
@@ -4,33 +4,31 @@
  [![npm downloads](https://img.shields.io/npm/dm/ai-sdk-provider-codex-cli.svg)](https://www.npmjs.com/package/ai-sdk-provider-codex-cli)
  [![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)
  ![Node >= 18](https://img.shields.io/badge/node-%3E%3D18-43853d?logo=node.js&logoColor=white)
- ![AI SDK v6 beta](https://img.shields.io/badge/AI%20SDK-v6%20beta-000?logo=vercel&logoColor=white)
+ ![AI SDK v6](https://img.shields.io/badge/AI%20SDK-v6-000?logo=vercel&logoColor=white)
  ![Modules: ESM + CJS](https://img.shields.io/badge/modules-ESM%20%2B%20CJS-3178c6)
  ![TypeScript](https://img.shields.io/badge/TypeScript-blue)
  [![PRs welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/ben-vargas/ai-sdk-provider-codex-cli/issues)
  [![Latest Release](https://img.shields.io/github/v/release/ben-vargas/ai-sdk-provider-codex-cli?display_name=tag)](https://github.com/ben-vargas/ai-sdk-provider-codex-cli/releases/latest)

- A community provider for Vercel AI SDK v6 (beta) that uses OpenAI's Codex CLI (non‑interactive `codex exec`) to talk to GPT‑5.1 class models (`gpt-5.1`, the Codex-specific `gpt-5.1-codex`, the flagship `gpt-5.1-codex-max`, and the lightweight `gpt-5.1-codex-mini` slugs) with your ChatGPT Plus/Pro subscription. The provider spawns the Codex CLI process, parses its JSONL output, and adapts it to the AI SDK LanguageModelV3 interface. Legacy GPT-5 / GPT-5-codex slugs remain compatible for existing workflows.
-
- > **⚠️ AI SDK v6 Beta**: This version (`1.0.0-beta.x`) requires AI SDK v6 beta packages. For AI SDK v5, use the `0.x` releases from the main branch.
+ A community provider for Vercel AI SDK v6 that uses OpenAI's Codex CLI (non‑interactive `codex exec`) to talk to GPT‑5.1 class models (`gpt-5.1`, the Codex-specific `gpt-5.1-codex`, the flagship `gpt-5.1-codex-max`, and the lightweight `gpt-5.1-codex-mini` slugs) with your ChatGPT Plus/Pro subscription. The provider spawns the Codex CLI process, parses its JSONL output, and adapts it to the AI SDK LanguageModelV3 interface. Legacy GPT-5 / GPT-5-codex slugs remain compatible for existing workflows.

  - Works with `generateText`, `streamText`, and `generateObject` (native JSON Schema support via `--output-schema`)
  - Uses ChatGPT OAuth from `codex login` (tokens in `~/.codex/auth.json`) or `OPENAI_API_KEY`
  - Node-only (spawns a local process); supports CI and local dev
- - **v1.0.0-beta.1**: AI SDK v6 migration with LanguageModelV3 interface
+ - **v1.0.0**: AI SDK v6 stable migration with LanguageModelV3 interface
  - **v0.5.0**: Adds comprehensive logging system with verbose mode and custom logger support
  - **v0.3.0**: Adds comprehensive tool streaming support for monitoring autonomous tool execution

  ## Version Compatibility

- | Provider Version | AI SDK Version | NPM Tag  | Status | Branch      |
- | ---------------- | -------------- | -------- | ------ | ----------- |
- | 1.x.x-beta       | v6 (beta)      | `beta`   | Beta   | `ai-sdk-v6` |
- | 0.x.x            | v5             | `latest` | Stable | `main`      |
+ | Provider Version | AI SDK Version | NPM Tag     | NPM Installation                                      |
+ | ---------------- | -------------- | ----------- | ----------------------------------------------------- |
+ | 1.x.x            | v6             | `latest`    | `npm i ai-sdk-provider-codex-cli ai@^6.0.0`           |
+ | 0.x.x            | v5             | `ai-sdk-v5` | `npm i ai-sdk-provider-codex-cli@ai-sdk-v5 ai@^5.0.0` |

  ## Installation

- ### For AI SDK v6 (beta)
+ ### For AI SDK v6 (default)

  1. Install and authenticate Codex CLI

@@ -39,16 +37,16 @@ npm i -g @openai/codex
  codex login # or set OPENAI_API_KEY
  ```

- 2. Install provider and AI SDK v6 beta
+ 2. Install provider and AI SDK v6

  ```bash
- npm i ai@beta ai-sdk-provider-codex-cli@beta
+ npm i ai ai-sdk-provider-codex-cli
  ```

- ### For AI SDK v5 (stable)
+ ### For AI SDK v5

  ```bash
- npm i ai ai-sdk-provider-codex-cli
+ npm i ai@^5.0.0 ai-sdk-provider-codex-cli@ai-sdk-v5
  ```

  > **⚠️ Codex CLI Version**: Requires Codex CLI **>= 0.42.0** for `--experimental-json` and `--output-schema` support. **>= 0.60.0 recommended** for `gpt-5.1-codex-max` and `xhigh` reasoning effort. If you supply your own Codex CLI (global install or custom `codexPath`/`allowNpx`), check it with `codex --version` and upgrade if needed. The optional dependency `@openai/codex` in this package pulls a compatible version automatically.
@@ -315,7 +313,6 @@ const model = codexCli('gpt-5.1-codex', {
    modelVerbosity: 'high', // low | medium | high

    // Advanced features
-   includePlanTool: true, // adds --include-plan-tool
    profile: 'production', // adds --profile production
    oss: false, // adds --oss when true
    webSearch: true, // maps to -c tools.web_search=true
package/dist/index.cjs CHANGED
@@ -117,7 +117,6 @@ var settingsSchema = zod.z.object({
    reasoningSummaryFormat: zod.z.enum(["none", "experimental"]).optional(),
    modelVerbosity: zod.z.enum(["low", "medium", "high"]).optional(),
    // NEW: Advanced features
-   includePlanTool: zod.z.boolean().optional(),
    profile: zod.z.string().optional(),
    oss: zod.z.boolean().optional(),
    webSearch: zod.z.boolean().optional(),
@@ -356,6 +355,41 @@ function isAuthenticationError(err) {
  }

  // src/codex-cli-language-model.ts
+ function createEmptyCodexUsage() {
+   return {
+     inputTokens: {
+       total: 0,
+       noCache: 0,
+       cacheRead: 0,
+       cacheWrite: 0
+     },
+     outputTokens: {
+       total: 0,
+       text: void 0,
+       reasoning: void 0
+     },
+     raw: void 0
+   };
+ }
+ function mapCodexCliFinishReason(reason) {
+   switch (reason) {
+     case "stop":
+     case "end_turn":
+     case void 0:
+       return { unified: "stop", raw: reason };
+     case "length":
+     case "max_tokens":
+       return { unified: "length", raw: reason };
+     case "content_filter":
+       return { unified: "content-filter", raw: reason };
+     case "tool_calls":
+       return { unified: "tool-calls", raw: reason };
+     case "error":
+       return { unified: "error", raw: reason };
+     default:
+       return { unified: "other", raw: reason };
+   }
+ }
  var codexCliProviderOptionsSchema = zod.z.object({
    reasoningEffort: zod.z.enum(["minimal", "low", "medium", "high", "xhigh"]).optional(),
    reasoningSummary: zod.z.enum(["auto", "detailed"]).optional(),
@@ -527,9 +561,6 @@ var CodexCliLanguageModel = class {
      if (settings.modelVerbosity) {
        args.push("-c", `model_verbosity=${settings.modelVerbosity}`);
      }
-     if (settings.includePlanTool) {
-       args.push("--include-plan-tool");
-     }
      if (settings.profile) {
        args.push("--profile", settings.profile);
      }
@@ -759,13 +790,14 @@ var CodexCliLanguageModel = class {
          total: inputTotal,
          noCache: inputTotal - cachedInputTokens,
          cacheRead: cachedInputTokens,
-         cacheWrite: void 0
+         cacheWrite: 0
        },
        outputTokens: {
          total: outputTotal,
-         text: outputTotal,
+         text: void 0,
          reasoning: void 0
-       }
+       },
+       raw: reported
      };
    }
    getToolName(item) {
@@ -963,11 +995,8 @@ var CodexCliLanguageModel = class {
        `[codex-cli] Executing Codex CLI: ${cmd} with ${args.length} arguments, cwd: ${cwd ?? "default"}`
      );
      let text = "";
-     let usage = {
-       inputTokens: { total: 0, noCache: 0, cacheRead: 0, cacheWrite: void 0 },
-       outputTokens: { total: 0, text: 0, reasoning: void 0 }
-     };
-     const finishReason = "stop";
+     let usage = createEmptyCodexUsage();
+     let finishReason = mapCodexCliFinishReason(void 0);
      const startTime = Date.now();
      const child = child_process.spawn(cmd, args, { env, cwd, stdio: ["ignore", "pipe", "pipe"] });
      let onAbort;
@@ -1280,10 +1309,7 @@ var CodexCliLanguageModel = class {
        controller.enqueue({ type: "text-delta", id: textId, delta: finalText });
        controller.enqueue({ type: "text-end", id: textId });
      }
-     const usageSummary = lastUsage ?? {
-       inputTokens: { total: 0, noCache: 0, cacheRead: 0, cacheWrite: void 0 },
-       outputTokens: { total: 0, text: 0, reasoning: void 0 }
-     };
+     const usageSummary = lastUsage ?? createEmptyCodexUsage();
      const totalTokens = (usageSummary.inputTokens.total ?? 0) + (usageSummary.outputTokens.total ?? 0);
      this.logger.info(
        `[codex-cli] Stream completed - Session: ${this.sessionId ?? "N/A"}, Duration: ${duration}ms, Tokens: ${totalTokens}`
@@ -1293,7 +1319,7 @@ var CodexCliLanguageModel = class {
      );
      controller.enqueue({
        type: "finish",
-       finishReason: "stop",
+       finishReason: mapCodexCliFinishReason(void 0),
        usage: usageSummary
      });
      controller.close();
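The most visible behavioral change above is the new `mapCodexCliFinishReason` helper, which replaces the hard-coded `"stop"` finish reason with a mapping from the raw Codex CLI value to the AI SDK's unified vocabulary while preserving the raw value. Extracted as a standalone sketch (adapted from the compiled output in this diff):

```javascript
// Normalizes a raw Codex CLI finish reason into the AI SDK's unified
// finish-reason vocabulary; the raw value travels along untouched.
// An undefined reason (CLI reported nothing) is treated as a normal stop.
function mapCodexCliFinishReason(reason) {
  switch (reason) {
    case "stop":
    case "end_turn":
    case undefined:
      return { unified: "stop", raw: reason };
    case "length":
    case "max_tokens":
      return { unified: "length", raw: reason };
    case "content_filter":
      return { unified: "content-filter", raw: reason };
    case "tool_calls":
      return { unified: "tool-calls", raw: reason };
    case "error":
      return { unified: "error", raw: reason };
    default:
      return { unified: "other", raw: reason };
  }
}

console.log(mapCodexCliFinishReason("max_tokens")); // → { unified: 'length', raw: 'max_tokens' }
console.log(mapCodexCliFinishReason(undefined));    // → { unified: 'stop', raw: undefined }
```

Note that both call sites in this release still pass `void 0`, so the observable finish reason remains `stop`; the helper lays the groundwork for propagating real CLI finish reasons.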
package/dist/index.d.cts CHANGED
@@ -158,12 +158,6 @@ interface CodexCliSettings {
     * Maps to: `-c features.rmcp_client=true`
     */
    rmcpClient?: boolean;
-   /**
-    * Include experimental plan tool that the model can use to update its current plan.
-    *
-    * Maps to: `--include-plan-tool`
-    */
-   includePlanTool?: boolean;
    /**
     * Configuration profile from config.toml to specify default options.
     *
package/dist/index.d.ts CHANGED
@@ -158,12 +158,6 @@ interface CodexCliSettings {
     * Maps to: `-c features.rmcp_client=true`
     */
    rmcpClient?: boolean;
-   /**
-    * Include experimental plan tool that the model can use to update its current plan.
-    *
-    * Maps to: `--include-plan-tool`
-    */
-   includePlanTool?: boolean;
    /**
     * Configuration profile from config.toml to specify default options.
     *
package/dist/index.js CHANGED
@@ -114,7 +114,6 @@ var settingsSchema = z.object({
    reasoningSummaryFormat: z.enum(["none", "experimental"]).optional(),
    modelVerbosity: z.enum(["low", "medium", "high"]).optional(),
    // NEW: Advanced features
-   includePlanTool: z.boolean().optional(),
    profile: z.string().optional(),
    oss: z.boolean().optional(),
    webSearch: z.boolean().optional(),
@@ -353,6 +352,41 @@ function isAuthenticationError(err) {
  }

  // src/codex-cli-language-model.ts
+ function createEmptyCodexUsage() {
+   return {
+     inputTokens: {
+       total: 0,
+       noCache: 0,
+       cacheRead: 0,
+       cacheWrite: 0
+     },
+     outputTokens: {
+       total: 0,
+       text: void 0,
+       reasoning: void 0
+     },
+     raw: void 0
+   };
+ }
+ function mapCodexCliFinishReason(reason) {
+   switch (reason) {
+     case "stop":
+     case "end_turn":
+     case void 0:
+       return { unified: "stop", raw: reason };
+     case "length":
+     case "max_tokens":
+       return { unified: "length", raw: reason };
+     case "content_filter":
+       return { unified: "content-filter", raw: reason };
+     case "tool_calls":
+       return { unified: "tool-calls", raw: reason };
+     case "error":
+       return { unified: "error", raw: reason };
+     default:
+       return { unified: "other", raw: reason };
+   }
+ }
  var codexCliProviderOptionsSchema = z.object({
    reasoningEffort: z.enum(["minimal", "low", "medium", "high", "xhigh"]).optional(),
    reasoningSummary: z.enum(["auto", "detailed"]).optional(),
@@ -524,9 +558,6 @@ var CodexCliLanguageModel = class {
      if (settings.modelVerbosity) {
        args.push("-c", `model_verbosity=${settings.modelVerbosity}`);
      }
-     if (settings.includePlanTool) {
-       args.push("--include-plan-tool");
-     }
      if (settings.profile) {
        args.push("--profile", settings.profile);
      }
@@ -756,13 +787,14 @@ var CodexCliLanguageModel = class {
          total: inputTotal,
          noCache: inputTotal - cachedInputTokens,
          cacheRead: cachedInputTokens,
-         cacheWrite: void 0
+         cacheWrite: 0
        },
        outputTokens: {
          total: outputTotal,
-         text: outputTotal,
+         text: void 0,
          reasoning: void 0
-       }
+       },
+       raw: reported
      };
    }
    getToolName(item) {
@@ -960,11 +992,8 @@ var CodexCliLanguageModel = class {
        `[codex-cli] Executing Codex CLI: ${cmd} with ${args.length} arguments, cwd: ${cwd ?? "default"}`
      );
      let text = "";
-     let usage = {
-       inputTokens: { total: 0, noCache: 0, cacheRead: 0, cacheWrite: void 0 },
-       outputTokens: { total: 0, text: 0, reasoning: void 0 }
-     };
-     const finishReason = "stop";
+     let usage = createEmptyCodexUsage();
+     let finishReason = mapCodexCliFinishReason(void 0);
      const startTime = Date.now();
      const child = spawn(cmd, args, { env, cwd, stdio: ["ignore", "pipe", "pipe"] });
      let onAbort;
@@ -1277,10 +1306,7 @@ var CodexCliLanguageModel = class {
        controller.enqueue({ type: "text-delta", id: textId, delta: finalText });
        controller.enqueue({ type: "text-end", id: textId });
      }
-     const usageSummary = lastUsage ?? {
-       inputTokens: { total: 0, noCache: 0, cacheRead: 0, cacheWrite: void 0 },
-       outputTokens: { total: 0, text: 0, reasoning: void 0 }
-     };
+     const usageSummary = lastUsage ?? createEmptyCodexUsage();
      const totalTokens = (usageSummary.inputTokens.total ?? 0) + (usageSummary.outputTokens.total ?? 0);
      this.logger.info(
        `[codex-cli] Stream completed - Session: ${this.sessionId ?? "N/A"}, Duration: ${duration}ms, Tokens: ${totalTokens}`
@@ -1290,7 +1316,7 @@ var CodexCliLanguageModel = class {
      );
      controller.enqueue({
        type: "finish",
-       finishReason: "stop",
+       finishReason: mapCodexCliFinishReason(void 0),
        usage: usageSummary
      });
      controller.close();
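The usage-shape change in this release is easiest to see in isolation: `cacheWrite` now defaults to `0` instead of `undefined`, `outputTokens.text` is no longer set to the output total, and the CLI's raw usage report is carried in a `raw` field. A standalone sketch of the new default and the stream path's fallback, adapted from the compiled output above:

```javascript
// Empty usage record matching the new shape in this release.
function createEmptyCodexUsage() {
  return {
    inputTokens: { total: 0, noCache: 0, cacheRead: 0, cacheWrite: 0 },
    outputTokens: { total: 0, text: undefined, reasoning: undefined },
    raw: undefined
  };
}

// The stream path falls back to this empty record when the CLI never
// reported usage, then totals input + output tokens for the log line.
const lastUsage = undefined; // e.g. no usage event arrived from the CLI
const usageSummary = lastUsage ?? createEmptyCodexUsage();
const totalTokens =
  (usageSummary.inputTokens.total ?? 0) + (usageSummary.outputTokens.total ?? 0);
console.log(totalTokens); // → 0
```

Previously the empty record used `cacheWrite: undefined` and `text: 0`, which conflated "not reported" with a real zero; the new shape makes the distinction explicit.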
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
    "name": "ai-sdk-provider-codex-cli",
-   "version": "1.0.0-beta.1",
-   "description": "AI SDK v6 provider for OpenAI Codex CLI with native JSON Schema support",
+   "version": "1.0.1",
+   "description": "AI SDK v6 provider for OpenAI Codex CLI (use ChatGPT Plus/Pro subscription)",
    "keywords": [
      "ai-sdk",
      "codex",
@@ -59,18 +59,18 @@
      "test:coverage": "vitest run --coverage"
    },
    "dependencies": {
-     "@ai-sdk/provider": "3.0.0-beta.26",
-     "@ai-sdk/provider-utils": "4.0.0-beta.51",
+     "@ai-sdk/provider": "^3.0.0",
+     "@ai-sdk/provider-utils": "^4.0.1",
      "jsonc-parser": "^3.3.1"
    },
    "optionalDependencies": {
-     "@openai/codex": "^0.60.1"
+     "@openai/codex": "^0.77.0"
    },
    "devDependencies": {
      "@eslint/js": "^9.14.0",
      "@types/node": "20.19.6",
      "@vitest/coverage-v8": "^3.2.4",
-     "ai": "6.0.0-beta.156",
+     "ai": "^6.0.3",
      "eslint": "^9.14.0",
      "prettier": "^3.3.3",
      "tsup": "8.5.0",