ai-sdk-provider-codex-cli 0.1.0 → 0.3.0

package/README.md CHANGED
@@ -1,10 +1,22 @@
  # AI SDK Provider for Codex CLI

- A community provider for Vercel AI SDK v5 that uses OpenAI’s Codex CLI (non‑interactive `codex exec`) to talk to GPT‑5 class models with your ChatGPT Plus/Pro subscription. The provider spawns the Codex CLI process, parses its JSONL output, and adapts it to the AI SDK LanguageModelV2 interface.
-
- - Works with `generateText`, `streamText`, and `generateObject` (JSON schemas via prompt engineering)
+ [![npm version](https://img.shields.io/npm/v/ai-sdk-provider-codex-cli.svg)](https://www.npmjs.com/package/ai-sdk-provider-codex-cli)
+ [![npm downloads](https://img.shields.io/npm/dm/ai-sdk-provider-codex-cli.svg)](https://www.npmjs.com/package/ai-sdk-provider-codex-cli)
+ [![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)
+ ![Node >= 18](https://img.shields.io/badge/node-%3E%3D18-43853d?logo=node.js&logoColor=white)
+ ![AI SDK v5](https://img.shields.io/badge/AI%20SDK-v5-000?logo=vercel&logoColor=white)
+ ![Modules: ESM + CJS](https://img.shields.io/badge/modules-ESM%20%2B%20CJS-3178c6)
+ ![TypeScript](https://img.shields.io/badge/TypeScript-blue)
+ [![PRs welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/ben-vargas/ai-sdk-provider-codex-cli/issues)
+ [![Latest Release](https://img.shields.io/github/v/release/ben-vargas/ai-sdk-provider-codex-cli?display_name=tag)](https://github.com/ben-vargas/ai-sdk-provider-codex-cli/releases/latest)
+
+ A community provider for Vercel AI SDK v5 that uses OpenAI’s Codex CLI (non‑interactive `codex exec`) to talk to GPT‑5 class models (`gpt-5` and the Codex-specific `gpt-5-codex` slug) with your ChatGPT Plus/Pro subscription. The provider spawns the Codex CLI process, parses its JSONL output, and adapts it to the AI SDK LanguageModelV2 interface.
+
+ - Works with `generateText`, `streamText`, and `generateObject` (native JSON Schema support via `--output-schema`)
  - Uses ChatGPT OAuth from `codex login` (tokens in `~/.codex/auth.json`) or `OPENAI_API_KEY`
  - Node-only (spawns a local process); supports CI and local dev
+ - **v0.3.0**: Adds comprehensive tool streaming support for monitoring autonomous tool execution
+ - **v0.2.0 Breaking Changes**: Switched to `--experimental-json` and native schema enforcement (see [CHANGELOG](CHANGELOG.md))

  ## Installation

@@ -15,6 +27,12 @@ npm i -g @openai/codex
  codex login # or set OPENAI_API_KEY
  ```

+ > **⚠️ Version Requirement**: Requires Codex CLI **>= 0.42.0** for `--experimental-json` and `--output-schema` support. **>= 0.44.0 recommended** for full usage tracking and tool streaming support. Check your version with `codex --version` and upgrade if needed:
+ >
+ > ```bash
+ > npm i -g @openai/codex@latest
+ > ```
+
  2. Install provider and AI SDK

  ```bash
@@ -29,7 +47,7 @@ Text generation
  import { generateText } from 'ai';
  import { codexCli } from 'ai-sdk-provider-codex-cli';

- const model = codexCli('gpt-5', {
+ const model = codexCli('gpt-5-codex', {
    allowNpx: true,
    skipGitRepoCheck: true,
    approvalMode: 'on-failure',
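For orientation, a complete non-streaming call using the options shown in this hunk might look like the following (a sketch, not the exact lines elided by the diff; the prompt string is illustrative):

```js
import { generateText } from 'ai';
import { codexCli } from 'ai-sdk-provider-codex-cli';

const model = codexCli('gpt-5-codex', {
  allowNpx: true, // fall back to `npx @openai/codex` when the CLI is not on PATH
  skipGitRepoCheck: true, // permit running outside a git repository
  approvalMode: 'on-failure', // safe default for non-interactive automation
});

const { text, usage } = await generateText({
  model,
  prompt: 'Summarize this repository in two sentences.',
});
console.log(text, usage);
```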
@@ -49,8 +67,10 @@ Streaming
  import { streamText } from 'ai';
  import { codexCli } from 'ai-sdk-provider-codex-cli';

+ // The provider works with both `gpt-5` and `gpt-5-codex`; the latter is the
+ // Codex-specific model slug.
  const { textStream } = await streamText({
-   model: codexCli('gpt-5', { allowNpx: true, skipGitRepoCheck: true }),
+   model: codexCli('gpt-5-codex', { allowNpx: true, skipGitRepoCheck: true }),
    prompt: 'Write two short lines of encouragement.',
  });
  for await (const chunk of textStream) process.stdout.write(chunk);
@@ -65,7 +85,7 @@ import { codexCli } from 'ai-sdk-provider-codex-cli';

  const schema = z.object({ name: z.string(), age: z.number().int() });
  const { object } = await generateObject({
-   model: codexCli('gpt-5', { allowNpx: true, skipGitRepoCheck: true }),
+   model: codexCli('gpt-5-codex', { allowNpx: true, skipGitRepoCheck: true }),
    schema,
    prompt: 'Generate a small user profile.',
  });
@@ -76,18 +96,60 @@ console.log(object);

  - AI SDK v5 compatible (LanguageModelV2)
  - Streaming and non‑streaming
- - JSON object generation with Zod schemas (prompt‑engineered)
+ - **Tool streaming support** (v0.3.0+): monitor autonomous tool execution in real time
+ - **Native JSON Schema support** via `--output-schema` (API-enforced with `strict: true`)
+ - JSON object generation with Zod schemas (100–200 fewer tokens per request versus prompt engineering)
  - Safe defaults for non‑interactive automation (`on-failure`, `workspace-write`, `--skip-git-repo-check`)
  - Fallback to `npx @openai/codex` when not on PATH (`allowNpx`)
+ - Usage tracking from the experimental JSON event format
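As a hedged illustration of those defaults, the provider options can also be spelled out explicitly. Only option names that appear elsewhere in this diff are used here; per the feature list, `on-failure` approvals and a `workspace-write` sandbox are the non-interactive defaults:

```js
import { codexCli } from 'ai-sdk-provider-codex-cli';

// Sketch of explicit provider options for non-interactive automation.
const model = codexCli('gpt-5-codex', {
  allowNpx: true, // use `npx @openai/codex` if the CLI is not on PATH
  skipGitRepoCheck: true, // maps to `--skip-git-repo-check`
  approvalMode: 'on-failure',
});
```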
+
+ ### Tool Streaming (v0.3.0+)
+
+ The provider supports comprehensive tool streaming, enabling real-time monitoring of Codex CLI's autonomous tool execution:
+
+ ```js
+ import { streamText } from 'ai';
+ import { codexCli } from 'ai-sdk-provider-codex-cli';
+
+ const result = await streamText({
+   model: codexCli('gpt-5-codex', { allowNpx: true, skipGitRepoCheck: true }),
+   prompt: 'List files and count lines in the largest one',
+ });
+
+ for await (const part of result.fullStream) {
+   if (part.type === 'tool-call') {
+     console.log('🔧 Tool:', part.toolName);
+   }
+   if (part.type === 'tool-result') {
+     console.log('✅ Result:', part.result);
+   }
+ }
+ ```
+
+ **What you get:**
+
+ - Tool invocation events when Codex starts executing tools (exec, patch, web_search, mcp_tool_call)
+ - Tool input tracking with full parameter visibility
+ - Tool result events with complete output payloads
+ - `providerExecuted: true` on all tool calls (Codex executes autonomously, so your app doesn't need to), as illustrated below
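Because of that `providerExecuted` flag, a consumer can tell these apart from client-side tools it would otherwise have to run itself. A minimal sketch, building on the stream loop above:

```js
for await (const part of result.fullStream) {
  if (part.type === 'tool-call' && part.providerExecuted) {
    // Codex CLI already ran this tool; just observe it, don't execute it again.
    console.log(`observed provider-side tool: ${part.toolName}`);
  }
}
```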
+
+ **Limitation:** Real-time output streaming (`output-delta` events) is not yet available; tool outputs are delivered in the final `tool-result` event. See `examples/streaming-tool-calls.mjs` and `examples/streaming-multiple-tools.mjs` for usage patterns.
+
+ ### Text Streaming Behavior
+
+ **Status:** Incremental streaming is not currently supported with the `--experimental-json` format (expected in future Codex CLI releases).

- ### Streaming behavior
+ The `--experimental-json` output format (introduced Sept 25, 2025) currently emits only `item.completed` events with full text content. Incremental streaming via `item.updated` or delta events is not yet implemented by OpenAI.

- When using `codex exec --json`, the Codex CLI intentionally suppresses token/assistant deltas in its JSON event stream. Instead, it writes the final assistant message to the file you pass via `--output-last-message` and then signals completion. This provider:
+ **What this means:**

- - emits `response-metadata` early (as soon as the session is configured), and
- - returns the final text as a single `text-delta` right before `finish`.
+ - `streamText()` works functionally but delivers the entire response in a single chunk after generation completes
+ - No incremental text deltas: you wait for the full response, then receive it all at once
+ - The AI SDK's streaming interface is supported, but actual incremental streaming is not available

- This is expected behavior for JSON mode in Codex exec, so streaming typically “feels” like a final chunk rather than a gradual trickle.
+ **Future support:** The Codex CLI commit that introduced experimental JSON (344d4a1d) explicitly notes "or other item types like `item.output_delta` when we need streaming" and states that "more event types and item types to come."
+
+ When OpenAI adds streaming support, this provider will be updated to handle those events and enable true incremental streaming.
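To make the event model concrete, here is a minimal sketch of consuming such a JSONL stream directly. The payload shapes are hypothetical; only the `item.completed` and `item.output_delta` type names come from the text above, and the exact `codex exec` argument order is an assumption:

```js
import { spawn } from 'node:child_process';
import { createInterface } from 'node:readline';

// Hypothetical direct invocation; the provider normally does this for you.
const proc = spawn('codex', ['exec', '--experimental-json', 'Say hello'], {
  stdio: ['ignore', 'pipe', 'inherit'],
});

for await (const line of createInterface({ input: proc.stdout })) {
  if (!line.trim()) continue;
  const event = JSON.parse(line); // one JSON event per line
  if (event.type === 'item.completed') {
    // Today: the full assistant text arrives here in one piece.
  } else if (event.type === 'item.output_delta') {
    // Future: incremental deltas, once the CLI starts emitting them.
  }
}
```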

  ## Documentation

@@ -125,9 +187,19 @@ See [docs/ai-sdk-v5/configuration.md](docs/ai-sdk-v5/configuration.md) for the f
  ## Limitations

  - Node ≥ 18, local process only (no Edge)
- - Codex JSON mode (`codex exec --json`) suppresses mid‑response deltas; streaming typically yields a final chunk. The CLI writes the final assistant text via `--output-last-message`, which this provider reads and emits at the end.
+ - Codex `--experimental-json` mode emits events rather than streaming deltas; streaming typically yields a final chunk. The CLI provides the final assistant text in the `item.completed` event, which this provider reads and emits at the end.
  - Some AI SDK parameters are unsupported by Codex CLI (e.g., temperature/topP/penalties); the provider surfaces warnings and ignores them
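For instance, passing an unsupported setting should surface a warning rather than fail. A sketch, assuming the AI SDK's standard `warnings` field on the `generateText` result:

```js
import { generateText } from 'ai';
import { codexCli } from 'ai-sdk-provider-codex-cli';

const { text, warnings } = await generateText({
  model: codexCli('gpt-5-codex', { allowNpx: true, skipGitRepoCheck: true }),
  temperature: 0.2, // unsupported by Codex CLI; ignored with a warning
  prompt: 'Say hi.',
});
if (warnings?.length) console.warn('Provider warnings:', warnings);
console.log(text);
```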
+ ### JSON Schema Limitations (v0.2.0+)
+
+ **⚠️ Important:** OpenAI strict mode has limitations:
+
+ - **Optional fields NOT supported**: all fields must be required (no `.optional()`)
+ - **Format validators stripped**: `.email()`, `.url()`, `.uuid()` are removed (use descriptions instead)
+ - **Pattern validators stripped**: `.regex()` is removed (use descriptions instead)
+
+ See [LIMITATIONS.md](LIMITATIONS.md) for comprehensive details and migration guidance; a strict-mode-friendly schema is sketched below.
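A strict-mode-friendly Zod schema, then, keeps every field required and moves format intent into descriptions (a sketch; the field names are illustrative):

```js
import { z } from 'zod';

// Every field required; format intent carried by descriptions instead of
// validators that strict mode would strip.
const profile = z.object({
  name: z.string(),
  email: z.string().describe('A valid email address'), // instead of .email()
  id: z.string().describe('A UUID v4 string'), // instead of .uuid()
  age: z.number().int(),
});

// Avoid: z.string().email(), z.string().regex(...), z.string().optional()
```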
202
+
131
203
  ## Disclaimer
 
  This is a community provider and not an official OpenAI or Vercel product. You are responsible for complying with all applicable terms and ensuring safe usage.