@openai/codex 0.1.2505291658 → 0.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,9 +1,11 @@
  <h1 align="center">OpenAI Codex CLI</h1>
  <p align="center">Lightweight coding agent that runs in your terminal</p>
 
- <p align="center"><code>npm i -g @openai/codex</code></p>
+ <p align="center"><code>brew install codex</code></p>
 
- ![Codex demo GIF using: codex "explain this codebase to me"](./.github/demo.gif)
+ This is the home of the **Codex CLI**, which is a coding agent from OpenAI that runs locally on your computer. If you are looking for the _cloud-based agent_ from OpenAI, **Codex [Web]**, see <https://chatgpt.com/codex>.
+
+ <!-- ![Codex demo GIF using: codex "explain this codebase to me"](./.github/demo.gif) -->
 
  ---
 
@@ -14,6 +16,8 @@
 
  - [Experimental technology disclaimer](#experimental-technology-disclaimer)
  - [Quickstart](#quickstart)
+ - [OpenAI API Users](#openai-api-users)
+ - [OpenAI Plus/Pro Users](#openai-pluspro-users)
  - [Why Codex?](#why-codex)
  - [Security model & permissions](#security-model--permissions)
  - [Platform sandboxing details](#platform-sandboxing-details)
@@ -21,24 +25,17 @@
  - [CLI reference](#cli-reference)
  - [Memory & project docs](#memory--project-docs)
  - [Non-interactive / CI mode](#non-interactive--ci-mode)
+ - [Model Context Protocol (MCP)](#model-context-protocol-mcp)
  - [Tracing / verbose logging](#tracing--verbose-logging)
  - [Recipes](#recipes)
  - [Installation](#installation)
- - [Configuration guide](#configuration-guide)
- - [Basic configuration parameters](#basic-configuration-parameters)
- - [Custom AI provider configuration](#custom-ai-provider-configuration)
- - [History configuration](#history-configuration)
- - [Configuration examples](#configuration-examples)
- - [Full configuration example](#full-configuration-example)
- - [Custom instructions](#custom-instructions)
- - [Environment variables setup](#environment-variables-setup)
+ - [DotSlash](#dotslash)
+ - [Configuration](#configuration)
  - [FAQ](#faq)
  - [Zero data retention (ZDR) usage](#zero-data-retention-zdr-usage)
  - [Codex open source fund](#codex-open-source-fund)
  - [Contributing](#contributing)
  - [Development workflow](#development-workflow)
- - [Git hooks with Husky](#git-hooks-with-husky)
- - [Debugging](#debugging)
  - [Writing high-impact code changes](#writing-high-impact-code-changes)
  - [Opening a pull request](#opening-a-pull-request)
  - [Review process](#review-process)
@@ -47,8 +44,6 @@
  - [Contributor license agreement (CLA)](#contributor-license-agreement-cla)
  - [Quick fixes](#quick-fixes)
  - [Releasing `codex`](#releasing-codex)
- - [Alternative build options](#alternative-build-options)
- - [Nix flake development](#nix-flake-development)
  - [Security & responsible AI](#security--responsible-ai)
  - [License](#license)
 
@@ -74,51 +69,91 @@ Help us improve by filing issues or submitting PRs (see the section below for ho
  Install globally:
 
  ```shell
- npm install -g @openai/codex
+ brew install codex
  ```
 
+ Or go to the [latest GitHub Release](https://github.com/openai/codex/releases/latest) and download the appropriate binary for your platform.
+
+ ### OpenAI API Users
+
  Next, set your OpenAI API key as an environment variable:
 
  ```shell
  export OPENAI_API_KEY="your-api-key-here"
  ```
 
- > **Note:** This command sets the key only for your current terminal session. You can add the `export` line to your shell's configuration file (e.g., `~/.zshrc`) but we recommend setting for the session. **Tip:** You can also place your API key into a `.env` file at the root of your project:
- >
- > ```env
- > OPENAI_API_KEY=your-api-key-here
- > ```
- >
- > The CLI will automatically load variables from `.env` (via `dotenv/config`).
+ > [!NOTE]
+ > This command sets the key only for your current terminal session. You can add the `export` line to your shell's configuration file (e.g., `~/.zshrc`), but we recommend setting it for the session.
+
+ ### OpenAI Plus/Pro Users
+
+ If you have a paid OpenAI account, run the following to start the login process:
+
+ ```shell
+ codex login
+ ```
+
+ If you complete the process successfully, you should have a `~/.codex/auth.json` file that contains the credentials that Codex will use.
+
+ If you encounter problems with the login flow, please comment on <https://github.com/openai/codex/issues/1243>.
 
  <details>
- <summary><strong>Use <code>--provider</code> to use other models</strong></summary>
-
- > Codex also allows you to use other providers that support the OpenAI Chat Completions API. You can set the provider in the config file or use the `--provider` flag. The possible options for `--provider` are:
- >
- > - openai (default)
- > - openrouter
- > - azure
- > - gemini
- > - ollama
- > - mistral
- > - deepseek
- > - xai
- > - groq
- > - arceeai
- > - any other provider that is compatible with the OpenAI API
- >
- > If you use a provider other than OpenAI, you will need to set the API key for the provider in the config file or in the environment variable as:
- >
- > ```shell
- > export <provider>_API_KEY="your-api-key-here"
- > ```
- >
- > If you use a provider not listed above, you must also set the base URL for the provider:
- >
- > ```shell
- > export <provider>_BASE_URL="https://your-provider-api-base-url"
- > ```
+ <summary><strong>Use <code>--profile</code> to use other models</strong></summary>
+
+ Codex also allows you to use other providers that support the OpenAI Chat Completions (or Responses) API.
+
+ To do so, you must first define custom [providers](./config.md#model_providers) in `~/.codex/config.toml`. For example, the provider for a standard Ollama setup would be defined as follows:
+
+ ```toml
+ [model_providers.ollama]
+ name = "Ollama"
+ base_url = "http://localhost:11434/v1"
+ ```
+
+ The `base_url` will have `/chat/completions` appended to it to build the full URL for the request.
+
+ For providers that also require an `Authorization` header of the form `Bearer SECRET`, an `env_key` can be specified, which indicates the environment variable to read to use as the value of `SECRET` when making a request:
+
+ ```toml
+ [model_providers.openrouter]
+ name = "OpenRouter"
+ base_url = "https://openrouter.ai/api/v1"
+ env_key = "OPENROUTER_API_KEY"
+ ```
+
+ Providers that speak the Responses API are also supported by adding `wire_api = "responses"` as part of the definition. Accessing OpenAI models via Azure is an example of such a provider, though it also requires specifying additional `query_params` that need to be appended to the request URL:
+
+ ```toml
+ [model_providers.azure]
+ name = "Azure"
+ # Make sure you set the appropriate subdomain for this URL.
+ base_url = "https://YOUR_PROJECT_NAME.openai.azure.com/openai"
+ env_key = "AZURE_OPENAI_API_KEY" # Or "OPENAI_API_KEY", whichever you use.
+ # Newer versions appear to support the Responses API; see https://github.com/openai/codex/pull/1321
+ query_params = { api-version = "2025-04-01-preview" }
+ wire_api = "responses"
+ ```
+
+ Once you have defined a provider you wish to use, you can configure it as your default provider as follows:
+
+ ```toml
+ model_provider = "azure"
+ ```
+
+ > [!TIP]
+ > If you find yourself experimenting with a variety of models and providers, then you likely want to invest in defining a _profile_ for each configuration like so:
+
+ ```toml
+ [profiles.o3]
+ model_provider = "azure"
+ model = "o3"
+
+ [profiles.mistral]
+ model_provider = "ollama"
+ model = "mistral"
+ ```
+
+ This way, you can specify one command-line argument (e.g., `--profile o3`, `--profile mistral`) to override multiple settings together.
 
  </details>
  <br />
@@ -136,7 +171,7 @@ codex "explain this codebase to me"
  ```
 
  ```shell
- codex --approval-mode full-auto "create the fanciest todo-list app"
+ codex --full-auto "create the fanciest todo-list app"
  ```
 
  That's it - Codex will scaffold a file, run it inside a sandbox, install any
@@ -162,41 +197,35 @@ And it's **fully open-source** so you can see and contribute to how it develops!
 
  ## Security model & permissions
 
- Codex lets you decide _how much autonomy_ the agent receives and auto-approval policy via the
- `--approval-mode` flag (or the interactive onboarding prompt):
+ Codex lets you decide _how much autonomy_ you want to grant the agent. The following options can be configured independently:
 
- | Mode | What the agent may do without asking | Still requires approval |
- | ------------------------- | --------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
- | **Suggest** <br>(default) | <li>Read any file in the repo | <li>**All** file writes/patches<li> **Any** arbitrary shell commands (aside from reading files) |
- | **Auto Edit** | <li>Read **and** apply-patch writes to files | <li>**All** shell commands |
- | **Full Auto** | <li>Read/write files <li> Execute shell commands (network disabled, writes limited to your workdir) | - |
+ - [`approval_policy`](./codex-rs/config.md#approval_policy) determines when you are prompted to approve whether Codex can execute a command
+ - [`sandbox`](./codex-rs/config.md#sandbox) determines the _sandbox policy_ that Codex uses to execute untrusted commands
 
- In **Full Auto** every command is run **network-disabled** and confined to the
- current working directory (plus temporary files) for defense-in-depth. Codex
- will also show a warning/confirmation if you start in **auto-edit** or
- **full-auto** while the directory is _not_ tracked by Git, so you always have a
- safety net.
+ By default, Codex runs with `approval_policy = "untrusted"` and `sandbox.mode = "read-only"`, which means that:
 
- Coming soon: you'll be able to whitelist specific commands to auto-execute with
- the network enabled, once we're confident in additional safeguards.
+ - The user is prompted to approve every command not in the set of "trusted" commands built into Codex (`cat`, `ls`, etc.)
+ - Approved commands are run outside of a sandbox, because user approval implies "trust" in this case.
 
- ### Platform sandboxing details
+ Running Codex with the `--full-auto` option, however, changes the configuration to `approval_policy = "on-failure"` and `sandbox.mode = "workspace-write"`, which means that:
+
+ - Codex does not initially ask for user approval before running an individual command.
+ - When it does run a command, however, it runs under a sandbox in which:
+   - It can read any file on the system.
+   - It can only write files under the current directory (or the directory specified via `--cd`).
+   - Network requests are completely disabled.
+ - Only if the command exits with a non-zero exit code will Codex ask the user for approval. If granted, it will re-attempt the command outside of the sandbox. (A common case is when Codex cannot `npm install` a dependency because that requires network access.)
+
+ Again, these two options can be configured independently. For example, if you want Codex to perform an "exploration" where you are happy for it to read anything it wants but you never want to be prompted, you could run Codex with `approval_policy = "never"` and `sandbox.mode = "read-only"`.
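+ As a sketch, that "exploration" setup could be written into `~/.codex/config.toml` using the key names quoted in this section (see `codex-rs/config.md` for the authoritative schema):
+
+ ```toml
+ # Read anything, write nothing, and never prompt.
+ approval_policy = "never"
+ sandbox.mode = "read-only"
+ ```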
 
- The hardening mechanism Codex uses depends on your OS:
+ ### Platform sandboxing details
 
- - **macOS 12+** - commands are wrapped with **Apple Seatbelt** (`sandbox-exec`).
+ The mechanism Codex uses to implement the sandbox policy depends on your OS:
 
- - Everything is placed in a read-only jail except for a small set of
- writable roots (`$PWD`, `$TMPDIR`, `~/.codex`, etc.).
- - Outbound network is _fully blocked_ by default - even if a child process
- tries to `curl` somewhere it will fail.
+ - **macOS 12+** uses **Apple Seatbelt** and runs commands using `sandbox-exec` with a profile (`-p`) that corresponds to the `sandbox.mode` that was specified.
+ - **Linux** uses a combination of Landlock/seccomp APIs to enforce the `sandbox` configuration.
 
- - **Linux** - there is no sandboxing by default.
- We recommend using Docker for sandboxing, where Codex launches itself inside a **minimal
- container image** and mounts your repo _read/write_ at the same path. A
- custom `iptables`/`ipset` firewall script denies all egress except the
- OpenAI API. This gives you deterministic, reproducible runs without needing
- root on the host. You can use the [`run_in_container.sh`](./codex-cli/scripts/run_in_container.sh) script to set up the sandbox.
+ Note that when running Linux in a containerized environment such as Docker, sandboxing may not work if the host/container configuration does not support the necessary Landlock/seccomp APIs. In such cases, we recommend configuring your Docker container so that it provides the sandbox guarantees you are looking for and then running `codex` with `sandbox.mode = "danger-full-access"` (or more simply, the `--dangerously-bypass-approvals-and-sandbox` flag) within your container.
 
  ---
 
@@ -205,24 +234,20 @@ The hardening mechanism Codex uses depends on your OS:
  | Requirement | Details |
  | --------------------------- | --------------------------------------------------------------- |
  | Operating systems | macOS 12+, Ubuntu 20.04+/Debian 10+, or Windows 11 **via WSL2** |
- | Node.js | **22 or newer** (LTS recommended) |
  | Git (optional, recommended) | 2.23+ for built-in PR helpers |
  | RAM | 4-GB minimum (8-GB recommended) |
 
- > Never run `sudo npm install -g`; fix npm permissions instead.
-
  ---
 
  ## CLI reference
 
- | Command | Purpose | Example |
- | ------------------------------------ | ----------------------------------- | ------------------------------------ |
- | `codex` | Interactive REPL | `codex` |
- | `codex "..."` | Initial prompt for interactive REPL | `codex "fix lint errors"` |
- | `codex -q "..."` | Non-interactive "quiet mode" | `codex -q --json "explain utils.ts"` |
- | `codex completion <bash\|zsh\|fish>` | Print shell completion script | `codex completion bash` |
+ | Command | Purpose | Example |
+ | ------------------ | ---------------------------------- | ------------------------------- |
+ | `codex` | Interactive TUI | `codex` |
+ | `codex "..."` | Initial prompt for interactive TUI | `codex "fix lint errors"` |
+ | `codex exec "..."` | Non-interactive "automation mode" | `codex exec "explain utils.ts"` |
 
- Key flags: `--model/-m`, `--approval-mode/-a`, `--quiet/-q`, and `--notify`.
+ Key flags: `--model/-m`, `--ask-for-approval/-a`.
 
  ---
 
@@ -234,8 +259,6 @@ You can give Codex extra instructions and guidance using `AGENTS.md` files. Code
  2. `AGENTS.md` at repo root - shared project notes
  3. `AGENTS.md` in the current working directory - sub-folder/feature specifics
 
- Disable loading of these files with `--no-project-doc` or the environment variable `CODEX_DISABLE_PROJECT_DOC=1`.
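+ For illustration, these files are plain Markdown guidance; a minimal `AGENTS.md` sketch (the bullets are examples only, not required keys) might read:
+
+ ```markdown
+ - Always respond with emojis
+ - Only use git commands when explicitly requested
+ ```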
-
  ---
 
  ## Non-interactive / CI mode
@@ -245,21 +268,40 @@ Run Codex head-less in pipelines. Example GitHub Action step:
  ```yaml
  - name: Update changelog via Codex
    run: |
- npm install -g @openai/codex
+ npm install -g @openai/codex@native # Note: we plan to drop the need for `@native`.
  export OPENAI_API_KEY="${{ secrets.OPENAI_KEY }}"
- codex -a auto-edit --quiet "update CHANGELOG for next release"
+ codex exec --full-auto "update CHANGELOG for next release"
+ ```
+
+ ## Model Context Protocol (MCP)
+
+ The Codex CLI can be configured to leverage MCP servers by defining an [`mcp_servers`](./codex-rs/config.md#mcp_servers) section in `~/.codex/config.toml`. It is intended to mirror how tools such as Claude and Cursor define `mcpServers` in their respective JSON config files, though the Codex format is slightly different since it uses TOML rather than JSON, e.g.:
+
+ ```toml
+ # IMPORTANT: the top-level key is `mcp_servers` rather than `mcpServers`.
+ [mcp_servers.server-name]
+ command = "npx"
+ args = ["-y", "mcp-server"]
+ env = { "API_KEY" = "value" }
  ```
 
- Set `CODEX_QUIET_MODE=1` to silence interactive UI noise.
+ > [!TIP]
+ > It is somewhat experimental, but the Codex CLI can also be run as an MCP _server_ via `codex mcp`. If you launch it with an MCP client such as `npx @modelcontextprotocol/inspector codex mcp` and send it a `tools/list` request, you will see that there is only one tool, `codex`, that accepts a grab-bag of inputs, including a catch-all `config` map for anything you might want to override. Feel free to play around with it and provide feedback via GitHub issues.
 
  ## Tracing / verbose logging
 
- Setting the environment variable `DEBUG=true` prints full API request and response details:
+ Because Codex is written in Rust, it honors the `RUST_LOG` environment variable to configure its logging behavior.
 
- ```shell
- DEBUG=true codex
+ The TUI defaults to `RUST_LOG=codex_core=info,codex_tui=info`, and log messages are written to `~/.codex/log/codex-tui.log`, so you can leave the following running in a separate terminal to monitor log messages as they are written:
+
+ ```shell
+ tail -F ~/.codex/log/codex-tui.log
  ```
 
+ By comparison, the non-interactive mode (`codex exec`) defaults to `RUST_LOG=error`, but messages are printed inline, so there is no need to monitor a separate file.
+
+ See the Rust documentation on [`RUST_LOG`](https://docs.rs/env_logger/latest/env_logger/#enabling-logging) for more information on the configuration options.
+
  ---
 
  ## Recipes
@@ -281,201 +323,70 @@ Below are a few bite-size examples you can copy-paste. Replace the text in quote
  ## Installation
 
  <details open>
- <summary><strong>From npm (Recommended)</strong></summary>
+ <summary><strong>From brew (Recommended)</strong></summary>
 
  ```bash
- npm install -g @openai/codex
- # or
- yarn global add @openai/codex
- # or
- bun install -g @openai/codex
- # or
- pnpm add -g @openai/codex
+ brew install codex
  ```
 
- </details>
+ Or go to the [latest GitHub Release](https://github.com/openai/codex/releases/latest) and download the appropriate binary for your platform.
 
- <details>
- <summary><strong>Build from source</strong></summary>
+ Admittedly, each GitHub Release contains many executables, but in practice, you likely want one of these:
 
- ```bash
- # Clone the repository and navigate to the CLI package
- git clone https://github.com/openai/codex.git
- cd codex/codex-cli
-
- # Enable corepack
- corepack enable
-
- # Install dependencies and build
- pnpm install
- pnpm build
-
- # Linux-only: download prebuilt sandboxing binaries (requires gh and zstd).
- ./scripts/install_native_deps.sh
+ - macOS
+   - Apple Silicon/arm64: `codex-aarch64-apple-darwin.tar.gz`
+   - x86_64 (older Mac hardware): `codex-x86_64-apple-darwin.tar.gz`
+ - Linux
+   - x86_64: `codex-x86_64-unknown-linux-musl.tar.gz`
+   - arm64: `codex-aarch64-unknown-linux-musl.tar.gz`
 
- # Get the usage and the options
- node ./dist/cli.js --help
+ Each archive contains a single entry with the platform baked into the name (e.g., `codex-x86_64-unknown-linux-musl`), so you likely want to rename it to `codex` after extracting it.
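+ To make the extract-and-rename step concrete, here is a sketch for x86_64 Linux. The first half fabricates a stand-in archive so the snippet can run anywhere; with a real download from the Releases page you would start at the `tar -xzf` line:

```shell
# Simulate a downloaded release archive (a real one comes from the
# GitHub Releases page); it holds one platform-named executable entry.
mkdir -p /tmp/codex-demo && cd /tmp/codex-demo
printf '#!/bin/sh\necho codex-demo\n' > codex-x86_64-unknown-linux-musl
chmod +x codex-x86_64-unknown-linux-musl
tar -czf codex-x86_64-unknown-linux-musl.tar.gz codex-x86_64-unknown-linux-musl
rm codex-x86_64-unknown-linux-musl

# The steps you would actually run after downloading:
tar -xzf codex-x86_64-unknown-linux-musl.tar.gz
mv codex-x86_64-unknown-linux-musl codex
./codex   # prints "codex-demo" here; the real binary starts the Codex CLI
```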
 
- # Run the locally-built CLI directly
- node ./dist/cli.js
+ ### DotSlash
 
- # Or link the command globally for convenience
- pnpm link
- ```
+ The GitHub Release also contains a [DotSlash](https://dotslash-cli.com/) file for the Codex CLI named `codex`. Using a DotSlash file lets you make a lightweight commit to source control that ensures all contributors use the same version of an executable, regardless of the platform they use for development.
 
  </details>
 
- ---
-
- ## Configuration guide
-
- Codex configuration files can be placed in the `~/.codex/` directory, supporting both YAML and JSON formats.
-
- ### Basic configuration parameters
-
- | Parameter | Type | Default | Description | Available Options |
- | ------------------- | ------- | ---------- | -------------------------------- | ---------------------------------------------------------------------------------------------- |
- | `model` | string | `o4-mini` | AI model to use | Any model name supporting OpenAI API |
- | `approvalMode` | string | `suggest` | AI assistant's permission mode | `suggest` (suggestions only)<br>`auto-edit` (automatic edits)<br>`full-auto` (fully automatic) |
- | `fullAutoErrorMode` | string | `ask-user` | Error handling in full-auto mode | `ask-user` (prompt for user input)<br>`ignore-and-continue` (ignore and proceed) |
- | `notify` | boolean | `true` | Enable desktop notifications | `true`/`false` |
-
- ### Custom AI provider configuration
-
- In the `providers` object, you can configure multiple AI service providers. Each provider requires the following parameters:
-
- | Parameter | Type | Description | Example |
- | --------- | ------ | --------------------------------------- | ----------------------------- |
- | `name` | string | Display name of the provider | `"OpenAI"` |
- | `baseURL` | string | API service URL | `"https://api.openai.com/v1"` |
- | `envKey` | string | Environment variable name (for API key) | `"OPENAI_API_KEY"` |
-
- ### History configuration
-
- In the `history` object, you can configure conversation history settings:
-
- | Parameter | Type | Description | Example Value |
- | ------------------- | ------- | ------------------------------------------------------ | ------------- |
- | `maxSize` | number | Maximum number of history entries to save | `1000` |
- | `saveHistory` | boolean | Whether to save history | `true` |
- | `sensitivePatterns` | array | Patterns of sensitive information to filter in history | `[]` |
+ <details>
+ <summary><strong>Build from source</strong></summary>
 
- ### Configuration examples
+ ```bash
+ # Clone the repository and navigate to the root of the Cargo workspace.
+ git clone https://github.com/openai/codex.git
+ cd codex/codex-rs
 
- 1. YAML format (save as `~/.codex/config.yaml`):
+ # Install the Rust toolchain, if necessary.
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
+ source "$HOME/.cargo/env"
+ rustup component add rustfmt
+ rustup component add clippy
 
- ```yaml
- model: o4-mini
- approvalMode: suggest
- fullAutoErrorMode: ask-user
- notify: true
- ```
+ # Build Codex.
+ cargo build
 
- 2. JSON format (save as `~/.codex/config.json`):
+ # Launch the TUI with a sample prompt.
+ cargo run --bin codex -- "explain this codebase to me"
 
- ```json
- {
- "model": "o4-mini",
- "approvalMode": "suggest",
- "fullAutoErrorMode": "ask-user",
- "notify": true
- }
- ```
+ # After making changes, ensure the code is clean.
+ cargo fmt -- --config imports_granularity=Item
+ cargo clippy --tests
 
- ### Full configuration example
-
- Below is a comprehensive example of `config.json` with multiple custom providers:
-
- ```json
- {
- "model": "o4-mini",
- "provider": "openai",
- "providers": {
- "openai": {
- "name": "OpenAI",
- "baseURL": "https://api.openai.com/v1",
- "envKey": "OPENAI_API_KEY"
- },
- "azure": {
- "name": "AzureOpenAI",
- "baseURL": "https://YOUR_PROJECT_NAME.openai.azure.com/openai",
- "envKey": "AZURE_OPENAI_API_KEY"
- },
- "openrouter": {
- "name": "OpenRouter",
- "baseURL": "https://openrouter.ai/api/v1",
- "envKey": "OPENROUTER_API_KEY"
- },
- "gemini": {
- "name": "Gemini",
- "baseURL": "https://generativelanguage.googleapis.com/v1beta/openai",
- "envKey": "GEMINI_API_KEY"
- },
- "ollama": {
- "name": "Ollama",
- "baseURL": "http://localhost:11434/v1",
- "envKey": "OLLAMA_API_KEY"
- },
- "mistral": {
- "name": "Mistral",
- "baseURL": "https://api.mistral.ai/v1",
- "envKey": "MISTRAL_API_KEY"
- },
- "deepseek": {
- "name": "DeepSeek",
- "baseURL": "https://api.deepseek.com",
- "envKey": "DEEPSEEK_API_KEY"
- },
- "xai": {
- "name": "xAI",
- "baseURL": "https://api.x.ai/v1",
- "envKey": "XAI_API_KEY"
- },
- "groq": {
- "name": "Groq",
- "baseURL": "https://api.groq.com/openai/v1",
- "envKey": "GROQ_API_KEY"
- },
- "arceeai": {
- "name": "ArceeAI",
- "baseURL": "https://conductor.arcee.ai/v1",
- "envKey": "ARCEEAI_API_KEY"
- }
- },
- "history": {
- "maxSize": 1000,
- "saveHistory": true,
- "sensitivePatterns": []
- }
- }
+ # Run the tests.
+ cargo test
  ```
 
- ### Custom instructions
-
- You can create a `~/.codex/AGENTS.md` file to define custom guidance for the agent:
-
- ```markdown
- - Always respond with emojis
- - Only use git commands when explicitly requested
- ```
+ </details>
 
- ### Environment variables setup
+ ---
 
- For each AI provider, you need to set the corresponding API key in your environment variables. For example:
+ ## Configuration
 
- ```bash
- # OpenAI
- export OPENAI_API_KEY="your-api-key-here"
+ Codex supports a rich set of configuration options documented in [`codex-rs/config.md`](./codex-rs/config.md).
 
- # Azure OpenAI
- export AZURE_OPENAI_API_KEY="your-azure-api-key-here"
- export AZURE_OPENAI_API_VERSION="2025-03-01-preview" (Optional)
+ By default, Codex loads its configuration from `~/.codex/config.toml`.
 
- # OpenRouter
- export OPENROUTER_API_KEY="your-openrouter-key-here"
-
- # Similarly for other providers
- ```
+ You can also use `--config` to set or override ad-hoc config values for individual invocations of `codex`.
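+ As an illustrative sketch, a `~/.codex/config.toml` might combine several of the options quoted earlier in this README (consult `codex-rs/config.md` for the authoritative key list and values):
+
+ ```toml
+ # Illustrative only - adjust values to taste.
+ model = "o3"
+ approval_policy = "on-failure"
+ sandbox.mode = "workspace-write"
+ disable_response_storage = true
+ ```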
 
  ---
 
@@ -524,7 +435,13 @@ Codex CLI **does** support OpenAI organizations with [Zero Data Retention (ZDR)]
  OpenAI rejected the request. Error details: Status: 400, Code: unsupported_parameter, Type: invalid_request_error, Message: 400 Previous response cannot be used for this organization due to Zero Data Retention.
  ```
 
- You may need to upgrade to a more recent version with: `npm i -g @openai/codex@latest`
+ Ensure you are running `codex` with `--config disable_response_storage=true`, or add this line to `~/.codex/config.toml` to avoid specifying the command-line option each time:
+
+ ```toml
+ disable_response_storage = true
+ ```
+
+ See [the configuration documentation on `disable_response_storage`](./codex-rs/config.md#disable_response_storage) for details.
 
  ---
 
@@ -549,51 +466,7 @@ More broadly we welcome contributions - whether you are opening your very first
 
  - Create a _topic branch_ from `main` - e.g. `feat/interactive-prompt`.
  - Keep your changes focused. Multiple unrelated fixes should be opened as separate PRs.
- - Use `pnpm test:watch` during development for super-fast feedback.
- - We use **Vitest** for unit tests, **ESLint** + **Prettier** for style, and **TypeScript** for type-checking.
- - Before pushing, run the full test/type/lint suite:
-
- ### Git hooks with Husky
-
- This project uses [Husky](https://typicode.github.io/husky/) to enforce code quality checks:
-
- - **Pre-commit hook**: Automatically runs lint-staged to format and lint files before committing
- - **Pre-push hook**: Runs tests and type checking before pushing to the remote
-
- These hooks help maintain code quality and prevent pushing code with failing tests. For more details, see [HUSKY.md](./codex-cli/HUSKY.md).
-
- ```bash
- pnpm test && pnpm run lint && pnpm run typecheck
- ```
-
- - If you have **not** yet signed the Contributor License Agreement (CLA), add a PR comment containing the exact text
-
- ```text
- I have read the CLA Document and I hereby sign the CLA
- ```
-
- The CLA-Assistant bot will turn the PR status green once all authors have signed.
-
- ```bash
- # Watch mode (tests rerun on change)
- pnpm test:watch
-
- # Type-check without emitting files
- pnpm typecheck
-
- # Automatically fix lint + prettier issues
- pnpm lint:fix
- pnpm format:fix
- ```
-
- ### Debugging
-
- To debug the CLI with a visual debugger, do the following in the `codex-cli` folder:
-
- - Run `pnpm run build` to build the CLI, which will generate `cli.js.map` alongside `cli.js` in the `dist` folder.
- - Run the CLI with `node --inspect-brk ./dist/cli.js` The program then waits until a debugger is attached before proceeding. Options:
- - In VS Code, choose **Debug: Attach to Node Process** from the command palette and choose the option in the dropdown with debug port `9229` (likely the first option)
- - Go to <chrome://inspect> in Chrome and find **localhost:9229** and click **trace**
+ - Following the [development setup](#development-workflow) instructions above, ensure your change is free of lint warnings and test failures.
 
  ### Writing high-impact code changes
 
@@ -605,7 +478,7 @@ To debug the CLI with a visual debugger, do the following in the `codex-cli` fol
  ### Opening a pull request
 
  - Fill in the PR template (or include similar information) - **What? Why? How?**
- - Run **all** checks locally (`npm test && npm run lint && npm run typecheck`). CI failures that could have been caught locally slow down the process.
+ - Run **all** checks locally (`cargo test && cargo clippy --tests && cargo fmt -- --config imports_granularity=Item`). CI failures that could have been caught locally slow down the process.
  - Make sure your branch is up-to-date with `main` and that you have resolved merge conflicts.
  - Mark the PR as **Ready for review** only when you believe it is in a merge-able state.
 
@@ -652,73 +525,22 @@ The **DCO check** blocks merges until every commit in the PR carries the footer
652
525
 
653
526
  ### Releasing `codex`
654
527
 
655
- To publish a new version of the CLI you first need to stage the npm package. A
656
- helper script in `codex-cli/scripts/` does all the heavy lifting. Inside the
657
- `codex-cli` folder run:
658
-
659
- ```bash
660
- # Classic, JS implementation that includes small, native binaries for Linux sandboxing.
661
- pnpm stage-release
662
-
663
- # Optionally specify the temp directory to reuse between runs.
664
- RELEASE_DIR=$(mktemp -d)
665
- pnpm stage-release --tmp "$RELEASE_DIR"
666
-
667
- # "Fat" package that additionally bundles the native Rust CLI binaries for
668
- # Linux. End-users can then opt-in at runtime by setting CODEX_RUST=1.
669
- pnpm stage-release --native
670
- ```
671
-
672
- Go to the folder where the release is staged and verify that it works as intended. If so, run the following from the temp folder:
673
-
674
- ```
675
- cd "$RELEASE_DIR"
676
- npm publish
677
- ```
528
+ _For admins only._
678
529
 
679
- ### Alternative build options
530
+ Make sure you are on `main` and have no local changes. Then run:
680
531
 
681
- #### Nix flake development
682
-
683
- Prerequisite: Nix >= 2.4 with flakes enabled (`experimental-features = nix-command flakes` in `~/.config/nix/nix.conf`).
684
-
685
- Enter a Nix development shell:
686
-
687
- ```bash
688
- # Use either one of the commands according to which implementation you want to work with
689
- nix develop .#codex-cli # For entering codex-cli specific shell
690
- nix develop .#codex-rs # For entering codex-rs specific shell
691
- ```
692
-
693
- This shell includes Node.js, installs dependencies, builds the CLI, and provides a `codex` command alias.
694
-
695
- Build and run the CLI directly:
696
-
697
- ```bash
698
- # Use either one of the commands according to which implementation you want to work with
699
- nix build .#codex-cli # For building codex-cli
700
- nix build .#codex-rs # For building codex-rs
701
- ./result/bin/codex --help
532
+ ```shell
533
+ VERSION=0.2.0 # Can also be 0.2.0-alpha.1 or any valid Rust version.
534
+ ./codex-rs/scripts/create_github_release.sh "$VERSION"
702
535
  ```
703
536
 
704
- Run the CLI via the flake app:
537
+ This will make a local commit on top of `main` with `version` set to `$VERSION` in `codex-rs/Cargo.toml` (note that on `main`, we leave the version as `version = "0.0.0"`).
705
538
 
706
- ```bash
707
- # Use either one of the commands according to which implementation you want to work with
708
- nix run .#codex-cli # For running codex-cli
709
- nix run .#codex-rs # For running codex-rs
710
- ```
539
+ This will push the commit using the tag `rust-v${VERSION}`, which in turn kicks off [the release workflow](.github/workflows/rust-release.yml). This will create a new GitHub Release named `$VERSION`.
711
540
 
712
- Use direnv with flakes
541
+ If everything looks good in the generated GitHub Release, uncheck the **pre-release** box so it is the latest release.
713
542
 
714
- If you have direnv installed, you can use the following `.envrc` to automatically enter the Nix shell when you `cd` into the project directory:
715
-
716
- ```bash
717
- cd codex-rs
718
- echo "use flake ../flake.nix#codex-cli" >> .envrc && direnv allow
719
- cd codex-cli
720
- echo "use flake ../flake.nix#codex-rs" >> .envrc && direnv allow
721
- ```
543
+ Create a PR to update [`Formula/c/codex.rb`](https://github.com/Homebrew/homebrew-core/blob/main/Formula/c/codex.rb) on Homebrew.
722
544
 
723
545
  ---
724
546