@samooth/open-codex 0.2.7 → 0.2.11
This diff shows the changes between two publicly released versions of the package, as they appear in their public registry, and is provided for informational purposes only.
- package/README.md +49 -54
- package/dist/cli.js +313 -298
- package/dist/cli.js.map +4 -4
- package/package.json +1 -1
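
To reproduce this comparison locally, npm's built-in `diff` command (npm 7+) can fetch and compare the two published versions; this is a sketch, and the exact rendering will differ from the registry viewer:

```bash
# Diff the two published versions straight from the registry (npm 7+).
npm diff --diff=@samooth/open-codex@0.2.7 --diff=@samooth/open-codex@0.2.11
```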
package/README.md
CHANGED
````diff
@@ -111,7 +111,10 @@ development_ that understands and executes your repo.
 - **Zero setup** — bring your API key and it just works!
 - **Multiple AI providers** — use OpenAI, Gemini, OpenRouter, Ollama, xAI, DeepSeek, or Hugging Face!
 - **High Performance** — parallel tool execution and asynchronous file indexing for speed ✨
-- **
+- **Persistent Status Bar** — real-time visibility of model, provider, context usage, and session ID 📊
+- **Integrated Tool UI** — unified tool call and response boxes for a cleaner workflow 📦
+- **Syntax Highlighting** — full terminal color support for code diffs, search results, and file contents 🎨
+- **Interactive Search** — quickly filter through command history and past sessions 🔍
 - **Full auto-approval, while safe + secure** by running network-disabled and directory-sandboxed
 - **Multimodal** — pass in screenshots or diagrams to implement features ✨
 - **Dry Run mode** — preview all changes without actually modifying files or running commands!
````
````diff
@@ -290,55 +293,6 @@ npm link
 
 </details>
 
-### Alternative AI Providers
-
-This fork of Codex supports multiple AI providers:
-
-- `openai` (default)
-- `gemini`
-- `openrouter`
-- `ollama`
-- `xai`
-- `deepseek`
-- `hf` (Hugging Face)
-
-To use a different provider, set the `provider` key in your config file:
-
-```json
-{
-  "provider": "gemini"
-}
-```
-
-Or use the `--provider` flag, e.g. `codex --provider gemini`.
-
-#### Ollama Configuration
-
-When using the `ollama` provider, OpenCodex defaults to communicating with a local server at `http://localhost:11434`. You can customize this by setting the **`OLLAMA_BASE_URL`** environment variable:
-
-```bash
-export OLLAMA_BASE_URL="http://192.168.1.50:11434"
-```
-
-For embeddings (used in Project Memory and RAG), OpenCodex defaults to the `nomic-embed-text:latest` model for Ollama. Ensure you have pulled it:
-`ollama pull nomic-embed-text:latest`
-
-#### Dynamic Model Discovery
-
-For many providers, you can use the `/models` command within the interactive chat to see a list of available models and switch between them. For the **Hugging Face** provider, this dynamically fetches the latest `tool-use` compatible models directly from the Hugging Face Hub.
-
-Here’s a table of all providers and their default models:
-
-| Provider   | Environment Variable Required  | Default Agentic Model  | Default Full Context Model |
-|------------|--------------------------------|------------------------|----------------------------|
-| openai     | `OPENAI_API_KEY`               | `o4-mini`              | `o3`                       |
-| gemini     | `GOOGLE_GENERATIVE_AI_API_KEY` | `gemini-3-pro-preview` | `gemini-2.5-pro`           |
-| openrouter | `OPENROUTER_API_KEY`           | `openai/o4-mini`       | `openai/o3`                |
-| ollama     | *Not required*                 | *User must specify*    | *User must specify*        |
-| xai        | `XAI_API_KEY`                  | `grok-3-mini-beta`     | `grok-3-beta`              |
-| deepseek   | `DS_API_KEY`                   | `deepseek-chat`        | `deepseek-reasoner`        |
-| hf         | `HF_API_KEY`                   | `moonshotai/Kimi-K2.5` | `moonshotai/Kimi-K2.5`     |
-
 ---
 
 ## Configuration
````
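
Most of this removed section reappears under **Configuration** in the last hunk below, but the **Ollama Configuration** subsection does not. For reference, these are the Ollama commands documented in the 0.2.7 README; whether they still apply unchanged in 0.2.11 is an assumption, so verify against the new docs:

```bash
# From the 0.2.7 README (removed above): point OpenCodex at a non-default
# Ollama server, and pull the default embedding model for Project Memory/RAG.
export OLLAMA_BASE_URL="http://192.168.1.50:11434"
ollama pull nomic-embed-text:latest
```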
````diff
@@ -352,9 +306,6 @@ Codex looks for config files in **`~/.codex/`** (either YAML or JSON format). Th
   "provider": "openai", // Default provider
   "approvalMode": "suggest", // or auto-edit, full-auto
   "fullAutoErrorMode": "ask-user", // or ignore-and-continue
-  "enableWebSearch": false, // default is false
-  "enableDeepThinking": false, // adds "Deep Thinking" prefix to prompt
-  "embeddingModel": "text-embedding-3-small", // Custom model for RAG/Memory
   "memory": {
     "enabled": true
   }
````
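
With those three keys removed, the example config remaining in the README reads roughly as follows. This is a sketch assembled from the context lines above: the enclosing braces and any keys outside this hunk are assumed, and the README's inline `//` comments are dropped so the snippet is valid JSON:

```json
{
  "provider": "openai",
  "approvalMode": "suggest",
  "fullAutoErrorMode": "ask-user",
  "memory": {
    "enabled": true
  }
}
```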
````diff
@@ -363,13 +314,57 @@ Codex looks for config files in **`~/.codex/`** (either YAML or JSON format). Th
 
 You can also define custom instructions:
 
-```
+```md
 # ~/.codex/instructions.md
 
 - Always respond with emojis
 - Only use git commands if I explicitly mention you should
 ```
 
+### Alternative AI Providers
+
+This fork of Codex supports multiple AI providers:
+
+- openai (default)
+- gemini
+- openrouter
+- ollama
+- xai
+- deepseek
+- hf (Hugging Face)
+
+To use a different provider, set the `provider` key in your config file:
+
+```json
+{
+  "provider": "gemini"
+}
+```
+
+OR use the `--provider` flag. eg. `codex --provider gemini`
+
+#### Dynamic Model Discovery
+
+For many providers, you can use the `/models` command within the interactive chat to see a list of available models and switch between them. For the **Hugging Face** provider, this dynamically fetches the latest `tool-use` compatible models directly from the Hugging Face Hub.
+
+Here's a list of all the providers and their default models:
+
+| Provider   | Environment Variable Required | Default Agentic Model | Default Full Context Model |
+| ---------- | ----------------------------- | --------------------- | -------------------------- |
+| openai     | OPENAI_API_KEY                | o4-mini               | o3                         |
+| gemini     | GOOGLE_GENERATIVE_AI_API_KEY  | gemini-3-pro-preview  | gemini-2.5-pro             |
+| openrouter | OPENROUTER_API_KEY            | openai/o4-mini        | openai/o3                  |
+| ollama     | Not required                  | User must specify     | User must specify          |
+| xai        | XAI_API_KEY                   | grok-3-mini-beta      | grok-3-beta                |
+| deepseek   | DS_API_KEY                    | deepseek-chat         | deepseek-reasoner          |
+| hf         | HF_API_KEY                    | moonshotai/Kimi-K2.5  | moonshotai/Kimi-K2.5       |
+
+#### When using an alternative provider, make sure you have the correct environment variables set.
+
+```bash
+export GOOGLE_GENERATIVE_AI_API_KEY="your-gemini-api-key-here"
+```
+
 ---
 
 ## FAQ
````