@samooth/open-codex 0.2.6 → 0.2.7

package/README.md CHANGED
@@ -290,6 +290,55 @@ npm link
 
  </details>
 
+ ### Alternative AI Providers
+
+ This fork of Codex supports multiple AI providers:
+
+ - `openai` (default)
+ - `gemini`
+ - `openrouter`
+ - `ollama`
+ - `xai`
+ - `deepseek`
+ - `hf` (Hugging Face)
+
+ To use a different provider, set the `provider` key in your config file:
+
+ ```json
+ {
+   "provider": "gemini"
+ }
+ ```
+
+ Or use the `--provider` flag, e.g. `codex --provider gemini`.
+
+ #### Ollama Configuration
+
+ When using the `ollama` provider, OpenCodex defaults to communicating with a local server at `http://localhost:11434`. You can customize this by setting the **`OLLAMA_BASE_URL`** environment variable:
+
+ ```bash
+ export OLLAMA_BASE_URL="http://192.168.1.50:11434"
+ ```
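+
+ To quickly check that the server is reachable, you can list the models it has pulled via `/api/tags`, a standard endpoint of the Ollama REST API:
+
+ ```bash
+ curl "${OLLAMA_BASE_URL:-http://localhost:11434}/api/tags"
+ ```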
+
+ For embeddings (used by Project Memory and RAG), OpenCodex defaults to the `nomic-embed-text:latest` model when using Ollama. Make sure you have pulled it:
+
+ ```bash
+ ollama pull nomic-embed-text:latest
+ ```
+
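+ Putting this together, a minimal `~/.codex/config.json` for the Ollama provider might look like the sketch below; the `model` value is only a placeholder, and `embeddingModel` is the key described in the Configuration section:
+
+ ```json
+ {
+   "provider": "ollama",
+   "model": "llama3", // placeholder: any local model you have pulled
+   "embeddingModel": "nomic-embed-text:latest" // see Configuration below
+ }
+ ```
+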
+ #### Dynamic Model Discovery
+
+ For many providers, you can use the `/models` command within the interactive chat to see a list of available models and switch between them. For the **Hugging Face** provider, this dynamically fetches the latest `tool-use` compatible models directly from the Hugging Face Hub.
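+
+ As a rough illustration, a similar list can be fetched from the Hub's public REST API; the exact `tool-use` tag filter shown here is an assumption, not necessarily the query the CLI issues:
+
+ ```bash
+ curl -s "https://huggingface.co/api/models?filter=tool-use&sort=downloads&limit=10"
+ ```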
+
+ Here’s a table of all providers and their default models:
+
+ | Provider | Environment Variable Required | Default Agentic Model | Default Full Context Model |
+ |------------|-------------------------------|-----------------------|----------------------------|
+ | openai | `OPENAI_API_KEY` | `o4-mini` | `o3` |
+ | gemini | `GOOGLE_GENERATIVE_AI_API_KEY`| `gemini-3-pro-preview`| `gemini-2.5-pro` |
+ | openrouter | `OPENROUTER_API_KEY` | `openai/o4-mini` | `openai/o3` |
+ | ollama | *Not required* | *User must specify* | *User must specify* |
+ | xai | `XAI_API_KEY` | `grok-3-mini-beta` | `grok-3-beta` |
+ | deepseek | `DS_API_KEY` | `deepseek-chat` | `deepseek-reasoner` |
+ | hf | `HF_API_KEY` | `moonshotai/Kimi-K2.5`| `moonshotai/Kimi-K2.5` |
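+
+ When using an alternative provider, make sure the matching environment variable from the table is set before launching, e.g.:
+
+ ```bash
+ export GOOGLE_GENERATIVE_AI_API_KEY="your-gemini-api-key-here"
+ codex --provider gemini
+ ```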
+
  ---
 
  ## Configuration
@@ -303,6 +352,9 @@ Codex looks for config files in **`~/.codex/`** (either YAML or JSON format). Th
  "provider": "openai", // Default provider
  "approvalMode": "suggest", // or auto-edit, full-auto
  "fullAutoErrorMode": "ask-user", // or ignore-and-continue
+ "enableWebSearch": false, // default is false
+ "enableDeepThinking": false, // adds "Deep Thinking" prefix to prompt
+ "embeddingModel": "text-embedding-3-small", // Custom model for RAG/Memory
  "memory": {
  "enabled": true
  }
@@ -311,57 +363,13 @@ Codex looks for config files in **`~/.codex/`** (either YAML or JSON format). Th
 
  You can also define custom instructions:
 
- ```md
+ ```markdown
  # ~/.codex/instructions.md
 
  - Always respond with emojis
  - Only use git commands if I explicitly mention you should
  ```
 
- ### Alternative AI Providers
-
- This fork of Codex supports multiple AI providers:
-
- openai (default)
- gemini
- openrouter
- ollama
- xai
- deepseek
- hf (Hugging Face)
-
- To use a different provider, set the `provider` key in your config file:
-
- ```json
- {
- "provider": "gemini"
- }
- ```
-
- OR use the `--provider` flag. eg. `codex --provider gemini`
-
- #### Dynamic Model Discovery
-
- For many providers, you can use the `/models` command within the interactive chat to see a list of available models and switch between them. For the **Hugging Face** provider, this dynamically fetches the latest `tool-use` compatible models directly from the Hugging Face Hub.
-
- Here's a list of all the providers and their default models:
-
- | Provider | Environment Variable Required | Default Agentic Model | Default Full Context Model |
- | ---------- | ----------------------------- | ---------------------------- | -------------------------- |
- | openai | OPENAI_API_KEY | o4-mini | o3 |
- | gemini | GOOGLE_GENERATIVE_AI_API_KEY | gemini-3-pro-preview | gemini-2.5-pro |
- | openrouter | OPENROUTER_API_KEY | openai/o4-mini | openai/o3 |
- | ollama | Not required | User must specify | User must specify |
- | xai | XAI_API_KEY | grok-3-mini-beta | grok-3-beta |
- | deepseek | DS_API_KEY | deepseek-chat | deepseek-reasoner |
- | hf | HF_API_KEY | moonshotai/Kimi-K2.5 | moonshotai/Kimi-K2.5 |
-
- #### When using an alternative provider, make sure you have the correct environment variables set.
-
- ```bash
- export GOOGLE_GENERATIVE_AI_API_KEY="your-gemini-api-key-here"
- ```
-
  ---
 
  ## FAQ