@gitlawb/openclaude 0.1.1 → 0.1.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,7 +2,7 @@
 
 Use Claude Code with **any LLM** — not just Claude.
 
-OpenClaude is a fork of the [Claude Code source leak](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code) (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API.
+OpenClaude is a fork of the [Claude Code source leak](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code) (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API. It now also supports the ChatGPT Codex backend for `codexplan` and `codexspark`.
 
 All of Claude Code's tools work — bash, file read/write/edit, grep, glob, agents, tasks, MCP — just powered by whatever model you choose.
 
@@ -82,6 +82,25 @@ export OPENAI_API_KEY=sk-...
 export OPENAI_MODEL=gpt-4o
 ```
 
+### Codex via ChatGPT auth
+
+`codexplan` maps to GPT-5.4 on the Codex backend with high reasoning.
+`codexspark` maps to GPT-5.3 Codex Spark for faster loops.
+
+If you already use the Codex CLI, OpenClaude will read `~/.codex/auth.json`
+automatically. You can also point it elsewhere with `CODEX_AUTH_JSON_PATH` or
+override the token directly with `CODEX_API_KEY`.
+
+```bash
+export CLAUDE_CODE_USE_OPENAI=1
+export OPENAI_MODEL=codexplan
+
+# optional if you do not already have ~/.codex/auth.json
+export CODEX_API_KEY=...
+
+openclaude
+```
+
 ### DeepSeek
 
 ```bash
@@ -165,6 +184,9 @@ export OPENAI_MODEL=gpt-4o
 | `OPENAI_API_KEY` | Yes* | Your API key (*not needed for local models like Ollama) |
 | `OPENAI_MODEL` | Yes | Model name (e.g. `gpt-4o`, `deepseek-chat`, `llama3.3:70b`) |
 | `OPENAI_BASE_URL` | No | API endpoint (defaults to `https://api.openai.com/v1`) |
+| `CODEX_API_KEY` | Codex only | Codex/ChatGPT access token override |
+| `CODEX_AUTH_JSON_PATH` | Codex only | Path to a Codex CLI `auth.json` file |
+| `CODEX_HOME` | Codex only | Alternative Codex home directory (`auth.json` will be read from here) |
 
 You can also use `ANTHROPIC_MODEL` to override the model name. `OPENAI_MODEL` takes priority.
 
@@ -197,6 +219,7 @@ bun run hardening:strict
 Notes:
 - `doctor:runtime` fails fast if `CLAUDE_CODE_USE_OPENAI=1` with a placeholder key (`SUA_CHAVE`) or a missing key for non-local providers.
 - Local providers (for example `http://localhost:11434/v1`) can run without `OPENAI_API_KEY`.
+- Codex profiles validate `CODEX_API_KEY` or the Codex CLI auth file and probe `POST /responses` instead of `GET /models`.
 
 ### Provider Launch Profiles
 
@@ -206,15 +229,24 @@ Use profile launchers to avoid repeated environment setup:
 # one-time profile bootstrap (auto-detect ollama, otherwise openai)
 bun run profile:init
 
+# codex bootstrap (defaults to codexplan and ~/.codex/auth.json)
+bun run profile:codex
+
 # openai bootstrap with explicit key
 bun run profile:init -- --provider openai --api-key sk-...
 
 # ollama bootstrap with custom model
 bun run profile:init -- --provider ollama --model llama3.1:8b
 
+# codex bootstrap with a fast model alias
+bun run profile:init -- --provider codex --model codexspark
+
 # launch using persisted profile (.openclaude-profile.json)
 bun run dev:profile
 
+# codex profile (uses CODEX_API_KEY or ~/.codex/auth.json)
+bun run dev:codex
+
 # OpenAI profile (requires OPENAI_API_KEY in your shell)
 bun run dev:openai
 
@@ -222,7 +254,7 @@ bun run dev:openai
 bun run dev:ollama
 ```
 
-`dev:openai` and `dev:ollama` run `doctor:runtime` first and only launch the app if checks pass.
+`dev:openai`, `dev:ollama`, and `dev:codex` run `doctor:runtime` first and only launch the app if checks pass.
 For `dev:ollama`, make sure Ollama is running locally before launch.
 
 ---
@@ -0,0 +1,7 @@
+import { join } from 'path'
+import { pathToFileURL } from 'url'
+
+export function getDistImportSpecifier(baseDir) {
+  const distPath = join(baseDir, '..', 'dist', 'cli.mjs')
+  return pathToFileURL(distPath).href
+}
@@ -0,0 +1,13 @@
+import assert from 'node:assert/strict'
+import test from 'node:test'
+
+import { getDistImportSpecifier } from './import-specifier.mjs'
+
+test('builds a file URL import specifier for dist/cli.mjs', () => {
+  const specifier = getDistImportSpecifier('C:\\repo\\bin')
+
+  assert.equal(
+    specifier,
+    'file:///C:/repo/dist/cli.mjs',
+  )
+})
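One caveat about the test above: `path.join` inside `getDistImportSpecifier` is platform-sensitive, so the Windows-style expectation only holds when the suite runs on Windows; on POSIX, `join` treats `C:\repo\bin` as one opaque segment that the `..` then cancels out. Pinning the Windows semantics with `path.win32` makes at least the join step reproducible anywhere (a sketch, not the package's code):

```javascript
import { win32 } from 'node:path'

// Same join logic as getDistImportSpecifier, but explicitly using the
// Windows implementation so backslash paths behave identically on any OS.
export function getDistPathWin32(baseDir) {
  return win32.join(baseDir, '..', 'dist', 'cli.mjs')
}

console.log(getDistPathWin32('C:\\repo\\bin')) // C:\repo\dist\cli.mjs
```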
package/bin/openclaude CHANGED
@@ -10,12 +10,13 @@
 import { existsSync } from 'fs'
 import { join, dirname } from 'path'
 import { fileURLToPath } from 'url'
+import { getDistImportSpecifier } from './import-specifier.mjs'
 
 const __dirname = dirname(fileURLToPath(import.meta.url))
 const distPath = join(__dirname, '..', 'dist', 'cli.mjs')
 
 if (existsSync(distPath)) {
-  await import(distPath)
+  await import(getDistImportSpecifier(__dirname))
 } else {
   console.error(`
 openclaude: dist/cli.mjs not found.