@gitlawb/openclaude 0.1.6 → 0.1.7

Files changed (3)
  1. package/README.md +43 -220
  2. package/dist/cli.mjs +71929 -76493
  3. package/package.json +77 -76
package/README.md CHANGED
@@ -2,282 +2,105 @@

  Use Claude Code with **any LLM** — not just Claude.

- OpenClaude is a fork of the [Claude Code source leak](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code) (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API. It now also supports the ChatGPT Codex backend for `codexplan` and `codexspark`.
+ OpenClaude is a fork of the [Claude Code source leak](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code) (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API. It now also supports the ChatGPT Codex backend for `codexplan` and `codexspark`, and local inference via [Atomic Chat](https://atomic.chat/) on Apple Silicon.

  All of Claude Code's tools work — bash, file read/write/edit, grep, glob, agents, tasks, MCP — just powered by whatever model you choose.

  ---

- ## Install
+ ## Start Here

- ### Option A: npm (recommended)
+ If you are new to terminals or just want the easiest path, start with the beginner guides:

- ```bash
- npm install -g @gitlawb/openclaude
- ```
-
- ### Option B: From source (requires Bun)
-
- Use Bun `1.3.11` or newer for source builds on Windows. Older Bun versions such as `1.3.4` can fail with a large batch of unresolved module errors during `bun run build`.
-
- ```bash
- # Clone from gitlawb
- git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
- cd openclaude
-
- # Install dependencies
- bun install
-
- # Build
- bun run build
+ - [Non-Technical Setup](docs/non-technical-setup.md)
+ - [Windows Quick Start](docs/quick-start-windows.md)
+ - [macOS / Linux Quick Start](docs/quick-start-mac-linux.md)

- # Link globally (optional)
- npm link
- ```
-
- ### Option C: Run directly with Bun (no build step)
+ If you want source builds, Bun workflows, profile launchers, or full provider examples, use:

- ```bash
- git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
- cd openclaude
- bun install
- bun run dev
- ```
+ - [Advanced Setup](docs/advanced-setup.md)

  ---

- ## Quick Start
+ ## Beginner Install

- ### 1. Set 3 environment variables
+ For most users, install the npm package:

  ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=sk-your-key-here
- export OPENAI_MODEL=gpt-4o
+ npm install -g @gitlawb/openclaude
  ```

- ### 2. Run it
+ The package name is `@gitlawb/openclaude`, but the command you run is:

  ```bash
- # If installed via npm
  openclaude
-
- # If built from source
- bun run dev
- # or after build:
- node dist/cli.mjs
  ```

- That's it. The tool system, streaming, file editing, multi-step reasoning: everything works through the model you picked.
-
- The npm package name is `@gitlawb/openclaude`, but the installed CLI command is still `openclaude`.
+ If you install via npm and later see `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenClaude.
 
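The ripgrep note above can be turned into a quick probe before launch. A sketch (the install commands shown are the standard ones for each package manager, not OpenClaude-specific):

```shell
# Check whether ripgrep is on PATH before launching OpenClaude.
if command -v rg >/dev/null 2>&1; then
  rg --version | head -n 1
else
  echo "ripgrep not found; install it first, for example:"
  echo "  macOS:   brew install ripgrep"
  echo "  Debian:  sudo apt-get install ripgrep"
  echo "  Windows: winget install BurntSushi.ripgrep.MSVC"
fi
```

Run it in the same terminal you will start OpenClaude from, since a freshly installed tool may not be on PATH in already-open shells.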
  ---

- ## Provider Examples
+ ## Fastest Setup

- ### OpenAI
+ ### Windows PowerShell

- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=sk-...
- export OPENAI_MODEL=gpt-4o
- ```
-
- ### Codex via ChatGPT auth
-
- `codexplan` maps to GPT-5.4 on the Codex backend with high reasoning.
- `codexspark` maps to GPT-5.3 Codex Spark for faster loops.
-
- If you already use the Codex CLI, OpenClaude will read `~/.codex/auth.json`
- automatically. You can also point it elsewhere with `CODEX_AUTH_JSON_PATH` or
- override the token directly with `CODEX_API_KEY`.
-
- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_MODEL=codexplan
+ ```powershell
+ npm install -g @gitlawb/openclaude

- # optional if you do not already have ~/.codex/auth.json
- export CODEX_API_KEY=...
+ $env:CLAUDE_CODE_USE_OPENAI="1"
+ $env:OPENAI_API_KEY="sk-your-key-here"
+ $env:OPENAI_MODEL="gpt-4o"

  openclaude
  ```
 
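The Codex credential lookup described above (token override, explicit auth-file path, alternate home directory, then the default `~/.codex/auth.json`) can be sketched as a shell function. This is a hypothetical illustration of one plausible order, not the CLI's actual resolution code; the precedence between the two auth-file variables is an assumption:

```shell
# Hypothetical resolver mirroring the documented Codex variables:
# CODEX_API_KEY overrides everything; otherwise an auth.json is located.
resolve_codex_auth() {
  if [ -n "${CODEX_API_KEY:-}" ]; then
    echo "token:CODEX_API_KEY"
  elif [ -n "${CODEX_AUTH_JSON_PATH:-}" ]; then
    echo "file:${CODEX_AUTH_JSON_PATH}"
  elif [ -n "${CODEX_HOME:-}" ]; then
    echo "file:${CODEX_HOME}/auth.json"
  else
    echo "file:${HOME}/.codex/auth.json"
  fi
}
```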
- ### DeepSeek
-
- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=sk-...
- export OPENAI_BASE_URL=https://api.deepseek.com/v1
- export OPENAI_MODEL=deepseek-chat
- ```
-
- ### Google Gemini (via OpenRouter)
-
- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=sk-or-...
- export OPENAI_BASE_URL=https://openrouter.ai/api/v1
- export OPENAI_MODEL=google/gemini-2.0-flash
- ```
-
- ### Ollama (local, free)
-
- ```bash
- ollama pull llama3.3:70b
-
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_BASE_URL=http://localhost:11434/v1
- export OPENAI_MODEL=llama3.3:70b
- # no API key needed for local models
- ```
-
- ### LM Studio (local)
+ ### macOS / Linux
 
  ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_BASE_URL=http://localhost:1234/v1
- export OPENAI_MODEL=your-model-name
- ```
-
- ### Together AI
-
- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=...
- export OPENAI_BASE_URL=https://api.together.xyz/v1
- export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
- ```
-
- ### Groq
-
- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=gsk_...
- export OPENAI_BASE_URL=https://api.groq.com/openai/v1
- export OPENAI_MODEL=llama-3.3-70b-versatile
- ```
-
- ### Mistral
-
- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=...
- export OPENAI_BASE_URL=https://api.mistral.ai/v1
- export OPENAI_MODEL=mistral-large-latest
- ```
-
- ### Azure OpenAI
+ npm install -g @gitlawb/openclaude

- ```bash
  export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=your-azure-key
- export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
+ export OPENAI_API_KEY=sk-your-key-here
  export OPENAI_MODEL=gpt-4o
- ```
-
- ---
-
- ## Environment Variables
-
- | Variable | Required | Description |
- |----------|----------|-------------|
- | `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider |
- | `OPENAI_API_KEY` | Yes* | Your API key (*not needed for local models like Ollama) |
- | `OPENAI_MODEL` | Yes | Model name (e.g. `gpt-4o`, `deepseek-chat`, `llama3.3:70b`) |
- | `OPENAI_BASE_URL` | No | API endpoint (defaults to `https://api.openai.com/v1`) |
- | `CODEX_API_KEY` | Codex only | Codex/ChatGPT access token override |
- | `CODEX_AUTH_JSON_PATH` | Codex only | Path to a Codex CLI `auth.json` file |
- | `CODEX_HOME` | Codex only | Alternative Codex home directory (`auth.json` will be read from here) |
- | `OPENCLAUDE_DISABLE_CO_AUTHORED_BY` | No | Set to `1` to suppress the default `Co-Authored-By` trailer in generated git commit messages |
-
- You can also use `ANTHROPIC_MODEL` to override the model name. `OPENAI_MODEL` takes priority.
-
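The model-name priority just stated can be expressed with ordinary shell parameter expansion. A sketch, assuming the documented order (`OPENAI_MODEL` first, then `ANTHROPIC_MODEL`, then a default); the model names used here are placeholders:

```shell
# OPENAI_MODEL takes priority over ANTHROPIC_MODEL; fall back to a default.
ANTHROPIC_MODEL="claude-sonnet"   # placeholder value
OPENAI_MODEL="gpt-4o"
MODEL="${OPENAI_MODEL:-${ANTHROPIC_MODEL:-gpt-4o}}"
echo "$MODEL"   # prints "gpt-4o", since OPENAI_MODEL is set
```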
- OpenClaude PR bodies use OpenClaude branding by default. `OPENCLAUDE_DISABLE_CO_AUTHORED_BY` only affects the commit trailer, not PR attribution text.
-
- ---
-
- ## Runtime Hardening
-
- Use these commands to keep the CLI stable and catch environment mistakes early:
-
- ```bash
- # quick startup sanity check
- bun run smoke
-
- # validate provider env + reachability
- bun run doctor:runtime

- # print machine-readable runtime diagnostics
- bun run doctor:runtime:json
-
- # persist a diagnostics report to reports/doctor-runtime.json
- bun run doctor:report
-
- # full local hardening check (smoke + runtime doctor)
- bun run hardening:check
-
- # strict hardening (includes project-wide typecheck)
- bun run hardening:strict
+ openclaude
  ```

- Notes:
- - `doctor:runtime` fails fast if `CLAUDE_CODE_USE_OPENAI=1` is set with a placeholder key (`SUA_CHAVE`) or a missing key for non-local providers.
- - Local providers (for example `http://localhost:11434/v1`) can run without `OPENAI_API_KEY`.
- - Codex profiles validate `CODEX_API_KEY` or the Codex CLI auth file and probe `POST /responses` instead of `GET /models`.
-
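The fail-fast behaviour in those notes can be approximated in a few lines of shell. This is a sketch of the documented checks only (placeholder key, missing key, local base-URL exemption); the real `doctor:runtime` does more, including reachability probes:

```shell
# Reject the placeholder key and require a real key for non-local providers.
check_provider_env() {
  key="${OPENAI_API_KEY:-}"
  base="${OPENAI_BASE_URL:-https://api.openai.com/v1}"
  case "$base" in
    http://localhost*|http://127.0.0.1*) echo "ok (local provider, no key needed)"; return 0 ;;
  esac
  if [ -z "$key" ] || [ "$key" = "SUA_CHAVE" ]; then
    echo "error: missing or placeholder OPENAI_API_KEY" >&2
    return 1
  fi
  echo "ok"
}
```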
- ### Provider Launch Profiles
-
- Use profile launchers to avoid repeated environment setup:
+ That is enough to start with OpenAI.

- ```bash
- # one-time profile bootstrap (prefer viable local Ollama, otherwise OpenAI)
- bun run profile:init
+ ---

- # preview the best provider/model for your goal
- bun run profile:recommend -- --goal coding --benchmark
+ ## Choose Your Guide

- # auto-apply the best available local/openai provider/model for your goal
- bun run profile:auto -- --goal latency
+ ### Beginner

- # codex bootstrap (defaults to codexplan and ~/.codex/auth.json)
- bun run profile:codex
+ - Want the easiest setup with copy-paste steps: [Non-Technical Setup](docs/non-technical-setup.md)
+ - On Windows: [Windows Quick Start](docs/quick-start-windows.md)
+ - On macOS or Linux: [macOS / Linux Quick Start](docs/quick-start-mac-linux.md)

- # openai bootstrap with explicit key
- bun run profile:init -- --provider openai --api-key sk-...
+ ### Advanced

- # ollama bootstrap with custom model
- bun run profile:init -- --provider ollama --model llama3.1:8b
+ - Want source builds, Bun, local profiles, runtime checks, or more provider choices: [Advanced Setup](docs/advanced-setup.md)

- # ollama bootstrap with intelligent model auto-selection
- bun run profile:init -- --provider ollama --goal coding
+ ---

- # codex bootstrap with a fast model alias
- bun run profile:init -- --provider codex --model codexspark
+ ## Common Beginner Choices

- # launch using persisted profile (.openclaude-profile.json)
- bun run dev:profile
+ ### OpenAI

- # codex profile (uses CODEX_API_KEY or ~/.codex/auth.json)
- bun run dev:codex
+ Best default if you already have an OpenAI API key.

- # OpenAI profile (requires OPENAI_API_KEY in your shell)
- bun run dev:openai
+ ### Ollama

- # Ollama profile (defaults: localhost:11434, llama3.1:8b)
- bun run dev:ollama
- ```
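`dev:profile` reads the persisted profile from `.openclaude-profile.json`. The exact schema is not shown in this README, so the fields below are a hypothetical illustration of what `profile:init -- --provider ollama --model llama3.1:8b` might persist:

```shell
# Hypothetical profile shape; the real file is written by profile:init
# and its field names may differ.
cat > .openclaude-profile.json <<'EOF'
{
  "provider": "ollama",
  "model": "llama3.1:8b",
  "baseUrl": "http://localhost:11434/v1"
}
EOF
cat .openclaude-profile.json
```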
+ Best if you want to run models locally on your own machine.

- `profile:recommend` ranks installed Ollama models for `latency`, `balanced`, or `coding`, and `profile:auto` can persist the recommendation directly.
- If no profile exists yet, `dev:profile` now uses the same goal-aware defaults when picking the initial model.
+ ### Codex

- Use `--provider ollama` when you want a local-only path. Auto mode falls back to OpenAI when no viable local chat model is installed.
- Goal-based Ollama selection only recommends among models that are already installed and reachable from Ollama.
+ Best if you already use the Codex CLI or ChatGPT Codex backend.

- Use `profile:codex` or `--provider codex` when you want the ChatGPT Codex backend.
+ ### Atomic Chat

- `dev:openai`, `dev:ollama`, and `dev:codex` run `doctor:runtime` first and only launch the app if checks pass.
- For `dev:ollama`, make sure Ollama is running locally before launch.
+ Best if you want local inference on Apple Silicon with Atomic Chat. See [Advanced Setup](docs/advanced-setup.md).
 
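The `dev:ollama` precondition above ("make sure Ollama is running locally") can be verified with a one-line probe against Ollama's OpenAI-compatible endpoint. A sketch, assuming the default `localhost:11434` address:

```shell
# Probe the local Ollama server; print its model list or a startup hint.
curl -sf --max-time 2 http://localhost:11434/v1/models \
  || echo "Ollama is not reachable on localhost:11434; start it with 'ollama serve'"
```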
  ---