@gitlawb/openclaude 0.1.5 → 0.1.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +43 -215
  2. package/dist/cli.mjs +72055 -76545
  3. package/package.json +77 -76
package/README.md CHANGED
@@ -2,277 +2,105 @@
 
  Use Claude Code with **any LLM** — not just Claude.
 
- OpenClaude is a fork of the [Claude Code source leak](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code) (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API. It now also supports the ChatGPT Codex backend for `codexplan` and `codexspark`.
+ OpenClaude is a fork of the [Claude Code source leak](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code) (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API. It now also supports the ChatGPT Codex backend for `codexplan` and `codexspark`, and local inference via [Atomic Chat](https://atomic.chat/) on Apple Silicon.
 
  All of Claude Code's tools work — bash, file read/write/edit, grep, glob, agents, tasks, MCP — just powered by whatever model you choose.
 
  ---
 
- ## Install
+ ## Start Here
 
- ### Option A: npm (recommended)
+ If you are new to terminals or just want the easiest path, start with the beginner guides:
 
- ```bash
- npm install -g @gitlawb/openclaude
- ```
-
- ### Option B: From source (requires Bun)
-
- ```bash
- # Clone from gitlawb
- git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
- cd openclaude
+ - [Non-Technical Setup](docs/non-technical-setup.md)
+ - [Windows Quick Start](docs/quick-start-windows.md)
+ - [macOS / Linux Quick Start](docs/quick-start-mac-linux.md)
 
- # Install dependencies
- bun install
+ If you want source builds, Bun workflows, profile launchers, or full provider examples, use:
 
- # Build
- bun run build
-
- # Link globally (optional)
- npm link
- ```
-
- ### Option C: Run directly with Bun (no build step)
-
- ```bash
- git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
- cd openclaude
- bun install
- bun run dev
- ```
+ - [Advanced Setup](docs/advanced-setup.md)
 
  ---
 
- ## Quick Start
+ ## Beginner Install
 
- ### 1. Set 3 environment variables
+ For most users, install the npm package:
 
  ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=sk-your-key-here
- export OPENAI_MODEL=gpt-4o
+ npm install -g @gitlawb/openclaude
  ```
 
- ### 2. Run it
+ The package name is `@gitlawb/openclaude`, but the command you run is:
 
  ```bash
- # If installed via npm
  openclaude
-
- # If built from source
- bun run dev
- # or after build:
- node dist/cli.mjs
  ```
- That's it. The tool system, streaming, file editing, multi-step reasoning: everything works through the model you picked.
-
- The npm package name is `@gitlawb/openclaude`, but the installed CLI command is still `openclaude`.
+ If you install via npm and later see `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenClaude.
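The troubleshooting note above amounts to a PATH check. As an illustration, a hypothetical shell helper (not part of OpenClaude) that wraps the standard `command -v` lookup:

```bash
# Hypothetical helper, not shipped with OpenClaude: confirm a binary
# (such as ripgrep's `rg`) is reachable from the current shell.
require_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1 found"
  else
    echo "$1 not found: install it system-wide and reopen this terminal" >&2
    return 1
  fi
}

# Real usage would be: require_tool rg && openclaude
# Demonstrated here with `sh`, which is always present:
require_tool sh   # prints "sh found"
```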
 
  ---
 
- ## Provider Examples
-
- ### OpenAI
-
- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=sk-...
- export OPENAI_MODEL=gpt-4o
- ```
-
- ### Codex via ChatGPT auth
+ ## Fastest Setup
 
- `codexplan` maps to GPT-5.4 on the Codex backend with high reasoning.
- `codexspark` maps to GPT-5.3 Codex Spark for faster loops.
+ ### Windows PowerShell
 
- If you already use the Codex CLI, OpenClaude will read `~/.codex/auth.json`
- automatically. You can also point it elsewhere with `CODEX_AUTH_JSON_PATH` or
- override the token directly with `CODEX_API_KEY`.
-
- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_MODEL=codexplan
+ ```powershell
+ npm install -g @gitlawb/openclaude
 
- # optional if you do not already have ~/.codex/auth.json
- export CODEX_API_KEY=...
+ $env:CLAUDE_CODE_USE_OPENAI="1"
+ $env:OPENAI_API_KEY="sk-your-key-here"
+ $env:OPENAI_MODEL="gpt-4o"
 
  openclaude
  ```
 
- ### DeepSeek
-
- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=sk-...
- export OPENAI_BASE_URL=https://api.deepseek.com/v1
- export OPENAI_MODEL=deepseek-chat
- ```
-
- ### Google Gemini (via OpenRouter)
-
- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=sk-or-...
- export OPENAI_BASE_URL=https://openrouter.ai/api/v1
- export OPENAI_MODEL=google/gemini-2.0-flash
- ```
-
- ### Ollama (local, free)
-
- ```bash
- ollama pull llama3.3:70b
-
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_BASE_URL=http://localhost:11434/v1
- export OPENAI_MODEL=llama3.3:70b
- # no API key needed for local models
- ```
-
- ### LM Studio (local)
+ ### macOS / Linux
 
  ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_BASE_URL=http://localhost:1234/v1
- export OPENAI_MODEL=your-model-name
- ```
-
- ### Together AI
-
- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=...
- export OPENAI_BASE_URL=https://api.together.xyz/v1
- export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
- ```
-
- ### Groq
-
- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=gsk_...
- export OPENAI_BASE_URL=https://api.groq.com/openai/v1
- export OPENAI_MODEL=llama-3.3-70b-versatile
- ```
-
- ### Mistral
-
- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=...
- export OPENAI_BASE_URL=https://api.mistral.ai/v1
- export OPENAI_MODEL=mistral-large-latest
- ```
-
- ### Azure OpenAI
+ npm install -g @gitlawb/openclaude
 
- ```bash
  export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=your-azure-key
- export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
+ export OPENAI_API_KEY=sk-your-key-here
  export OPENAI_MODEL=gpt-4o
- ```
 
- ---
-
- ## Environment Variables
-
- | Variable | Required | Description |
- |----------|----------|-------------|
- | `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider |
- | `OPENAI_API_KEY` | Yes* | Your API key (*not needed for local models like Ollama) |
- | `OPENAI_MODEL` | Yes | Model name (e.g. `gpt-4o`, `deepseek-chat`, `llama3.3:70b`) |
- | `OPENAI_BASE_URL` | No | API endpoint (defaults to `https://api.openai.com/v1`) |
- | `CODEX_API_KEY` | Codex only | Codex/ChatGPT access token override |
- | `CODEX_AUTH_JSON_PATH` | Codex only | Path to a Codex CLI `auth.json` file |
- | `CODEX_HOME` | Codex only | Alternative Codex home directory (`auth.json` will be read from here) |
-
- You can also use `ANTHROPIC_MODEL` to override the model name. `OPENAI_MODEL` takes priority.
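The precedence the removed line describes can be sketched with standard POSIX parameter expansion. `resolve_model` is a hypothetical name for illustration, not a function in OpenClaude:

```bash
# Hypothetical sketch of the precedence described above:
# OPENAI_MODEL wins; ANTHROPIC_MODEL is used only when OPENAI_MODEL is unset/empty.
resolve_model() {
  echo "${OPENAI_MODEL:-$ANTHROPIC_MODEL}"
}

OPENAI_MODEL=gpt-4o ANTHROPIC_MODEL=claude-sonnet resolve_model   # prints "gpt-4o"
OPENAI_MODEL= ANTHROPIC_MODEL=claude-sonnet resolve_model         # prints "claude-sonnet"
```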
-
- ---
-
- ## Runtime Hardening
-
- Use these commands to keep the CLI stable and catch environment mistakes early:
-
- ```bash
- # quick startup sanity check
- bun run smoke
-
- # validate provider env + reachability
- bun run doctor:runtime
-
- # print machine-readable runtime diagnostics
- bun run doctor:runtime:json
-
- # persist a diagnostics report to reports/doctor-runtime.json
- bun run doctor:report
-
- # full local hardening check (smoke + runtime doctor)
- bun run hardening:check
-
- # strict hardening (includes project-wide typecheck)
- bun run hardening:strict
+ openclaude
  ```
 
- Notes:
- - `doctor:runtime` fails fast if `CLAUDE_CODE_USE_OPENAI=1` with a placeholder key (`SUA_CHAVE`) or a missing key for non-local providers.
- - Local providers (for example `http://localhost:11434/v1`) can run without `OPENAI_API_KEY`.
- - Codex profiles validate `CODEX_API_KEY` or the Codex CLI auth file and probe `POST /responses` instead of `GET /models`.
-
- ### Provider Launch Profiles
-
- Use profile launchers to avoid repeated environment setup:
+ That is enough to start with OpenAI.
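The three-variable setup above is easy to verify before launch. A hypothetical pre-flight sketch (`check_env` is an illustrative name, not an OpenClaude command) using the shell's standard `:?` expansion:

```bash
# Hypothetical pre-flight check, not part of OpenClaude: fail early with a
# clear message if one of the three required variables is missing.
check_env() {
  : "${CLAUDE_CODE_USE_OPENAI:?export CLAUDE_CODE_USE_OPENAI=1 first}"
  : "${OPENAI_API_KEY:?export OPENAI_API_KEY first - not needed for local models}"
  : "${OPENAI_MODEL:?export OPENAI_MODEL first, e.g. gpt-4o}"
  echo "ready: model=$OPENAI_MODEL"
}

CLAUDE_CODE_USE_OPENAI=1 OPENAI_API_KEY=sk-your-key-here OPENAI_MODEL=gpt-4o check_env
```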
 
- ```bash
- # one-time profile bootstrap (prefer viable local Ollama, otherwise OpenAI)
- bun run profile:init
+ ---
 
- # preview the best provider/model for your goal
- bun run profile:recommend -- --goal coding --benchmark
+ ## Choose Your Guide
 
- # auto-apply the best available local/openai provider/model for your goal
- bun run profile:auto -- --goal latency
+ ### Beginner
 
- # codex bootstrap (defaults to codexplan and ~/.codex/auth.json)
- bun run profile:codex
+ - Want the easiest setup with copy-paste steps: [Non-Technical Setup](docs/non-technical-setup.md)
+ - On Windows: [Windows Quick Start](docs/quick-start-windows.md)
+ - On macOS or Linux: [macOS / Linux Quick Start](docs/quick-start-mac-linux.md)
 
- # openai bootstrap with explicit key
- bun run profile:init -- --provider openai --api-key sk-...
+ ### Advanced
 
- # ollama bootstrap with custom model
- bun run profile:init -- --provider ollama --model llama3.1:8b
+ - Want source builds, Bun, local profiles, runtime checks, or more provider choices: [Advanced Setup](docs/advanced-setup.md)
 
- # ollama bootstrap with intelligent model auto-selection
- bun run profile:init -- --provider ollama --goal coding
+ ---
 
- # codex bootstrap with a fast model alias
- bun run profile:init -- --provider codex --model codexspark
+ ## Common Beginner Choices
 
- # launch using persisted profile (.openclaude-profile.json)
- bun run dev:profile
+ ### OpenAI
 
- # codex profile (uses CODEX_API_KEY or ~/.codex/auth.json)
- bun run dev:codex
+ Best default if you already have an OpenAI API key.
 
- # OpenAI profile (requires OPENAI_API_KEY in your shell)
- bun run dev:openai
+ ### Ollama
 
- # Ollama profile (defaults: localhost:11434, llama3.1:8b)
- bun run dev:ollama
- ```
+ Best if you want to run models locally on your own machine.
 
- `profile:recommend` ranks installed Ollama models for `latency`, `balanced`, or `coding`, and `profile:auto` can persist the recommendation directly.
- If no profile exists yet, `dev:profile` now uses the same goal-aware defaults when picking the initial model.
+ ### Codex
 
- Use `--provider ollama` when you want a local-only path. Auto mode falls back to OpenAI when no viable local chat model is installed.
- Goal-based Ollama selection only recommends among models that are already installed and reachable from Ollama.
+ Best if you already use the Codex CLI or ChatGPT Codex backend.
 
- Use `profile:codex` or `--provider codex` when you want the ChatGPT Codex backend.
+ ### Atomic Chat
 
- `dev:openai`, `dev:ollama`, and `dev:codex` run `doctor:runtime` first and only launch the app if checks pass.
- For `dev:ollama`, make sure Ollama is running locally before launch.
+ Best if you want local inference on Apple Silicon with Atomic Chat. See [Advanced Setup](docs/advanced-setup.md).
 
  ---