@gitlawb/openclaude 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (4)
  1. package/README.md +323 -0
  2. package/bin/openclaude +32 -0
  3. package/dist/cli.mjs +550803 -0
  4. package/package.json +136 -0
package/README.md ADDED
@@ -0,0 +1,323 @@
# OpenClaude

Use Claude Code with **any LLM**, not just Claude.

OpenClaude is a fork of the [Claude Code source leak](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code) (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API.

All of Claude Code's tools still work: bash, file read/write/edit, grep, glob, agents, tasks, MCP. They are simply powered by whatever model you choose.

---

## Install

### Option A: npm (recommended)

```bash
npm install -g @gitlawb/openclaude
```

### Option B: From source (requires Bun)

```bash
# Clone from gitlawb
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude

# Install dependencies
bun install

# Build
bun run build

# Link globally (optional)
npm link
```

### Option C: Run directly with Bun (no build step)

```bash
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude
bun install
bun run dev
```

---

## Quick Start

### 1. Set three environment variables

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
```

### 2. Run it

```bash
# If installed via npm
openclaude

# If built from source
bun run dev
# or after build:
node dist/cli.mjs
```

That's it. The tool system, streaming, file editing, and multi-step reasoning all work through the model you picked.

Note: the npm package name is `@gitlawb/openclaude`, but the installed CLI command is still `openclaude`.

---

## Provider Examples

### OpenAI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_MODEL=gpt-4o
```

### DeepSeek

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat
```

### Google Gemini (via OpenRouter)

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-or-...
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_MODEL=google/gemini-2.0-flash
```

### Ollama (local, free)

```bash
ollama pull llama3.3:70b

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.3:70b
# no API key needed for local models
```

### LM Studio (local)

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
```

### Together AI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.together.xyz/v1
export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
```

### Groq

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=gsk_...
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama-3.3-70b-versatile
```

### Mistral

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.mistral.ai/v1
export OPENAI_MODEL=mistral-large-latest
```

### Azure OpenAI

```bash
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=your-azure-key
export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
export OPENAI_MODEL=gpt-4o
```

---

## Environment Variables

| Variable | Required | Description |
|----------|----------|-------------|
| `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider |
| `OPENAI_API_KEY` | Yes* | Your API key (*not needed for local models like Ollama) |
| `OPENAI_MODEL` | Yes | Model name (e.g. `gpt-4o`, `deepseek-chat`, `llama3.3:70b`) |
| `OPENAI_BASE_URL` | No | API endpoint (defaults to `https://api.openai.com/v1`) |

You can also use `ANTHROPIC_MODEL` to override the model name; if both are set, `OPENAI_MODEL` takes priority.
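
The precedence above can be sketched as a tiny resolver. This is illustrative only: the variable names match the table, but the function and its fallback default are assumptions, not the actual code in `src/utils/model/model.ts`.

```typescript
// Illustrative sketch of the documented model-name precedence:
// OPENAI_MODEL wins over ANTHROPIC_MODEL; fall back to a default otherwise.
function resolveModel(env: Record<string, string | undefined>): string {
  return env.OPENAI_MODEL ?? env.ANTHROPIC_MODEL ?? "gpt-4o"; // default is hypothetical
}
```

So `resolveModel({ OPENAI_MODEL: "deepseek-chat", ANTHROPIC_MODEL: "claude-x" })` resolves to `deepseek-chat`.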

---

## Runtime Hardening

Use these commands to keep the CLI stable and catch environment mistakes early:

```bash
# quick startup sanity check
bun run smoke

# validate provider env + reachability
bun run doctor:runtime

# print machine-readable runtime diagnostics
bun run doctor:runtime:json

# persist a diagnostics report to reports/doctor-runtime.json
bun run doctor:report

# full local hardening check (typecheck + smoke + runtime doctor)
bun run hardening:check

# strict hardening (includes project-wide typecheck)
bun run hardening:strict
```

Notes:
- `doctor:runtime` fails fast if `CLAUDE_CODE_USE_OPENAI=1` is set with a placeholder key (`SUA_CHAVE`) or a missing key for a non-local provider.
- Local providers (for example `http://localhost:11434/v1`) can run without `OPENAI_API_KEY`.
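
A minimal sketch of that key check, for intuition only: `isLocalBaseUrl` and `keyCheckError` are hypothetical names based on the notes above, not the actual `doctor:runtime` implementation.

```typescript
// Local endpoints (localhost / 127.0.0.1) are allowed to run keyless.
function isLocalBaseUrl(baseUrl: string): boolean {
  try {
    const host = new URL(baseUrl).hostname;
    return host === "localhost" || host === "127.0.0.1";
  } catch {
    return false; // unparseable URL: treat as non-local
  }
}

// Returns an error message if the key is missing or a placeholder, else null.
function keyCheckError(baseUrl: string, apiKey?: string): string | null {
  if (isLocalBaseUrl(baseUrl)) return null; // local provider: no key needed
  if (!apiKey) return "OPENAI_API_KEY is missing";
  if (apiKey === "SUA_CHAVE") return "OPENAI_API_KEY is a placeholder";
  return null;
}
```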

### Provider Launch Profiles

Use the profile launchers to avoid repeating environment setup:

```bash
# one-time profile bootstrap (auto-detects ollama, otherwise openai)
bun run profile:init

# openai bootstrap with explicit key
bun run profile:init -- --provider openai --api-key sk-...

# ollama bootstrap with custom model
bun run profile:init -- --provider ollama --model llama3.1:8b

# launch using persisted profile (.openclaude-profile.json)
bun run dev:profile

# OpenAI profile (requires OPENAI_API_KEY in your shell)
bun run dev:openai

# Ollama profile (defaults: localhost:11434, llama3.1:8b)
bun run dev:ollama
```

`dev:openai` and `dev:ollama` run `doctor:runtime` first and only launch the app if the checks pass.
For `dev:ollama`, make sure Ollama is running locally before launch.

---

## What Works

- **All tools**: Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch, WebSearch, Agent, MCP, LSP, NotebookEdit, Tasks
- **Streaming**: real-time token streaming
- **Tool calling**: multi-step tool chains (the model calls tools, gets results, and continues)
- **Images**: base64 and URL images passed to vision models
- **Slash commands**: /commit, /review, /compact, /diff, /doctor, etc.
- **Sub-agents**: AgentTool spawns sub-agents using the same provider
- **Memory**: persistent memory system

## What's Different

- **No thinking mode**: Anthropic's extended thinking is disabled (OpenAI-style models handle reasoning differently)
- **No prompt caching**: Anthropic-specific cache headers are skipped
- **No beta features**: Anthropic-specific beta headers are ignored
- **Token limits**: output defaults to a 32K-token maximum; models that cap lower are handled gracefully
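
The graceful cap handling can be pictured as a simple clamp. This is a sketch under assumptions: the 32K default comes from the bullet above, but the `OUTPUT_CAPS` table (and its example values) is hypothetical, not the shim's actual data.

```typescript
// Clamp the requested max output tokens to a per-model cap, if one is known.
const DEFAULT_MAX_OUTPUT = 32_768;

// Hypothetical per-model output caps, for illustration only.
const OUTPUT_CAPS: Record<string, number> = {
  "example-model-small": 8_192,
  "example-model-large": 16_384,
};

function clampMaxTokens(model: string, requested = DEFAULT_MAX_OUTPUT): number {
  const cap = OUTPUT_CAPS[model];
  return cap !== undefined ? Math.min(requested, cap) : requested;
}
```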

---

## How It Works

The shim (`src/services/api/openaiShim.ts`) sits between Claude Code and the LLM API:

```
Claude Code Tool System
          |
          v
Anthropic SDK interface (duck-typed)
          |
          v
openaiShim.ts  <-- translates formats
          |
          v
OpenAI Chat Completions API
          |
          v
Any compatible model
```

It translates:
- Anthropic message blocks → OpenAI messages
- Anthropic tool_use/tool_result → OpenAI function calls
- OpenAI SSE streaming → Anthropic stream events
- Anthropic system prompt arrays → OpenAI system messages

The rest of Claude Code doesn't know it's talking to a different model.
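
As a rough sketch of the tool-call direction of that translation (illustrative only: the block shapes are simplified and `toOpenAIToolCall` is not the actual shim code), an Anthropic `tool_use` block maps onto an OpenAI assistant message carrying `tool_calls`:

```typescript
// Simplified shape of an Anthropic assistant `tool_use` content block.
type AnthropicToolUse = { type: "tool_use"; id: string; name: string; input: unknown };

// Translate it into an OpenAI-style assistant message with `tool_calls`.
// Note: Anthropic carries tool input as a JSON object, OpenAI as a JSON string.
function toOpenAIToolCall(block: AnthropicToolUse) {
  return {
    role: "assistant" as const,
    content: null,
    tool_calls: [
      {
        id: block.id,
        type: "function" as const,
        function: { name: block.name, arguments: JSON.stringify(block.input) },
      },
    ],
  };
}
```

The reverse direction (OpenAI `tool` role messages back into Anthropic `tool_result` blocks) follows the same pattern.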

---

## Model Quality Notes

Not all models are equally good at agentic tool use. A rough guide:

| Model | Tool Calling | Code Quality | Speed |
|-------|--------------|--------------|-------|
| GPT-4o | Excellent | Excellent | Fast |
| DeepSeek-V3 | Great | Great | Fast |
| Gemini 2.0 Flash | Great | Good | Very Fast |
| Llama 3.3 70B | Good | Good | Medium |
| Mistral Large | Good | Good | Fast |
| GPT-4o-mini | Good | Good | Very Fast |
| Qwen 2.5 72B | Good | Good | Medium |
| Smaller models (<7B) | Limited | Limited | Very Fast |

For best results, use models with strong function/tool-calling support.

---

## Files Changed from Original

```
src/services/api/openaiShim.ts — NEW: OpenAI-compatible API shim (724 lines)
src/services/api/client.ts — Routes to shim when CLAUDE_CODE_USE_OPENAI=1
src/utils/model/providers.ts — Added 'openai' provider type
src/utils/model/configs.ts — Added openai model mappings
src/utils/model/model.ts — Respects OPENAI_MODEL for defaults
src/utils/auth.ts — Recognizes OpenAI as valid 3P provider
```

6 files changed. 786 lines added. Zero dependencies added.

---

## Origin

This is a fork of [instructkr/claude-code](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code), which mirrored the Claude Code source snapshot that became publicly accessible through an npm source-map exposure on March 31, 2026.

The original Claude Code source is the property of Anthropic. This repository is not affiliated with or endorsed by Anthropic.

---

## License

This repository is provided for educational and research purposes. The original source code is subject to Anthropic's terms. The OpenAI shim additions are public domain.
package/bin/openclaude ADDED
@@ -0,0 +1,32 @@
#!/usr/bin/env node

/**
 * OpenClaude — Claude Code with any LLM
 *
 * If dist/cli.mjs exists (built), run that.
 * Otherwise, tell the user to build first or use `bun run dev`.
 */

import { existsSync } from 'fs'
import { join, dirname } from 'path'
import { fileURLToPath, pathToFileURL } from 'url'

const __dirname = dirname(fileURLToPath(import.meta.url))
const distPath = join(__dirname, '..', 'dist', 'cli.mjs')

if (existsSync(distPath)) {
  // Import via a file:// URL so absolute Windows paths work with ESM import()
  await import(pathToFileURL(distPath).href)
} else {
  console.error(`
openclaude: dist/cli.mjs not found.

Build first:
  bun run build

Or run directly with Bun:
  bun run dev

See README.md for setup instructions.
`)
  process.exit(1)
}