@gitlawb/openclaude 0.1.6 → 0.1.8

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,29 @@
+ NOTICE
+
+ This repository contains code derived from Anthropic's Claude Code CLI.
+
+ The original Claude Code source is proprietary software:
+ Copyright (c) Anthropic PBC. All rights reserved.
+ Subject to Anthropic's Commercial Terms of Service.
+
+ Modifications and additions by OpenClaude contributors are offered under
+ the MIT License where legally permissible:
+
+ MIT License
+ Copyright (c) 2026 OpenClaude contributors (modifications only)
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of the modifications made by OpenClaude contributors, to deal
+ in those modifications without restriction, including without limitation
+ the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ and/or sell copies, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included
+ in all copies or substantial portions of the modifications.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND.
+
+ The underlying derived code remains subject to Anthropic's copyright.
+ This project does not have Anthropic's authorization to distribute
+ their proprietary source. Users and contributors should evaluate their
+ own legal position.
package/README.md CHANGED
@@ -1,377 +1,291 @@
  # OpenClaude

- Use Claude Code with **any LLM** not just Claude.
+ OpenClaude is an open-source coding-agent CLI for cloud and local model providers.

- OpenClaude is a fork of the [Claude Code source leak](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code) (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API. It now also supports the ChatGPT Codex backend for `codexplan` and `codexspark`.
+ Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported backends while keeping one terminal-first workflow: prompts, tools, agents, MCP, slash commands, and streaming output.

- All of Claude Code's tools work — bash, file read/write/edit, grep, glob, agents, tasks, MCP — just powered by whatever model you choose.
+ [![PR Checks](https://github.com/Gitlawb/openclaude/actions/workflows/pr-checks.yml/badge.svg?branch=main)](https://github.com/Gitlawb/openclaude/actions/workflows/pr-checks.yml)
+ [![Release](https://img.shields.io/github/v/tag/Gitlawb/openclaude?label=release&color=0ea5e9)](https://github.com/Gitlawb/openclaude/tags)
+ [![Discussions](https://img.shields.io/badge/discussions-open-7c3aed)](https://github.com/Gitlawb/openclaude/discussions)
+ [![Security Policy](https://img.shields.io/badge/security-policy-0f766e)](SECURITY.md)
+ [![License](https://img.shields.io/badge/license-MIT-2563eb)](LICENSE)

- ---
+ [Quick Start](#quick-start) | [Setup Guides](#setup-guides) | [Providers](#supported-providers) | [Source Build](#source-build-and-local-development) | [VS Code Extension](#vs-code-extension) | [Community](#community)

- ## Install
+ ## Why OpenClaude

- ### Option A: npm (recommended)
+ - Use one CLI across cloud APIs and local model backends
+ - Save provider profiles inside the app with `/provider`
+ - Run with OpenAI-compatible services, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported providers
+ - Keep coding-agent workflows in one place: bash, file tools, grep, glob, agents, tasks, MCP, and web tools
+ - Use the bundled VS Code extension for launch integration and theme support
+
+ ## Quick Start
+
+ ### Install

  ```bash
  npm install -g @gitlawb/openclaude
  ```

- ### Option B: From source (requires Bun)
+ If the install later reports `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenClaude.

- Use Bun `1.3.11` or newer for source builds on Windows. Older Bun versions such as `1.3.4` can fail with a large batch of unresolved module errors during `bun run build`.
+ ### Start

  ```bash
- # Clone from gitlawb
- git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
- cd openclaude
-
- # Install dependencies
- bun install
-
- # Build
- bun run build
-
- # Link globally (optional)
- npm link
+ openclaude
  ```

- ### Option C: Run directly with Bun (no build step)
+ Inside OpenClaude:

- ```bash
- git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
- cd openclaude
- bun install
- bun run dev
- ```
+ - run `/provider` for guided provider setup and saved profiles
+ - run `/onboard-github` for GitHub Models onboarding

- ---
+ ### Fastest OpenAI setup

- ## Quick Start
-
- ### 1. Set 3 environment variables
+ macOS / Linux:

  ```bash
  export CLAUDE_CODE_USE_OPENAI=1
  export OPENAI_API_KEY=sk-your-key-here
  export OPENAI_MODEL=gpt-4o
- ```

- ### 2. Run it
-
- ```bash
- # If installed via npm
  openclaude
-
- # If built from source
- bun run dev
- # or after build:
- node dist/cli.mjs
  ```

- That's it. The tool system, streaming, file editing, multi-step reasoning — everything works through the model you picked.
-
- The npm package name is `@gitlawb/openclaude`, but the installed CLI command is still `openclaude`.
-
- ---
+ Windows PowerShell:

- ## Provider Examples
+ ```powershell
+ $env:CLAUDE_CODE_USE_OPENAI="1"
+ $env:OPENAI_API_KEY="sk-your-key-here"
+ $env:OPENAI_MODEL="gpt-4o"

- ### OpenAI
-
- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=sk-...
- export OPENAI_MODEL=gpt-4o
+ openclaude
  ```

- ### Codex via ChatGPT auth
+ ### Fastest local Ollama setup

- `codexplan` maps to GPT-5.4 on the Codex backend with high reasoning.
- `codexspark` maps to GPT-5.3 Codex Spark for faster loops.
-
- If you already use the Codex CLI, OpenClaude will read `~/.codex/auth.json`
- automatically. You can also point it elsewhere with `CODEX_AUTH_JSON_PATH` or
- override the token directly with `CODEX_API_KEY`.
+ macOS / Linux:

  ```bash
  export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_MODEL=codexplan
-
- # optional if you do not already have ~/.codex/auth.json
- export CODEX_API_KEY=...
+ export OPENAI_BASE_URL=http://localhost:11434/v1
+ export OPENAI_MODEL=qwen2.5-coder:7b

  openclaude
  ```

- ### DeepSeek
+ Windows PowerShell:

- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=sk-...
- export OPENAI_BASE_URL=https://api.deepseek.com/v1
- export OPENAI_MODEL=deepseek-chat
- ```
+ ```powershell
+ $env:CLAUDE_CODE_USE_OPENAI="1"
+ $env:OPENAI_BASE_URL="http://localhost:11434/v1"
+ $env:OPENAI_MODEL="qwen2.5-coder:7b"

- ### Google Gemini (via OpenRouter)
-
- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=sk-or-...
- export OPENAI_BASE_URL=https://openrouter.ai/api/v1
- export OPENAI_MODEL=google/gemini-2.0-flash
+ openclaude
  ```

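For readers pointing these variables at other OpenAI-compatible services, the same three settings translate directly into a standard chat completions request. The sketch below is illustrative rather than OpenClaude source; `buildChatRequest` is a hypothetical helper name, and the default base URL mirrors the usual OpenAI endpoint.

```javascript
// Illustrative sketch (not OpenClaude source): how CLAUDE_CODE_USE_OPENAI-style
// env vars map onto an OpenAI-compatible chat completions request.
function buildChatRequest(env, userText) {
  // OPENAI_BASE_URL selects the server; cloud default is api.openai.com.
  const baseUrl = env.OPENAI_BASE_URL ?? "https://api.openai.com/v1";
  const headers = { "Content-Type": "application/json" };
  // Local servers like Ollama work without a key; cloud APIs require one.
  if (env.OPENAI_API_KEY) {
    headers.Authorization = `Bearer ${env.OPENAI_API_KEY}`;
  }
  return {
    url: `${baseUrl}/chat/completions`,
    headers,
    body: {
      model: env.OPENAI_MODEL, // e.g. "gpt-4o" or "qwen2.5-coder:7b"
      messages: [{ role: "user", content: userText }],
      stream: true, // OpenClaude streams output
    },
  };
}
```

Any server that accepts this request shape (OpenAI, OpenRouter, DeepSeek, Groq, LM Studio, Ollama's `/v1` endpoint) should work with the same configuration pattern.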
- ### Ollama (local, free)
+ ## Setup Guides

- ```bash
- ollama pull llama3.3:70b
+ Beginner-friendly guides:

- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_BASE_URL=http://localhost:11434/v1
- export OPENAI_MODEL=llama3.3:70b
- # no API key needed for local models
- ```
+ - [Non-Technical Setup](docs/non-technical-setup.md)
+ - [Windows Quick Start](docs/quick-start-windows.md)
+ - [macOS / Linux Quick Start](docs/quick-start-mac-linux.md)

- ### LM Studio (local)
+ Advanced and source-build guides:

- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_BASE_URL=http://localhost:1234/v1
- export OPENAI_MODEL=your-model-name
- ```
+ - [Advanced Setup](docs/advanced-setup.md)
+ - [Android Install](ANDROID_INSTALL.md)

- ### Together AI
+ ## Supported Providers

- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=...
- export OPENAI_BASE_URL=https://api.together.xyz/v1
- export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
- ```
+ | Provider | Setup Path | Notes |
+ | --- | --- | --- |
+ | OpenAI-compatible | `/provider` or env vars | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and other compatible `/v1` servers |
+ | Gemini | `/provider` or env vars | Supports API key, access token, or local ADC workflow on current `main` |
+ | GitHub Models | `/onboard-github` | Interactive onboarding with saved credentials |
+ | Codex | `/provider` | Uses existing Codex credentials when available |
+ | Ollama | `/provider` or env vars | Local inference with no API key |
+ | Atomic Chat | advanced setup | Local Apple Silicon backend |
+ | Bedrock / Vertex / Foundry | env vars | Additional provider integrations for supported environments |

- ### Groq
-
- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=gsk_...
- export OPENAI_BASE_URL=https://api.groq.com/openai/v1
- export OPENAI_MODEL=llama-3.3-70b-versatile
- ```
-
- ### Mistral
+ ## What Works

- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=...
- export OPENAI_BASE_URL=https://api.mistral.ai/v1
- export OPENAI_MODEL=mistral-large-latest
+ - **Tool-driven coding workflows**: Bash, file read/write/edit, grep, glob, agents, tasks, MCP, and slash commands
+ - **Streaming responses**: Real-time token output and tool progress
+ - **Tool calling**: Multi-step tool loops with model calls, tool execution, and follow-up responses
+ - **Images**: URL and base64 image inputs for providers that support vision
+ - **Provider profiles**: Guided setup plus saved `.openclaude-profile.json` support
+ - **Local and remote model backends**: Cloud APIs, local servers, and Apple Silicon local inference
+
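The multi-step tool loop named in the list above follows a widely used pattern: call the model, execute any requested tools, feed the results back, and repeat until the model answers without tool calls. The sketch below is a generic illustration under assumed message shapes (`tool_calls`, `role: "tool"`), not OpenClaude's actual implementation.

```javascript
// Generic tool-loop sketch (illustrative; OpenClaude's real loop may differ).
// callModel: async (messages) => assistant reply, possibly with tool_calls.
// tools: map of tool name -> function executing that tool.
async function runToolLoop(callModel, tools, messages, maxSteps = 8) {
  for (let step = 0; step < maxSteps; step++) {
    const reply = await callModel(messages);
    messages.push(reply);
    if (!reply.tool_calls || reply.tool_calls.length === 0) {
      return reply.content; // no more tool requests: final answer
    }
    for (const call of reply.tool_calls) {
      // Execute the requested tool and append its result for the next turn.
      const result = await tools[call.name](call.arguments);
      messages.push({ role: "tool", tool_call_id: call.id, content: String(result) });
    }
  }
  throw new Error("tool loop did not converge");
}
```

The `maxSteps` guard matters in practice: smaller local models sometimes request tools in a cycle, and a bounded loop fails loudly instead of hanging.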
+ ## Provider Notes
+
+ OpenClaude supports multiple providers, but behavior is not identical across all of them.
+
+ - Anthropic-specific features may not exist on other providers
+ - Tool quality depends heavily on the selected model
+ - Smaller local models can struggle with long multi-step tool flows
+ - Some providers impose lower output caps than the CLI defaults, and OpenClaude adapts where possible
+
+ For best results, use models with strong tool/function calling support.
+
+ ## Agent Routing
+
+ OpenClaude can route different agents to different models through settings-based routing. This is useful for cost optimization or splitting work by model strength.
+
+ Add to `~/.claude/settings.json`:
+
+ ```json
+ {
+   "agentModels": {
+     "deepseek-chat": {
+       "base_url": "https://api.deepseek.com/v1",
+       "api_key": "sk-your-key"
+     },
+     "gpt-4o": {
+       "base_url": "https://api.openai.com/v1",
+       "api_key": "sk-your-key"
+     }
+   },
+   "agentRouting": {
+     "Explore": "deepseek-chat",
+     "Plan": "gpt-4o",
+     "general-purpose": "gpt-4o",
+     "frontend-dev": "deepseek-chat",
+     "default": "gpt-4o"
+   }
+ }
  ```

- ### Azure OpenAI
+ When no routing match is found, the global provider remains the fallback.

- ```bash
- export CLAUDE_CODE_USE_OPENAI=1
- export OPENAI_API_KEY=your-azure-key
- export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
- export OPENAI_MODEL=gpt-4o
- ```
+ > **Note:** `api_key` values in `settings.json` are stored in plaintext. Keep this file private and do not commit it to version control.

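As a rough mental model, the routing lookup described above can be pictured like this. The sketch is illustrative only: `resolveAgentModel` is a hypothetical name, and OpenClaude's real lookup logic may differ.

```javascript
// Illustrative sketch of settings-based agent routing (hypothetical helper;
// not OpenClaude source). Maps an agent name to a model plus its connection
// details, falling back to "default", then to the global provider (null).
function resolveAgentModel(settings, agentName) {
  const routing = settings.agentRouting ?? {};
  const modelName = routing[agentName] ?? routing.default;
  if (!modelName) return null; // no match: global provider stays the fallback
  return { model: modelName, ...settings.agentModels?.[modelName] };
}
```

With the `settings.json` example above, `Explore` would resolve to `deepseek-chat` with the DeepSeek base URL, while an unlisted agent name falls through to the `default` entry.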
- ---
+ ## Web Search and Fetch

- ## Environment Variables
+ By default, `WebSearch` works on non-Anthropic models using DuckDuckGo. This gives GPT-4o, DeepSeek, Gemini, Ollama, and other OpenAI-compatible providers a free web search path out of the box.

- | Variable | Required | Description |
- |----------|----------|-------------|
- | `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider |
- | `OPENAI_API_KEY` | Yes* | Your API key (*not needed for local models like Ollama) |
- | `OPENAI_MODEL` | Yes | Model name (e.g. `gpt-4o`, `deepseek-chat`, `llama3.3:70b`) |
- | `OPENAI_BASE_URL` | No | API endpoint (defaults to `https://api.openai.com/v1`) |
- | `CODEX_API_KEY` | Codex only | Codex/ChatGPT access token override |
- | `CODEX_AUTH_JSON_PATH` | Codex only | Path to a Codex CLI `auth.json` file |
- | `CODEX_HOME` | Codex only | Alternative Codex home directory (`auth.json` will be read from here) |
- | `OPENCLAUDE_DISABLE_CO_AUTHORED_BY` | No | Set to `1` to suppress the default `Co-Authored-By` trailer in generated git commit messages |
+ > **Note:** DuckDuckGo fallback works by scraping search results and may be rate-limited, blocked, or subject to DuckDuckGo's Terms of Service. If you want a more reliable supported option, configure Firecrawl.

- You can also use `ANTHROPIC_MODEL` to override the model name. `OPENAI_MODEL` takes priority.
+ For Anthropic-native backends and Codex responses, OpenClaude keeps the native provider web search behavior.

- OpenClaude PR bodies use OpenClaude branding by default. `OPENCLAUDE_DISABLE_CO_AUTHORED_BY` only affects the commit trailer, not PR attribution text.
+ `WebFetch` works, but its basic HTTP plus HTML-to-markdown path can still fail on JavaScript-rendered sites or sites that block plain HTTP requests.

- ---
-
- ## Runtime Hardening
-
- Use these commands to keep the CLI stable and catch environment mistakes early:
+ Set a [Firecrawl](https://firecrawl.dev) API key if you want Firecrawl-powered search/fetch behavior:

  ```bash
- # quick startup sanity check
- bun run smoke
-
- # validate provider env + reachability
- bun run doctor:runtime
-
- # print machine-readable runtime diagnostics
- bun run doctor:runtime:json
-
- # persist a diagnostics report to reports/doctor-runtime.json
- bun run doctor:report
-
- # full local hardening check (smoke + runtime doctor)
- bun run hardening:check
-
- # strict hardening (includes project-wide typecheck)
- bun run hardening:strict
+ export FIRECRAWL_API_KEY=your-key-here
  ```

- Notes:
- - `doctor:runtime` fails fast if `CLAUDE_CODE_USE_OPENAI=1` with a placeholder key (`SUA_CHAVE`) or a missing key for non-local providers.
- - Local providers (for example `http://localhost:11434/v1`) can run without `OPENAI_API_KEY`.
- - Codex profiles validate `CODEX_API_KEY` or the Codex CLI auth file and probe `POST /responses` instead of `GET /models`.
-
- ### Provider Launch Profiles
-
- Use profile launchers to avoid repeated environment setup:
-
- ```bash
- # one-time profile bootstrap (prefer viable local Ollama, otherwise OpenAI)
- bun run profile:init
+ With Firecrawl enabled:

- # preview the best provider/model for your goal
- bun run profile:recommend -- --goal coding --benchmark
+ - `WebSearch` can use Firecrawl's search API while DuckDuckGo remains the default free path for non-Claude models
+ - `WebFetch` uses Firecrawl's scrape endpoint instead of raw HTTP, handling JS-rendered pages correctly

- # auto-apply the best available local/openai provider/model for your goal
- bun run profile:auto -- --goal latency
+ The free tier at [firecrawl.dev](https://firecrawl.dev) includes 500 credits; the key is optional.

- # codex bootstrap (defaults to codexplan and ~/.codex/auth.json)
- bun run profile:codex
+ ## Source Build And Local Development

- # openai bootstrap with explicit key
- bun run profile:init -- --provider openai --api-key sk-...
-
- # ollama bootstrap with custom model
- bun run profile:init -- --provider ollama --model llama3.1:8b
+ ```bash
+ bun install
+ bun run build
+ node dist/cli.mjs
+ ```

- # ollama bootstrap with intelligent model auto-selection
- bun run profile:init -- --provider ollama --goal coding
+ Helpful commands:

- # codex bootstrap with a fast model alias
- bun run profile:init -- --provider codex --model codexspark
+ - `bun run dev`
+ - `bun test`
+ - `bun run test:coverage`
+ - `bun run security:pr-scan -- --base origin/main`
+ - `bun run smoke`
+ - `bun run doctor:runtime`
+ - `bun run verify:privacy`
+ - focused `bun test ...` runs for the areas you touch

- # launch using persisted profile (.openclaude-profile.json)
- bun run dev:profile
+ ## Testing And Coverage

- # codex profile (uses CODEX_API_KEY or ~/.codex/auth.json)
- bun run dev:codex
+ OpenClaude uses Bun's built-in test runner for unit tests.

- # OpenAI profile (requires OPENAI_API_KEY in your shell)
- bun run dev:openai
+ Run the full unit suite:

- # Ollama profile (defaults: localhost:11434, llama3.1:8b)
- bun run dev:ollama
+ ```bash
+ bun test
  ```

- `profile:recommend` ranks installed Ollama models for `latency`, `balanced`, or `coding`, and `profile:auto` can persist the recommendation directly.
- If no profile exists yet, `dev:profile` now uses the same goal-aware defaults when picking the initial model.
-
- Use `--provider ollama` when you want a local-only path. Auto mode falls back to OpenAI when no viable local chat model is installed.
- Goal-based Ollama selection only recommends among models that are already installed and reachable from Ollama.
-
- Use `profile:codex` or `--provider codex` when you want the ChatGPT Codex backend.
-
- `dev:openai`, `dev:ollama`, and `dev:codex` run `doctor:runtime` first and only launch the app if checks pass.
- For `dev:ollama`, make sure Ollama is running locally before launch.
-
- ---
+ Generate unit test coverage:

- ## What Works
+ ```bash
+ bun run test:coverage
+ ```

- - **All tools**: Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch, WebSearch, Agent, MCP, LSP, NotebookEdit, Tasks
- - **Streaming**: Real-time token streaming
- - **Tool calling**: Multi-step tool chains (the model calls tools, gets results, continues)
- - **Images**: Base64 and URL images passed to vision models
- - **Slash commands**: /commit, /review, /compact, /diff, /doctor, etc.
- - **Sub-agents**: AgentTool spawns sub-agents using the same provider
- - **Memory**: Persistent memory system
+ Open the visual coverage report:

- ## What's Different
+ ```bash
+ open coverage/index.html
+ ```

- - **No thinking mode**: Anthropic's extended thinking is disabled (OpenAI models use different reasoning)
- - **No prompt caching**: Anthropic-specific cache headers are skipped
- - **No beta features**: Anthropic-specific beta headers are ignored
- - **Token limits**: Defaults to 32K max output — some models may cap lower, which is handled gracefully
+ If you already have `coverage/lcov.info` and only want to rebuild the UI:

- ---
+ ```bash
+ bun run test:coverage:ui
+ ```

- ## How It Works
+ Use focused test runs when you only touch one area:

- The shim (`src/services/api/openaiShim.ts`) sits between Claude Code and the LLM API:
+ - `bun run test:provider`
+ - `bun run test:provider-recommendation`
+ - `bun test path/to/file.test.ts`

- ```
- Claude Code Tool System
-          |
-          v
- Anthropic SDK interface (duck-typed)
-          |
-          v
- openaiShim.ts  <-- translates formats
-          |
-          v
- OpenAI Chat Completions API
-          |
-          v
- Any compatible model
- ```
+ Recommended contributor validation before opening a PR:

- It translates:
- - Anthropic message blocks → OpenAI messages
- - Anthropic tool_use/tool_result → OpenAI function calls
- - OpenAI SSE streaming → Anthropic stream events
- - Anthropic system prompt arrays → OpenAI system messages
+ - `bun run build`
+ - `bun run smoke`
+ - `bun run test:coverage` for broader unit coverage when your change affects shared runtime or provider logic
+ - focused `bun test ...` runs for the files and flows you changed

- The rest of Claude Code doesn't know it's talking to a different model.
+ Coverage output is written to `coverage/lcov.info`, and OpenClaude also generates a git-activity-style heatmap at `coverage/index.html`.
+
+ ## Repository Structure

- ---
+ - `src/` - core CLI/runtime
+ - `scripts/` - build, verification, and maintenance scripts
+ - `docs/` - setup, contributor, and project documentation
+ - `python/` - standalone Python helpers and their tests
+ - `vscode-extension/openclaude-vscode/` - VS Code extension
+ - `.github/` - repo automation, templates, and CI configuration
+ - `bin/` - CLI launcher entrypoints

- ## Model Quality Notes
+ ## VS Code Extension

- Not all models are equal at agentic tool use. Here's a rough guide:
+ The repo includes a VS Code extension in [`vscode-extension/openclaude-vscode`](vscode-extension/openclaude-vscode) for OpenClaude launch integration, provider-aware control-center UI, and theme support.

- | Model | Tool Calling | Code Quality | Speed |
- |-------|-------------|-------------|-------|
- | GPT-4o | Excellent | Excellent | Fast |
- | DeepSeek-V3 | Great | Great | Fast |
- | Gemini 2.0 Flash | Great | Good | Very Fast |
- | Llama 3.3 70B | Good | Good | Medium |
- | Mistral Large | Good | Good | Fast |
- | GPT-4o-mini | Good | Good | Very Fast |
- | Qwen 2.5 72B | Good | Good | Medium |
- | Smaller models (<7B) | Limited | Limited | Very Fast |
+ ## Security

- For best results, use models with strong function/tool calling support.
+ If you believe you found a security issue, see [SECURITY.md](SECURITY.md).

- ---
+ ## Community

- ## Files Changed from Original
+ - Use [GitHub Discussions](https://github.com/Gitlawb/openclaude/discussions) for Q&A, ideas, and community conversation
+ - Use [GitHub Issues](https://github.com/Gitlawb/openclaude/issues) for confirmed bugs and actionable feature work

- ```
- src/services/api/openaiShim.ts — NEW: OpenAI-compatible API shim (724 lines)
- src/services/api/client.ts — Routes to shim when CLAUDE_CODE_USE_OPENAI=1
- src/utils/model/providers.ts — Added 'openai' provider type
- src/utils/model/configs.ts — Added openai model mappings
- src/utils/model/model.ts — Respects OPENAI_MODEL for defaults
- src/utils/auth.ts — Recognizes OpenAI as valid 3P provider
- ```
+ ## Contributing

- 6 files changed. 786 lines added. Zero dependencies added.
+ Contributions are welcome.

- ---
+ For larger changes, open an issue first so the scope is clear before implementation. Helpful validation commands include:

- ## Origin
+ - `bun run build`
+ - `bun run test:coverage`
+ - `bun run smoke`
+ - focused `bun test ...` runs for touched areas

- This is a fork of [instructkr/claude-code](https://gitlawb.com/node/repos/z6MkgKkb/instructkr-claude-code), which mirrored the Claude Code source snapshot that became publicly accessible through an npm source map exposure on March 31, 2026.
+ ## Disclaimer

- The original Claude Code source is the property of Anthropic. This repository is not affiliated with or endorsed by Anthropic.
+ OpenClaude is an independent community project and is not affiliated with, endorsed by, or sponsored by Anthropic.

- ---
+ OpenClaude originated from the Claude Code codebase and has since been substantially modified to support multiple providers and open use. "Claude" and "Claude Code" are trademarks of Anthropic PBC. See [LICENSE](LICENSE) for details.

  ## License

- This repository is provided for educational and research purposes. The original source code is subject to Anthropic's terms. The OpenAI shim additions are public domain.
+ See [LICENSE](LICENSE).
@@ -1,7 +1,13 @@
- import { join } from 'path'
+ import { join, win32 } from 'path'
  import { pathToFileURL } from 'url'

  export function getDistImportSpecifier(baseDir) {
-   const distPath = join(baseDir, '..', 'dist', 'cli.mjs')
+   if (/^[A-Za-z]:\\/.test(baseDir)) {
+     const distPath = win32.join(baseDir, '..', 'dist', 'cli.mjs')
+     return `file:///${distPath.replace(/\\/g, '/')}`
+   }
+
+   const joinImpl = join
+   const distPath = joinImpl(baseDir, '..', 'dist', 'cli.mjs')
    return pathToFileURL(distPath).href
  }