@gitlawb/openclaude 0.1.4 → 0.1.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -18,6 +18,8 @@ npm install -g @gitlawb/openclaude
 
 ### Option B: From source (requires Bun)
 
+Use Bun `1.3.11` or newer for source builds on Windows. Older Bun versions such as `1.3.4` can fail with a large batch of unresolved module errors during `bun run build`.
+
 ```bash
 # Clone from gitlawb
 git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
@@ -187,9 +189,12 @@ export OPENAI_MODEL=gpt-4o
 | `CODEX_API_KEY` | Codex only | Codex/ChatGPT access token override |
 | `CODEX_AUTH_JSON_PATH` | Codex only | Path to a Codex CLI `auth.json` file |
 | `CODEX_HOME` | Codex only | Alternative Codex home directory (`auth.json` will be read from here) |
+| `OPENCLAUDE_DISABLE_CO_AUTHORED_BY` | No | Set to `1` to suppress the default `Co-Authored-By` trailer in generated git commit messages |
 
 You can also use `ANTHROPIC_MODEL` to override the model name. `OPENAI_MODEL` takes priority.
 
+OpenClaude PR bodies use OpenClaude branding by default. `OPENCLAUDE_DISABLE_CO_AUTHORED_BY` only affects the commit trailer, not PR attribution text.
+
 ---
 
 ## Runtime Hardening
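
The new variable added to the table above is an ordinary environment switch. A minimal sketch of toggling it, assuming a POSIX shell (the variable name comes from the README table; the surrounding workflow is illustrative):

```shell
# Opt out of the Co-Authored-By commit trailer for this shell session.
export OPENCLAUDE_DISABLE_CO_AUTHORED_BY=1

# Unset it again to restore the default trailer behavior.
unset OPENCLAUDE_DISABLE_CO_AUTHORED_BY
```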
@@ -209,7 +214,7 @@ bun run doctor:runtime:json
 # persist a diagnostics report to reports/doctor-runtime.json
 bun run doctor:report
 
-# full local hardening check (typecheck + smoke + runtime doctor)
+# full local hardening check (smoke + runtime doctor)
 bun run hardening:check
 
 # strict hardening (includes project-wide typecheck)
@@ -226,9 +231,15 @@ Notes:
 Use profile launchers to avoid repeated environment setup:
 
 ```bash
-# one-time profile bootstrap (auto-detect ollama, otherwise openai)
+# one-time profile bootstrap (prefer viable local Ollama, otherwise OpenAI)
 bun run profile:init
 
+# preview the best provider/model for your goal
+bun run profile:recommend -- --goal coding --benchmark
+
+# auto-apply the best available local/openai provider/model for your goal
+bun run profile:auto -- --goal latency
+
 # codex bootstrap (defaults to codexplan and ~/.codex/auth.json)
 bun run profile:codex
 
@@ -238,6 +249,9 @@ bun run profile:init -- --provider openai --api-key sk-...
 # ollama bootstrap with custom model
 bun run profile:init -- --provider ollama --model llama3.1:8b
 
+# ollama bootstrap with intelligent model auto-selection
+bun run profile:init -- --provider ollama --goal coding
+
 # codex bootstrap with a fast model alias
 bun run profile:init -- --provider codex --model codexspark
 
@@ -254,6 +268,14 @@ bun run dev:openai
 bun run dev:ollama
 ```
 
+`profile:recommend` ranks installed Ollama models for `latency`, `balanced`, or `coding`, and `profile:auto` can persist the recommendation directly.
+If no profile exists yet, `dev:profile` now uses the same goal-aware defaults when picking the initial model.
+
+Use `--provider ollama` when you want a local-only path. Auto mode falls back to OpenAI when no viable local chat model is installed.
+Goal-based Ollama selection only recommends among models that are already installed and reachable from Ollama.
+
+Use `profile:codex` or `--provider codex` when you want the ChatGPT Codex backend.
+
 `dev:openai`, `dev:ollama`, and `dev:codex` run `doctor:runtime` first and only launch the app if checks pass.
 For `dev:ollama`, make sure Ollama is running locally before launch.
 
package/bin/openclaude CHANGED
@@ -9,14 +9,13 @@
 
 import { existsSync } from 'fs'
 import { join, dirname } from 'path'
-import { fileURLToPath } from 'url'
-import { getDistImportSpecifier } from './import-specifier.mjs'
+import { fileURLToPath, pathToFileURL } from 'url'
 
 const __dirname = dirname(fileURLToPath(import.meta.url))
 const distPath = join(__dirname, '..', 'dist', 'cli.mjs')
 
 if (existsSync(distPath)) {
-  await import(getDistImportSpecifier(__dirname))
+  await import(pathToFileURL(distPath).href)
 } else {
   console.error(`
 openclaude: dist/cli.mjs not found.