@gitlawb/openclaude 0.1.3 → 0.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -209,7 +209,7 @@ bun run doctor:runtime:json
  # persist a diagnostics report to reports/doctor-runtime.json
  bun run doctor:report

- # full local hardening check (typecheck + smoke + runtime doctor)
+ # full local hardening check (smoke + runtime doctor)
  bun run hardening:check

  # strict hardening (includes project-wide typecheck)
@@ -226,9 +226,15 @@ Notes:
  Use profile launchers to avoid repeated environment setup:

  ```bash
- # one-time profile bootstrap (auto-detect ollama, otherwise openai)
+ # one-time profile bootstrap (prefer viable local Ollama, otherwise OpenAI)
  bun run profile:init

+ # preview the best provider/model for your goal
+ bun run profile:recommend -- --goal coding --benchmark
+
+ # auto-apply the best available local/openai provider/model for your goal
+ bun run profile:auto -- --goal latency
+
  # codex bootstrap (defaults to codexplan and ~/.codex/auth.json)
  bun run profile:codex

@@ -238,6 +244,9 @@ bun run profile:init -- --provider openai --api-key sk-...
  # ollama bootstrap with custom model
  bun run profile:init -- --provider ollama --model llama3.1:8b

+ # ollama bootstrap with intelligent model auto-selection
+ bun run profile:init -- --provider ollama --goal coding
+
  # codex bootstrap with a fast model alias
  bun run profile:init -- --provider codex --model codexspark

@@ -254,6 +263,14 @@ bun run dev:openai
  bun run dev:ollama
  ```

+ `profile:recommend` ranks installed Ollama models for `latency`, `balanced`, or `coding`, and `profile:auto` can persist the recommendation directly.
+ If no profile exists yet, `dev:profile` now uses the same goal-aware defaults when picking the initial model.
+
+ Use `--provider ollama` when you want a local-only path. Auto mode falls back to OpenAI when no viable local chat model is installed.
+ Goal-based Ollama selection only recommends among models that are already installed and reachable from Ollama.
+
+ Use `profile:codex` or `--provider codex` when you want the ChatGPT Codex backend.
+
  `dev:openai`, `dev:ollama`, and `dev:codex` run `doctor:runtime` first and only launch the app if checks pass.
  For `dev:ollama`, make sure Ollama is running locally before launch.

package/bin/openclaude CHANGED
@@ -9,14 +9,13 @@

  import { existsSync } from 'fs'
  import { join, dirname } from 'path'
- import { fileURLToPath } from 'url'
- import { getDistImportSpecifier } from './import-specifier.mjs'
+ import { fileURLToPath, pathToFileURL } from 'url'

  const __dirname = dirname(fileURLToPath(import.meta.url))
  const distPath = join(__dirname, '..', 'dist', 'cli.mjs')

  if (existsSync(distPath)) {
- await import(getDistImportSpecifier(__dirname))
+ await import(pathToFileURL(distPath).href)
  } else {
  console.error(`
  openclaude: dist/cli.mjs not found.
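The `bin/openclaude` change drops a local `import-specifier.mjs` helper in favor of Node's built-in `pathToFileURL`. A minimal sketch of the pattern (the `/opt/app/...` path is hypothetical, used only for illustration):

```javascript
// Dynamic import() takes a module specifier, and converting an absolute
// filesystem path to a file:// URL is the portable way to import it --
// a raw Windows path such as C:\app\cli.mjs would otherwise be misread
// as a URL with scheme "c:".
import { pathToFileURL } from 'url'

const distPath = '/opt/app/dist/cli.mjs' // hypothetical absolute path
const url = pathToFileURL(distPath)      // WHATWG URL object

console.log(url.protocol) // file:
console.log(url.href)     // file:// URL for distPath
```

Using the built-in also removes one file from the published package, which is likely why the helper module disappears from this diff.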