jbai-cli 1.8.2 → 1.9.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,7 +2,7 @@
 
 **Use AI coding tools with your JetBrains AI subscription** — no separate API keys needed.
 
-One token, all tools: Claude Code, Codex, Aider, Gemini CLI, OpenCode, **Codex Desktop, Cursor**.
+One token, all tools: Claude Code, Codex, Gemini CLI, OpenCode, Goose, Continue, **Codex Desktop, Cursor**.
 
 ## Install
 
@@ -149,17 +149,27 @@ jbai-codex
 jbai-codex exec "explain this codebase"
 ```
 
-### Aider
+### OpenCode
+```bash
+jbai-opencode
+```
+
+### Goose (Block)
 ```bash
-jbai-aider
+# Interactive session
+jbai-goose
 
-# Use Gemini models with Aider
-jbai-aider --model gemini/gemini-2.5-pro
+# One-shot task
+jbai-goose run -t "explain this codebase"
 ```
 
-### OpenCode
+### Continue CLI
 ```bash
-jbai-opencode
+# Interactive TUI
+jbai-continue
+
+# One-shot (print and exit)
+jbai-continue -p "explain this function"
 ```
 
 ### Handoff to Orca Lab (local)
@@ -179,7 +189,7 @@ jbai handoff --task "add new e2e test" \
 ```
 
 ### In-session handoff (interactive tools)
-While running `jbai-codex`, `jbai-claude`, `jbai-gemini`, or `jbai-opencode`:
+While running `jbai-codex`, `jbai-claude`, `jbai-gemini`, `jbai-opencode`, `jbai-goose`, or `jbai-continue`:
 - Press `Ctrl+]` to trigger a handoff to Orca Lab.
 - The last prompt you typed is used as the task.
 
@@ -206,67 +216,78 @@ jbai-claude --super
 # Codex - full auto mode
 jbai-codex --super exec "refactor this code"
 
-# Aider - auto-confirm all changes
-jbai-aider --super
 ```
 
 | Tool | Super Mode Flag |
 |------|-----------------|
 | Claude Code | `--dangerously-skip-permissions` |
 | Codex | `--full-auto` |
-| Aider | `--yes` |
 | Gemini CLI | `--yolo` |
 | OpenCode | N/A (run mode is already non-interactive) |
+| Goose | `GOOSE_MODE=auto` |
+| Continue CLI | `--auto` |
 
 ## Using Different Models
 
 Each tool has a sensible default, but you can specify any available model:
 
 ```bash
-# Claude with Opus
-jbai-claude --model claude-opus-4-1-20250805
+# Claude with Opus 4.6
+jbai-claude --model claude-opus-4-6
 
-# Codex with GPT-5
-jbai-codex --model gpt-5.2-codex
+# Codex with GPT-5.3
+jbai-codex --model gpt-5.3-codex
 
-# Aider with Gemini Pro
-jbai-aider --model gemini/gemini-2.5-pro
+# Goose with GPT-5.2
+jbai-goose run -t "your task" --provider openai --model gpt-5.2-2025-12-11
+
+# Continue with Claude Opus 4.6
+jbai-continue # select model in TUI
 ```
 
 ### Available Models
 
-**Claude (Anthropic)**
+**Claude (Anthropic)** - Default for Goose, Continue
 | Model | Notes |
 |-------|-------|
-| `claude-sonnet-4-5-20250929` | Default, recommended |
-| `claude-opus-4-1-20250805` | Most capable |
+| `claude-sonnet-4-5-20250929` | Default |
+| `claude-opus-4-6` | Most capable (latest) |
+| `claude-opus-4-5-20251101` | |
+| `claude-opus-4-1-20250805` | |
 | `claude-sonnet-4-20250514` | |
+| `claude-haiku-4-5-20251001` | Fast |
 | `claude-3-7-sonnet-20250219` | |
-| `claude-3-5-haiku-20241022` | Fast |
+| `claude-3-5-haiku-20241022` | Fastest |
 
-**GPT (OpenAI)**
+**GPT (OpenAI Chat)** - Default for OpenCode
 | Model | Notes |
 |-------|-------|
-| `gpt-4o-2024-11-20` | Default |
-| `gpt-5-2025-08-07` | Latest |
+| `gpt-5.2-2025-12-11` | Default, latest |
+| `gpt-5.2` | Alias |
 | `gpt-5.1-2025-11-13` | |
-| `gpt-5.2-2025-12-11` | |
+| `gpt-5-2025-08-07` | |
 | `gpt-5-mini-2025-08-07` | Fast |
+| `gpt-5-nano-2025-08-07` | Fastest |
+| `gpt-4.1-2025-04-14` | |
+| `o4-mini-2025-04-16` | Reasoning |
 | `o3-2025-04-16` | Reasoning |
-| `o3-mini-2025-01-31` | |
 
 **Codex (OpenAI Responses)** - Use with Codex CLI: `jbai-codex --model <model>`
 | Model | Notes |
 |-------|-------|
-| `gpt-5.2-codex` | Default, coding-optimized |
+| `gpt-5.3-codex` | Default, latest |
+| `gpt-5.3-codex-api-preview` | Alias |
+| `gpt-5.2-codex` | Coding-optimized |
+| `gpt-5.2-pro-2025-12-11` | |
 | `gpt-5.1-codex` | |
-| `gpt-5.1-codex-mini` | Faster |
-| `gpt-5.1-codex-max` | |
+| `gpt-5.1-codex-max` | Most capable |
+| `gpt-5.1-codex-mini` | Fast |
+| `gpt-5-codex` | |
 
-**Gemini (Google)** - Use with Aider: `jbai-aider --model gemini/<model>`
+**Gemini (Google)** - Use with Gemini CLI: `jbai-gemini`
 | Model | Notes |
 |-------|-------|
-| `gemini-2.5-flash` | Fast |
+| `gemini-2.5-flash` | Default, fast |
 | `gemini-2.5-pro` | More capable |
 | `gemini-3-pro-preview` | Preview |
 | `gemini-3-flash-preview` | Preview |
@@ -301,8 +322,6 @@ jbai install
 # Install specific tool
 jbai install claude
 jbai install codex
-jbai install aider
-
 # Check what's installed
 jbai doctor
 ```
@@ -313,8 +332,9 @@ jbai doctor
 |------|-----------------|
 | Claude Code | `npm i -g @anthropic-ai/claude-code` |
 | Codex | `npm i -g @openai/codex` |
-| Aider | `pipx install aider-chat` |
-| OpenCode | `go install github.com/opencode-ai/opencode@latest` |
+| OpenCode | `npm i -g opencode-ai` |
+| Goose | `brew install block-goose-cli` |
+| Continue CLI | `npm i -g @continuedev/cli` |
 
 ## Token Management
 
package/bin/jbai-continue.js ADDED
@@ -0,0 +1,190 @@
+#!/usr/bin/env node
+
+/**
+ * jbai-continue - Continue CLI wrapper for JetBrains AI Platform
+ *
+ * Generates ~/.continue/config.yaml with Grazie models via jbai-proxy,
+ * then launches `cn` (Continue CLI).
+ */
+
+const { runWithHandoff, stripHandoffFlag } = require('../lib/interactive-handoff');
+const fs = require('fs');
+const path = require('path');
+const os = require('os');
+const http = require('http');
+const config = require('../lib/config');
+const { ensureToken } = require('../lib/ensure-token');
+
+const PROXY_PORT = 18080;
+
+function isProxyRunning() {
+  return new Promise((resolve) => {
+    const req = http.get(`http://127.0.0.1:${PROXY_PORT}/health`, { timeout: 1000 }, (res) => {
+      let body = '';
+      res.on('data', chunk => body += chunk);
+      res.on('end', () => {
+        try {
+          const info = JSON.parse(body);
+          resolve(info.status === 'ok');
+        } catch {
+          resolve(false);
+        }
+      });
+    });
+    req.on('error', () => resolve(false));
+    req.on('timeout', () => { req.destroy(); resolve(false); });
+  });
+}
+
+function startProxy() {
+  const { spawn } = require('child_process');
+  const proxyScript = path.join(__dirname, 'jbai-proxy.js');
+  const child = spawn(process.execPath, [proxyScript, '--port', String(PROXY_PORT), '--_daemon'], {
+    detached: true,
+    stdio: 'ignore',
+  });
+  child.unref();
+}
+
+function buildContinueConfig() {
+  const proxyBase = `http://127.0.0.1:${PROXY_PORT}`;
+  const models = [];
+
+  // Add Claude models first (default provider)
+  config.MODELS.claude.available.forEach(model => {
+    const isDefault = model === config.MODELS.claude.default;
+    models.push({
+      name: isDefault ? `${model} (default)` : model,
+      provider: 'anthropic',
+      model,
+      apiBase: `${proxyBase}/anthropic/v1`,
+      apiKey: 'placeholder',
+      roles: ['chat', 'edit'],
+    });
+  });
+
+  // Add OpenAI models via proxy
+  config.MODELS.openai.available.forEach(model => {
+    models.push({
+      name: model,
+      provider: 'openai',
+      model,
+      apiBase: `${proxyBase}/openai/v1`,
+      apiKey: 'placeholder',
+      roles: ['chat', 'edit'],
+    });
+  });
+
+  const defaultModel = `${config.MODELS.claude.default} (default)`;
+
+  return {
+    name: 'JetBrains AI',
+    version: '0.0.1',
+    schema: 'v1',
+    defaultModel,
+    models,
+  };
+}
+
+function toYaml(obj, indent = 0) {
+  const pad = ' '.repeat(indent);
+  let out = '';
+  if (Array.isArray(obj)) {
+    for (const item of obj) {
+      if (typeof item === 'object' && item !== null) {
+        out += `${pad}-\n`;
+        out += toYaml(item, indent + 2);
+      } else {
+        out += `${pad}- ${item}\n`;
+      }
+    }
+  } else if (typeof obj === 'object' && obj !== null) {
+    for (const [key, val] of Object.entries(obj)) {
+      if (Array.isArray(val)) {
+        out += `${pad}${key}:\n`;
+        out += toYaml(val, indent + 2);
+      } else if (typeof val === 'object' && val !== null) {
+        out += `${pad}${key}:\n`;
+        out += toYaml(val, indent + 2);
+      } else {
+        out += `${pad}${key}: ${JSON.stringify(val)}\n`;
+      }
+    }
+  }
+  return out;
+}
+
+(async () => {
+  const token = await ensureToken();
+  const environment = config.getEnvironment();
+  let args = process.argv.slice(2);
+  const handoffConfig = stripHandoffFlag(args);
+  args = handoffConfig.args;
+
+  // Check for super mode (--super, --yolo, -s)
+  const superFlags = ['--super', '--yolo', '-s'];
+  const superMode = args.some(a => superFlags.includes(a));
+  args = args.filter(a => !superFlags.includes(a));
+
+  // Auto-start proxy if not running
+  if (!await isProxyRunning()) {
+    startProxy();
+    await new Promise(r => setTimeout(r, 500));
+  }
+
+  // Write Continue config
+  const configDir = path.join(os.homedir(), '.continue');
+  const configFile = path.join(configDir, 'config.yaml');
+
+  if (!fs.existsSync(configDir)) {
+    fs.mkdirSync(configDir, { recursive: true });
+  }
+
+  const continueConfig = buildContinueConfig();
+  const yaml = `# Auto-generated by jbai-cli — do not edit manually\n${toYaml(continueConfig)}`;
+  fs.writeFileSync(configFile, yaml);
+
+  let finalArgs = [];
+
+  // Add auto mode (allow all tools) if super
+  if (superMode) {
+    finalArgs.push('--auto');
+    console.log('🚀 Super mode: --auto (all tools allowed) enabled');
+  }
+
+  finalArgs.push(...args);
+
+  const childEnv = {
+    ...process.env,
+    OPENAI_API_KEY: 'placeholder',
+    ANTHROPIC_API_KEY: 'placeholder',
+  };
+
+  const child = runWithHandoff({
+    command: 'cn',
+    args: finalArgs,
+    env: childEnv,
+    toolName: 'jbai-continue',
+    handoffDefaults: {
+      enabled: !handoffConfig.disabled,
+      grazieToken: token,
+      grazieEnvironment: environment === 'production' ? 'PRODUCTION' : 'STAGING',
+      grazieModel: config.MODELS.claude.default,
+      cwd: process.cwd(),
+    },
+  });
+
+  if (child && typeof child.on === 'function') {
+    child.on('error', (err) => {
+      if (err.code === 'ENOENT') {
+        const tool = config.TOOLS.continue;
+        console.error(`❌ ${tool.name} not found.\n`);
+        console.error(`Install with: ${tool.install}`);
+        console.error(`Or run: jbai install continue`);
+      } else {
+        console.error(`Error: ${err.message}`);
+      }
+      process.exit(1);
+    });
+  }
+})();
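The wrapper ships its own minimal YAML emitter (`toYaml` above) instead of pulling in a dependency; it only needs to handle the plain-object/array/scalar shapes that `buildContinueConfig` produces. A quick standalone check of the shapes it emits (this snippet is a lightly condensed copy for illustration, not part of the package — the Array and object branches in the original produce identical output):

```javascript
// Condensed copy of the package's minimal YAML emitter, for illustration only.
function toYaml(obj, indent = 0) {
  const pad = ' '.repeat(indent);
  let out = '';
  if (Array.isArray(obj)) {
    for (const item of obj) {
      if (typeof item === 'object' && item !== null) {
        out += `${pad}-\n` + toYaml(item, indent + 2);
      } else {
        out += `${pad}- ${item}\n`;
      }
    }
  } else if (typeof obj === 'object' && obj !== null) {
    for (const [key, val] of Object.entries(obj)) {
      if (typeof val === 'object' && val !== null) {
        out += `${pad}${key}:\n` + toYaml(val, indent + 2);
      } else {
        out += `${pad}${key}: ${JSON.stringify(val)}\n`; // scalars JSON-quoted
      }
    }
  }
  return out;
}

const sample = {
  name: 'JetBrains AI',
  models: [{ provider: 'anthropic', roles: ['chat', 'edit'] }],
};
console.log(toYaml(sample));
// name: "JetBrains AI"
// models:
//   -
//     provider: "anthropic"
//     roles:
//       - chat
//       - edit
```

Note that array items are emitted unquoted while object values go through `JSON.stringify`; that is fine for the model names this config contains, but the emitter is not a general-purpose YAML serializer.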
package/bin/jbai-goose.js ADDED
@@ -0,0 +1,145 @@
+#!/usr/bin/env node
+
+/**
+ * jbai-goose - Goose CLI wrapper for JetBrains AI Platform
+ *
+ * Uses jbai-proxy on localhost:18080 to route requests through Grazie.
+ * Goose uses OPENAI_HOST + OPENAI_API_KEY env vars for the OpenAI provider.
+ */
+
+const { runWithHandoff, stripHandoffFlag } = require('../lib/interactive-handoff');
+const path = require('path');
+const http = require('http');
+const config = require('../lib/config');
+const { ensureToken } = require('../lib/ensure-token');
+
+const PROXY_PORT = 18080;
+
+function isProxyRunning() {
+  return new Promise((resolve) => {
+    const req = http.get(`http://127.0.0.1:${PROXY_PORT}/health`, { timeout: 1000 }, (res) => {
+      let body = '';
+      res.on('data', chunk => body += chunk);
+      res.on('end', () => {
+        try {
+          const info = JSON.parse(body);
+          resolve(info.status === 'ok');
+        } catch {
+          resolve(false);
+        }
+      });
+    });
+    req.on('error', () => resolve(false));
+    req.on('timeout', () => { req.destroy(); resolve(false); });
+  });
+}
+
+function startProxy() {
+  const { spawn } = require('child_process');
+  const proxyScript = path.join(__dirname, 'jbai-proxy.js');
+  const child = spawn(process.execPath, [proxyScript, '--port', String(PROXY_PORT), '--_daemon'], {
+    detached: true,
+    stdio: 'ignore',
+  });
+  child.unref();
+}
+
+(async () => {
+  const token = await ensureToken();
+  const environment = config.getEnvironment();
+  let args = process.argv.slice(2);
+  const handoffConfig = stripHandoffFlag(args);
+  args = handoffConfig.args;
+
+  // Check for super mode (--super, --yolo, -s)
+  const superFlags = ['--super', '--yolo', '-s'];
+  const superMode = args.some(a => superFlags.includes(a));
+  args = args.filter(a => !superFlags.includes(a));
+
+  // Auto-start proxy if not running
+  if (!await isProxyRunning()) {
+    startProxy();
+    await new Promise(r => setTimeout(r, 500));
+  }
+
+  // Determine if this is a "run" or "session" command
+  const isRun = args[0] === 'run';
+  const isSession = args[0] === 'session' || args.length === 0;
+
+  // Check if model specified via args or env
+  const hasModel = args.includes('--model');
+  const hasProvider = args.includes('--provider');
+
+  let finalArgs = [...args];
+
+  // Default to session if no subcommand given
+  if (args.length === 0 || (!isRun && !isSession && !['configure', 'info', 'update', 'help'].includes(args[0]))) {
+    if (args.length === 0) {
+      finalArgs = ['session'];
+    }
+  }
+
+  // Determine provider from --model flag or default to anthropic (Claude)
+  const modelIdx = args.indexOf('--model');
+  const requestedModel = modelIdx >= 0 ? args[modelIdx + 1] : null;
+  const isClaudeModel = requestedModel
+    ? requestedModel.startsWith('claude')
+    : true; // default to Claude
+
+  const defaultProvider = isClaudeModel ? 'anthropic' : 'openai';
+  const defaultModel = isClaudeModel ? config.MODELS.claude.default : config.MODELS.openai.default;
+
+  // Inject provider and model if not specified
+  if (!hasProvider && (isSession || isRun || args.length === 0)) {
+    finalArgs.push('--provider', defaultProvider);
+  }
+  if (!hasModel && (isSession || isRun || args.length === 0)) {
+    finalArgs.push('--model', defaultModel);
+  }
+
+  // Add super mode (auto-approve all tool calls)
+  if (superMode) {
+    console.log('🚀 Super mode: auto-approve enabled');
+  }
+
+  const childEnv = {
+    ...process.env,
+    // Configure both providers so --provider/--model can switch freely
+    OPENAI_HOST: `http://127.0.0.1:${PROXY_PORT}/openai`,
+    OPENAI_API_KEY: 'placeholder',
+    ANTHROPIC_HOST: `http://127.0.0.1:${PROXY_PORT}/anthropic`,
+    ANTHROPIC_API_KEY: 'placeholder',
+    GOOSE_PROVIDER: defaultProvider,
+    GOOSE_MODEL: defaultModel,
+    // Auto-approve mode if super
+    ...(superMode ? { GOOSE_MODE: 'auto' } : {}),
+  };
+
+  const child = runWithHandoff({
+    command: 'goose',
+    args: finalArgs,
+    env: childEnv,
+    toolName: 'jbai-goose',
+    handoffDefaults: {
+      enabled: !handoffConfig.disabled,
+      grazieToken: token,
+      grazieEnvironment: environment === 'production' ? 'PRODUCTION' : 'STAGING',
+      grazieModel: config.MODELS.claude.default,
+      cwd: process.cwd(),
+    },
+  });
+
+  if (child && typeof child.on === 'function') {
+    child.on('error', (err) => {
+      if (err.code === 'ENOENT') {
+        const tool = config.TOOLS.goose;
+        console.error(`❌ ${tool.name} not found.\n`);
+        console.error(`Install with: ${tool.install}`);
+        console.error(`Or run: jbai install goose`);
+      } else {
+        console.error(`Error: ${err.message}`);
+      }
+      process.exit(1);
+    });
+  }
+})();
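The Goose wrapper's provider/model defaulting boils down to one rule: a `claude*` model name (or no `--model` at all) selects the Anthropic provider, anything else selects OpenAI. Isolated as a pure function (a hypothetical extraction for illustration, not an export of the package; the default model names below are assumed to mirror this release's `lib/config.js`):

```javascript
// Hypothetical extraction of jbai-goose's provider/model defaulting rule.
// Default model names are assumptions based on this release's lib/config.js.
const DEFAULTS = {
  anthropic: 'claude-sonnet-4-5-20250929',
  openai: 'gpt-5.2-2025-12-11',
};

function resolveProviderAndModel(args) {
  const modelIdx = args.indexOf('--model');
  const requestedModel = modelIdx >= 0 ? args[modelIdx + 1] : null;
  // No --model, or a claude* model -> Anthropic; anything else -> OpenAI.
  const isClaude = requestedModel ? requestedModel.startsWith('claude') : true;
  const provider = isClaude ? 'anthropic' : 'openai';
  return { provider, model: requestedModel || DEFAULTS[provider] };
}

console.log(resolveProviderAndModel(['run', '--model', 'gpt-5.2-2025-12-11']));
// { provider: 'openai', model: 'gpt-5.2-2025-12-11' }
```

The same pair is then injected twice — as `--provider`/`--model` CLI flags and as `GOOSE_PROVIDER`/`GOOSE_MODEL` environment variables — so Goose picks it up regardless of which it consults first.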
package/bin/jbai-opencode.js CHANGED
@@ -106,8 +106,20 @@ const { ensureToken } = require('../lib/ensure-token');
 // format (used by jbai-gemini), not OpenAI chat/completions format.
 // Use `jbai gemini` instead for Gemini models.
 
-// Enable max permissions (yolo mode) by default
-opencodeConfig.yolo = true;
+// Enable max permissions for the build agent (allow all tools without asking)
+if (!opencodeConfig.agent) {
+  opencodeConfig.agent = {};
+}
+if (!opencodeConfig.agent.build) {
+  opencodeConfig.agent.build = {};
+}
+opencodeConfig.agent.build.permission = 'allow';
+
+// Only show JetBrains AI providers in the model picker
+opencodeConfig.enabled_providers = [providerName, anthropicProviderName];
+
+// Clean up legacy keys that newer OpenCode versions reject
+delete opencodeConfig.yolo;
 
 // Write config
 fs.writeFileSync(configFile, JSON.stringify(opencodeConfig, null, 2));
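The net effect of this change is that the removed top-level `yolo` flag is replaced by a per-agent permission plus a provider allow-list. Applied to a config object, the mutation looks like this (a sketch; `upgradeOpencodeConfig` is a hypothetical helper, and the provider ids passed in are stand-ins for whatever `providerName`/`anthropicProviderName` hold in the wrapper):

```javascript
// Sketch of the config mutation jbai-opencode performs in this release.
// Function name and provider ids are illustrative, not the package's real values.
function upgradeOpencodeConfig(cfg, providerName, anthropicProviderName) {
  cfg.agent = cfg.agent || {};
  cfg.agent.build = cfg.agent.build || {};
  cfg.agent.build.permission = 'allow';          // allow all tools for the build agent
  cfg.enabled_providers = [providerName, anthropicProviderName]; // restrict model picker
  delete cfg.yolo;                               // legacy key newer OpenCode versions reject
  return cfg;
}

const upgraded = upgradeOpencodeConfig({ yolo: true }, 'jbai-openai', 'jbai-anthropic');
console.log(JSON.stringify(upgraded));
// {"agent":{"build":{"permission":"allow"}},"enabled_providers":["jbai-openai","jbai-anthropic"]}
```

Because the wrapper guards with `if (!opencodeConfig.agent)` rather than overwriting, any other agent settings a user already has in the file survive the rewrite.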
package/bin/jbai-proxy.js CHANGED
@@ -68,6 +68,10 @@ function resolveRoute(method, urlPath) {
 
   // Explicit provider prefix routes
   if (urlPath.startsWith('/openai/')) {
+    // Intercept /openai/v1/models → return synthetic list (Grazie doesn't list codex models)
+    if (urlPath === '/openai/v1/models') {
+      return { target: null, provider: 'models' };
+    }
     const rest = urlPath.slice('/openai'.length); // keeps /v1/...
     return { target: endpoints.openai.replace(/\/v1$/, '') + rest, provider: 'openai' };
   }
@@ -132,7 +136,8 @@ function buildModelsResponse() {
 // Codex CLI model picker response (matches chatgpt.com/backend-api/codex/models format)
 function buildCodexModelsResponse() {
   const descriptions = {
-    'gpt-5.3-codex-api-preview': 'Latest GPT-5.3 Codex model. Designed for long-running, detailed software engineering tasks.',
+    'gpt-5.3-codex': 'Latest GPT-5.3 Codex model. Designed for long-running, detailed software engineering tasks.',
+    'gpt-5.3-codex-api-preview': 'GPT-5.3 Codex (api-preview alias).',
     'gpt-5.2-codex': 'Latest frontier agentic coding model.',
     'gpt-5.2-pro-2025-12-11': 'GPT-5.2 Pro for deep reasoning and complex tasks.',
     'gpt-5.2-2025-12-11': 'Latest frontier model with improvements across knowledge, reasoning and coding.',
@@ -245,7 +250,23 @@ function proxy(req, res) {
   const chunks = [];
   req.on('data', (chunk) => chunks.push(chunk));
   req.on('end', () => {
-    const body = Buffer.concat(chunks);
+    let body = Buffer.concat(chunks);
+
+    // Rewrite model aliases so Grazie accepts the request
+    if (body.length > 0 && (req.headers['content-type'] || '').includes('application/json')) {
+      try {
+        const parsed = JSON.parse(body.toString('utf-8'));
+        if (parsed.model && config.MODEL_ALIASES[parsed.model]) {
+          const original = parsed.model;
+          parsed.model = config.MODEL_ALIASES[parsed.model];
+          body = Buffer.from(JSON.stringify(parsed), 'utf-8');
+          log(`[alias] Rewrote model "${original}" → "${parsed.model}"`);
+        }
+      } catch {
+        // Not valid JSON or parse error — forward as-is
+      }
+    }
+
     const targetUrl = new URL(route.target + (query ? '?' + query : ''));
 
     // Build forwarded headers - pass through everything except host/authorization
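Stripped of the HTTP plumbing, the proxy's alias handling is a pure transformation on the request body: parse JSON, swap the model name through the alias map, re-serialize, and fall back to forwarding the body untouched on any parse failure. A sketch with this release's single alias inlined (the standalone `rewriteModelAlias` function is illustrative, not the proxy's actual API):

```javascript
// Sketch of jbai-proxy's model-alias rewrite, isolated from the HTTP handling.
// MODEL_ALIASES mirrors lib/config.js in this release.
const MODEL_ALIASES = { 'gpt-5.3-codex': 'gpt-5.3-codex-api-preview' };

function rewriteModelAlias(bodyBuffer, contentType) {
  if (bodyBuffer.length === 0 || !(contentType || '').includes('application/json')) {
    return bodyBuffer; // nothing to rewrite
  }
  try {
    const parsed = JSON.parse(bodyBuffer.toString('utf-8'));
    if (parsed.model && MODEL_ALIASES[parsed.model]) {
      parsed.model = MODEL_ALIASES[parsed.model];
      return Buffer.from(JSON.stringify(parsed), 'utf-8');
    }
  } catch {
    // Not valid JSON — forward unchanged, as the proxy does.
  }
  return bodyBuffer;
}

const out = rewriteModelAlias(Buffer.from('{"model":"gpt-5.3-codex"}'), 'application/json');
console.log(out.toString()); // {"model":"gpt-5.3-codex-api-preview"}
```

Re-serializing changes the byte length of the body, which is why the real handler rebuilds the `Buffer` before forwarding — a downstream `Content-Length` computed from the original body would otherwise be wrong.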
package/bin/jbai.js CHANGED
@@ -30,6 +30,18 @@ const TOOLS = {
     command: 'opencode',
     install: 'npm install -g opencode-ai',
     check: 'opencode --version'
+  },
+  goose: {
+    name: 'Goose',
+    command: 'goose',
+    install: 'brew install block-goose-cli',
+    check: 'goose --version'
+  },
+  continue: {
+    name: 'Continue CLI',
+    command: 'cn',
+    install: 'npm install -g @continuedev/cli',
+    check: 'cn --version'
   }
 };
 
@@ -46,7 +58,7 @@ COMMANDS:
   jbai handoff                    Continue task in Orca Lab
   jbai env [staging|production]   Switch environment
   jbai models                     List available models
-  jbai install                    Install all AI tools (claude, codex, gemini, opencode)
+  jbai install                    Install all AI tools (claude, codex, gemini, opencode, goose, continue)
   jbai install claude             Install specific tool
   jbai doctor                     Check which tools are installed
   jbai help                       Show this help
@@ -300,7 +312,7 @@ function showModels() {
     console.log(`  - ${m}${def}`);
   });
 
-  console.log('\nGPT (OpenAI Chat) - jbai-aider, jbai-opencode:');
+  console.log('\nGPT (OpenAI Chat) - jbai-opencode:');
   config.MODELS.openai.available.forEach((m) => {
     const def = m === config.MODELS.openai.default ? ' (default)' : '';
     console.log(`  - ${m}${def}`);
package/lib/config.js CHANGED
@@ -48,11 +48,12 @@ const MODELS = {
     ]
   },
   openai: {
-    // Chat/Completions models (used by Aider/OpenCode)
+    // Chat/Completions models (used by OpenCode)
     // Keep in sync with the OpenAI proxy's advertised list.
-    default: 'gpt-4o-2024-11-20',
+    default: 'gpt-5.2-2025-12-11',
    available: [
      // GPT-5.x series (latest) - require date-versioned names
+      // NOTE: gpt-5.3-codex / gpt-5.3-codex-api-preview are Responses-API-only → use via jbai codex
      'gpt-5.2-2025-12-11',
      'gpt-5.2',
      'gpt-5.1-2025-11-13',
@@ -80,9 +81,10 @@ const MODELS = {
   // Codex CLI uses OpenAI models via the "responses" API (wire_api = "responses")
   // Includes chat-capable models PLUS codex-only models (responses API only)
   codex: {
-    default: 'gpt-5.3-codex-api-preview',
+    default: 'gpt-5.3-codex',
    available: [
      // Codex-specific models (responses API only, NOT available via chat/completions)
+      'gpt-5.3-codex',
      'gpt-5.3-codex-api-preview',
      // GPT-5.x chat models (also work via responses API)
      'gpt-5.2-2025-12-11',
@@ -120,6 +122,12 @@ const MODELS = {
   // They are not supported by CLI tools that use OpenAI API format.
 };
 
+// Model aliases: new Codex CLI sends short names that Grazie doesn't recognise yet.
+// Map them to the Grazie-accepted equivalents so the proxy can rewrite on the fly.
+const MODEL_ALIASES = {
+  'gpt-5.3-codex': 'gpt-5.3-codex-api-preview',
+};
+
 // All models for tools that support multiple providers (OpenCode, Codex)
 const ALL_MODELS = {
   openai: MODELS.openai.available,
@@ -295,6 +303,16 @@ const TOOLS = {
     name: 'OpenCode',
     command: 'opencode',
     install: 'npm install -g opencode-ai'
+  },
+  goose: {
+    name: 'Goose',
+    command: 'goose',
+    install: 'brew install block-goose-cli'
+  },
+  continue: {
+    name: 'Continue CLI',
+    command: 'cn',
+    install: 'npm install -g @continuedev/cli'
   }
 };
 
@@ -304,6 +322,7 @@ module.exports = {
   CONFIG_FILE,
   ENDPOINTS,
   MODELS,
+  MODEL_ALIASES,
   ALL_MODELS,
   TOOLS,
   ensureConfigDir,
@@ -41,8 +41,10 @@ Next steps:
 3. Start using AI tools:
    $ jbai-claude    # Claude Code
    $ jbai-codex     # OpenAI Codex
-   $ jbai-aider     # Aider
-   $ jbai-gemini    # Gemini
+   $ jbai-opencode  # OpenCode
+   $ jbai-gemini    # Gemini CLI
+   $ jbai-goose     # Goose (Block)
+   $ jbai-continue  # Continue CLI
 
 Run 'jbai help' for more options.
 `);
package/package.json CHANGED
@@ -1,7 +1,7 @@
 {
   "name": "jbai-cli",
-  "version": "1.8.2",
-  "description": "CLI wrappers to use AI coding tools (Claude Code, Codex, Gemini CLI, OpenCode) with JetBrains AI Platform",
+  "version": "1.9.2",
+  "description": "CLI wrappers to use AI coding tools (Claude Code, Codex, Gemini CLI, OpenCode, Goose, Continue) with JetBrains AI Platform",
   "keywords": [
     "jetbrains",
     "ai",
@@ -9,6 +9,8 @@
     "codex",
     "gemini",
     "opencode",
+    "goose",
+    "continue",
     "cli",
     "openai",
     "anthropic",
@@ -30,7 +32,9 @@
     "jbai-claude": "bin/jbai-claude.js",
     "jbai-codex": "bin/jbai-codex.js",
     "jbai-gemini": "bin/jbai-gemini.js",
-    "jbai-opencode": "bin/jbai-opencode.js"
+    "jbai-opencode": "bin/jbai-opencode.js",
+    "jbai-goose": "bin/jbai-goose.js",
+    "jbai-continue": "bin/jbai-continue.js"
   },
   "files": [
     "bin/",