jbai-cli 1.9.6 → 2.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -12,6 +12,14 @@ npm install -g jbai-cli
 
 ## Setup (2 minutes)
 
+`jbai` now opens an interactive control panel by default, so you can manage everything from one place:
+
+```bash
+jbai
+```
+
+It still supports direct commands for scripts and automation (for example `jbai token set`, `jbai test`, `jbai proxy setup`, `jbai install`).
+
 ### Step 1: Get your token
 
 1. Go to [platform.jetbrains.ai](https://platform.jetbrains.ai/) (or [staging](https://platform.stgn.jetbrains.ai/))
@@ -172,39 +180,6 @@ jbai-continue
 jbai-continue -p "explain this function"
 ```
 
-### Handoff to Orca Lab (local)
-```bash
-# Continue a task in Orca Lab via local facade
-jbai handoff --task "continue this work in orca-lab"
-```
-
-### Handoff to Orca Lab (nightly/staging)
-```bash
-export ORCA_LAB_URL="https://orca-lab-nightly.labs.jb.gg"
-export FACADE_JWT_TOKEN="..." # required for /api/handoff
-export GITHUB_TOKEN="..." # repo clone during provisioning
-
-jbai handoff --task "add new e2e test" \
-  --repo "https://github.com/JetBrains/jcp-orca-facade.git"
-```
-
-### In-session handoff (interactive tools)
-While running `jbai-codex`, `jbai-claude`, `jbai-gemini`, `jbai-opencode`, `jbai-goose`, or `jbai-continue`:
-- Press `Ctrl+]` to trigger a handoff to Orca Lab.
-- The last prompt you typed is used as the task.
-
-Optional environment variables:
-- `ORCA_LAB_URL` (default: `http://localhost:3000`)
-- `FACADE_JWT_TOKEN` (required for /api/handoff on hosted Orca Lab)
-- `GITHUB_TOKEN` / `GH_TOKEN` (private repos)
-- `JBAI_HANDOFF_TASK` (fallback task if no prompt captured)
-- `JBAI_HANDOFF_REPO` (override repo URL)
-- `JBAI_HANDOFF_REF` (override git ref)
-- `JBAI_HANDOFF_BRANCH` (override working branch)
-- `JBAI_HANDOFF_ENV` (STAGING | PREPROD | PRODUCTION)
-- `JBAI_HANDOFF_MODEL` (Claude model for Orca Lab agent)
-- `JBAI_HANDOFF_OPEN` (set to `false` to avoid opening a browser)
-
 ## Super Mode (Skip Confirmations)
 
 Add `--super` (or `--yolo` or `-s`) to any command to enable maximum permissions:
@@ -231,12 +206,15 @@ jbai-codex --super exec "refactor this code"
 
 Each tool has a sensible default, but you can specify any available model:
 
+- `jbai-opencode` default: `gpt-5.4` with `xhigh` reasoning (`--variant xhigh`)
+- `jbai-codex` default: `gpt-5.4` with `xhigh` reasoning effort
+
 ```bash
 # Claude with Opus 4.6
 jbai-claude --model claude-opus-4-6
 
-# Codex with GPT-5.3
-jbai-codex --model gpt-5.3-codex-api-preview
+# Codex with GPT-5.4
+jbai-codex --model gpt-5.4
 
 # Goose with GPT-5.2
 jbai-goose run -t "your task" --provider openai --model gpt-5.2-2025-12-11
@@ -262,7 +240,8 @@ jbai-continue # select model in TUI
 **GPT (OpenAI Chat)** - Default for OpenCode
 | Model | Notes |
 |-------|-------|
-| `gpt-5.2-2025-12-11` | Default, latest |
+| `gpt-5.4` | Default, latest |
+| `gpt-5.2-2025-12-11` | |
 | `gpt-5.2` | Alias |
 | `gpt-5.1-2025-11-13` | |
 | `gpt-5-2025-08-07` | |
@@ -275,7 +254,8 @@ jbai-continue # select model in TUI
 **Codex (OpenAI Responses)** - Use with Codex CLI: `jbai-codex --model <model>`
 | Model | Notes |
 |-------|-------|
-| `gpt-5.3-codex-api-preview` | Default, latest |
+| `gpt-5.4` | Default, latest |
+| `gpt-5.3-codex-api-preview` | |
 | `gpt-5.2-codex` | Coding-optimized |
 | `gpt-5.2-pro-2025-12-11` | |
 | `gpt-5.1-codex` | |
@@ -295,6 +275,8 @@ jbai-continue # select model in TUI
 
 | Command | Description |
 |---------|-------------|
+| `jbai` | Open interactive control panel |
+| `jbai menu` | Open interactive control panel |
 | `jbai help` | Show help |
 | `jbai token` | Show token status |
 | `jbai token set` | Set/update token |
@@ -303,13 +285,27 @@ jbai-continue # select model in TUI
 | `jbai proxy setup` | Setup proxy + configure Codex Desktop |
 | `jbai proxy status` | Check proxy status |
 | `jbai proxy stop` | Stop proxy |
-| `jbai handoff` | Continue a task in Orca Lab |
 | `jbai install` | Install all AI tools |
 | `jbai install claude` | Install specific tool |
 | `jbai doctor` | Check tool installation status |
 | `jbai env staging` | Use staging environment |
 | `jbai env production` | Use production environment |
 
+## Interactive Control Panel
+
+Running `jbai` with no arguments opens a terminal menu with fast access to:
+
+- Token management (show, set, refresh)
+- Environment switching (staging / production)
+- Agent installation
+- Client wiring (`jbai proxy setup` + Codex/Desktop env)
+- Health check (`doctor`)
+- Agent launch (Claude / Codex / OpenCode / Gemini / Goose / Continue)
+- Update / uninstall commands
+- Version info
+
+Use `0` to exit the menu.
+
 ## Installing AI Tools
 
 jbai-cli can install the underlying tools for you:
@@ -0,0 +1,6 @@
+#!/usr/bin/env node
+require('../lib/shortcut').run({
+  tool: 'claude',
+  model: 'claude-opus-4-6',
+  label: 'Claude Code + Opus 4.6',
+});
@@ -0,0 +1,6 @@
+#!/usr/bin/env node
+require('../lib/shortcut').run({
+  tool: 'claude',
+  model: 'claude-sonnet-4-6',
+  label: 'Claude Code + Sonnet 4.6',
+});
@@ -1,6 +1,6 @@
 #!/usr/bin/env node
 
-const { runWithHandoff, stripHandoffFlag } = require('../lib/interactive-handoff');
+const { spawn } = require('child_process');
 const config = require('../lib/config');
 const { isModelsCommand, showModelsForTool } = require('../lib/model-list');
 const { ensureToken } = require('../lib/ensure-token');
@@ -8,15 +8,13 @@ const { PROXY_PORT, ensureProxy } = require('../lib/proxy');
 
 (async () => {
   let args = process.argv.slice(2);
-  const handoffConfig = stripHandoffFlag(args);
-  args = handoffConfig.args;
 
   if (isModelsCommand(args)) {
     showModelsForTool('claude', 'Available Grazie models for jbai-claude:');
     return;
   }
 
-  const token = await ensureToken();
+  await ensureToken();
 
   // Check for super mode (--super, --yolo, -s)
   const superFlags = ['--super', '--yolo', '-s'];
@@ -47,19 +45,7 @@ const { PROXY_PORT, ensureProxy } = require('../lib/proxy');
   delete env.ANTHROPIC_AUTH_TOKEN;
   delete env.ANTHROPIC_CUSTOM_HEADERS;
 
-  const child = runWithHandoff({
-    command: 'claude',
-    args: finalArgs,
-    env,
-    toolName: 'jbai-claude',
-    handoffDefaults: {
-      enabled: !handoffConfig.disabled,
-      grazieToken: token,
-      grazieEnvironment: config.getEnvironment() === 'production' ? 'PRODUCTION' : 'STAGING',
-      grazieModel: config.MODELS.claude.default,
-      cwd: process.cwd(),
-    },
-  });
+  const child = spawn('claude', finalArgs, { stdio: 'inherit', env });
 
   if (child && typeof child.on === 'function') {
     child.on('error', (err) => {
@@ -0,0 +1,6 @@
+#!/usr/bin/env node
+require('../lib/shortcut').run({
+  tool: 'codex',
+  model: 'gpt-5.2-codex',
+  label: 'Codex + GPT-5.2',
+});
@@ -0,0 +1,6 @@
+#!/usr/bin/env node
+require('../lib/shortcut').run({
+  tool: 'codex',
+  model: 'gpt-5.3-codex',
+  label: 'Codex + GPT-5.3',
+});
@@ -0,0 +1,6 @@
+#!/usr/bin/env node
+require('../lib/shortcut').run({
+  tool: 'codex',
+  model: 'gpt-5.4',
+  label: 'Codex + GPT-5.4',
+});
@@ -0,0 +1,6 @@
+#!/usr/bin/env node
+require('../lib/shortcut').run({
+  tool: 'codex',
+  model: 'rockhopper-alpha',
+  label: 'Codex + Rockhopper Alpha (OpenAI EAP)',
+});
package/bin/jbai-codex.js CHANGED
@@ -1,6 +1,6 @@
 #!/usr/bin/env node
 
-const { runWithHandoff, stripHandoffFlag } = require('../lib/interactive-handoff');
+const { spawn } = require('child_process');
 const fs = require('fs');
 const path = require('path');
 const os = require('os');
@@ -13,8 +13,6 @@ const PROXY_PROVIDER = 'jbai-proxy';
 
 (async () => {
   let args = process.argv.slice(2);
-  const handoffConfig = stripHandoffFlag(args);
-  args = handoffConfig.args;
 
   // Check for super mode (--super, --yolo, -s)
   const superFlags = ['--super', '--yolo', '-s'];
@@ -60,12 +58,22 @@ wire_api = "responses"
     console.log(`✅ Added ${PROXY_PROVIDER} provider to Codex config`);
   }
 
-  const hasModel = args.includes('--model');
+  const hasModel = args.includes('--model') || args.includes('-m');
+  const hasReasoningEffort = args.some((arg, idx) => {
+    if (arg === '-c' || arg === '--config') {
+      return String(args[idx + 1] || '').includes('model_reasoning_effort');
+    }
+    return arg.startsWith('-c') && arg.includes('model_reasoning_effort')
+      || arg.startsWith('--config=') && arg.includes('model_reasoning_effort');
+  });
 
   // Use proxy provider so Codex fetches /v1/models from our proxy (shows all Grazie models)
   const finalArgs = ['-c', `model_provider=${PROXY_PROVIDER}`];
 
   if (!hasModel) {
     finalArgs.push('--model', config.MODELS.codex?.default || config.MODELS.openai.default);
+    if (!hasReasoningEffort) {
+      finalArgs.push('-c', 'model_reasoning_effort="high"');
+    }
   }
 
   // Add super mode flags (full-auto)
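The new reasoning-effort detection above has to recognize three flag spellings: `-c key=value` as two tokens, a fused `-ckey=value`, and `--config=key=value`. A standalone sketch of the same check, mirroring the logic in the hunk:

```javascript
// Sketch of the reasoning-effort detection from the diff above.
// Returns true if any argv token already sets model_reasoning_effort.
function hasReasoningEffort(args) {
  return args.some((arg, idx) => {
    if (arg === '-c' || arg === '--config') {
      // Two-token form: the config value is the next argv entry.
      return String(args[idx + 1] || '').includes('model_reasoning_effort');
    }
    // Fused forms: `-ckey=value` or `--config=key=value` in one token.
    return (arg.startsWith('-c') && arg.includes('model_reasoning_effort'))
      || (arg.startsWith('--config=') && arg.includes('model_reasoning_effort'));
  });
}

console.log(hasReasoningEffort(['-c', 'model_reasoning_effort="high"'])); // true
console.log(hasReasoningEffort(['--model', 'gpt-5.4'])); // false
```

This is why the launcher only appends its own `-c model_reasoning_effort="high"` default when the user has not already passed one in any of these forms.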
@@ -82,19 +90,7 @@ wire_api = "responses"
     JBAI_PROXY_KEY: 'placeholder', // Proxy handles auth via Grazie JWT
   };
 
-  const child = runWithHandoff({
-    command: 'codex',
-    args: finalArgs,
-    env: childEnv,
-    toolName: 'jbai-codex',
-    handoffDefaults: {
-      enabled: !handoffConfig.disabled,
-      grazieToken: token,
-      grazieEnvironment: environment === 'production' ? 'PRODUCTION' : 'STAGING',
-      grazieModel: config.MODELS.claude.default,
-      cwd: process.cwd(),
-    },
-  });
+  const child = spawn('codex', finalArgs, { stdio: 'inherit', env: childEnv });
 
   if (child && typeof child.on === 'function') {
     child.on('error', (err) => {
@@ -7,7 +7,7 @@
  * then launches `cn` (Continue CLI).
  */
 
-const { runWithHandoff, stripHandoffFlag } = require('../lib/interactive-handoff');
+const { spawn } = require('child_process');
 const fs = require('fs');
 const path = require('path');
 const os = require('os');
@@ -89,8 +89,6 @@ function toYaml(obj, indent = 0) {
 
 (async () => {
   let args = process.argv.slice(2);
-  const handoffConfig = stripHandoffFlag(args);
-  args = handoffConfig.args;
 
   // Check for super mode (--super, --yolo, -s)
   const superFlags = ['--super', '--yolo', '-s'];
@@ -110,7 +108,7 @@ function toYaml(obj, indent = 0) {
     args.splice(modelFlagIdx, 2);
   }
 
-  const token = await ensureToken();
+  await ensureToken();
   const environment = config.getEnvironment();
 
   // Auto-start proxy if not running
@@ -144,19 +142,7 @@ function toYaml(obj, indent = 0) {
     ANTHROPIC_API_KEY: 'placeholder',
   };
 
-  const child = runWithHandoff({
-    command: 'cn',
-    args: finalArgs,
-    env: childEnv,
-    toolName: 'jbai-continue',
-    handoffDefaults: {
-      enabled: !handoffConfig.disabled,
-      grazieToken: token,
-      grazieEnvironment: environment === 'production' ? 'PRODUCTION' : 'STAGING',
-      grazieModel: config.MODELS.claude.default,
-      cwd: process.cwd(),
-    },
-  });
+  const child = spawn('cn', finalArgs, { stdio: 'inherit', env: childEnv });
 
   if (child && typeof child.on === 'function') {
     child.on('error', (err) => {
@@ -8,7 +8,7 @@ const { PROXY_PORT, ensureProxy } = require('../lib/proxy');
 const SESSION_NAME = 'jbai-council';
 
 const AGENTS = [
-  { name: 'claude', command: 'jbai-claude', extraArgs: ['--allow-dangerously-skip-permissions'] },
+  { name: 'claude', command: 'jbai-claude', extraArgs: ['--super'] },
   { name: 'codex', command: 'jbai-codex' },
   { name: 'opencode', command: 'jbai-opencode' },
 ];
@@ -0,0 +1,6 @@
+#!/usr/bin/env node
+require('../lib/shortcut').run({
+  tool: 'gemini',
+  model: 'gemini-3.1-pro-preview',
+  label: 'Gemini + 3.1 Pro Preview',
+});
@@ -0,0 +1,6 @@
+#!/usr/bin/env node
+require('../lib/shortcut').run({
+  tool: 'gemini',
+  model: 'supernova',
+  label: 'Gemini + Supernova (Google EAP)',
+});
@@ -6,7 +6,7 @@
  * Uses GEMINI_CLI_CUSTOM_HEADERS and GEMINI_BASE_URL for authentication
  */
 
-const { runWithHandoff, stripHandoffFlag } = require('../lib/interactive-handoff');
+const { spawn } = require('child_process');
 const config = require('../lib/config');
 const { isModelsCommand, showModelsForTool } = require('../lib/model-list');
 const { ensureToken } = require('../lib/ensure-token');
@@ -14,15 +14,13 @@ const { PROXY_PORT, ensureProxy } = require('../lib/proxy');
 
 (async () => {
   let args = process.argv.slice(2);
-  const handoffConfig = stripHandoffFlag(args);
-  args = handoffConfig.args;
 
   if (isModelsCommand(args)) {
     showModelsForTool('gemini', 'Available Grazie models for jbai-gemini:');
     return;
   }
 
-  const token = await ensureToken();
+  await ensureToken();
 
   // Check for super mode (--super, --yolo, -s)
   const superFlags = ['--super', '--yolo', '-s'];
@@ -52,19 +50,7 @@ const { PROXY_PORT, ensureProxy } = require('../lib/proxy');
   // Remove any existing custom headers that might conflict
   delete env.GEMINI_CLI_CUSTOM_HEADERS;
 
-  const child = runWithHandoff({
-    command: 'gemini',
-    args: finalArgs,
-    env,
-    toolName: 'jbai-gemini',
-    handoffDefaults: {
-      enabled: !handoffConfig.disabled,
-      grazieToken: token,
-      grazieEnvironment: config.getEnvironment() === 'production' ? 'PRODUCTION' : 'STAGING',
-      grazieModel: config.MODELS.claude.default,
-      cwd: process.cwd(),
-    },
-  });
+  const child = spawn('gemini', finalArgs, { stdio: 'inherit', env });
 
   if (child && typeof child.on === 'function') {
     child.on('error', (err) => {
package/bin/jbai-goose.js CHANGED
@@ -7,7 +7,7 @@
  * Goose uses OPENAI_HOST + OPENAI_API_KEY env vars for the OpenAI provider.
  */
 
-const { runWithHandoff, stripHandoffFlag } = require('../lib/interactive-handoff');
+const { spawn } = require('child_process');
 const config = require('../lib/config');
 const { isModelsCommand, showModelsForTool } = require('../lib/model-list');
 const { ensureToken } = require('../lib/ensure-token');
@@ -15,8 +15,6 @@ const { PROXY_PORT, ensureProxy } = require('../lib/proxy');
 
 (async () => {
   let args = process.argv.slice(2);
-  const handoffConfig = stripHandoffFlag(args);
-  args = handoffConfig.args;
 
   // Check for super mode (--super, --yolo, -s)
   const superFlags = ['--super', '--yolo', '-s'];
@@ -28,8 +26,7 @@ const { PROXY_PORT, ensureProxy } = require('../lib/proxy');
     return;
   }
 
-  const token = await ensureToken();
-  const environment = config.getEnvironment();
+  await ensureToken();
 
   // Auto-start proxy if not running
   await ensureProxy();
@@ -87,19 +84,7 @@ const { PROXY_PORT, ensureProxy } = require('../lib/proxy');
     ...(superMode ? { GOOSE_MODE: 'auto' } : {}),
   };
 
-  const child = runWithHandoff({
-    command: 'goose',
-    args: finalArgs,
-    env: childEnv,
-    toolName: 'jbai-goose',
-    handoffDefaults: {
-      enabled: !handoffConfig.disabled,
-      grazieToken: token,
-      grazieEnvironment: environment === 'production' ? 'PRODUCTION' : 'STAGING',
-      grazieModel: config.MODELS.claude.default,
-      cwd: process.cwd(),
-    },
-  });
+  const child = spawn('goose', finalArgs, { stdio: 'inherit', env: childEnv });
 
   if (child && typeof child.on === 'function') {
     child.on('error', (err) => {
@@ -0,0 +1,6 @@
+#!/usr/bin/env node
+require('../lib/shortcut').run({
+  tool: 'opencode',
+  model: 'deepseek-r1',
+  label: 'OpenCode + DeepSeek R1',
+});
@@ -0,0 +1,6 @@
+#!/usr/bin/env node
+require('../lib/shortcut').run({
+  tool: 'opencode',
+  model: 'xai-grok-4',
+  label: 'OpenCode + Grok 4 (xAI)',
+});
@@ -0,0 +1,6 @@
+#!/usr/bin/env node
+require('../lib/shortcut').run({
+  tool: 'opencode',
+  model: 'rockhopper-alpha',
+  label: 'OpenCode + Rockhopper Alpha (OpenAI EAP)',
+});
@@ -1,6 +1,6 @@
 #!/usr/bin/env node
 
-const { runWithHandoff, stripHandoffFlag } = require('../lib/interactive-handoff');
+const { spawn } = require('child_process');
 const fs = require('fs');
 const path = require('path');
 const os = require('os');
@@ -59,8 +59,6 @@ function fetchGrazieProfiles(token, environment) {
 
 (async () => {
   let args = process.argv.slice(2);
-  const handoffConfig = stripHandoffFlag(args);
-  args = handoffConfig.args;
 
   // Check for super mode (--super, --yolo, -s)
   const superFlags = ['--super', '--yolo', '-s'];
@@ -95,35 +93,14 @@ function fetchGrazieProfiles(token, environment) {
     fs.mkdirSync(configDir, { recursive: true });
   }
 
-  // Create or update OpenCode config with JetBrains provider
+  // Build a fresh OpenCode config (don't merge with existing to avoid stale/rejected keys)
   const providerName = environment === 'staging' ? 'jbai-staging' : 'jbai';
-  let opencodeConfig = {};
-
-  if (fs.existsSync(configFile)) {
-    try {
-      opencodeConfig = JSON.parse(fs.readFileSync(configFile, 'utf-8'));
-    } catch {
-      opencodeConfig = {};
-    }
-  }
-
-  // Ensure provider section exists
-  if (!opencodeConfig.provider) {
-    opencodeConfig.provider = {};
-  }
+  const opencodeConfig = { provider: {} };
 
   // Provider names for OpenAI and Anthropic
   const anthropicProviderName = environment === 'staging' ? 'jbai-anthropic-staging' : 'jbai-anthropic';
   const grazieOpenAiProviderName = environment === 'staging' ? 'jbai-grazie-openai-staging' : 'jbai-grazie-openai';
 
-  // Remove any providers outside our Grazie set for this tool
-  const allowedProviders = new Set([providerName, anthropicProviderName, grazieOpenAiProviderName]);
-  for (const key of Object.keys(opencodeConfig.provider)) {
-    if (!allowedProviders.has(key)) {
-      delete opencodeConfig.provider[key];
-    }
-  }
-
   // Add/update JetBrains OpenAI provider with custom header (using env var reference)
   // Use OpenAI SDK to support max_completion_tokens for GPT-5.x
   opencodeConfig.provider[providerName] = {
@@ -205,25 +182,15 @@ function fetchGrazieProfiles(token, environment) {
   // Use `jbai gemini` instead for Gemini models.
 
   // Enable max permissions for the build agent (allow all tools without asking)
-  if (!opencodeConfig.agent) {
-    opencodeConfig.agent = {};
-  }
-  if (!opencodeConfig.agent.build) {
-    opencodeConfig.agent.build = {};
-  }
-  opencodeConfig.agent.build.permission = 'allow';
+  opencodeConfig.agent = { build: { permission: 'allow' } };
 
   if (!enableClipboard) {
-    if (!opencodeConfig.tools) opencodeConfig.tools = {};
-    opencodeConfig.tools.clipboard = false;
+    opencodeConfig.tools = { clipboard: false };
   }
 
   // Only show JetBrains AI providers in the model picker
   opencodeConfig.enabled_providers = [providerName, anthropicProviderName, grazieOpenAiProviderName];
 
-  // Clean up legacy keys that newer OpenCode versions reject
-  delete opencodeConfig.yolo;
-
   // Write config
   fs.writeFileSync(configFile, JSON.stringify(opencodeConfig, null, 2));
 
@@ -243,19 +210,7 @@ function fetchGrazieProfiles(token, environment) {
     ...process.env
   };
 
-  const child = runWithHandoff({
-    command: 'opencode',
-    args: finalArgs,
-    env: childEnv,
-    toolName: 'jbai-opencode',
-    handoffDefaults: {
-      enabled: !handoffConfig.disabled,
-      grazieToken: token,
-      grazieEnvironment: environment === 'production' ? 'PRODUCTION' : 'STAGING',
-      grazieModel: config.MODELS.claude.default,
-      cwd: process.cwd(),
-    },
-  });
+  const child = spawn('opencode', finalArgs, { stdio: 'inherit', env: childEnv });
 
   if (child && typeof child.on === 'function') {
     child.on('error', (err) => {
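The OpenCode launcher above also switches from read-merge-prune to building its config object from scratch on every run, which guarantees no stale or rejected keys from older OpenCode versions survive a rewrite. A minimal sketch of that fresh-config shape (the `buildConfig` helper and its return shape are illustrative, not the package's API):

```javascript
// Fresh-config approach from the diff above: start from an empty object so no
// stale keys from a previous OpenCode version can leak through to disk.
function buildConfig(environment, enableClipboard) {
  const providerName = environment === 'staging' ? 'jbai-staging' : 'jbai';
  const opencodeConfig = { provider: {} };
  // Grant the build agent full permissions, as in the diff.
  opencodeConfig.agent = { build: { permission: 'allow' } };
  // Only set `tools` when clipboard must be disabled.
  if (!enableClipboard) {
    opencodeConfig.tools = { clipboard: false };
  }
  return { providerName, opencodeConfig };
}

console.log(buildConfig('staging', false).providerName); // "jbai-staging"
```

The trade-off is that any hand-made edits to `opencode.json` are discarded on each launch; the old merge path kept them but occasionally carried keys newer OpenCode versions reject.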
package/bin/jbai-proxy.js CHANGED
@@ -228,6 +228,16 @@ function safeJsonParse(buf) {
   }
 }
 
+function hasVisionInput(payload) {
+  if (!payload || typeof payload !== 'object') return false;
+
+  // Fast path: if request body references standard image keys, treat as vision input.
+  const serialized = JSON.stringify(payload);
+  return serialized.includes('"input_image"')
+    || serialized.includes('"image_url"')
+    || serialized.includes('"image_base64"');
+}
+
 function tokenHash(token) {
   // Cheap, non-cryptographic hash for caching isolation
   let h = 0;
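The new `hasVisionInput` above avoids walking nested message structures: it serializes the request body once and scans for the standard image content keys. A standalone sketch of the same check, mirroring the hunk:

```javascript
// Sketch of the vision-input fast path from the proxy diff above: serialize
// the request body once and look for the standard image content keys.
function hasVisionInput(payload) {
  if (!payload || typeof payload !== 'object') return false;
  const serialized = JSON.stringify(payload);
  return serialized.includes('"input_image"')
    || serialized.includes('"image_url"')
    || serialized.includes('"image_base64"');
}

// A text-only chat body does not match; a body with an image part does.
console.log(hasVisionInput({ messages: [{ content: 'hello' }] })); // false
console.log(hasVisionInput({ input: [{ type: 'input_image' }] })); // true
```

One caveat of string scanning: a payload whose *text content* happens to contain `"image_url"` also matches, so this trades a rare false positive for not having to know every client's message schema.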
@@ -1109,6 +1119,7 @@ function handleGrazieOpenAIResponses({ req, res, jwt, endpoints, urlPath, startT
 // Codex CLI model picker response (matches chatgpt.com/backend-api/codex/models format)
 function buildCodexModelsResponse() {
   const descriptions = {
+    'gpt-5.4': 'Latest frontier model with strongest coding and reasoning performance.',
     'gpt-5.3-codex': 'Latest GPT-5.3 Codex model. Designed for long-running, detailed software engineering tasks.',
     'gpt-5.3-codex-api-preview': 'GPT-5.3 Codex (api-preview alias).',
     'gpt-5.2-codex': 'Latest frontier agentic coding model.',
@@ -1125,11 +1136,11 @@ function buildCodexModelsResponse() {
     'o3-2025-04-16': 'O3 reasoning model.',
   };
 
-  const models = config.MODELS.codex.available.map((id, i) => ({
+  const models = config.MODELS.codex.available.map((id) => ({
     slug: id,
     name: id,
     description: descriptions[id] || id,
-    default_active: i === 0,
+    default_active: id === config.MODELS.codex.default,
   }));
 
   return { models };
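The `default_active` fix above switches from "first in the list" to "matches the configured default", so reordering `available` no longer silently changes which model the Codex picker pre-selects. A minimal sketch with a stand-in for `config.MODELS.codex` (the values below are illustrative):

```javascript
// Stand-in for config.MODELS.codex from the diff above (illustrative values).
const codex = {
  default: 'gpt-5.4',
  available: ['gpt-5.3-codex', 'gpt-5.4', 'gpt-5.2-codex'],
};

// Mark the configured default rather than whichever model happens to be first.
const models = codex.available.map((id) => ({
  slug: id,
  default_active: id === codex.default,
}));

console.log(models.find((m) => m.default_active).slug); // "gpt-5.4"
```

With the old `i === 0` rule, `gpt-5.3-codex` would have been marked default here even though the configured default is `gpt-5.4`.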
@@ -1290,9 +1301,20 @@ function proxy(req, res) {
       const parsed = JSON.parse(body.toString('utf-8'));
       if (parsed.model && config.MODEL_ALIASES[parsed.model]) {
         const original = parsed.model;
-        parsed.model = config.MODEL_ALIASES[parsed.model];
-        body = Buffer.from(JSON.stringify(parsed), 'utf-8');
-        log(`[alias] Rewrote model "${original}" → "${parsed.model}"`);
+        const alias = config.MODEL_ALIASES[original];
+        const isGpt54CompatAlias = original === 'gpt-5.4' || original === 'openai-gpt-5-4';
+        const shouldRewrite = !isGpt54CompatAlias || config.getEnvironment() === 'staging';
+        if (shouldRewrite) {
+          // On staging, gpt-5.4 is not directly available.
+          // For vision payloads route to a vision-capable GPT-5 model.
+          if (isGpt54CompatAlias && hasVisionInput(parsed)) {
+            parsed.model = 'gpt-5-2025-08-07';
+          } else {
+            parsed.model = alias;
+          }
+          body = Buffer.from(JSON.stringify(parsed), 'utf-8');
+          log(`[alias] Rewrote model "${original}" → "${parsed.model}"`);
+        }
       }
     } catch {
       // Not valid JSON or parse error — forward as-is
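The alias-rewrite branch above decides from three inputs: whether the requested model is a gpt-5.4 compatibility alias, the active environment, and whether the payload carries image input. A sketch of just that decision, with the alias table and vision check passed in as parameters (`resolveModel` and the `aliases` values are illustrative stand-ins for `config.MODEL_ALIASES` and `hasVisionInput`):

```javascript
// Sketch of the rewrite decision from the proxy diff above.
function resolveModel(requested, aliases, environment, payloadHasVision) {
  const isGpt54CompatAlias = requested === 'gpt-5.4' || requested === 'openai-gpt-5-4';
  // gpt-5.4 passes through untouched except on staging, where it is
  // not directly available and must be rewritten.
  const shouldRewrite = !isGpt54CompatAlias || environment === 'staging';
  if (!shouldRewrite) return requested;
  // Vision payloads on staging route to a vision-capable GPT-5 model.
  if (isGpt54CompatAlias && payloadHasVision) return 'gpt-5-2025-08-07';
  return aliases[requested];
}

// Illustrative alias table (not the package's real MODEL_ALIASES).
const aliases = { 'gpt-5.2': 'gpt-5.2-2025-12-11', 'gpt-5.4': 'gpt-5.3-codex' };
console.log(resolveModel('gpt-5.2', aliases, 'production', false)); // "gpt-5.2-2025-12-11"
console.log(resolveModel('gpt-5.4', aliases, 'production', false)); // "gpt-5.4"
console.log(resolveModel('gpt-5.4', aliases, 'staging', true));     // "gpt-5-2025-08-07"
```

In the proxy itself the rewrite also re-serializes the body and logs the substitution, but the routing decision is exactly this three-way branch.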