jbai-cli 1.6.0 → 1.8.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,7 +2,7 @@
2
2
 
3
3
  **Use AI coding tools with your JetBrains AI subscription** — no separate API keys needed.
4
4
 
5
- One token, all tools: Claude Code, Codex, Aider, Gemini CLI, OpenCode.
5
+ One token, all tools: Claude Code, Codex, Aider, Gemini CLI, OpenCode, **Codex Desktop, Cursor**.
6
6
 
7
7
  ## Install
8
8
 
@@ -41,7 +41,99 @@ Testing JetBrains AI Platform (staging)
41
41
  4. Google Proxy (Gemini): ✅ Working
42
42
  ```
43
43
 
44
- ## Usage
44
+ ## Local Proxy (for Codex Desktop, Cursor, and other GUI tools)
45
+
46
+ jbai-cli includes a **local reverse proxy** that lets any tool that supports a custom base URL work through the JetBrains AI Platform, with no per-tool wrappers needed.
47
+
48
+ ### One-liner setup
49
+
50
+ ```bash
51
+ jbai proxy setup
52
+ ```
53
+
54
+ This single command:
55
+ - Starts the proxy on `localhost:18080` (on macOS, auto-starts on login via launchd)
56
+ - Configures **Codex Desktop** (`~/.codex/config.toml`)
57
+ - Adds the `JBAI_PROXY_KEY` env var to your shell
58
+
59
+ ### How it works
60
+
61
+ ```
62
+ Codex Desktop / Cursor / any tool
63
+ │ standard OpenAI / Anthropic API calls
64
+ ▼
65
+ http://localhost:18080
66
+ │ injects Grazie-Authenticate-JWT header
67
+ │ routes to correct provider endpoint
68
+ ▼
69
+ https://api.jetbrains.ai/user/v5/llm/{provider}/v1
70
+ │
71
+ ▼
72
+ Actual LLM (GPT, Claude, Gemini)
73
+ ```
74
+
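Concretely, a client just issues a standard OpenAI-style request against the local base URL. The sketch below only builds such a request without sending it; the model id is a hypothetical stand-in (list real ones with `jbai models`):

```javascript
// Sketch: the request shape a tool sends to the local proxy.
// The proxy, not the client, supplies the real Grazie-Authenticate-JWT header.
const baseUrl = 'http://localhost:18080/openai/v1';

const request = {
  url: `${baseUrl}/chat/completions`,
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    // Any placeholder works: the proxy replaces auth on the way through.
    'Authorization': 'Bearer placeholder',
  },
  body: JSON.stringify({
    model: 'some-model-id', // hypothetical; see `jbai models` for the real list
    messages: [{ role: 'user', content: 'Hello' }],
  }),
};

console.log(request.url); // http://localhost:18080/openai/v1/chat/completions
// To actually send it (requires the proxy to be running):
// fetch(request.url, { method: request.method, headers: request.headers, body: request.body })
```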
75
+ ### Codex Desktop
76
+
77
+ After `jbai proxy setup`, Codex Desktop works automatically. The setup configures `~/.codex/config.toml` with:
78
+ ```toml
79
+ model_provider = "jbai-proxy"
80
+
81
+ [model_providers.jbai-proxy]
82
+ name = "JetBrains AI (Proxy)"
83
+ base_url = "http://localhost:18080/openai/v1"
84
+ env_key = "JBAI_PROXY_KEY"
85
+ wire_api = "responses"
86
+ ```
87
+
88
+ ### Cursor
89
+
90
+ Cursor requires manual configuration via its UI:
91
+ 1. Open **Cursor** → **Settings** (gear icon) → **Models**
92
+ 2. Enable **"Override OpenAI Base URL"**
93
+ 3. Set:
94
+ - **Base URL**: `http://localhost:18080/openai/v1`
95
+ - **API Key**: `placeholder` (any non-empty value; the proxy supplies the real credentials)
96
+ 4. Click **Verify**
97
+
98
+ ### Any OpenAI-compatible tool
99
+
100
+ Point it to the proxy:
101
+ ```bash
102
+ export OPENAI_BASE_URL=http://localhost:18080/openai/v1
103
+ export OPENAI_API_KEY=placeholder
104
+ ```
105
+
106
+ For Anthropic-compatible tools:
107
+ ```bash
108
+ export ANTHROPIC_BASE_URL=http://localhost:18080/anthropic
109
+ export ANTHROPIC_API_KEY=placeholder
110
+ ```
111
+
112
+ ### Proxy commands
113
+
114
+ | Command | Description |
115
+ |---------|-------------|
116
+ | `jbai proxy setup` | One-liner: configure everything + start |
117
+ | `jbai proxy status` | Check if proxy is running |
118
+ | `jbai proxy stop` | Stop the proxy |
119
+ | `jbai proxy --daemon` | Start proxy in background |
120
+ | `jbai proxy install-service` | Auto-start on login (macOS launchd) |
121
+ | `jbai proxy uninstall-service` | Remove auto-start |
122
+
123
+ ### Proxy routes
124
+
125
+ | Route | Target |
126
+ |-------|--------|
127
+ | `/openai/v1/*` | Grazie OpenAI endpoint |
128
+ | `/anthropic/v1/*` | Grazie Anthropic endpoint |
129
+ | `/google/v1/*` | Grazie Google endpoint |
130
+ | `/v1/chat/completions` | OpenAI (auto-detect) |
131
+ | `/v1/responses` | OpenAI (auto-detect) |
132
+ | `/v1/messages` | Anthropic (auto-detect) |
133
+ | `/v1/models` | Synthetic model list |
134
+ | `/health` | Proxy status |
135
+
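The routing in the table above can be sketched as a small lookup. This is an illustrative re-implementation, not the proxy's actual code; the Grazie base URL follows the pattern shown in the diagram earlier:

```javascript
// Sketch: map an incoming proxy path to its Grazie target, mirroring the routes table.
const GRAZIE = 'https://api.jetbrains.ai/user/v5/llm';

function resolveRoute(pathname) {
  // Explicit provider prefixes: /openai/v1/*, /anthropic/v1/*, /google/v1/*
  const m = pathname.match(/^\/(openai|anthropic|google)\/v1(\/.*)?$/);
  if (m) return `${GRAZIE}/${m[1]}/v1${m[2] || ''}`;
  // Auto-detection for bare /v1/* paths
  if (pathname === '/v1/chat/completions' || pathname === '/v1/responses')
    return `${GRAZIE}/openai/v1${pathname.slice(3)}`;
  if (pathname === '/v1/messages')
    return `${GRAZIE}/anthropic/v1/messages`;
  return null; // /v1/models and /health are answered by the proxy itself
}

console.log(resolveRoute('/openai/v1/chat/completions'));
// → https://api.jetbrains.ai/user/v5/llm/openai/v1/chat/completions
console.log(resolveRoute('/v1/messages'));
// → https://api.jetbrains.ai/user/v5/llm/anthropic/v1/messages
```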
136
+ ## CLI Tool Wrappers
45
137
 
46
138
  ### Claude Code
47
139
  ```bash
@@ -126,8 +218,6 @@ jbai-aider --super
126
218
  | Gemini CLI | `--yolo` |
127
219
  | OpenCode | N/A (run mode is already non-interactive) |
128
220
 
129
- ⚠️ **Use with caution** - super mode allows the AI to make changes without confirmation.
130
-
131
221
  ## Using Different Models
132
222
 
133
223
  Each tool has a sensible default, but you can specify any available model:
@@ -190,6 +280,9 @@ jbai-aider --model gemini/gemini-2.5-pro
190
280
  | `jbai token set` | Set/update token |
191
281
  | `jbai test` | Test API connections |
192
282
  | `jbai models` | List all models |
283
+ | `jbai proxy setup` | Setup proxy + configure Codex Desktop |
284
+ | `jbai proxy status` | Check proxy status |
285
+ | `jbai proxy stop` | Stop proxy |
193
286
  | `jbai handoff` | Continue a task in Orca Lab |
194
287
  | `jbai install` | Install all AI tools |
195
288
  | `jbai install claude` | Install specific tool |
@@ -257,6 +350,10 @@ jbai-cli uses JetBrains AI Platform's **Guarded Proxy**, which provides API-comp
257
350
 
258
351
  Your JetBrains AI token authenticates all requests via the `Grazie-Authenticate-JWT` header.
259
352
 
353
+ **CLI wrappers** (`jbai-claude`, `jbai-codex`, etc.) set environment variables and launch the underlying tool directly.
354
+
355
+ **Local proxy** (`jbai proxy`) runs an HTTP server on localhost that forwards requests to Grazie, injecting the JWT header automatically. This enables GUI tools like Codex Desktop and Cursor that don't support custom headers but do support custom base URLs.
356
+
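The header injection the local proxy performs can be sketched as a pure transform. This is illustrative only (the proxy's actual implementation is not part of this diff), and the truncated `eyJ...` token is a stand-in for the user's real JWT:

```javascript
// Sketch: rewrite a client's headers before forwarding to Grazie.
// The client's placeholder credentials are dropped; the real JWT is injected.
function injectGrazieAuth(incomingHeaders, jwt) {
  const headers = { ...incomingHeaders };
  delete headers['authorization'];   // placeholder OpenAI-style key
  delete headers['x-api-key'];       // placeholder Anthropic-style key
  headers['grazie-authenticate-jwt'] = jwt;
  return headers;
}

const forwarded = injectGrazieAuth(
  { 'content-type': 'application/json', 'authorization': 'Bearer placeholder' },
  'eyJ...' // the user's JetBrains AI token (truncated stand-in)
);
console.log(forwarded);
// → { 'content-type': 'application/json', 'grazie-authenticate-jwt': 'eyJ...' }
```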
260
357
  ## Troubleshooting
261
358
 
262
359
  ### "Token expired"
@@ -279,6 +376,21 @@ jbai test
279
376
  jbai token
280
377
  ```
281
378
 
379
+ ### Proxy not working
380
+ ```bash
381
+ # Check proxy status
382
+ jbai proxy status
383
+
384
+ # Check proxy health
385
+ curl http://localhost:18080/health
386
+
387
+ # Check logs
388
+ cat ~/.jbai/proxy.log
389
+
390
+ # Restart proxy
391
+ jbai proxy stop && jbai proxy --daemon
392
+ ```
393
+
282
394
  ### Wrong environment
283
395
  ```bash
284
396
  # Staging token won't work with production
package/bin/jbai-claude.js CHANGED
@@ -2,76 +2,69 @@
2
2
 
3
3
  const { runWithHandoff, stripHandoffFlag } = require('../lib/interactive-handoff');
4
4
  const config = require('../lib/config');
5
+ const { ensureToken } = require('../lib/ensure-token');
5
6
 
6
- const token = config.getToken();
7
- if (!token) {
8
- console.error('❌ No token found. Run: jbai token set');
9
- process.exit(1);
10
- }
7
+ (async () => {
8
+ const token = await ensureToken();
9
+ const endpoints = config.getEndpoints();
10
+ let args = process.argv.slice(2);
11
+ const handoffConfig = stripHandoffFlag(args);
12
+ args = handoffConfig.args;
11
13
 
12
- if (config.isTokenExpired(token)) {
13
- console.error('⚠️ Token expired. Run: jbai token refresh');
14
- process.exit(1);
15
- }
14
+ // Check for super mode (--super, --yolo, -s)
15
+ const superFlags = ['--super', '--yolo', '-s'];
16
+ const superMode = args.some(a => superFlags.includes(a));
17
+ args = args.filter(a => !superFlags.includes(a));
16
18
 
17
- const endpoints = config.getEndpoints();
18
- let args = process.argv.slice(2);
19
- const handoffConfig = stripHandoffFlag(args);
20
- args = handoffConfig.args;
19
+ // Check if model specified
20
+ const hasModel = args.includes('--model') || args.includes('-m');
21
+ let finalArgs = hasModel ? args : ['--model', config.MODELS.claude.default, ...args];
21
22
 
22
- // Check for super mode (--super, --yolo, -s)
23
- const superFlags = ['--super', '--yolo', '-s'];
24
- const superMode = args.some(a => superFlags.includes(a));
25
- args = args.filter(a => !superFlags.includes(a));
23
+ // Add super mode flags
24
+ if (superMode) {
25
+ finalArgs = ['--dangerously-skip-permissions', ...finalArgs];
26
+ console.log('🚀 Super mode: --dangerously-skip-permissions enabled');
27
+ }
26
28
 
27
- // Check if model specified
28
- const hasModel = args.includes('--model') || args.includes('-m');
29
- let finalArgs = hasModel ? args : ['--model', config.MODELS.claude.default, ...args];
29
+ // Set environment for Claude Code
30
+ // Use ANTHROPIC_CUSTOM_HEADERS for the Grazie auth header
31
+ // Remove /v1 from endpoint - Claude Code adds it automatically
32
+ const baseUrl = endpoints.anthropic.replace(/\/v1$/, '');
33
+ const env = {
34
+ ...process.env,
35
+ ANTHROPIC_BASE_URL: baseUrl,
36
+ ANTHROPIC_API_KEY: 'placeholder',
37
+ ANTHROPIC_CUSTOM_HEADERS: `Grazie-Authenticate-JWT: ${token}`
38
+ };
30
39
 
31
- // Add super mode flags
32
- if (superMode) {
33
- finalArgs = ['--dangerously-skip-permissions', ...finalArgs];
34
- console.log('🚀 Super mode: --dangerously-skip-permissions enabled');
35
- }
40
+ // Remove any existing auth token that might conflict
41
+ delete env.ANTHROPIC_AUTH_TOKEN;
36
42
 
37
- // Set environment for Claude Code
38
- // Use ANTHROPIC_CUSTOM_HEADERS for the Grazie auth header
39
- // Remove /v1 from endpoint - Claude Code adds it automatically
40
- const baseUrl = endpoints.anthropic.replace(/\/v1$/, '');
41
- const env = {
42
- ...process.env,
43
- ANTHROPIC_BASE_URL: baseUrl,
44
- ANTHROPIC_API_KEY: 'placeholder',
45
- ANTHROPIC_CUSTOM_HEADERS: `Grazie-Authenticate-JWT: ${token}`
46
- };
47
-
48
- // Remove any existing auth token that might conflict
49
- delete env.ANTHROPIC_AUTH_TOKEN;
50
-
51
- const child = runWithHandoff({
52
- command: 'claude',
53
- args: finalArgs,
54
- env,
55
- toolName: 'jbai-claude',
56
- handoffDefaults: {
57
- enabled: !handoffConfig.disabled,
58
- grazieToken: token,
59
- grazieEnvironment: config.getEnvironment() === 'production' ? 'PRODUCTION' : 'STAGING',
60
- grazieModel: config.MODELS.claude.default,
61
- cwd: process.cwd(),
62
- },
63
- });
64
-
65
- if (child && typeof child.on === 'function') {
66
- child.on('error', (err) => {
67
- if (err.code === 'ENOENT') {
68
- const tool = config.TOOLS.claude;
69
- console.error(`❌ ${tool.name} not found.\n`);
70
- console.error(`Install with: ${tool.install}`);
71
- console.error(`Or run: jbai install claude`);
72
- } else {
73
- console.error(`Error: ${err.message}`);
74
- }
75
- process.exit(1);
43
+ const child = runWithHandoff({
44
+ command: 'claude',
45
+ args: finalArgs,
46
+ env,
47
+ toolName: 'jbai-claude',
48
+ handoffDefaults: {
49
+ enabled: !handoffConfig.disabled,
50
+ grazieToken: token,
51
+ grazieEnvironment: config.getEnvironment() === 'production' ? 'PRODUCTION' : 'STAGING',
52
+ grazieModel: config.MODELS.claude.default,
53
+ cwd: process.cwd(),
54
+ },
76
55
  });
77
- }
56
+
57
+ if (child && typeof child.on === 'function') {
58
+ child.on('error', (err) => {
59
+ if (err.code === 'ENOENT') {
60
+ const tool = config.TOOLS.claude;
61
+ console.error(`❌ ${tool.name} not found.\n`);
62
+ console.error(`Install with: ${tool.install}`);
63
+ console.error(`Or run: jbai install claude`);
64
+ } else {
65
+ console.error(`Error: ${err.message}`);
66
+ }
67
+ process.exit(1);
68
+ });
69
+ }
70
+ })();
package/bin/jbai-codex.js CHANGED
@@ -5,47 +5,39 @@ const fs = require('fs');
5
5
  const path = require('path');
6
6
  const os = require('os');
7
7
  const config = require('../lib/config');
8
-
9
- const token = config.getToken();
10
- if (!token) {
11
- console.error('❌ No token found. Run: jbai token set');
12
- process.exit(1);
13
- }
14
-
15
- if (config.isTokenExpired(token)) {
16
- console.error('⚠️ Token expired. Run: jbai token refresh');
17
- process.exit(1);
18
- }
19
-
20
- const endpoints = config.getEndpoints();
21
- const environment = config.getEnvironment();
22
- const providerName = environment === 'staging' ? 'jbai-staging' : 'jbai';
23
- const envVarName = environment === 'staging' ? 'GRAZIE_STAGING_TOKEN' : 'GRAZIE_API_TOKEN';
24
- let args = process.argv.slice(2);
25
- const handoffConfig = stripHandoffFlag(args);
26
- args = handoffConfig.args;
27
-
28
- // Check for super mode (--super, --yolo, -s)
29
- const superFlags = ['--super', '--yolo', '-s'];
30
- const superMode = args.some(a => superFlags.includes(a));
31
- args = args.filter(a => !superFlags.includes(a));
32
-
33
- // Ensure Codex config exists
34
- const codexDir = path.join(os.homedir(), '.codex');
35
- const codexConfig = path.join(codexDir, 'config.toml');
36
-
37
- if (!fs.existsSync(codexDir)) {
38
- fs.mkdirSync(codexDir, { recursive: true });
39
- }
40
-
41
- // Check if our provider is configured
42
- let configContent = '';
43
- if (fs.existsSync(codexConfig)) {
44
- configContent = fs.readFileSync(codexConfig, 'utf-8');
45
- }
46
-
47
- if (!configContent.includes(`[model_providers.${providerName}]`)) {
48
- const providerConfig = `
8
+ const { ensureToken } = require('../lib/ensure-token');
9
+
10
+ (async () => {
11
+ const token = await ensureToken();
12
+ const endpoints = config.getEndpoints();
13
+ const environment = config.getEnvironment();
14
+ const providerName = environment === 'staging' ? 'jbai-staging' : 'jbai';
15
+ const envVarName = environment === 'staging' ? 'GRAZIE_STAGING_TOKEN' : 'GRAZIE_API_TOKEN';
16
+ let args = process.argv.slice(2);
17
+ const handoffConfig = stripHandoffFlag(args);
18
+ args = handoffConfig.args;
19
+
20
+ // Check for super mode (--super, --yolo, -s)
21
+ const superFlags = ['--super', '--yolo', '-s'];
22
+ const superMode = args.some(a => superFlags.includes(a));
23
+ args = args.filter(a => !superFlags.includes(a));
24
+
25
+ // Ensure Codex config exists
26
+ const codexDir = path.join(os.homedir(), '.codex');
27
+ const codexConfig = path.join(codexDir, 'config.toml');
28
+
29
+ if (!fs.existsSync(codexDir)) {
30
+ fs.mkdirSync(codexDir, { recursive: true });
31
+ }
32
+
33
+ // Check if our provider is configured
34
+ let configContent = '';
35
+ if (fs.existsSync(codexConfig)) {
36
+ configContent = fs.readFileSync(codexConfig, 'utf-8');
37
+ }
38
+
39
+ if (!configContent.includes(`[model_providers.${providerName}]`)) {
40
+ const providerConfig = `
49
41
  # JetBrains AI (${environment})
50
42
  [model_providers.${providerName}]
51
43
  name = "JetBrains AI (${environment})"
@@ -53,54 +45,68 @@ base_url = "${endpoints.openai}"
53
45
  env_http_headers = { "Grazie-Authenticate-JWT" = "${envVarName}" }
54
46
  wire_api = "responses"
55
47
  `;
56
- fs.appendFileSync(codexConfig, providerConfig);
57
- console.log(`✅ Added ${providerName} provider to Codex config`);
58
- }
59
-
60
- const hasModel = args.includes('--model');
61
- const finalArgs = ['-c', `model_provider=${providerName}`];
62
-
63
- if (!hasModel) {
64
- finalArgs.push('--model', config.MODELS.codex?.default || config.MODELS.openai.default);
65
- }
66
-
67
- // Add super mode flags (full-auto)
68
- if (superMode) {
69
- finalArgs.push('--full-auto');
70
- console.log('🚀 Super mode: --full-auto enabled');
71
- }
72
-
73
- finalArgs.push(...args);
74
-
75
- const childEnv = {
76
- ...process.env,
77
- [envVarName]: token
78
- };
79
-
80
- const child = runWithHandoff({
81
- command: 'codex',
82
- args: finalArgs,
83
- env: childEnv,
84
- toolName: 'jbai-codex',
85
- handoffDefaults: {
86
- enabled: !handoffConfig.disabled,
87
- grazieToken: token,
88
- grazieEnvironment: environment === 'production' ? 'PRODUCTION' : 'STAGING',
89
- grazieModel: config.MODELS.claude.default,
90
- cwd: process.cwd(),
91
- },
92
- });
93
-
94
- if (child && typeof child.on === 'function') {
95
- child.on('error', (err) => {
96
- if (err.code === 'ENOENT') {
97
- const tool = config.TOOLS.codex;
98
- console.error(`❌ ${tool.name} not found.\n`);
99
- console.error(`Install with: ${tool.install}`);
100
- console.error(`Or run: jbai install codex`);
101
- } else {
102
- console.error(`Error: ${err.message}`);
103
- }
104
- process.exit(1);
48
+ fs.appendFileSync(codexConfig, providerConfig);
49
+ console.log(`✅ Added ${providerName} provider to Codex config`);
50
+ }
51
+
52
+ // Point Codex model picker to our proxy (serves our model list instead of chatgpt.com)
53
+ const proxyModelUrl = 'http://localhost:18080/';
54
+ if (!configContent.includes('chatgpt_base_url')) {
55
+ fs.appendFileSync(codexConfig, `\nchatgpt_base_url = "${proxyModelUrl}"\n`);
56
+ console.log(' Set chatgpt_base_url to jbai-proxy for model picker');
57
+ } else if (!configContent.includes(proxyModelUrl)) {
58
+ // Update existing chatgpt_base_url
59
+ configContent = fs.readFileSync(codexConfig, 'utf-8');
60
+ const updated = configContent.replace(/^chatgpt_base_url\s*=\s*"[^"]*"/m, `chatgpt_base_url = "${proxyModelUrl}"`);
61
+ fs.writeFileSync(codexConfig, updated);
62
+ console.log(' Updated chatgpt_base_url to jbai-proxy');
63
+ }
64
+
65
+ const hasModel = args.includes('--model');
66
+ const finalArgs = ['-c', `model_provider=${providerName}`];
67
+
68
+ if (!hasModel) {
69
+ finalArgs.push('--model', config.MODELS.codex?.default || config.MODELS.openai.default);
70
+ }
71
+
72
+ // Add super mode flags (full-auto)
73
+ if (superMode) {
74
+ finalArgs.push('--full-auto');
75
+ console.log('🚀 Super mode: --full-auto enabled');
76
+ }
77
+
78
+ finalArgs.push(...args);
79
+
80
+ const childEnv = {
81
+ ...process.env,
82
+ [envVarName]: token
83
+ };
84
+
85
+ const child = runWithHandoff({
86
+ command: 'codex',
87
+ args: finalArgs,
88
+ env: childEnv,
89
+ toolName: 'jbai-codex',
90
+ handoffDefaults: {
91
+ enabled: !handoffConfig.disabled,
92
+ grazieToken: token,
93
+ grazieEnvironment: environment === 'production' ? 'PRODUCTION' : 'STAGING',
94
+ grazieModel: config.MODELS.claude.default,
95
+ cwd: process.cwd(),
96
+ },
105
97
  });
106
- }
98
+
99
+ if (child && typeof child.on === 'function') {
100
+ child.on('error', (err) => {
101
+ if (err.code === 'ENOENT') {
102
+ const tool = config.TOOLS.codex;
103
+ console.error(`❌ ${tool.name} not found.\n`);
104
+ console.error(`Install with: ${tool.install}`);
105
+ console.error(`Or run: jbai install codex`);
106
+ } else {
107
+ console.error(`Error: ${err.message}`);
108
+ }
109
+ process.exit(1);
110
+ });
111
+ }
112
+ })();
package/bin/jbai-gemini.js CHANGED
@@ -8,70 +8,63 @@
8
8
 
9
9
  const { runWithHandoff, stripHandoffFlag } = require('../lib/interactive-handoff');
10
10
  const config = require('../lib/config');
11
+ const { ensureToken } = require('../lib/ensure-token');
11
12
 
12
- const token = config.getToken();
13
- if (!token) {
14
- console.error('❌ No token found. Run: jbai token set');
15
- process.exit(1);
16
- }
13
+ (async () => {
14
+ const token = await ensureToken();
15
+ const endpoints = config.getEndpoints();
16
+ let args = process.argv.slice(2);
17
+ const handoffConfig = stripHandoffFlag(args);
18
+ args = handoffConfig.args;
17
19
 
18
- if (config.isTokenExpired(token)) {
19
- console.error('⚠️ Token expired. Run: jbai token refresh');
20
- process.exit(1);
21
- }
20
+ // Check for super mode (--super, --yolo, -s)
21
+ const superFlags = ['--super', '--yolo', '-s'];
22
+ const superMode = args.some(a => superFlags.includes(a));
23
+ args = args.filter(a => !superFlags.includes(a));
22
24
 
23
- const endpoints = config.getEndpoints();
24
- let args = process.argv.slice(2);
25
- const handoffConfig = stripHandoffFlag(args);
26
- args = handoffConfig.args;
25
+ // Check if model specified
26
+ const hasModel = args.includes('--model') || args.includes('-m');
27
+ let finalArgs = hasModel ? args : ['--model', config.MODELS.gemini.default, ...args];
27
28
 
28
- // Check for super mode (--super, --yolo, -s)
29
- const superFlags = ['--super', '--yolo', '-s'];
30
- const superMode = args.some(a => superFlags.includes(a));
31
- args = args.filter(a => !superFlags.includes(a));
29
+ // Add super mode flags (auto-confirm) - Gemini uses --yolo
30
+ if (superMode) {
31
+ finalArgs = ['--yolo', ...finalArgs];
32
+ console.log('🚀 Super mode: --yolo (auto-confirm) enabled');
33
+ }
32
34
 
33
- // Check if model specified
34
- const hasModel = args.includes('--model') || args.includes('-m');
35
- let finalArgs = hasModel ? args : ['--model', config.MODELS.gemini.default, ...args];
35
+ // Set environment for Gemini CLI
36
+ // Uses GEMINI_CLI_CUSTOM_HEADERS for auth (supported since Nov 2025)
37
+ const env = {
38
+ ...process.env,
39
+ GEMINI_BASE_URL: endpoints.google,
40
+ GEMINI_API_KEY: 'placeholder',
41
+ GEMINI_CLI_CUSTOM_HEADERS: `Grazie-Authenticate-JWT: ${token}`
42
+ };
36
43
 
37
- // Add super mode flags (auto-confirm) - Gemini uses --yolo
38
- if (superMode) {
39
- finalArgs = ['--yolo', ...finalArgs];
40
- console.log('🚀 Super mode: --yolo (auto-confirm) enabled');
41
- }
42
-
43
- // Set environment for Gemini CLI
44
- // Uses GEMINI_CLI_CUSTOM_HEADERS for auth (supported since Nov 2025)
45
- const env = {
46
- ...process.env,
47
- GEMINI_BASE_URL: endpoints.google,
48
- GEMINI_API_KEY: 'placeholder',
49
- GEMINI_CLI_CUSTOM_HEADERS: `Grazie-Authenticate-JWT: ${token}`
50
- };
51
-
52
- const child = runWithHandoff({
53
- command: 'gemini',
54
- args: finalArgs,
55
- env,
56
- toolName: 'jbai-gemini',
57
- handoffDefaults: {
58
- enabled: !handoffConfig.disabled,
59
- grazieToken: token,
60
- grazieEnvironment: config.getEnvironment() === 'production' ? 'PRODUCTION' : 'STAGING',
61
- grazieModel: config.MODELS.claude.default,
62
- cwd: process.cwd(),
63
- },
64
- });
65
-
66
- if (child && typeof child.on === 'function') {
67
- child.on('error', (err) => {
68
- if (err.code === 'ENOENT') {
69
- console.error(`❌ Gemini CLI not found.\n`);
70
- console.error(`Install with: npm install -g @google/gemini-cli`);
71
- console.error(`Or run: jbai install gemini`);
72
- } else {
73
- console.error(`Error: ${err.message}`);
74
- }
75
- process.exit(1);
44
+ const child = runWithHandoff({
45
+ command: 'gemini',
46
+ args: finalArgs,
47
+ env,
48
+ toolName: 'jbai-gemini',
49
+ handoffDefaults: {
50
+ enabled: !handoffConfig.disabled,
51
+ grazieToken: token,
52
+ grazieEnvironment: config.getEnvironment() === 'production' ? 'PRODUCTION' : 'STAGING',
53
+ grazieModel: config.MODELS.claude.default,
54
+ cwd: process.cwd(),
55
+ },
76
56
  });
77
- }
57
+
58
+ if (child && typeof child.on === 'function') {
59
+ child.on('error', (err) => {
60
+ if (err.code === 'ENOENT') {
61
+ console.error(`❌ Gemini CLI not found.\n`);
62
+ console.error(`Install with: npm install -g @google/gemini-cli`);
63
+ console.error(`Or run: jbai install gemini`);
64
+ } else {
65
+ console.error(`Error: ${err.message}`);
66
+ }
67
+ process.exit(1);
68
+ });
69
+ }
70
+ })();