claude-connect 0.1.5 → 0.1.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,11 +1,11 @@
1
1
  # Claude Connect
2
2
 
3
- > Connect `Claude Code` to `OpenCode Go`, `Zen`, `Kimi`, `DeepSeek`, `OpenRouter`, and `Qwen` from a clear, fast, and reversible console interface.
3
+ > Connect `Claude Code` to `OpenCode Go`, `Zen`, `Kimi`, `DeepSeek`, `Ollama`, `OpenAI`, `Inception Labs`, `OpenRouter`, and `Qwen` from a clear, fast, and reversible console interface.
4
4
 
5
5
  [![npm version](https://img.shields.io/npm/v/claude-connect?style=for-the-badge&logo=npm&color=cb3837)](https://www.npmjs.com/package/claude-connect)
6
6
  [![node](https://img.shields.io/badge/node-%3E%3D22-2f7d32?style=for-the-badge&logo=node.js&logoColor=white)](https://nodejs.org/)
7
7
  [![license](https://img.shields.io/badge/license-MIT-0f172a?style=for-the-badge)](./LICENSE)
8
- [![providers](https://img.shields.io/badge/providers-OpenCode%20Go%20%7C%20Zen%20%7C%20Kimi%20%7C%20DeepSeek%20%7C%20OpenRouter%20%7C%20Qwen-0ea5e9?style=for-the-badge)](https://www.npmjs.com/package/claude-connect)
8
+ [![providers](https://img.shields.io/badge/providers-OpenCode%20Go%20%7C%20Zen%20%7C%20Kimi%20%7C%20DeepSeek%20%7C%20Ollama%20%7C%20OpenAI%20%7C%20Inception%20Labs%20%7C%20OpenRouter%20%7C%20Qwen-0ea5e9?style=for-the-badge)](https://www.npmjs.com/package/claude-connect)
9
9
 
10
10
  ## Why Claude Connect
11
11
 
@@ -13,7 +13,7 @@
13
13
 
14
14
  ### Highlights
15
15
 
16
- - `OpenCode Go`, `Zen`, `Kimi`, `DeepSeek`, `OpenRouter`, and `Qwen` ready from the first launch
16
+ - `OpenCode Go`, `Zen`, `Kimi`, `DeepSeek`, `Ollama`, `OpenAI`, `Inception Labs`, `OpenRouter`, and `Qwen` ready from the first launch
17
17
- support for `Token` and `OAuth` when the provider allows it
18
18
- API keys shared per provider so the same token isn't repeated for every model
19
19
- reversible activation on top of the real `Claude Code` installation
@@ -78,6 +78,9 @@ When activating:
78
78
- `Zen` uses a direct connection or the gateway depending on the chosen model
79
79
- `Kimi` uses the local gateway and forwards to the Anthropic endpoint at `https://api.kimi.com/coding/`
80
80
- `DeepSeek` points to `https://api.deepseek.com/anthropic`
81
+ - `Ollama` asks for a local or remote URL, validates `/api/tags`, and uses the local gateway over `.../api/chat`
82
+ - `OpenAI` uses the local gateway over `https://api.openai.com/v1/chat/completions`
83
+ - `Inception Labs` uses the local gateway over `https://api.inceptionlabs.ai/v1/chat/completions`
81
84
- `OpenRouter` uses `openrouter/free` through the gateway over `https://openrouter.ai/api/v1`
82
85
- `Qwen` points to the local gateway `http://127.0.0.1:4310/anthropic`
83
86
 
@@ -89,6 +92,9 @@ When activating:
89
92
| `Zen` | Zen's `Claude*` + Zen's `chat/completions` models | `Token` | Mixed |
90
93
| `Kimi` | `kimi-for-coding` | `Token` | Local gateway |
91
94
| `DeepSeek` | `deepseek-chat`, `deepseek-reasoner` | `Token` | Direct |
95
+ | `Ollama` | models discovered from your server | `Servidor Ollama` | Local gateway |
96
+ | `OpenAI` | `gpt-5.4`, `gpt-5.4-mini`, `gpt-5.3-codex`, `gpt-5.2-codex`, `gpt-5.2`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini` | `Token` | Local gateway |
97
+ | `Inception Labs` | `mercury-2` | `Token` | Local gateway |
92
98
| `OpenRouter` | `openrouter/free` | `Token` | Local gateway |
93
99
| `Qwen` | `qwen3-coder-plus` | `OAuth`, `Token` | Local gateway |
94
100
 
@@ -103,6 +109,38 @@ Note on `Zen`:
103
109
- Zen models served via `chat/completions` go through the local gateway
104
110
- this first integration does not yet include the Zen models exposed via `responses` or the Google-style endpoints
105
111
 
112
+ Note on `OpenAI`:
113
+
114
+ - this integration uses `Chat Completions` through the local gateway
115
+ - the current bridge works well with the listed GPT/Codex models because Claude Code keeps speaking the Anthropic protocol to `claude-connect`
116
+ - the authentication supported today is `API key`; `OAuth` is not exposed for this provider
117
+ - `gpt-5.4` was validated with a real call through the local gateway
118
+ - official references:
119
+ - https://platform.openai.com/docs/api-reference/chat/create
120
+ - https://platform.openai.com/docs/api-reference/authentication
121
+ - https://developers.openai.com/api/docs/models
122
+
123
+ Note on `Inception Labs`:
124
+
125
+ - this first integration exposes only `mercury-2`, the official chat-compatible model on `v1/chat/completions`
126
+ - `Mercury Edit 2` is not published in Claude Connect yet because it uses `fim/edit` endpoints that do not fit Claude Code in this architecture
127
+ - supported authentication: `API key`
128
+ - official references:
129
+ - https://docs.inceptionlabs.ai/get-started/get-started
130
+ - https://docs.inceptionlabs.ai/get-started/authentication
131
+ - https://docs.inceptionlabs.ai/get-started/models
132
+
133
+ Note on `Ollama`:
134
+
135
+ - the server URL is defined when you create the connection
136
+ - it works for `localhost` as well as for a VPS or remote server with Ollama exposed
137
+ - Claude Connect queries `/api/tags` to list models and validate the connection before saving
138
+ - it then uses the native `POST /api/chat` endpoint, which proved more compatible with remote servers that expose `/v1/*` poorly
139
+ - remote servers can still fail due to timeouts, cloud auth, or poor model responses; the app now distinguishes those cases better
140
+ - official references:
141
+ - https://docs.ollama.com/openai
142
+ - https://docs.ollama.com/api/tags
143
+
106
144
  ## What It Stores
107
145
 
108
146
Claude Connect stores sensitive state outside the repo.
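The Ollama note above says the connection is validated by listing models from `/api/tags` before saving. A minimal sketch of that check, under assumptions: the endpoint shape follows Ollama's REST API, while the helper names and the timeout are illustrative, not claude-connect's actual code.

```javascript
// Build the /api/tags URL from a user-supplied base URL.
function buildTagsUrl(baseUrl) {
  return `${baseUrl.replace(/\/$/, '')}/api/tags`;
}

// /api/tags responds with { models: [{ name: 'llama3.2:latest', ... }] }.
function extractModelNames(payload) {
  return Array.isArray(payload?.models) ? payload.models.map((model) => model.name) : [];
}

// Validate the server and return its model names, or throw with the status.
async function listOllamaModels(baseUrl, { timeoutMs = 5000 } = {}) {
  const url = buildTagsUrl(baseUrl);
  const response = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
  if (!response.ok) {
    throw new Error(`Ollama responded ${response.status} at ${url}`);
  }
  return extractModelNames(await response.json());
}
```

A failed `listOllamaModels` call is what lets a connection be rejected before it is ever saved, rather than failing later mid-chat.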
@@ -125,14 +163,16 @@ That's where these live:
125
163
The local SQLite catalog is generated automatically at:
126
164
 
127
165
  ```text
128
- storage/claude-connect.sqlite
166
+ Linux: ~/.claude-connect/storage/claude-connect.sqlite
167
+ Windows: %APPDATA%\claude-connect\storage\claude-connect.sqlite
129
168
  ```
130
169
 
131
170
Important:
132
171
 
133
172
- that database is no longer versioned in git
134
173
- the catalog is seeded from `src/data/catalog-store.js`
135
- - this avoids annoying conflicts when running `git pull`
174
+ - it is no longer created in the folder where you run the command
175
+ - this avoids annoying conflicts on `git pull` and accidental `storage/` folders in other people's projects
136
176
 
137
177
  ## Claude Code Switching
138
178
 
@@ -149,6 +189,7 @@ That allows you to:
149
189
- activate another provider without touching files by hand
150
190
- avoid the `Auth conflict` between a `claude.ai` session and an `API key`
151
191
- return to your original state with `Revertir Claude`
192
+ - block activation if `Claude Code` is not actually installed yet
152
193
 
153
194
  ## Qwen OAuth
154
195
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "claude-connect",
3
- "version": "0.1.5",
3
+ "version": "0.1.7",
4
4
  "description": "CLI para configurar Claude Code con proveedores de modelos externos",
5
5
  "author": "wmcarlosv",
6
6
  "type": "module",
@@ -36,6 +36,8 @@
36
36
  "anthropic",
37
37
  "deepseek",
38
38
  "kimi",
39
+ "ollama",
40
+ "openai",
39
41
  "qwen",
40
42
  "terminal"
41
43
  ],
@@ -1,8 +1,14 @@
1
1
  import fs from 'node:fs';
2
2
  import path from 'node:path';
3
3
  import { DatabaseSync } from 'node:sqlite';
4
+ import { resolveClaudeConnectHomeSync } from '../lib/app-paths.js';
4
5
 
5
- export const defaultCatalogDbPath = path.join(process.cwd(), 'storage', 'claude-connect.sqlite');
6
+ export function getDefaultCatalogDbPath(options = {}) {
7
+ const pathModule = options.platform === 'win32' ? path.win32 : path.posix;
8
+ return pathModule.join(resolveClaudeConnectHomeSync(options), 'storage', 'claude-connect.sqlite');
9
+ }
10
+
11
+ export const defaultCatalogDbPath = getDefaultCatalogDbPath();
6
12
 
7
13
  const schemaSql = `
8
14
  PRAGMA foreign_keys = ON;
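The new `getDefaultCatalogDbPath` above joins a per-user home with `storage/claude-connect.sqlite` using the `path` flavor that matches the platform. A standalone sketch of that join; the home directories shown are illustrative examples, not values resolved from the environment.

```javascript
import path from 'node:path';

// Join a claude-connect home with the catalog path using the right
// separator per platform, mirroring the shape of getDefaultCatalogDbPath.
function catalogDbPath(home, platform) {
  const pathModule = platform === 'win32' ? path.win32 : path.posix;
  return pathModule.join(home, 'storage', 'claude-connect.sqlite');
}

console.log(catalogDbPath('/home/dev/.claude-connect', 'linux'));
// → /home/dev/.claude-connect/storage/claude-connect.sqlite
console.log(catalogDbPath('C:\\Users\\dev\\AppData\\Roaming\\claude-connect', 'win32'));
// → C:\Users\dev\AppData\Roaming\claude-connect\storage\claude-connect.sqlite
```

Selecting `path.win32`/`path.posix` explicitly (instead of the host default) is what lets the same function produce correct paths for either OS in tests.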
@@ -521,6 +527,197 @@ const seedProviders = [
521
527
  }
522
528
  ]
523
529
  },
530
+ {
531
+ id: 'ollama',
532
+ name: 'Ollama',
533
+ vendor: 'Ollama',
534
+ description: 'Servidor Ollama autohospedado. La conexion pide una base URL manual, descubre modelos via /api/tags y luego usa Chat Completions por el gateway local.',
535
+ docsUrl: 'https://docs.ollama.com/openai',
536
+ docsVerifiedAt: '2026-04-02',
537
+ baseUrl: 'http://127.0.0.1:11434',
538
+ defaultModelId: null,
539
+ defaultAuthMethodId: 'server',
540
+ defaultApiKeyEnvVar: 'OLLAMA_API_KEY',
541
+ models: [],
542
+ authMethods: [
543
+ {
544
+ id: 'server',
545
+ name: 'Servidor Ollama',
546
+ description: 'Conexion sin API key propia de Claude Connect. Solo necesitas la URL de tu servidor Ollama local o remoto.',
547
+ credentialKind: 'none',
548
+ sortOrder: 1,
549
+ isDefault: 1
550
+ }
551
+ ]
552
+ },
553
+ {
554
+ id: 'openai',
555
+ name: 'OpenAI',
556
+ vendor: 'OpenAI',
557
+ description: 'OpenAI con modelos GPT y Codex orientados a coding. Claude Code se conecta a traves del gateway local para mantener compatibilidad Anthropic, herramientas y vision.',
558
+ docsUrl: 'https://developers.openai.com/api/docs/models',
559
+ docsVerifiedAt: '2026-04-02',
560
+ baseUrl: 'https://api.openai.com/v1',
561
+ defaultModelId: 'gpt-5.4',
562
+ defaultAuthMethodId: 'token',
563
+ defaultApiKeyEnvVar: 'OPENAI_API_KEY',
564
+ models: [
565
+ {
566
+ id: 'gpt-5.4',
567
+ name: 'GPT-5.4',
568
+ category: 'OpenAI Chat Completions',
569
+ contextWindow: '1M',
570
+ summary: 'Modelo frontier actual de OpenAI para trabajo complejo, coding y flujos profesionales.',
571
+ upstreamModelId: 'gpt-5.4',
572
+ transportMode: 'gateway',
573
+ apiStyle: 'openai-chat',
574
+ apiBaseUrl: 'https://api.openai.com/v1',
575
+ apiPath: '/chat/completions',
576
+ authEnvMode: 'auth_token',
577
+ sortOrder: 1,
578
+ isDefault: 1
579
+ },
580
+ {
581
+ id: 'gpt-5.4-mini',
582
+ name: 'GPT-5.4 Mini',
583
+ category: 'OpenAI Chat Completions',
584
+ contextWindow: '400K',
585
+ summary: 'Variante mas rapida y economica de GPT-5.4 para coding, subagentes y alto volumen.',
586
+ upstreamModelId: 'gpt-5.4-mini',
587
+ transportMode: 'gateway',
588
+ apiStyle: 'openai-chat',
589
+ apiBaseUrl: 'https://api.openai.com/v1',
590
+ apiPath: '/chat/completions',
591
+ authEnvMode: 'auth_token',
592
+ sortOrder: 2,
593
+ isDefault: 0
594
+ },
595
+ {
596
+ id: 'gpt-5.3-codex',
597
+ name: 'GPT-5.3 Codex',
598
+ category: 'OpenAI Chat Completions',
599
+ contextWindow: '400K',
600
+ summary: 'Modelo Codex mas capaz de OpenAI para tareas agenticas de programacion.',
601
+ upstreamModelId: 'gpt-5.3-codex',
602
+ transportMode: 'gateway',
603
+ apiStyle: 'openai-chat',
604
+ apiBaseUrl: 'https://api.openai.com/v1',
605
+ apiPath: '/chat/completions',
606
+ authEnvMode: 'auth_token',
607
+ sortOrder: 3,
608
+ isDefault: 0
609
+ },
610
+ {
611
+ id: 'gpt-5.2-codex',
612
+ name: 'GPT-5.2 Codex',
613
+ category: 'OpenAI Chat Completions',
614
+ contextWindow: '400K',
615
+ summary: 'Modelo Codex inteligente para tareas largas de coding y automatizacion.',
616
+ upstreamModelId: 'gpt-5.2-codex',
617
+ transportMode: 'gateway',
618
+ apiStyle: 'openai-chat',
619
+ apiBaseUrl: 'https://api.openai.com/v1',
620
+ apiPath: '/chat/completions',
621
+ authEnvMode: 'auth_token',
622
+ sortOrder: 4,
623
+ isDefault: 0
624
+ },
625
+ {
626
+ id: 'gpt-5.2',
627
+ name: 'GPT-5.2',
628
+ category: 'OpenAI Chat Completions',
629
+ contextWindow: '400K',
630
+ summary: 'Modelo frontier previo de OpenAI para trabajo profesional con razonamiento configurable.',
631
+ upstreamModelId: 'gpt-5.2',
632
+ transportMode: 'gateway',
633
+ apiStyle: 'openai-chat',
634
+ apiBaseUrl: 'https://api.openai.com/v1',
635
+ apiPath: '/chat/completions',
636
+ authEnvMode: 'auth_token',
637
+ sortOrder: 5,
638
+ isDefault: 0
639
+ },
640
+ {
641
+ id: 'gpt-5.1-codex-max',
642
+ name: 'GPT-5.1 Codex Max',
643
+ category: 'OpenAI Chat Completions',
644
+ contextWindow: '400K',
645
+ summary: 'Variante Codex optimizada para tareas de larga duracion y sesiones de coding mas extensas.',
646
+ upstreamModelId: 'gpt-5.1-codex-max',
647
+ transportMode: 'gateway',
648
+ apiStyle: 'openai-chat',
649
+ apiBaseUrl: 'https://api.openai.com/v1',
650
+ apiPath: '/chat/completions',
651
+ authEnvMode: 'auth_token',
652
+ sortOrder: 6,
653
+ isDefault: 0
654
+ },
655
+ {
656
+ id: 'gpt-5.1-codex-mini',
657
+ name: 'GPT-5.1 Codex Mini',
658
+ category: 'OpenAI Chat Completions',
659
+ contextWindow: '400K',
660
+ summary: 'Version mas ligera y economica de la linea Codex 5.1 para iteraciones rapidas.',
661
+ upstreamModelId: 'gpt-5.1-codex-mini',
662
+ transportMode: 'gateway',
663
+ apiStyle: 'openai-chat',
664
+ apiBaseUrl: 'https://api.openai.com/v1',
665
+ apiPath: '/chat/completions',
666
+ authEnvMode: 'auth_token',
667
+ sortOrder: 7,
668
+ isDefault: 0
669
+ }
670
+ ],
671
+ authMethods: [
672
+ {
673
+ id: 'token',
674
+ name: 'Token',
675
+ description: 'Conexion por API key de OpenAI.',
676
+ credentialKind: 'env_var',
677
+ sortOrder: 1,
678
+ isDefault: 1
679
+ }
680
+ ]
681
+ },
682
+ {
683
+ id: 'inception',
684
+ name: 'Inception Labs',
685
+ vendor: 'Inception Labs',
686
+ description: 'Inception Platform con Mercury 2 sobre un endpoint OpenAI-compatible. Claude Code se conecta a traves del gateway local para mantener compatibilidad Anthropic y herramientas.',
687
+ docsUrl: 'https://docs.inceptionlabs.ai/get-started/get-started',
688
+ docsVerifiedAt: '2026-04-03',
689
+ baseUrl: 'https://api.inceptionlabs.ai/v1',
690
+ defaultModelId: 'mercury-2',
691
+ defaultAuthMethodId: 'token',
692
+ defaultApiKeyEnvVar: 'INCEPTION_API_KEY',
693
+ models: [
694
+ {
695
+ id: 'mercury-2',
696
+ name: 'Mercury 2',
697
+ category: 'OpenAI Chat Completions',
698
+ contextWindow: '128K',
699
+ summary: 'Modelo generalista y de razonamiento de Inception Labs expuesto por v1/chat/completions.',
700
+ upstreamModelId: 'mercury-2',
701
+ transportMode: 'gateway',
702
+ apiStyle: 'openai-chat',
703
+ apiBaseUrl: 'https://api.inceptionlabs.ai/v1',
704
+ apiPath: '/chat/completions',
705
+ authEnvMode: 'auth_token',
706
+ sortOrder: 1,
707
+ isDefault: 1
708
+ }
709
+ ],
710
+ authMethods: [
711
+ {
712
+ id: 'token',
713
+ name: 'Token',
714
+ description: 'Conexion por API key de Inception Labs.',
715
+ credentialKind: 'env_var',
716
+ sortOrder: 1,
717
+ isDefault: 1
718
+ }
719
+ ]
720
+ },
524
721
  {
525
722
  id: 'openrouter',
526
723
  name: 'OpenRouter',
@@ -869,7 +1066,7 @@ function mapOAuthRow(row) {
869
1066
  };
870
1067
  }
871
1068
 
872
- export function createCatalogStore({ filename = defaultCatalogDbPath } = {}) {
1069
+ export function createCatalogStore({ filename = getDefaultCatalogDbPath() } = {}) {
873
1070
  if (filename !== ':memory:') {
874
1071
  fs.mkdirSync(path.dirname(filename), { recursive: true });
875
1072
  }
@@ -192,6 +192,10 @@ function buildOpenAIContentPartFromAnthropicBlock(block) {
192
192
  }
193
193
 
194
194
  function safeParseJson(value) {
195
+ if (isObject(value)) {
196
+ return value;
197
+ }
198
+
195
199
  if (typeof value !== 'string' || value.length === 0) {
196
200
  return {};
197
201
  }
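The `safeParseJson` change above matters for Ollama: its `/api/chat` tool calls return `arguments` as an already-parsed object, while OpenAI-style APIs return a JSON string. A standalone sketch of the same tolerant parser (not the module's exact code):

```javascript
// Accept tool-call arguments as either an already-parsed object
// (Ollama /api/chat) or a JSON string (OpenAI chat/completions),
// falling back to {} on anything unparseable.
function safeParseJson(value) {
  if (typeof value === 'object' && value !== null && !Array.isArray(value)) {
    return value;
  }
  if (typeof value !== 'string' || value.length === 0) {
    return {};
  }
  try {
    return JSON.parse(value);
  } catch {
    return {};
  }
}

console.log(safeParseJson({ path: 'a.txt' }));   // passed through untouched
console.log(safeParseJson('{"path":"a.txt"}')); // parsed from the string
console.log(safeParseJson('not json'));         // falls back to {}
```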
@@ -232,6 +236,10 @@ export function estimateTokenCountFromAnthropicRequest(body) {
232
236
  return Math.max(1, Math.ceil(totalLength / 4));
233
237
  }
234
238
 
239
+ function usesMaxCompletionTokens(model) {
240
+ return typeof model === 'string' && /^gpt-5(?:[.-]|$)/.test(model);
241
+ }
242
+
235
243
  export function buildOpenAIRequestFromAnthropic({ body, model }) {
236
244
  const messages = [];
237
245
  const systemText = collectText(body.system).trim();
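The `usesMaxCompletionTokens` guard added above routes GPT-5-family models to OpenAI's newer `max_completion_tokens` parameter while other models keep `max_tokens`. A quick check of which model ids the regex matches (the `tokenLimitParam` helper is illustrative):

```javascript
// GPT-5-family ids (gpt-5, gpt-5.4, gpt-5-mini, gpt-5.1-codex-max, ...)
// take max_completion_tokens; everything else keeps max_tokens.
function usesMaxCompletionTokens(model) {
  return typeof model === 'string' && /^gpt-5(?:[.-]|$)/.test(model);
}

function tokenLimitParam(model, maxTokens) {
  return usesMaxCompletionTokens(model)
    ? { max_completion_tokens: maxTokens }
    : { max_tokens: maxTokens };
}

console.log(tokenLimitParam('gpt-5.4', 1024)); // { max_completion_tokens: 1024 }
console.log(tokenLimitParam('gpt-4o', 1024));  // { max_tokens: 1024 }
// 'gpt-55' does NOT match: after "gpt-5" the regex requires '.', '-', or end.
console.log(usesMaxCompletionTokens('gpt-55')); // false
```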
@@ -323,7 +331,11 @@ export function buildOpenAIRequestFromAnthropic({ body, model }) {
323
331
  };
324
332
 
325
333
  if (typeof body.max_tokens === 'number') {
326
- request.max_tokens = body.max_tokens;
334
+ if (usesMaxCompletionTokens(model)) {
335
+ request.max_completion_tokens = body.max_tokens;
336
+ } else {
337
+ request.max_tokens = body.max_tokens;
338
+ }
327
339
  }
328
340
 
329
341
  if (typeof body.temperature === 'number') {
@@ -368,6 +380,134 @@ export function buildOpenAIRequestFromAnthropic({ body, model }) {
368
380
  return request;
369
381
  }
370
382
 
383
+ export function buildOllamaRequestFromAnthropic({ body, model }) {
384
+ const messages = [];
385
+ const toolUseIdToName = new Map();
386
+ const systemText = collectText(body.system).trim();
387
+
388
+ if (systemText.length > 0) {
389
+ messages.push({
390
+ role: 'system',
391
+ content: systemText
392
+ });
393
+ }
394
+
395
+ for (const message of Array.isArray(body.messages) ? body.messages : []) {
396
+ const blocks = normalizeBlocks(message?.content);
397
+
398
+ if (message?.role === 'user') {
399
+ let textParts = [];
400
+ let imageParts = [];
401
+
402
+ const flushUserMessage = () => {
403
+ if (textParts.length === 0 && imageParts.length === 0) {
404
+ return;
405
+ }
406
+
407
+ messages.push({
408
+ role: 'user',
409
+ content: textParts.join('\n\n'),
410
+ ...(imageParts.length > 0 ? { images: imageParts } : {})
411
+ });
412
+
413
+ textParts = [];
414
+ imageParts = [];
415
+ };
416
+
417
+ for (const block of blocks) {
418
+ if (block?.type === 'tool_result') {
419
+ flushUserMessage();
420
+ messages.push({
421
+ role: 'tool',
422
+ tool_name: toolUseIdToName.get(block.tool_use_id) ?? block.tool_use_id ?? 'tool',
423
+ content: collectText(block.content)
424
+ });
425
+ continue;
426
+ }
427
+
428
+ if (block?.type === 'image' && block?.source?.type === 'base64' && typeof block?.source?.data === 'string') {
429
+ imageParts.push(block.source.data);
430
+ continue;
431
+ }
432
+
433
+ textParts.push(collectText(block?.text ?? block));
434
+ }
435
+
436
+ flushUserMessage();
437
+ continue;
438
+ }
439
+
440
+ if (message?.role === 'assistant') {
441
+ const textParts = [];
442
+ const toolCalls = [];
443
+
444
+ for (const block of blocks) {
445
+ if (block?.type === 'tool_use') {
446
+ toolUseIdToName.set(block.id, block.name);
447
+ toolCalls.push({
448
+ function: {
449
+ name: block.name,
450
+ arguments: block.input ?? {}
451
+ }
452
+ });
453
+ continue;
454
+ }
455
+
456
+ textParts.push(collectText(block?.text ?? block));
457
+ }
458
+
459
+ messages.push({
460
+ role: 'assistant',
461
+ content: textParts.join('\n\n'),
462
+ ...(toolCalls.length > 0 ? { tool_calls: toolCalls } : {})
463
+ });
464
+ }
465
+ }
466
+
467
+ const request = {
468
+ model,
469
+ messages,
470
+ stream: false
471
+ };
472
+
473
+ if (typeof body.max_tokens === 'number') {
474
+ request.options = {
475
+ ...(isObject(request.options) ? request.options : {}),
476
+ num_predict: body.max_tokens
477
+ };
478
+ }
479
+
480
+ if (typeof body.temperature === 'number') {
481
+ request.options = {
482
+ ...(isObject(request.options) ? request.options : {}),
483
+ temperature: body.temperature
484
+ };
485
+ }
486
+
487
+ if (Array.isArray(body.stop_sequences) && body.stop_sequences.length > 0) {
488
+ request.options = {
489
+ ...(isObject(request.options) ? request.options : {}),
490
+ stop: body.stop_sequences
491
+ };
492
+ }
493
+
494
+ if (Array.isArray(body.tools) && body.tools.length > 0) {
495
+ request.tools = body.tools.map((tool) => ({
496
+ type: 'function',
497
+ function: {
498
+ name: tool.name,
499
+ description: tool.description ?? '',
500
+ parameters: tool.input_schema ?? {
501
+ type: 'object',
502
+ properties: {}
503
+ }
504
+ }
505
+ }));
506
+ }
507
+
508
+ return request;
509
+ }
510
+
371
511
  export function buildAnthropicMessageFromOpenAI({ response, requestedModel }) {
372
512
  const choice = response?.choices?.[0] ?? {};
373
513
  const assistantMessage = choice?.message ?? {};
@@ -413,6 +553,47 @@ export function buildAnthropicMessageFromOpenAI({ response, requestedModel }) {
413
553
  };
414
554
  }
415
555
 
556
+ export function buildAnthropicMessageFromOllama({ response, requestedModel }) {
557
+ const assistantMessage = isObject(response?.message) ? response.message : {};
558
+ const content = [];
559
+ const text = typeof assistantMessage.content === 'string' ? assistantMessage.content : '';
560
+ const toolCalls = Array.isArray(assistantMessage.tool_calls) ? assistantMessage.tool_calls : [];
561
+
562
+ if (text.length > 0) {
563
+ content.push({
564
+ type: 'text',
565
+ text
566
+ });
567
+ }
568
+
569
+ for (const toolCall of toolCalls) {
570
+ content.push({
571
+ type: 'tool_use',
572
+ id: `toolu_${crypto.randomUUID().replace(/-/g, '')}`,
573
+ name: toolCall?.function?.name || 'tool',
574
+ input: safeParseJson(toolCall?.function?.arguments)
575
+ });
576
+ }
577
+
578
+ return {
579
+ id: `msg_${crypto.randomUUID().replace(/-/g, '')}`,
580
+ type: 'message',
581
+ role: 'assistant',
582
+ model: requestedModel || response?.model || 'unknown',
583
+ content,
584
+ stop_reason: toolCalls.length > 0
585
+ ? 'tool_use'
586
+ : response?.done_reason === 'length'
587
+ ? 'max_tokens'
588
+ : 'end_turn',
589
+ stop_sequence: null,
590
+ usage: {
591
+ input_tokens: Number(response?.prompt_eval_count ?? 0),
592
+ output_tokens: Number(response?.eval_count ?? 0)
593
+ }
594
+ };
595
+ }
596
+
416
597
  function writeSseEvent(response, event, payload) {
417
598
  response.write(`event: ${event}\n`);
418
599
  response.write(`data: ${JSON.stringify(payload)}\n\n`);
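The `buildAnthropicMessageFromOllama` hunk above maps Ollama's `done_reason` and eval counters onto Anthropic's `stop_reason`/`usage` shape. A reduced sketch of just that mapping; the field names follow Ollama's `/api/chat` response, and the helper name is illustrative:

```javascript
// Map the tail of an Ollama /api/chat response onto Anthropic's
// stop_reason and usage fields, as the gateway hunk above does:
// tool calls win, then 'length' → 'max_tokens', else 'end_turn'.
function mapOllamaStopAndUsage(response, hasToolCalls) {
  return {
    stop_reason: hasToolCalls
      ? 'tool_use'
      : response?.done_reason === 'length'
        ? 'max_tokens'
        : 'end_turn',
    usage: {
      input_tokens: Number(response?.prompt_eval_count ?? 0),
      output_tokens: Number(response?.eval_count ?? 0)
    }
  };
}

console.log(mapOllamaStopAndUsage({ done_reason: 'stop', prompt_eval_count: 12, eval_count: 40 }, false));
// → { stop_reason: 'end_turn', usage: { input_tokens: 12, output_tokens: 40 } }
```

Defaulting the counters to `0` keeps the usage block well-formed even when a remote server omits them.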
@@ -6,7 +6,9 @@ import process from 'node:process';
6
6
  import { spawn } from 'node:child_process';
7
7
  import { fileURLToPath } from 'node:url';
8
8
  import {
9
+ buildAnthropicMessageFromOllama,
9
10
  buildAnthropicMessageFromOpenAI,
11
+ buildOllamaRequestFromAnthropic,
10
12
  buildOpenAIRequestFromAnthropic,
11
13
  estimateTokenCountFromAnthropicRequest,
12
14
  normalizeAnthropicRequestForUpstream,
@@ -33,6 +35,30 @@ function isObject(value) {
33
35
  return typeof value === 'object' && value !== null && !Array.isArray(value);
34
36
  }
35
37
 
38
+ function describeRequestError(error) {
39
+ if (!(error instanceof Error)) {
40
+ return String(error);
41
+ }
42
+
43
+ const parts = [error.message];
44
+ const cause = error.cause;
45
+
46
+ if (cause && typeof cause === 'object') {
47
+ const code = 'code' in cause && typeof cause.code === 'string' ? cause.code : null;
48
+ const message = 'message' in cause && typeof cause.message === 'string' ? cause.message : null;
49
+
50
+ if (code) {
51
+ parts.push(`code=${code}`);
52
+ }
53
+
54
+ if (message && message !== error.message) {
55
+ parts.push(message);
56
+ }
57
+ }
58
+
59
+ return parts.join(' · ');
60
+ }
61
+
36
62
  async function terminatePid(pid) {
37
63
  if (!isProcessAlive(pid)) {
38
64
  return false;
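The new `describeRequestError` above flattens `fetch` failures, which wrap the underlying network error in `error.cause`, into one readable line for the gateway's error messages. A standalone sketch of the same flattening:

```javascript
// Flatten an Error whose cause carries a network code (as Node's fetch
// does on connection failures) into "message · code=... · detail".
function describeRequestError(error) {
  if (!(error instanceof Error)) {
    return String(error);
  }
  const parts = [error.message];
  const cause = error.cause;
  if (cause && typeof cause === 'object') {
    if (typeof cause.code === 'string') {
      parts.push(`code=${cause.code}`);
    }
    if (typeof cause.message === 'string' && cause.message !== error.message) {
      parts.push(cause.message);
    }
  }
  return parts.join(' · ');
}

const failed = new Error('fetch failed', {
  cause: { code: 'ECONNREFUSED', message: 'connect ECONNREFUSED 10.0.0.5:11434' }
});
console.log(describeRequestError(failed));
// → fetch failed · code=ECONNREFUSED · connect ECONNREFUSED 10.0.0.5:11434
```

This is what turns an opaque `fetch failed` into the actionable "server not reachable" detail the Ollama note in the README promises.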
@@ -136,6 +162,18 @@ function getUpstreamModelId(profile) {
136
162
  }
137
163
 
138
164
  function resolveGatewayUpstreamConfig(profile) {
165
+ if (profile?.provider?.id === 'ollama') {
166
+ return {
167
+ upstreamBaseUrl: typeof profile?.endpoint?.baseUrl === 'string' && profile.endpoint.baseUrl.length > 0
168
+ ? profile.endpoint.baseUrl
169
+ : typeof profile?.model?.apiBaseUrl === 'string' && profile.model.apiBaseUrl.length > 0
170
+ ? profile.model.apiBaseUrl
171
+ : 'http://127.0.0.1:11434',
172
+ upstreamApiStyle: 'ollama-chat',
173
+ upstreamApiPath: '/api/chat'
174
+ };
175
+ }
176
+
139
177
  return {
140
178
  upstreamBaseUrl: typeof profile?.model?.apiBaseUrl === 'string' && profile.model.apiBaseUrl.length > 0
141
179
  ? profile.model.apiBaseUrl
@@ -169,6 +207,19 @@ async function resolveGatewayContext() {
169
207
  const profile = await readProfileFile(switchState.profilePath);
170
208
  const authMethod = profile?.auth?.method === 'api_key' ? 'token' : profile?.auth?.method;
171
209
 
210
+ if (authMethod === 'server' && profile?.provider?.id === 'ollama') {
211
+ const upstream = resolveGatewayUpstreamConfig(profile);
212
+
213
+ return {
214
+ profile,
215
+ authMethod,
216
+ upstreamBaseUrl: upstream.upstreamBaseUrl,
217
+ upstreamApiStyle: upstream.upstreamApiStyle,
218
+ upstreamApiPath: upstream.upstreamApiPath,
219
+ accessToken: 'ollama'
220
+ };
221
+ }
222
+
172
223
  if (authMethod === 'token') {
173
224
  const envVar = profile?.auth?.envVar;
174
225
  let token = typeof envVar === 'string' ? process.env[envVar] : '';
@@ -236,11 +287,25 @@ async function resolveGatewayContext() {
236
287
  }
237
288
 
238
289
  async function forwardUpstreamRequest({ targetUrl, headers, payload, context, refreshOnUnauthorized = true }) {
239
- const response = await fetch(targetUrl, {
240
- method: 'POST',
241
- headers,
242
- body: JSON.stringify(payload)
243
- });
290
+ let response;
291
+
292
+ try {
293
+ response = await fetch(targetUrl, {
294
+ method: 'POST',
295
+ headers,
296
+ body: JSON.stringify(payload)
297
+ });
298
+ } catch (error) {
299
+ const providerName = context?.profile?.provider?.name ?? context?.profile?.provider?.id ?? 'El proveedor';
300
+
301
+ if (context?.profile?.provider?.id === 'ollama') {
302
+ throw new Error(
303
+ `${providerName} no respondio en ${targetUrl}. Revisa que el servidor remoto este accesible, que el puerto este expuesto y que Ollama escuche en esa URL. Detalle: ${describeRequestError(error)}`
304
+ );
305
+ }
306
+
307
+ throw new Error(`${providerName} no respondio en ${targetUrl}. Detalle: ${describeRequestError(error)}`);
308
+ }
244
309
 
245
310
  const responsePayload = await response.json().catch(() => ({}));
246
311
 
@@ -305,6 +370,22 @@ async function forwardChatCompletion({ openAiRequest, context, refreshOnUnauthor
305
370
  });
306
371
  }
307
372
 
373
+ async function forwardOllamaChat({ ollamaRequest, context, refreshOnUnauthorized = true }) {
374
+ const targetUrl = `${context.upstreamBaseUrl.replace(/\/$/, '')}${context.upstreamApiPath || '/api/chat'}`;
375
+
376
+ return forwardUpstreamRequest({
377
+ targetUrl,
378
+ headers: {
379
+ 'content-type': 'application/json',
380
+ accept: 'application/json',
381
+ 'user-agent': 'claude-connect-gateway/0.1.0'
382
+ },
383
+ payload: ollamaRequest,
384
+ context,
385
+ refreshOnUnauthorized
386
+ });
387
+ }
388
+
308
389
  async function forwardAnthropicMessage({ requestBody, context, refreshOnUnauthorized = true }) {
309
390
  const targetUrl = `${context.upstreamBaseUrl.replace(/\/$/, '')}${context.upstreamApiPath || '/v1/messages'}`;
310
391
 
@@ -384,6 +465,36 @@ async function handleMessages(request, response) {
384
465
  return;
385
466
  }
386
467
 
468
+ if (context.upstreamApiStyle === 'ollama-chat') {
469
+ const ollamaRequest = buildOllamaRequestFromAnthropic({
470
+ body,
471
+ model: getUpstreamModelId(context.profile)
472
+ });
473
+ const upstreamResponse = await forwardOllamaChat({
474
+ ollamaRequest,
475
+ context
476
+ });
477
+ const anthropicMessage = buildAnthropicMessageFromOllama({
478
+ response: upstreamResponse,
479
+ requestedModel: getUpstreamModelId(context.profile)
480
+ });
481
+
482
+ if (body.stream === true) {
483
+ response.writeHead(200, {
484
+ 'content-type': 'text/event-stream; charset=utf-8',
485
+ 'cache-control': 'no-cache, no-transform',
486
+ connection: 'keep-alive',
487
+ 'x-accel-buffering': 'no'
488
+ });
489
+ writeAnthropicStreamFromMessage(response, anthropicMessage);
490
+ response.end();
491
+ return;
492
+ }
493
+
494
+ sendJson(response, 200, anthropicMessage);
495
+ return;
496
+ }
497
+
387
498
  if (context.upstreamApiStyle !== 'openai-chat') {
388
499
  throw new Error(`El gateway todavia no soporta el estilo ${context.upstreamApiStyle} para ${context.profile.provider.name}.`);
389
500
  }
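Both forwarders above build the upstream URL the same way: strip one trailing slash from the base, then append the API path with a fallback. A sketch of that join, exercised with the Ollama and OpenAI defaults from this diff (the helper name is illustrative):

```javascript
// Join an upstream base URL and API path as the gateway forwarders do:
// drop a single trailing slash, then append the path, falling back to
// the style's default path when none is configured.
function buildTargetUrl(baseUrl, apiPath, fallbackPath) {
  return `${baseUrl.replace(/\/$/, '')}${apiPath || fallbackPath}`;
}

console.log(buildTargetUrl('http://127.0.0.1:11434/', '/api/chat', '/api/chat'));
// → http://127.0.0.1:11434/api/chat
console.log(buildTargetUrl('https://api.openai.com/v1', '', '/chat/completions'));
// → https://api.openai.com/v1/chat/completions
```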
@@ -1,4 +1,5 @@
1
1
  import fs from 'node:fs/promises';
2
+ import fsSync from 'node:fs';
2
3
  import os from 'node:os';
3
4
  import path from 'node:path';
4
5
  import process from 'node:process';
@@ -48,6 +49,14 @@ async function pathExists(targetPath) {
48
49
  }
49
50
  }
50
51
 
52
+ function pathExistsSync(targetPath) {
53
+ try {
54
+ return fsSync.existsSync(targetPath);
55
+ } catch (_error) {
56
+ return false;
57
+ }
58
+ }
59
+
51
60
  function defaultHomedir(env, fallbackHomedir) {
52
61
  const pathModule = getPathModule(process.platform);
53
62
 
@@ -179,6 +188,18 @@ export async function resolveClaudeConnectHome(options = {}) {
179
188
  return candidates[0];
180
189
  }
181
190
 
191
+ export function resolveClaudeConnectHomeSync(options = {}) {
192
+ const candidates = buildClaudeConnectHomeCandidates(options);
193
+
194
+ for (const candidate of candidates) {
195
+ if (pathExistsSync(candidate)) {
196
+ return candidate;
197
+ }
198
+ }
199
+
200
+ return candidates[0];
201
+ }
202
+
182
203
  export async function resolveClaudeSettingsPath(options = {}) {
183
204
  if (typeof options.env?.CLAUDE_SETTINGS_PATH === 'string' && options.env.CLAUDE_SETTINGS_PATH.trim().length > 0) {
184
205
  return buildClaudeSettingsPathCandidates(options)[0];
@@ -244,6 +265,8 @@ export async function resolveClaudeConnectPaths(options = {}) {
244
265
 
245
266
  return {
246
267
  claudeConnectHome,
268
+ storageDir: path.join(claudeConnectHome, 'storage'),
269
+ catalogDbPath: path.join(claudeConnectHome, 'storage', 'claude-connect.sqlite'),
247
270
  profilesDir: path.join(claudeConnectHome, 'profiles'),
248
271
  tokensDir: path.join(claudeConnectHome, 'tokens'),
249
272
  secretsDir: path.join(claudeConnectHome, 'secrets'),
@@ -269,3 +292,90 @@ export async function resolveClaudePaths(options = {}) {
269
292
  ...claudeConnectPaths
270
293
  };
271
294
  }
295
+
296
+ function buildExecutableNames(command, platform = process.platform, env = process.env) {
297
+ if (platform !== 'win32') {
298
+ return [command];
299
+ }
300
+
301
+ const pathext = typeof env.PATHEXT === 'string' && env.PATHEXT.length > 0
302
+ ? env.PATHEXT.split(';').filter(Boolean)
303
+ : ['.EXE', '.CMD', '.BAT', '.COM'];
304
+ const hasExt = path.win32.extname(command).length > 0;
305
+
306
+ if (hasExt) {
307
+ return [command];
308
+ }
309
+
310
+ return pathext.map((ext) => `${command}${ext.toLowerCase()}`);
311
+ }
312
+
313
+ export async function findExecutableOnPath(command, {
314
+ platform = process.platform,
315
+ env = process.env
316
+ } = {}) {
317
+ const pathModule = getPathModule(platform);
318
+ const pathValue = typeof env.PATH === 'string' ? env.PATH : '';
319
+ const pathEntries = pathValue.split(pathModule.delimiter).filter(Boolean);
320
+ const commandNames = buildExecutableNames(command, platform, env);
321
+
322
+ for (const directory of pathEntries) {
323
+ for (const commandName of commandNames) {
324
+ const candidate = pathModule.join(directory, commandName);
325
+
326
+ if (await pathExists(candidate)) {
327
+ return candidate;
328
+ }
329
+ }
330
+ }
331
+
332
+ return null;
333
+ }
334
+
335
+ export async function detectClaudeCodeInstallation(options = {}) {
336
+ const settingsCandidates = buildClaudeSettingsPathCandidates(options);
337
+ const accountCandidates = buildClaudeAccountPathCandidates(options);
338
+ const credentialsCandidates = buildClaudeCredentialsPathCandidates(options);
339
+ const executablePath = await findExecutableOnPath('claude', options);
340
+
341
+ const [existingSettingsPath, existingAccountPath, existingCredentialsPath] = await Promise.all([
342
+ (async () => {
343
+ for (const candidate of settingsCandidates) {
344
+ if (await pathExists(candidate)) {
345
+ return candidate;
346
+ }
347
+ }
348
+
349
+ return null;
350
+ })(),
351
+ (async () => {
352
+ for (const candidate of accountCandidates) {
353
+ if (await pathExists(candidate)) {
354
+ return candidate;
355
+ }
356
+ }
357
+
358
+ return null;
359
+ })(),
360
+ (async () => {
361
+ for (const candidate of credentialsCandidates) {
362
+ if (await pathExists(candidate)) {
363
+ return candidate;
364
+ }
365
+ }
366
+
367
+ return null;
368
+ })()
369
+ ]);
370
+
371
+ return {
372
+ isInstalled: Boolean(executablePath || existingSettingsPath || existingAccountPath || existingCredentialsPath),
373
+ executablePath,
374
+ existingSettingsPath,
375
+ existingAccountPath,
376
+ existingCredentialsPath,
377
+ settingsCandidates,
378
+ accountCandidates,
379
+ credentialsCandidates
380
+ };
381
+ }
@@ -1,6 +1,6 @@
  import fs from 'node:fs/promises';
  import path from 'node:path';
- import { resolveClaudePaths } from './app-paths.js';
+ import { detectClaudeCodeInstallation, resolveClaudePaths } from './app-paths.js';
  import { readManagedProviderTokenSecret, readManagedTokenSecret } from './secrets.js';

  function isObject(value) {
@@ -99,6 +99,10 @@ export async function readSwitchState() {
  }

  async function resolveTokenValueForProfile(profile) {
+   if (profile?.provider?.id === 'ollama') {
+     return 'ollama';
+   }
+
    const envVar = profile?.auth?.envVar;
    const envToken = typeof envVar === 'string' ? process.env[envVar] : '';

@@ -163,6 +167,16 @@ export async function resolveClaudeTransportForProfile({
    };
  }

+   if (authMethod === 'server') {
+     return {
+       connectionMode: 'gateway',
+       connectionBaseUrl: gatewayBaseUrl,
+       authToken: 'claude-connect-local',
+       authEnvMode: 'auth_token',
+       extraEnv: {}
+     };
+   }
+
    return {
      connectionMode: 'gateway',
      connectionBaseUrl: gatewayBaseUrl,
@@ -212,6 +226,9 @@ export function buildClaudeSettingsForProfile({
    if (authMethod === 'token') {
      env.CLAUDE_CONNECT_TOKEN_ENV_VAR = profile.auth.envVar;
      delete env.CLAUDE_CONNECT_TOKEN_FILE;
+   } else if (authMethod === 'server') {
+     delete env.CLAUDE_CONNECT_TOKEN_ENV_VAR;
+     delete env.CLAUDE_CONNECT_TOKEN_FILE;
    } else if (authMethod === 'oauth' && profile.auth.oauth?.tokenFile) {
      env.CLAUDE_CONNECT_TOKEN_FILE = profile.auth.oauth.tokenFile;
      delete env.CLAUDE_CONNECT_TOKEN_ENV_VAR;
@@ -223,6 +240,14 @@ export function buildClaudeSettingsForProfile({
  }

  export async function activateClaudeProfile({ profile, gatewayBaseUrl = 'http://127.0.0.1:4310/anthropic' }) {
+   const installation = await detectClaudeCodeInstallation();
+
+   if (!installation.isInstalled) {
+     throw new Error(
+       'Claude Code no parece estar instalado en esta maquina. Instala o ejecuta Claude Code primero y luego vuelve a activar la conexion.'
+     );
+   }
+
    const {
      claudeSettingsPath,
      claudeAccountPath,
package/src/lib/ollama.js ADDED
@@ -0,0 +1,100 @@
+ function describeRequestError(error) {
+   if (error && typeof error === 'object') {
+     if ('cause' in error && error.cause && typeof error.cause === 'object' && 'message' in error.cause) {
+       return String(error.cause.message);
+     }
+
+     if ('message' in error) {
+       return String(error.message);
+     }
+   }
+
+   return String(error);
+ }
+
+ export function normalizeOllamaBaseUrl(value) {
+   const trimmed = typeof value === 'string' ? value.trim() : '';
+
+   if (trimmed.length === 0) {
+     throw new Error('La URL de Ollama no puede quedar vacia.');
+   }
+
+   const withProtocol = /^https?:\/\//i.test(trimmed)
+     ? trimmed
+     : `http://${trimmed}`;
+
+   let url;
+
+   try {
+     url = new URL(withProtocol);
+   } catch (_error) {
+     throw new Error('La URL de Ollama no es valida.');
+   }
+
+   if (!url.hostname) {
+     throw new Error('La URL de Ollama no es valida.');
+   }
+
+   return url.toString().replace(/\/$/, '');
+ }
+
+ function summarizeOllamaModel(model) {
+   const details = model?.details && typeof model.details === 'object' ? model.details : {};
+   const segments = [
+     typeof details.family === 'string' && details.family.length > 0 ? details.family : null,
+     typeof details.parameter_size === 'string' && details.parameter_size.length > 0 ? details.parameter_size : null,
+     typeof details.quantization_level === 'string' && details.quantization_level.length > 0 ? details.quantization_level : null
+   ].filter(Boolean);
+
+   return segments.length > 0
+     ? segments.join(' · ')
+     : 'Modelo descubierto desde /api/tags';
+ }
+
+ export async function fetchOllamaModels({ baseUrl, timeoutMs = 8000 }) {
+   const normalizedBaseUrl = normalizeOllamaBaseUrl(baseUrl);
+   const controller = new AbortController();
+   const timer = setTimeout(() => controller.abort(new Error('timeout')), timeoutMs);
+
+   try {
+     const response = await fetch(`${normalizedBaseUrl}/api/tags`, {
+       method: 'GET',
+       headers: {
+         accept: 'application/json'
+       },
+       signal: controller.signal
+     });
+
+     const payload = await response.json().catch(() => ({}));
+
+     if (!response.ok) {
+       const message = payload?.error || payload?.message || `HTTP ${response.status}`;
+       throw new Error(`Ollama respondio ${response.status}: ${message}`);
+     }
+
+     const rawModels = Array.isArray(payload?.models) ? payload.models : [];
+
+     return {
+       baseUrl: normalizedBaseUrl,
+       models: rawModels.map((model, index) => ({
+         id: model?.model || model?.name || `ollama-model-${index + 1}`,
+         name: model?.name || model?.model || `Modelo ${index + 1}`,
+         category: 'Ollama OpenAI-compatible',
+         contextWindow: 'Auto',
+         summary: summarizeOllamaModel(model),
+         upstreamModelId: model?.model || model?.name || `ollama-model-${index + 1}`,
+         transportMode: 'gateway',
+         apiStyle: 'openai-chat',
+         apiBaseUrl: normalizedBaseUrl,
+         apiPath: '/v1/chat/completions',
+         authEnvMode: 'auth_token',
+         sortOrder: index + 1,
+         isDefault: index === 0
+       }))
+     };
+   } catch (error) {
+     throw new Error(`No se pudo consultar ${normalizedBaseUrl}/api/tags: ${describeRequestError(error)}`);
+   } finally {
+     clearTimeout(timer);
+   }
+ }
package/src/wizard.js CHANGED
@@ -22,6 +22,7 @@ import {
    readManagedTokenSecret,
    saveManagedProviderTokenSecret
  } from './lib/secrets.js';
+ import { fetchOllamaModels, normalizeOllamaBaseUrl } from './lib/ollama.js';
  import {
    assertInteractiveTerminal,
    buildFrame,
@@ -151,6 +152,13 @@ function buildTokenDetailLines(profile) {
    return [`Token file: ${profile.auth.oauth?.tokenFile ?? 'no encontrado'}`];
  }

+   if (profile.auth.method === 'server') {
+     return [
+       `Base URL: ${profile.endpoint?.baseUrl ?? 'sin definir'}`,
+       `Modelo upstream: ${profile.model?.upstreamModelId ?? profile.model?.id ?? 'sin definir'}`
+     ];
+   }
+
    if (profile.providerCredentialConfigured) {
      return [
        `Credencial compartida: ${profile.providerSecretRecord?.filePath ?? profile.auth.providerSecretFile ?? 'configurada'}`,
@@ -185,11 +193,6 @@ function profileActionItems(profile) {
      description: 'Borra solo el perfil. La API key compartida del proveedor se conserva.',
      value: 'delete'
    });
-   items.push({
-     label: 'Volver',
-     description: 'Regresa al menu principal.',
-     value: 'back'
-   });

    return items;
  }
@@ -233,7 +236,7 @@ function renderWelcome() {
      colorize('4. Guardar perfil y credenciales locales', colors.soft),
      '',
      colorize('Catalogo actual', colors.bold, colors.accentSoft),
-     colorize('OpenCode Go, Zen, Kimi, DeepSeek, OpenRouter y Qwen ya vienen almacenados en SQLite.', colors.soft),
+     colorize('OpenCode Go, Zen, Kimi, DeepSeek, Ollama, OpenAI, OpenRouter y Qwen ya vienen almacenados en SQLite.', colors.soft),
      '',
      colorize('Seguridad', colors.bold, colors.accentSoft),
      colorize('El token OAuth se guarda localmente y el modo Token puede guardarse una sola vez por proveedor.', colors.soft)
@@ -246,8 +249,12 @@ function renderWelcome() {
  function renderSummary({ profile, filePath }) {
    const authSummary = profile.auth.method === 'oauth'
      ? `Auth: oauth con token en ${profile.auth.oauth.tokenFile}`
-     : `Auth: ${profile.auth.method} con fallback en ${profile.auth.envVar}`;
-   const managedSecretSummary = profile.auth.method !== 'oauth' && profile.auth.providerSecretFile
+     : profile.auth.method === 'server'
+       ? 'Auth: servidor Ollama sin API key administrada por Claude Connect'
+       : `Auth: ${profile.auth.method} con fallback en ${profile.auth.envVar}`;
+   const managedSecretSummary = profile.auth.method === 'server'
+     ? colorize('Esta conexion usa solo la URL y el modelo descubiertos en el servidor Ollama.', colors.soft)
+     : profile.auth.method !== 'oauth' && profile.auth.providerSecretFile
      ? colorize(`API key compartida del proveedor en: ${profile.auth.providerSecretFile}`, colors.soft)
      : profile.auth.method !== 'oauth' && profile.auth.secretFile
        ? colorize(`API key antigua detectada en: ${profile.auth.secretFile}`, colors.soft)
@@ -279,12 +286,18 @@ function renderSummary({ profile, filePath }) {
        colorize(`export OPENAI_MODEL=${profile.model.id}`, colors.soft),
        colorize('El access token y refresh token ya quedaron guardados localmente.', colors.soft)
      ]
-     : [
-       colorize(`Fallback opcional: export ${profile.auth.envVar}=<tu_token>`, colors.soft),
-       colorize(`export OPENAI_BASE_URL=${profile.endpoint.baseUrl}`, colors.soft),
-       colorize(`export OPENAI_MODEL=${profile.model.id}`, colors.soft),
-       colorize('La API key puede guardarse una sola vez por proveedor en Claude Connect.', colors.soft)
-     ])
+     : profile.auth.method === 'server'
+       ? [
+         colorize(`export OPENAI_BASE_URL=${profile.endpoint.baseUrl}`, colors.soft),
+         colorize(`export OPENAI_MODEL=${profile.model.id}`, colors.soft),
+         colorize('La conexion usa el servidor Ollama descubierto y se valida antes de guardar.', colors.soft)
+       ]
+       : [
+         colorize(`Fallback opcional: export ${profile.auth.envVar}=<tu_token>`, colors.soft),
+         colorize(`export OPENAI_BASE_URL=${profile.endpoint.baseUrl}`, colors.soft),
+         colorize(`export OPENAI_MODEL=${profile.model.id}`, colors.soft),
+         colorize('La API key puede guardarse una sola vez por proveedor en Claude Connect.', colors.soft)
+       ])
    ],
    footer: [colorize('Presiona cualquier tecla para volver al menu', colors.dim, colors.muted)]
  })
@@ -420,7 +433,22 @@ async function activateClaudeFromSavedProfile() {
    return profile;
  }

- const result = await activateClaudeProfile({ profile });
+ let result;
+
+ try {
+   result = await activateClaudeProfile({ profile });
+ } catch (error) {
+   renderInfoScreen({
+     title: 'No se pudo activar Claude',
+     subtitle: 'Claude Connect no pudo aplicar la conexion en Claude Code.',
+     lines: [
+       colorize(error instanceof Error ? error.message : String(error), colors.warning)
+     ],
+     footer: 'Presiona una tecla para volver'
+   });
+   return await waitForAnyKey();
+ }
+
  const gateway = result.connectionMode === 'gateway'
    ? await restartGatewayInBackground()
    : await stopGateway();
@@ -561,7 +589,7 @@ async function deleteSavedProfile(profile) {
    lines: [
      colorize(`Perfil: ${profile.profileName}`, colors.soft),
      colorize(`Archivo eliminado: ${profile.filePath}`, colors.soft),
-     ...(profile.auth?.method !== 'oauth'
+     ...(profile.auth?.method === 'token' || profile.auth?.method === 'api_key'
        ? [colorize('La API key compartida del proveedor se conserva para otros modelos.', colors.soft)]
        : [])
    ],
@@ -703,6 +731,143 @@ async function createNewConnection(store) {
  }

  const catalog = store.getProviderCatalog(provider.id);
+
+ if (catalog.id === 'ollama') {
+   const ollamaBaseUrlInput = await promptText({
+     step: 2,
+     totalSteps: 4,
+     title: 'URL del servidor Ollama',
+     subtitle: 'Puede ser local o remoto, por ejemplo http://127.0.0.1:11434 o https://mi-vps:11434.',
+     label: 'Base URL',
+     defaultValue: catalog.baseUrl,
+     placeholder: catalog.baseUrl,
+     allowBack: true
+   });
+
+   if (isExit(ollamaBaseUrlInput)) {
+     return ollamaBaseUrlInput;
+   }
+
+   if (isBack(ollamaBaseUrlInput)) {
+     continue;
+   }
+
+   let normalizedOllamaBaseUrl;
+
+   try {
+     normalizedOllamaBaseUrl = normalizeOllamaBaseUrl(ollamaBaseUrlInput);
+   } catch (error) {
+     renderInfoScreen({
+       title: 'URL invalida',
+       subtitle: 'La direccion del servidor Ollama no se pudo normalizar.',
+       lines: [
+         colorize(error instanceof Error ? error.message : String(error), colors.warning)
+       ],
+       footer: 'Presiona una tecla para volver'
+     });
+
+     const invalidUrlResult = await waitForAnyKey();
+
+     if (isExit(invalidUrlResult)) {
+       return invalidUrlResult;
+     }
+
+     continue;
+   }
+
+   let discovered;
+
+   try {
+     discovered = await fetchOllamaModels({ baseUrl: normalizedOllamaBaseUrl });
+   } catch (error) {
+     renderInfoScreen({
+       title: 'No se pudo conectar a Ollama',
+       subtitle: 'Claude Connect intento consultar /api/tags para descubrir modelos.',
+       lines: [
+         colorize(`Base URL: ${normalizedOllamaBaseUrl}`, colors.soft),
+         colorize(error instanceof Error ? error.message : String(error), colors.warning)
+       ],
+       footer: 'Presiona una tecla para volver'
+     });
+
+     const failedConnectionResult = await waitForAnyKey();
+
+     if (isExit(failedConnectionResult)) {
+       return failedConnectionResult;
+     }
+
+     continue;
+   }
+
+   if (discovered.models.length === 0) {
+     renderInfoScreen({
+       title: 'Sin modelos en Ollama',
+       subtitle: 'La conexion esta viva, pero /api/tags no devolvio modelos disponibles.',
+       lines: [
+         colorize(`Base URL: ${normalizedOllamaBaseUrl}`, colors.soft),
+         colorize('Carga al menos un modelo en ese servidor y vuelve a intentarlo.', colors.soft)
+       ],
+       footer: 'Presiona una tecla para volver'
+     });
+
+     const emptyModelsResult = await waitForAnyKey();
+
+     if (isExit(emptyModelsResult)) {
+       return emptyModelsResult;
+     }
+
+     continue;
+   }
+
+   const discoveredModel = await selectFromList({
+     step: 3,
+     totalSteps: 4,
+     title: 'Selecciona el modelo de Ollama',
+     subtitle: `Servidor: ${normalizedOllamaBaseUrl}.`,
+     items: modelItems(discovered.models),
+     allowBack: true,
+     detailBuilder: (selected) => [
+       `Modelo: ${selected.value.id}`,
+       `Categoria: ${selected.value.category}`,
+       `Contexto: ${selected.value.contextWindow}`,
+       selected.value.summary
+     ]
+   });
+
+   if (isExit(discoveredModel)) {
+     return discoveredModel;
+   }
+
+   if (isBack(discoveredModel)) {
+     continue;
+   }
+
+   const authMethod = catalog.authMethods[0];
+   const profileName = slugifyProfileName(`${provider.id}-${discoveredModel.id}-${authMethod.id}`);
+   const customProvider = {
+     ...catalog,
+     baseUrl: normalizedOllamaBaseUrl
+   };
+   const profile = buildProfile({
+     provider: customProvider,
+     model: {
+       ...discoveredModel,
+       apiBaseUrl: normalizedOllamaBaseUrl,
+       apiPath: '/api/chat',
+       transportMode: 'gateway',
+       apiStyle: 'ollama-chat',
+       authEnvMode: 'auth_token'
+     },
+     authMethod,
+     profileName,
+     apiKeyEnvVar: catalog.defaultApiKeyEnvVar
+   });
+
+   const filePath = await saveProfile(profile);
+   renderSummary({ profile, filePath });
+   return await waitForAnyKey();
+ }
+
  const totalSteps = catalog.models.length > 1 ? 3 : 2;
  let model = catalog.models[0];