@iservu-inc/adf-cli 0.14.4 → 0.14.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,249 +1,217 @@
1
- # AI Model Documentation Sources
2
-
3
- This document tracks the official documentation URLs for AI model listings from each provider. These should be checked regularly to ensure adf-cli supports the latest models.
4
-
5
- **Last Updated:** 2025-10-04
6
-
7
- ---
8
-
9
- ## Provider Model Documentation URLs
10
-
11
- ### Anthropic Claude
12
-
13
- **Official Models Page:**
14
- - https://docs.anthropic.com/en/docs/about-claude/models
15
-
16
- **API Models List:**
17
- - No public API endpoint for listing models
18
- - Must use curated list from documentation
19
-
20
- **Current Models (as of 2025-10-04):**
21
- - `claude-sonnet-4-5-20250929`
22
- - `claude-3-5-sonnet-20241022`
23
- - `claude-3-opus-20240229`
24
- - `claude-3-sonnet-20240229`
25
- - `claude-3-haiku-20240307`
26
-
27
- **Update Strategy:** Manual curation from official docs
28
-
29
- ---
30
-
31
- ### OpenAI
32
-
33
- **Official Models Page:**
34
- - https://platform.openai.com/docs/models
35
-
36
- **API Models List Endpoint:**
37
- - ✅ `GET https://api.openai.com/v1/models`
38
- - Requires API key in Authorization header
39
-
40
- **Current Model Families (as of 2025-10-04):**
41
- - GPT-4o series: `gpt-4o`, `gpt-4o-mini`
42
- - GPT-4 Turbo: `gpt-4-turbo`, `gpt-4-turbo-preview`
43
- - GPT-4: `gpt-4`
44
- - o1 series: `o1`, `o1-mini`, `o1-preview`
45
- - o3 series: `o3`, `o3-mini`, `o3-pro` (if available)
46
- - GPT-3.5: `gpt-3.5-turbo`
47
-
48
- **Update Strategy:** Dynamic fetching via API endpoint (DO NOT filter by model name prefix)
49
-
50
- **Important:**
51
- - OpenAI API returns ALL available models for your API key
52
- - Do NOT filter by prefix (e.g., `gpt-`) as this excludes o1, o3, and future model families
53
- - Filter only by owned_by or other metadata if needed
54
-
55
- ---
56
-
57
- ### Google Gemini
58
-
59
- **Official Models Page:**
60
- - https://ai.google.dev/gemini-api/docs/models
61
-
62
- **API Models List:**
63
- - ⚠️ SDK has experimental listModels() but not stable
64
- - Must use curated list from documentation
65
-
66
- **Current Models (as of 2025-10-04):**
67
-
68
- **Gemini 2.5 Series (Latest - Stable):**
69
- - `gemini-2.5-pro`
70
- - `gemini-2.5-flash`
71
- - `gemini-2.5-flash-lite`
72
- - `gemini-2.5-flash-image`
73
-
74
- **Gemini 2.0 Series:**
75
- - `gemini-2.0-flash`
76
- - `gemini-2.0-flash-lite`
77
-
78
- **Gemini 1.5 Series (Legacy):**
79
- - `gemini-1.5-pro-latest`
80
- - `gemini-1.5-flash-latest`
81
- - `gemini-1.5-pro`
82
- - `gemini-1.5-flash`
83
-
84
- **Update Strategy:** Manual curation from official docs (SDK support pending)
85
-
86
- ---
87
-
88
- ### OpenRouter
89
-
90
- **Official Models Page:**
91
- - https://openrouter.ai/models
92
-
93
- **API Models List Endpoint:**
94
- - ✅ `GET https://openrouter.ai/api/v1/models`
95
- - Returns JSON with all available models
96
-
97
- **Update Strategy:** Dynamic fetching via API endpoint
98
-
99
- **Current Implementation:** Working correctly - fetches all models dynamically
100
-
101
- ---
102
-
103
- ## Update Process
104
-
105
- ### Monthly Model Review Checklist
106
-
107
- Run this checklist at the start of each month to ensure latest models are available:
108
-
109
- 1. **Anthropic Claude**
110
- - [ ] Visit https://docs.anthropic.com/en/docs/about-claude/models
111
- - [ ] Check for new model releases
112
- - [ ] Update `AI_PROVIDERS.ANTHROPIC.defaultModels` in `lib/ai/ai-config.js`
113
- - [ ] Test new models if added
114
-
115
- 2. **OpenAI**
116
- - [ ] Verify `fetchAvailableModels()` is NOT filtering by model prefix
117
- - [ ] Test with real API key to ensure all models returned
118
- - [ ] Check https://platform.openai.com/docs/models for new model families
119
- - [ ] Update fallback list if API changes
120
-
121
- 3. **Google Gemini**
122
- - [ ] Visit https://ai.google.dev/gemini-api/docs/models
123
- - [ ] Check for new Gemini versions (2.6, 3.0, etc.)
124
- - [ ] Update `AI_PROVIDERS.GOOGLE.defaultModels` in `lib/ai/ai-config.js`
125
- - [ ] Test new models if added
126
-
127
- 4. **OpenRouter**
128
- - [ ] API fetching is dynamic - no action needed
129
- - [ ] Verify API endpoint still works
130
- - [ ] Check for API changes at https://openrouter.ai/docs
131
-
132
- ### Testing After Updates
133
-
134
- ```bash
135
- # Test each provider's model fetching
136
- cd adf-cli
137
- npm test
138
-
139
- # Manual test with real API keys
140
- adf init
141
- # Select each provider and verify model list
142
- ```
143
-
144
- ---
145
-
146
- ## Known Issues
147
-
148
- ### OpenAI Model Filtering Bug (CRITICAL - Fixed in v0.4.29)
149
-
150
- **Problem:**
151
- ```javascript
152
- .filter(m => m.id.startsWith('gpt-')) // ❌ TOO RESTRICTIVE
153
- ```
154
-
155
- This filter excludes:
156
- - o1, o1-mini, o1-preview
157
- - o3, o3-pro, o3-mini
158
- - Any future non-GPT model families
159
-
160
- **Solution:**
161
- ```javascript
162
- // Fetch ALL models without prefix filtering
163
- const models = response.data
164
- .map(m => m.id)
165
- .sort();
166
- ```
167
-
168
- **Status:** Fixed in v0.4.29
169
-
170
- ---
171
-
172
- ## API Response Examples
173
-
174
- ### OpenAI `/v1/models` Response
175
-
176
- ```json
177
- {
178
- "object": "list",
179
- "data": [
180
- {
181
- "id": "gpt-4o",
182
- "object": "model",
183
- "created": 1234567890,
184
- "owned_by": "openai"
185
- },
186
- {
187
- "id": "o1-preview",
188
- "object": "model",
189
- "created": 1234567890,
190
- "owned_by": "openai"
191
- }
192
- ]
193
- }
194
- ```
195
-
196
- ### OpenRouter `/api/v1/models` Response
197
-
198
- ```json
199
- {
200
- "data": [
201
- {
202
- "id": "anthropic/claude-sonnet-4-5",
203
- "name": "Claude Sonnet 4.5",
204
- "pricing": { ... }
205
- }
206
- ]
207
- }
208
- ```
209
-
210
- ---
211
-
212
- ## Maintenance Notes
213
-
214
- - **DO NOT** hardcode model lists unless provider has no API
215
- - **DO NOT** filter by model name prefix unless absolutely necessary
216
- - **ALWAYS** fetch dynamically when API is available
217
- - **TEST** with real API keys before publishing updates
218
- - **DOCUMENT** any assumptions or limitations in this file
219
-
220
- ---
221
-
222
- ## Future Improvements
223
-
224
- 1. **Add model metadata caching**
225
- - Cache API responses for 24 hours
226
- - Reduce API calls during testing
227
-
228
- 2. **Add model capability detection**
229
- - Vision support
230
- - Function calling
231
- - Context window size
232
- - Pricing information
233
-
234
- 3. **Add model recommendation system**
235
- - Suggest best model for use case
236
- - Warn about deprecated models
237
- - Show model capabilities in selection
238
-
239
- 4. **Automated model list updates**
240
- - GitHub Actions workflow
241
- - Monthly PR with updated model lists
242
- - Automated testing against provider APIs
243
-
244
- ---
245
-
246
- ## Version History
247
-
248
- - **2025-10-04:** Initial documentation
249
- - **2025-10-04:** Identified OpenAI filtering bug (v0.4.29)
1
+ # AI Model Documentation Sources
2
+
3
+ This document tracks the official documentation URLs for AI model listings from each provider. These should be checked regularly to ensure adf-cli supports the latest models.
4
+
5
+ **Last Updated:** 2026-01-12
6
+
7
+ ---
8
+
9
+ ## Provider Model Documentation URLs
10
+
11
+ ### Anthropic Claude
12
+
13
+ **Official Models Page:**
14
+ - https://docs.anthropic.com/en/docs/about-claude/models
15
+
16
+ **API Models List:**
17
+ - Supported via `GET /v1/models`
18
+ - Node.js SDK: `client.models.list()`
19
+
20
+ **Current Models (as of 2026-01-12):**
21
+ - `claude-3-5-sonnet-20241022`
22
+ - `claude-3-5-haiku-20241022`
23
+ - `claude-3-opus-20240229`
24
+
25
+ **Update Strategy:** Dynamic discovery via API
26
+
27
+ ---
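The dynamic-discovery strategy above can be sketched with plain `fetch` against `GET /v1/models`. The pagination fields (`data`, `has_more`, `last_id`) and the `after_id`/`anthropic-version` parameters follow Anthropic's published Models API; the helper name and the injectable `fetchImpl` parameter are illustrative, not part of adf-cli:

```javascript
// Hypothetical helper: collect all Anthropic model ids, following pagination.
// fetchImpl is injectable so the loop can be exercised without network access.
async function listAnthropicModels(apiKey, fetchImpl = fetch) {
  const ids = [];
  let afterId = null;
  for (;;) {
    const url = new URL('https://api.anthropic.com/v1/models');
    url.searchParams.set('limit', '100');
    if (afterId) url.searchParams.set('after_id', afterId);
    const res = await fetchImpl(url.toString(), {
      headers: { 'x-api-key': apiKey, 'anthropic-version': '2023-06-01' }
    });
    const page = await res.json();
    ids.push(...page.data.map(m => m.id));
    if (!page.has_more) break;
    afterId = page.last_id; // cursor for the next page
  }
  return ids;
}
```

Stopping on `has_more` rather than on an empty page avoids one wasted request per refresh.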
28
+
29
+ ### OpenAI
30
+
31
+ **Official Models Page:**
32
+ - https://platform.openai.com/docs/models
33
+
34
+ **API Models List Endpoint:**
35
+ - ✅ `GET https://api.openai.com/v1/models`
36
+ - Requires API key in Authorization header
37
+
38
+ **Current Model Families (as of 2026-01-12):**
39
+ - GPT-4o series: `gpt-4o`, `gpt-4o-mini`
40
+ - o1 series: `o1`, `o1-mini`
41
+ - o3 series: `o3`, `o3-mini`
42
+ - GPT-5 series (if available)
43
+
44
+ **Update Strategy:** Dynamic fetching via API endpoint (DO NOT filter by model name prefix)
45
+
46
+ **Important:**
47
+ - OpenAI API returns ALL available models for your API key
48
+ - Do NOT filter by prefix (e.g., `gpt-`) as this excludes o1, o3, and future model families
49
+ - Filter only by owned_by or other metadata if needed
50
+
51
+ ---
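The no-prefix-filter rule above can be sketched directly. The endpoint and the `Authorization: Bearer` header are from OpenAI's documented `/v1/models` API; the helper name and injectable `fetchImpl` are illustrative assumptions:

```javascript
// Hypothetical helper: return every model id visible to the key, unfiltered.
async function fetchOpenAIModels(apiKey, fetchImpl = fetch) {
  const res = await fetchImpl('https://api.openai.com/v1/models', {
    headers: { Authorization: `Bearer ${apiKey}` }
  });
  const body = await res.json();
  // Deliberately no .filter(m => m.id.startsWith('gpt-')):
  // a prefix filter would drop o1, o3, and future model families.
  return body.data.map(m => m.id).sort();
}
```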
52
+
53
+ ### Google Gemini
54
+
55
+ **Official Models Page:**
56
+ - https://ai.google.dev/gemini-api/docs/models
57
+
58
+ **API Models List:**
59
+ - Supported via `v1beta/models` endpoint
60
+ - Node.js SDK: `@google/genai` (Unified SDK)
61
+
62
+ **Current Models (as of 2026-01-12):**
63
+
64
+ **Gemini 3.0 Series (Latest - Frontier):**
65
+ - `gemini-3.0-pro`
66
+ - `gemini-3.0-flash`
67
+ - `gemini-3.0-deep-think`
68
+
69
+ **Gemini 2.5 Series (Stable):**
70
+ - `gemini-2.5-pro`
71
+ - `gemini-2.5-flash`
72
+ - `gemini-2.5-flash-lite`
73
+
74
+ **Gemini 2.0 Series (Legacy):**
75
+ - `gemini-2.0-flash`
76
+ - `gemini-2.0-pro`
77
+
78
+ **Update Strategy:** Dynamic discovery via API
79
+
80
+ ---
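A sketch of dynamic discovery against the `v1beta/models` endpoint named above. The `key`/`pageToken` query parameters, the `models` array, the `models/`-prefixed `name` field, and `nextPageToken` follow the documented Gemini REST API; the helper name and injectable `fetchImpl` are illustrative:

```javascript
// Hypothetical helper: list Gemini models via v1beta/models, following
// nextPageToken and stripping the "models/" resource-name prefix.
async function listGeminiModels(apiKey, fetchImpl = fetch) {
  const ids = [];
  let pageToken = null;
  do {
    const url = new URL('https://generativelanguage.googleapis.com/v1beta/models');
    url.searchParams.set('key', apiKey);
    if (pageToken) url.searchParams.set('pageToken', pageToken);
    const page = await (await fetchImpl(url.toString())).json();
    ids.push(...(page.models || []).map(m => m.name.replace(/^models\//, '')));
    pageToken = page.nextPageToken;
  } while (pageToken);
  return ids;
}
```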
81
+
82
+ ### OpenRouter
83
+
84
+ **Official Models Page:**
85
+ - https://openrouter.ai/models
86
+
87
+ **API Models List Endpoint:**
88
+ - ✅ `GET https://openrouter.ai/api/v1/models`
89
+ - Returns JSON with all available models
90
+
91
+ **Update Strategy:** Dynamic fetching via API endpoint
92
+
93
+ **Current Implementation:** Working correctly - fetches all models dynamically
94
+
95
+ ---
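The OpenRouter list is public JSON, so the fetch above needs no API key just to enumerate models. A minimal sketch (helper name and injectable `fetchImpl` are illustrative):

```javascript
// Hypothetical helper: enumerate OpenRouter models as { id, name } pairs.
async function fetchOpenRouterModels(fetchImpl = fetch) {
  const res = await fetchImpl('https://openrouter.ai/api/v1/models');
  const body = await res.json();
  // Keep only the fields adf-cli cares about for selection.
  return body.data.map(m => ({ id: m.id, name: m.name }));
}
```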
96
+
97
+ ## Update Process
98
+
99
+ ### Monthly Model Review Checklist
100
+
101
+ Run this checklist at the start of each month to ensure latest models are available:
102
+
103
+ 1. **Anthropic Claude**
104
+ - [ ] Visit https://docs.anthropic.com/en/docs/about-claude/models
105
+ - [ ] Check for new model releases
106
+ - [ ] Verify `fetchAvailableModels()` retrieves all models by following pagination
107
+
108
+ 2. **OpenAI**
109
+ - [ ] Verify `fetchAvailableModels()` is NOT filtering by model prefix
110
+ - [ ] Test with real API key to ensure all models returned
111
+ - [ ] Check https://platform.openai.com/docs/models for new model families
112
+
113
+ 3. **Google Gemini**
114
+ - [ ] Visit https://ai.google.dev/gemini-api/docs/models
115
+ - [ ] Check for new Gemini versions (3.1, 4.0, etc.)
116
+ - [ ] Verify dynamic discovery works via `v1beta/models`
117
+
118
+ 4. **OpenRouter**
119
+ - [ ] API fetching is dynamic - no action needed
120
+ - [ ] Verify API endpoint still works
121
+ - [ ] Check for API changes at https://openrouter.ai/docs
122
+
123
+ ### Testing After Updates
124
+
125
+ ```bash
126
+ # Test each provider's model fetching
127
+ cd adf-cli
128
+ npm test
129
+
130
+ # Manual test with real API keys
131
+ node bin/adf.js config
132
+ # Select each provider and verify model list
133
+ ```
134
+
135
+ ---
136
+
137
+ ## Known Issues
138
+
139
+ ### OpenAI Model Filtering Bug (CRITICAL - Fixed in v0.4.29)
140
+
141
+ **Problem:**
142
+ ```javascript
143
+ .filter(m => m.id.startsWith('gpt-')) // ❌ TOO RESTRICTIVE
144
+ ```
145
+
146
+ This filter excludes:
147
+ - o1, o1-mini, o1-preview
148
+ - o3, o3-pro, o3-mini
149
+ - Any future non-GPT model families
150
+
151
+ **Solution:**
152
+ ```javascript
153
+ // Fetch ALL models without prefix filtering
154
+ const models = response.data
155
+ .map(m => m.id)
156
+ .sort();
157
+ ```
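The difference between the buggy and fixed versions is easy to demonstrate on a toy response (the sample ids here are illustrative, not a real API payload):

```javascript
// Toy model list mixing GPT and o-series ids.
const data = [{ id: 'gpt-4o' }, { id: 'o1-mini' }, { id: 'o3' }];

// ❌ Buggy: the prefix filter silently drops every o-series model.
const buggy = data.filter(m => m.id.startsWith('gpt-')).map(m => m.id);

// ✅ Fixed: keep everything, just sort.
const fixed = data.map(m => m.id).sort();

console.log(buggy); // only gpt-4o survives
console.log(fixed); // all three ids
```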
158
+
159
+ **Status:** Fixed in v0.4.29
160
+
161
+ ---
162
+
163
+ ## API Response Examples
164
+
165
+ ### OpenAI `/v1/models` Response
166
+
167
+ ```json
168
+ {
169
+ "object": "list",
170
+ "data": [
171
+ {
172
+ "id": "gpt-4o",
173
+ "object": "model",
174
+ "created": 1234567890,
175
+ "owned_by": "openai"
176
+ },
177
+ {
178
+ "id": "o1-mini",
179
+ "object": "model",
180
+ "created": 1234567890,
181
+ "owned_by": "openai"
182
+ }
183
+ ]
184
+ }
185
+ ```
186
+
187
+ ### OpenRouter `/api/v1/models` Response
188
+
189
+ ```json
190
+ {
191
+ "data": [
192
+ {
193
+ "id": "anthropic/claude-3.5-sonnet",
194
+ "name": "Claude 3.5 Sonnet",
195
+ "pricing": { ... }
196
+ }
197
+ ]
198
+ }
199
+ ```
200
+
201
+ ---
202
+
203
+ ## Maintenance Notes
204
+
205
+ - **DO NOT** hardcode model lists unless provider has no API
206
+ - **DO NOT** filter by model name prefix unless absolutely necessary
207
+ - **ALWAYS** fetch dynamically when API is available
208
+ - **TEST** with real API keys before publishing updates
209
+ - **DOCUMENT** any assumptions or limitations in this file
210
+
211
+ ---
212
+
213
+ ## Version History
214
+
215
+ - **2026-01-12:** Updated to remove deprecated models (Gemini 1.5, GPT-3.5) and enable dynamic discovery across all providers.
216
+ - **2025-10-04:** Initial documentation
217
+ - **2025-10-04:** Identified OpenAI filtering bug (v0.4.29)
@@ -15,7 +15,7 @@ The adf-cli interview system now supports **4 AI providers**, giving users flexi
15
15
  | Provider | Models | API Key Format | Website |
16
16
  |----------|--------|----------------|---------|
17
17
  | **Anthropic Claude** | Sonnet 4.5, Claude 3.5, Opus | `sk-ant-*` | https://console.anthropic.com/ |
18
- | **OpenAI GPT** | GPT-4-turbo, GPT-4o, GPT-4, GPT-3.5-turbo | `sk-*` | https://platform.openai.com/ |
18
+ | **OpenAI GPT** | GPT-4o, GPT-4o-mini, o1, o3 | `sk-*` | https://platform.openai.com/ |
19
19
  | **Google Gemini** | 2.0-flash-exp, 1.5-pro, 1.5-flash | (varies) | https://ai.google.dev/ |
20
20
  | **OpenRouter** | Multi-model access | `sk-or-*` | https://openrouter.ai/ |
21
21
 
package/README.md CHANGED
@@ -148,7 +148,7 @@ ADF CLI requires an AI provider to guide you through requirements gathering with
148
148
  ### Supported AI Providers
149
149
 
150
150
  - **Anthropic Claude** (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
151
- - **OpenAI** (GPT-4, GPT-4 Turbo, GPT-3.5 Turbo, etc.)
151
+ - **OpenAI** (GPT-4o, GPT-4o-mini, o1, etc.)
152
152
  - **Google Gemini** (Gemini 1.5 Pro, Gemini 1.5 Flash, etc.)
153
153
  - **OpenRouter** (Access to 100+ models from multiple providers)
154
154
 
package/bin/adf.js CHANGED
@@ -429,22 +429,22 @@ ${chalk.cyan.bold('Configuration Categories:')}
429
429
 
430
430
  ${chalk.cyan.bold('AI Providers Supported:')}
431
431
  ${chalk.yellow('Anthropic Claude')}
432
- Models: claude-sonnet-4-5, claude-opus-4-5, claude-3-5-sonnet, etc.
432
+ Models: claude-3-5-sonnet, claude-3-5-haiku, claude-3-opus, etc.
433
433
  API Key: Get from https://console.anthropic.com/
434
434
  Format: sk-ant-...
435
435
 
436
436
  ${chalk.yellow('OpenAI GPT')}
437
- Models: gpt-5.2-chat, gpt-4o, o1, o3, etc. (116+ models)
437
+ Models: gpt-4o, gpt-4o-mini, o1, o3, etc.
438
438
  API Key: Get from https://platform.openai.com/api-keys
439
439
  Format: sk-...
440
440
 
441
441
  ${chalk.yellow('Google Gemini')}
442
- Models: gemini-2.0-flash, gemini-1.5-pro, gemini-1.5-flash, etc.
442
+ Models: gemini-2.5-flash, gemini-3.0-flash, gemini-2.5-pro, etc.
443
443
  API Key: Get from https://aistudio.google.com/app/apikey
444
444
  Format: (varies)
445
445
 
446
446
  ${chalk.yellow('OpenRouter')}
447
- Models: 100+ models from multiple providers
447
+ Models: 300+ models from multiple providers
448
448
  API Key: Get from https://openrouter.ai/keys
449
449
  Format: sk-or-...
450
450
 
@@ -49,14 +49,6 @@ class AIClient {
49
49
  });
50
50
  break;
51
51
 
52
- case 'perplexity':
53
- const OpenAIForPerplexity = require('openai');
54
- this.client = new OpenAIForPerplexity({
55
- apiKey: this.apiKey,
56
- baseURL: 'https://api.perplexity.ai'
57
- });
58
- break;
59
-
60
52
  default:
61
53
  throw new Error(`Unsupported provider: ${this.provider}`);
62
54
  }
@@ -80,14 +72,11 @@ class AIClient {
80
72
  case 'google':
81
73
  return await this.googleRequest(prompt, maxTokens, temperature);
82
74
 
83
- case 'openrouter':
84
- return await this.openrouterRequest(prompt, maxTokens, temperature);
85
-
86
- case 'perplexity':
87
- return await this.perplexityRequest(prompt, maxTokens, temperature);
88
-
89
- default: throw new Error(`Provider ${this.provider} not implemented`);
90
- }
75
+ case 'openrouter':
76
+ return await this.openrouterRequest(prompt, maxTokens, temperature);
77
+
78
+ default:
79
+ throw new Error(`Provider ${this.provider} not implemented`);
+ }
91
80
  } catch (error) {
92
81
  throw new Error(`AI request failed: ${error.message}`);
93
82
  }
@@ -322,43 +311,6 @@ class AIClient {
322
311
  }
323
312
  }
324
313
 
325
- /**
326
- * Perplexity AI request (via OpenAI-compatible API)
327
- */
328
- async perplexityRequest(prompt, maxTokens, temperature) {
329
- try {
330
- // Perplexity supports standard OpenAI chat completions
331
- // but has some specific recommended parameters
332
- const response = await this.client.chat.completions.create({
333
- model: this.model,
334
- messages: [
335
- {
336
- role: 'user',
337
- content: prompt
338
- }
339
- ],
340
- max_tokens: maxTokens,
341
- temperature: temperature,
342
- // Optional: Perplexity specific headers can be passed if needed via client config
343
- });
344
-
345
- return {
346
- content: response.choices[0].message.content,
347
- model: this.model,
348
- provider: 'perplexity',
349
- usage: {
350
- promptTokens: response.usage?.prompt_tokens || 0,
351
- completionTokens: response.usage?.completion_tokens || 0,
352
- totalTokens: response.usage?.total_tokens || 0
353
- },
354
- // Include search citations if available (Perplexity feature)
355
- citations: response.citations || []
356
- };
357
- } catch (error) {
358
- throw new Error(`Perplexity request failed: ${error.message}`);
359
- }
360
- }
361
-
362
314
  /**
363
315
  * Test connection with a simple prompt
364
316
  */
@@ -25,7 +25,7 @@ const AI_PROVIDERS = {
25
25
  requiredFormat: 'sk-ant-',
26
26
  website: 'https://console.anthropic.com/',
27
27
  setup: 'Get your API key from https://console.anthropic.com/',
28
- defaultModels: ['claude-sonnet-4-5-20250929', 'claude-3-5-sonnet-20241022', 'claude-3-opus-20240229']
28
+ defaultModels: ['claude-3-5-sonnet-20241022', 'claude-3-5-haiku-20241022', 'claude-3-opus-20240229']
29
29
  },
30
30
  OPENAI: {
31
31
  id: 'openai',
@@ -34,7 +34,7 @@ const AI_PROVIDERS = {
34
34
  requiredFormat: 'sk-',
35
35
  website: 'https://platform.openai.com/',
36
36
  setup: 'Get your API key from https://platform.openai.com/api-keys',
37
- defaultModels: ['gpt-4o', 'gpt-4-turbo', 'gpt-4', 'o1', 'o1-mini', 'o1-preview', 'gpt-3.5-turbo']
37
+ defaultModels: ['gpt-4o', 'gpt-4o-mini', 'o1', 'o1-mini']
38
38
  },
39
39
  GOOGLE: {
40
40
  id: 'google',
@@ -65,24 +65,10 @@ const AI_PROVIDERS = {
65
65
  website: 'https://openrouter.ai/',
66
66
  setup: 'Get your API key from https://openrouter.ai/keys',
67
67
  defaultModels: [
68
- 'anthropic/claude-sonnet-4-5',
69
- 'openai/gpt-4-turbo',
70
- 'google/gemini-pro-1.5',
71
- 'meta-llama/llama-3.1-70b-instruct'
72
- ]
73
- },
74
- PERPLEXITY: {
75
- id: 'perplexity',
76
- name: 'Perplexity AI (Search + Reasoning)',
77
- envVar: 'PERPLEXITY_API_KEY',
78
- requiredFormat: 'pplx-',
79
- website: 'https://www.perplexity.ai/api',
80
- setup: 'Get your API key from https://www.perplexity.ai/settings/api',
81
- defaultModels: [
82
- 'sonar-pro',
83
- 'sonar-reasoning-pro',
84
- 'sonar',
85
- 'sonar-deep-research'
68
+ 'anthropic/claude-3.5-sonnet',
69
+ 'openai/gpt-4o',
70
+ 'google/gemini-2.5-flash',
71
+ 'meta-llama/llama-3.3-70b-instruct'
86
72
  ]
87
73
  }
88
74
  };
@@ -273,27 +259,6 @@ async function validateAPIKeyWithProvider(provider, apiKey, model = null) {
273
259
  }
274
260
  break;
275
261
 
276
- case 'perplexity':
277
- // Perplexity doesn't support models.list() fully in all tiers or it might be static.
278
- // We use a cheap "models" list call if supported, or rely on the fact that
279
- // if we can create a client and it doesn't crash, we assume it's good enough
280
- // for this stage, effectively deferring full validation to model testing.
281
- // However, we can try to hit the chat/completions endpoint with dry-run style if possible.
282
- // For now, we will perform a lightweight check using OpenAI compatibility.
283
- try {
284
- const OpenAI = require('openai');
285
- const client = new OpenAI({
286
- apiKey,
287
- baseURL: 'https://api.perplexity.ai'
288
- });
289
- // Try listing models (Perplexity supports this via OpenAI compatibility)
290
- await client.models.list();
291
- } catch (error) {
292
- // If list models fails, the key is likely invalid
293
- throw new Error(`Perplexity API validation failed: ${error.message}`);
294
- }
295
- break;
296
-
297
262
  default:
298
263
  throw new Error(`Validation not implemented for provider: ${provider.id}`);
299
264
  }
@@ -416,35 +381,6 @@ async function fetchAvailableModels(provider, apiKey) {
416
381
  console.log(chalk.yellow(' Using default model list\n'));
417
382
  return provider.defaultModels;
418
383
 
419
- case 'perplexity':
420
- try {
421
- // Perplexity doesn't have a list models endpoint, so we validate known ones
422
- // by performing a very cheap token count or dry-run if possible,
423
- // but for now we rely on the default list as it is static but we can try to validate
424
- // the API key with a simple request to 'sonar' which is their fastest model.
425
- const OpenAI = require('openai');
426
- const client = new OpenAI({
427
- apiKey,
428
- baseURL: 'https://api.perplexity.ai'
429
- });
430
-
431
- // Simple validation check
432
- await client.models.list(); // Some OpenAI compatible endpoints support this, let's try
433
-
434
- // If list() works, we filter, otherwise we might fallback to defaults.
435
- // Note: Perplexity API documentation says standard OpenAI endpoints are supported.
436
- // If list() returns models, great. If not, we use defaults.
437
- // However, Perplexity typically DOES NOT implement the /models endpoint fully dynamically
438
- // like OpenAI. It often returns a static list or might error.
439
- // Let's assume for safety we use the defaults but verified via a simple call.
440
-
441
- spinner.succeed('Perplexity API connected');
442
- return provider.defaultModels; // Return defaults as they are curated for the platform
443
- } catch(error) {
444
- spinner.warn(`Perplexity connectivity check failed: ${error.message}`);
445
- }
446
- return provider.defaultModels;
447
-
448
384
  case 'openrouter':
449
385
  try {
450
386
  const fetch = require('node-fetch');
@@ -592,11 +528,6 @@ async function configureAIProvider(projectPath = process.cwd()) {
592
528
  name: buildProviderName(AI_PROVIDERS.OPENROUTER, 'openrouter'),
593
529
  value: 'openrouter',
594
530
  short: 'OpenRouter'
595
- },
596
- {
597
- name: buildProviderName(AI_PROVIDERS.PERPLEXITY, 'perplexity'),
598
- value: 'perplexity',
599
- short: 'Perplexity'
600
531
  }
601
532
  ];
602
533
 
@@ -670,7 +601,7 @@ async function configureAIProvider(projectPath = process.cwd()) {
670
601
 
671
602
  try {
672
603
  // For Google, we need to pass a model to validate properly
673
- const validationModel = selectedProvider.id === 'google' ? 'gemini-1.5-flash' : null;
604
+ const validationModel = selectedProvider.id === 'google' ? 'gemini-2.5-flash' : null;
674
605
  await validateAPIKeyWithProvider(selectedProvider, apiKey, validationModel);
675
606
  validationSpinner.succeed(chalk.green('API key validated successfully'));
676
607
  } catch (error) {
@@ -718,16 +649,16 @@ async function configureAIProvider(projectPath = process.cwd()) {
718
649
  // Show helpful tips about model selection
719
650
  if (selectedProvider.id === 'openrouter' && availableModels.length > 50) {
720
651
  console.log(chalk.gray('\n💡 Tip: Recommended models for best compatibility:'));
721
- console.log(chalk.gray(' • anthropic/claude-sonnet-4-5'));
722
- console.log(chalk.gray(' • openai/gpt-4-turbo'));
723
- console.log(chalk.gray(' • google/gemini-pro-1.5'));
652
+ console.log(chalk.gray(' • anthropic/claude-3.5-sonnet'));
653
+ console.log(chalk.gray(' • openai/gpt-4o'));
654
+ console.log(chalk.gray(' • google/gemini-2.5-pro'));
724
655
  console.log(chalk.yellow(' ⚠️ Free models may require specific privacy settings\n'));
725
656
  } else if (selectedProvider.id === 'google') {
726
657
  console.log(chalk.gray('\n💡 Tip: Recommended models for free tier:'));
727
- console.log(chalk.gray(' • gemini-1.5-flash (fastest, lowest quota usage)'));
728
- console.log(chalk.gray(' • gemini-1.5-flash-8b (ultra-fast, minimal quota)'));
729
- console.log(chalk.gray(' • gemini-2.0-flash-exp (newer, experimental)'));
730
- console.log(chalk.yellow(' ⚠️ Pro models (gemini-pro, gemini-2.5-pro) may exceed free tier quota\n'));
658
+ console.log(chalk.gray(' • gemini-2.5-flash (fastest, lowest quota usage)'));
659
+ console.log(chalk.gray(' • gemini-2.5-flash-lite (ultra-fast, minimal quota)'));
660
+ console.log(chalk.gray(' • gemini-3.0-flash (newer, frontier intelligence)'));
661
+ console.log(chalk.yellow(' ⚠️ Pro models (gemini-2.5-pro, gemini-3.0-pro) may exceed free tier quota\n'));
731
662
  }
732
663
 
733
664
  // Model selection with autocomplete
@@ -103,10 +103,10 @@ class Interviewer {
103
103
 
104
104
  // Check which provider is configured
105
105
  const providerMap = {
106
- 'ANTHROPIC_API_KEY': { id: 'anthropic', name: 'Anthropic Claude', models: ['claude-sonnet-4-5-20250929', 'claude-3-5-sonnet-20241022'] },
107
- 'OPENAI_API_KEY': { id: 'openai', name: 'OpenAI GPT', models: ['gpt-4-turbo', 'gpt-4o', 'gpt-4'] },
108
- 'GOOGLE_API_KEY': { id: 'google', name: 'Google Gemini', models: ['gemini-2.0-flash-exp', 'gemini-1.5-pro'] },
109
- 'OPENROUTER_API_KEY': { id: 'openrouter', name: 'OpenRouter', models: ['anthropic/claude-sonnet-4-5', 'openai/gpt-4-turbo'] }
106
+ 'ANTHROPIC_API_KEY': { id: 'anthropic', name: 'Anthropic Claude', models: ['claude-3-5-sonnet', 'claude-3-5-haiku'] },
107
+ 'OPENAI_API_KEY': { id: 'openai', name: 'OpenAI GPT', models: ['gpt-4o', 'gpt-4o-mini'] },
108
+ 'GOOGLE_API_KEY': { id: 'google', name: 'Google Gemini', models: ['gemini-2.5-flash', 'gemini-3.0-flash'] },
109
+ 'OPENROUTER_API_KEY': { id: 'openrouter', name: 'OpenRouter', models: ['anthropic/claude-3.5-sonnet', 'openai/gpt-4o'] }
110
110
  };
111
111
 
112
112
  let configuredProvider = null;
@@ -35,7 +35,8 @@ class AntigravityGenerator extends ToolConfigGenerator {
35
35
  const agentsPath = path.join(antigravityDir, 'agents.yaml');
36
36
 
37
37
  const agentName = this.getAgentName();
38
- const model = this.getModelForFramework();
38
+ const ai = await this.getConfiguredAI();
39
+ const model = ai.model || this.getModelForFramework();
39
40
 
40
41
  // Generate YAML content
41
42
  const yamlContent = `# Antigravity Agent Configuration
@@ -104,12 +105,12 @@ agents:
104
105
  */
105
106
  getModelForFramework() {
106
107
  const modelMap = {
107
- 'rapid': 'gemini-2.0-flash-exp',
108
- 'balanced': 'gemini-2.0-flash-thinking-exp',
109
- 'comprehensive': 'gemini-exp-1206'
108
+ 'rapid': 'gemini-3.0-flash',
109
+ 'balanced': 'gemini-3.0-flash',
110
+ 'comprehensive': 'gemini-3.0-pro'
110
111
  };
111
112
 
112
- return modelMap[this.framework] || 'gemini-2.0-flash-exp';
113
+ return modelMap[this.framework] || 'gemini-3.0-flash';
113
114
  }
114
115
 
115
116
  /**
@@ -41,7 +41,8 @@ class OpenCodeGenerator extends ToolConfigGenerator {
41
41
  async generateConfig() {
42
42
  const projectContext = await this.getProjectContext();
43
43
  const providers = await this.getAvailableProviders();
44
- const agents = await this.generateAgentConfigurations(providers);
44
+ const ai = await this.getConfiguredAI();
45
+ const agents = await this.generateAgentConfigurations(providers, ai);
45
46
 
46
47
  const config = {
47
48
  "$schema": "https://opencode.ai/config.json",
@@ -49,11 +50,11 @@ class OpenCodeGenerator extends ToolConfigGenerator {
49
50
  // Provider configurations (Anthropic, OpenAI, Google, OpenRouter, etc.)
50
51
  "providers": await this.generateProviderConfigurations(providers),
51
52
 
52
- // Default model (use the most powerful available)
53
- "model": this.getDefaultModel(providers),
53
+ // Default model (use configured model or the most powerful available)
54
+ "model": ai.model || this.getDefaultModel(providers, ai.provider),
54
55
 
55
56
  // Small model for faster, cheaper tasks
56
- "small_model": this.getFastModel(providers),
57
+ "small_model": this.getFastModel(providers, ai.provider),
57
58
 
58
59
  // Agent configurations (mapped from ADF framework)
59
60
  "agents": agents,
@@ -204,13 +205,13 @@ class OpenCodeGenerator extends ToolConfigGenerator {
204
205
  * Generate agent configurations based on ADF framework level
205
206
  * Reference: https://opencode.ai/docs/agents
206
207
  */
207
- async generateAgentConfigurations(providers) {
208
+ async generateAgentConfigurations(providers, ai) {
208
209
  const agents = {};
209
210
  const agentsList = this.getAgentsList();
210
211
 
211
212
  // Map ADF agents to OpenCode agents with optimal models
212
213
  for (const agentName of agentsList) {
213
- const agentConfig = this.getAgentConfig(agentName, providers);
214
+ const agentConfig = this.getAgentConfig(agentName, providers, ai);
214
215
  agents[agentName] = agentConfig;
215
216
  }
216
217
 
@@ -220,7 +221,7 @@ class OpenCodeGenerator extends ToolConfigGenerator {
220
221
  /**
221
222
  * Get configuration for a specific agent
222
223
  */
223
- getAgentConfig(agentName, providers) {
224
+ getAgentConfig(agentName, providers, ai) {
224
225
  const agentDescriptions = {
225
226
  'analyst': 'Business analyst for requirements and specifications',
226
227
  'pm': 'Product manager for planning and prioritization',
@@ -233,7 +234,7 @@ class OpenCodeGenerator extends ToolConfigGenerator {
233
234
  const config = {
234
235
  "description": agentDescriptions[agentName] || 'AI coding assistant',
235
236
  "mode": agentName === 'dev' ? 'primary' : 'subagent',
236
- "model": this.getOptimalModelForAgent(agentName, providers)
237
+ "model": this.getOptimalModelForAgent(agentName, providers, ai)
237
238
  };
238
239
 
239
240
  // Agent-specific tool permissions
@@ -256,16 +257,16 @@ class OpenCodeGenerator extends ToolConfigGenerator {
256
257
  /**
257
258
  * Get optimal model for each agent type based on task complexity
258
259
  */
259
- getOptimalModelForAgent(agentName, providers) {
260
+ getOptimalModelForAgent(agentName, providers, ai) {
260
261
  // Model selection strategy:
261
262
  // - Powerful models: analyst, architect (deep thinking required)
262
263
  // - Balanced models: pm, dev (main implementation)
263
264
  // - Fast models: qa, sm (routine tasks)
264
265
 
265
266
  const modelTiers = {
266
- powerful: this.getPowerfulModel(providers),
267
- balanced: this.getBalancedModel(providers),
268
- fast: this.getFastModel(providers)
267
+ powerful: this.getPowerfulModel(providers, ai.provider, ai.model),
268
+ balanced: ai.model || this.getBalancedModel(providers, ai.provider),
269
+ fast: this.getFastModel(providers, ai.provider)
269
270
  };
270
271
 
271
272
  const agentModelMap = {
@@ -283,41 +284,49 @@ class OpenCodeGenerator extends ToolConfigGenerator {
   /**
    * Get the most powerful model from available providers
    */
-  getPowerfulModel(providers) {
-    if (providers.includes('anthropic')) return 'anthropic/claude-sonnet-4-5-20250929';
-    if (providers.includes('openai')) return 'openai/gpt-5';
-    if (providers.includes('google')) return 'google/gemini-2.5-pro';
-    if (providers.includes('openrouter')) return 'openrouter/anthropic/claude-sonnet-4';
-    return 'anthropic/claude-sonnet-4-5-20250929'; // Default
+  getPowerfulModel(providers, currentProvider, currentModel) {
+    // If the current model is already known to be powerful, use it
+    if (currentModel && (currentModel.includes('opus') || currentModel.includes('pro') || currentModel.includes('gpt-4o') || currentModel.includes('o1'))) {
+      return currentModel;
+    }
+
+    if (currentProvider === 'anthropic' || providers.includes('anthropic')) return 'anthropic/claude-3-5-sonnet-20241022';
+    if (currentProvider === 'openai' || providers.includes('openai')) return 'openai/gpt-4o';
+    if (currentProvider === 'google' || providers.includes('google')) return 'google/gemini-2.5-pro';
+    if (currentProvider === 'openrouter' || providers.includes('openrouter')) return 'openrouter/anthropic/claude-3.5-sonnet';
+
+    return currentModel || 'anthropic/claude-3-5-sonnet-20241022';
   }

   /**
    * Get a balanced model from available providers
    */
-  getBalancedModel(providers) {
-    if (providers.includes('anthropic')) return 'anthropic/claude-haiku-4-5-20250514';
-    if (providers.includes('openai')) return 'openai/gpt-4o';
-    if (providers.includes('google')) return 'google/gemini-1.5-pro';
-    if (providers.includes('openrouter')) return 'openrouter/anthropic/claude-haiku-4';
-    return 'anthropic/claude-haiku-4-5-20250514'; // Default
+  getBalancedModel(providers, currentProvider) {
+    if (currentProvider === 'anthropic' || providers.includes('anthropic')) return 'anthropic/claude-3-5-sonnet-20241022';
+    if (currentProvider === 'openai' || providers.includes('openai')) return 'openai/gpt-4o';
+    if (currentProvider === 'google' || providers.includes('google')) return 'google/gemini-2.5-flash';
+    if (currentProvider === 'openrouter' || providers.includes('openrouter')) return 'openrouter/anthropic/claude-3.5-sonnet';
+
+    return 'openai/gpt-4o';
   }

   /**
    * Get the fastest/cheapest model from available providers
    */
-  getFastModel(providers) {
-    if (providers.includes('anthropic')) return 'anthropic/claude-haiku-4-5-20250514';
-    if (providers.includes('openai')) return 'openai/gpt-4o-mini';
-    if (providers.includes('google')) return 'google/gemini-1.5-flash';
-    if (providers.includes('openrouter')) return 'openrouter/google/gemini-flash-1.5';
-    return 'anthropic/claude-haiku-4-5-20250514'; // Default
+  getFastModel(providers, currentProvider) {
+    if (currentProvider === 'anthropic' || providers.includes('anthropic')) return 'anthropic/claude-3-5-haiku-20241022';
+    if (currentProvider === 'openai' || providers.includes('openai')) return 'openai/gpt-4o-mini';
+    if (currentProvider === 'google' || providers.includes('google')) return 'google/gemini-2.5-flash-lite';
+    if (currentProvider === 'openrouter' || providers.includes('openrouter')) return 'openrouter/anthropic/claude-3.5-haiku';
+
+    return 'openai/gpt-4o-mini';
   }

   /**
    * Get default model for OpenCode
    */
-  getDefaultModel(providers) {
-    return this.getBalancedModel(providers);
+  getDefaultModel(providers, currentProvider) {
+    return this.getBalancedModel(providers, currentProvider);
   }

   /**
@@ -329,4 +338,4 @@ class OpenCodeGenerator extends ToolConfigGenerator {
   }
 }

-module.exports = OpenCodeGenerator;
+module.exports = OpenCodeGenerator;
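The three tier getters above share one fallback pattern: prefer the currently configured provider, then any detected provider, then a hard-coded default. A standalone sketch of that pattern follows; `pickModel` and the `TIER_DEFAULTS` table are illustrative names for this example, not part of the package's API, and the table shows only the `fast` tier.

```javascript
// Sketch of the provider-fallback pattern shared by getPowerfulModel /
// getBalancedModel / getFastModel in the diff above. Model IDs here copy
// the 'fast' tier from the new code; the table is illustrative only.
const TIER_DEFAULTS = {
  fast: {
    anthropic: 'anthropic/claude-3-5-haiku-20241022',
    openai: 'openai/gpt-4o-mini',
    _default: 'openai/gpt-4o-mini'
  }
};

function pickModel(tier, providers, currentProvider) {
  const table = TIER_DEFAULTS[tier];
  // Walk the preference order: a provider wins if it is either the
  // currently configured one or present in the detected-providers list.
  for (const provider of Object.keys(table)) {
    if (provider === '_default') continue;
    if (currentProvider === provider || providers.includes(provider)) {
      return table[provider];
    }
  }
  return table._default;
}

console.log(pickModel('fast', ['openai'], undefined)); // → 'openai/gpt-4o-mini'
console.log(pickModel('fast', [], 'anthropic'));       // → 'anthropic/claude-3-5-haiku-20241022'
```

Note the order matters: because `currentProvider` is checked inside the same loop as `providers`, a configured provider only wins over a detected one when it appears earlier in the table, which mirrors the fixed `if` chain in the diff.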
@@ -271,6 +271,30 @@ class ToolConfigGenerator {
     return deployedFiles;
   }

+  /**
+   * Get currently configured AI provider and model from .adf/.env
+   */
+  async getConfiguredAI() {
+    const envPath = path.join(this.projectPath, '.adf', '.env');
+    if (!(await fs.pathExists(envPath))) {
+      return { provider: 'anthropic', model: 'claude-3-5-sonnet-20241022' };
+    }
+
+    const content = await fs.readFile(envPath, 'utf-8');
+    const env = {};
+    content.split('\n').forEach(line => {
+      const [key, value] = line.split('=');
+      if (key && value) {
+        env[key.trim()] = value.trim().replace(/^["']|["']$/g, '');
+      }
+    });
+
+    return {
+      provider: env.ADF_CURRENT_PROVIDER || 'anthropic',
+      model: env.ADF_CURRENT_MODEL || 'claude-3-5-sonnet-20241022'
+    };
+  }
+
   /**
    * Get list of agents based on framework
    */
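The `.env` parsing added in `getConfiguredAI()` can be exercised without a filesystem by extracting it as a pure function. This is a minimal sketch of the same logic; `parseAdfEnv` is a hypothetical name for this example, not part of the package.

```javascript
// Pure-function sketch of the .env parsing in getConfiguredAI() above:
// split into lines, split each line on '=', trim, strip surrounding
// quotes, then fall back to the same defaults the hunk uses.
function parseAdfEnv(content) {
  const env = {};
  content.split('\n').forEach(line => {
    const [key, value] = line.split('=');
    if (key && value) {
      env[key.trim()] = value.trim().replace(/^["']|["']$/g, '');
    }
  });
  return {
    provider: env.ADF_CURRENT_PROVIDER || 'anthropic',
    model: env.ADF_CURRENT_MODEL || 'claude-3-5-sonnet-20241022'
  };
}

const sample = 'ADF_CURRENT_PROVIDER="openai"\nADF_CURRENT_MODEL=gpt-4o\n# a comment\n';
console.log(parseAdfEnv(sample)); // → { provider: 'openai', model: 'gpt-4o' }
```

One caveat of this parsing style: destructuring the result of `split('=')` keeps only the text before the second `=`, so a value containing an equals sign (e.g. a base64 token) would be truncated.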
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@iservu-inc/adf-cli",
-  "version": "0.14.4",
+  "version": "0.14.6",
   "description": "CLI tool for AgentDevFramework - Agent-Native development framework with multi-provider AI support",
   "main": "index.js",
   "bin": {