fastlane-plugin-translate_gpt_release_notes 0.1.1 → 0.2.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: e58b5b90b41cfb4c9865a1749550d3ebc9a5af135fca5ca666131a94021e2f8a
- data.tar.gz: f847030d39a98cd54fdf50d19c86b4b37e64e2eb19366145cf3e8bfd5110f4f4
+ metadata.gz: 409b1bcd2c73ee2accc11399a91ffb090bbecd3ae523950e1e15583327702d9e
+ data.tar.gz: 1ee83d987e74e9dbbb2e903940cdc74eff26476ab436f0637b2ff5681dddb455
  SHA512:
- metadata.gz: df687f2d1dc2d26c7bc9364eb870ac06f8ae89faf38371cc8981e973d599350b21de5d88e256424ad89bd6439aa9c24349b8e3f80beb6b31c29cb32bcc7f7662
- data.tar.gz: 6c5f4587ef5620db5e1027591753b8c2596857f3aac1b5717610ac16cd77e439f3787517851ed0fecd5e2d46aa25a2ba1383b879aa111fa7968d01ea437175e7
+ metadata.gz: e64e5272e48df8a01e1a28b625d31a4c859ce7c1e3998a6116ddbb4eed11f9d157eb12e801090d3ee84e0a7175c632f86515f270a55c1e792777c0f8425236cb
+ data.tar.gz: 8f5c8d58a9ffc296c035946d303c3439877d24d39262f17231d7fd1d771054a247202ecddd91b1e6a6fc9877db3613c325bc59eec0e673e1c1f10be33fef6c87
data/README.md CHANGED
@@ -6,7 +6,7 @@
 
  ## Getting Started
 
- This project is a [fastlane](https://github.com/fastlane/fastlane) plugin. To get started with `fastlane-plugin-translate_gpt`, add it to your project by running:
+ This project is a [fastlane](https://github.com/fastlane/fastlane) plugin. To get started with `fastlane-plugin-translate_gpt_release_notes`, add it to your project by running:
 
  ```bash
  fastlane add_plugin translate_gpt_release_notes
@@ -15,95 +15,399 @@ fastlane add_plugin translate_gpt_release_notes
  ### Requirements
 
  - Ruby >= 3.1
- - OpenAI API key
+ - API key for at least one supported translation provider
 
  **Note**: This plugin requires Ruby 3.1 or higher to ensure compatibility with the latest security patches in nokogiri.
 
  ## About translate-gpt-release-notes
 
- `translate-gpt-release-notes` is a fastlane plugin that allows you to translate release notes or changelogs for iOS and Android apps using OpenAI GPT API. Based on [translate-gpt by ftp27](https://github.com/ftp27/fastlane-plugin-translate_gpt).
+ `translate-gpt-release-notes` is a fastlane plugin that allows you to translate release notes or changelogs for iOS and Android apps using multiple AI translation providers. Based on [translate-gpt by ftp27](https://github.com/ftp27/fastlane-plugin-translate_gpt).
 
+ ### Supported Translation Providers
 
- ## How it works:
+ The plugin now supports **4 translation providers**, giving you flexibility to choose based on cost, quality, and availability:
 
- `translate-gpt-release-notes` takes the changelog file for master locale (default: en-US), detects other locales based on fastlane metadata folder structure, translates changelog to all other languages with OpenAI API and creates localized .txt changelong files in respective folders
+ | Provider | Best For | Quality | Cost | Speed |
+ |----------|----------|---------|------|-------|
+ | **OpenAI GPT** | General purpose, flexible translations | ⭐⭐⭐⭐⭐ | $$$ | Fast |
+ | **Anthropic Claude** | High-quality, nuanced translations | ⭐⭐⭐⭐⭐ | $$$ | Medium |
+ | **Google Gemini** | Cost-effective, high-volume translations | ⭐⭐⭐⭐ | $ | Fast |
+ | **DeepL** | European languages, specialized translation | ⭐⭐⭐⭐⭐ | $$ | Fast |
 
- ## Example
+ ## How it works
 
- The following example demonstrates how to use `translate-gpt-release-notes` in a `Fastfile`
+ `translate-gpt-release-notes` takes the changelog file for the master locale (default: en-US), detects other locales based on the fastlane metadata folder structure, translates the changelog to all other languages using your chosen AI provider, and creates localized `.txt` changelog files in their respective folders.
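For illustration, locale detection over a standard fastlane metadata layout (e.g. `fastlane/metadata/<locale>/release_notes.txt` for iOS) might look like the sketch below; the helper name and paths are assumptions for this example, not the plugin's actual code:

```ruby
# Hypothetical sketch: list target locales by scanning the metadata folder.
require 'pathname'

def target_locales(metadata_root, master_locale: 'en-US')
  Pathname.new(metadata_root)
          .children
          .select(&:directory?)
          .map { |dir| dir.basename.to_s }
          .reject { |locale| locale == master_locale }
end

# target_locales('fastlane/metadata') # => e.g. ["de-DE", "fr-FR", "ja"]
```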
+
+ ## Quick Start
+
+ ### 1. Configure your API key
+
+ Choose your preferred provider and set the corresponding environment variable:
+
+ ```bash
+ # Option 1: OpenAI (default)
+ export OPENAI_API_KEY='your-openai-api-key'
+ # Or use the legacy variable (still supported)
+ export GPT_API_KEY='your-openai-api-key'
+
+ # Option 2: Anthropic Claude
+ export ANTHROPIC_API_KEY='your-anthropic-api-key'
+
+ # Option 3: Google Gemini
+ export GEMINI_API_KEY='your-gemini-api-key'
+
+ # Option 4: DeepL
+ export DEEPL_API_KEY='your-deepl-api-key'
+ ```
+
+ ### 2. Use in your Fastfile
 
  ```ruby
- lane :translate_release_notes do
- translate_gpt_release_notes(
- master_locale: 'en-US',
- platform: 'ios',
- context: 'This is an app about cute kittens',
- model_name: 'gpt-5.2',
- service_tier: 'flex',
- request_timeout: 900
- # other parameters...
- )
+ lane :translate_release_notes do
+ translate_gpt_release_notes(
+ master_locale: 'en-US',
+ platform: 'ios',
+ context: 'This is an app about cute kittens'
+ )
  end
  ```
 
- ## Options
+ ## Provider Selection
 
- The following options are available for `translate-gpt-release-notes`:
+ ### Default Provider
 
- | Key | Description | Environment Variable |
- | --- | --- | --- |
- | `api_token` | The API key for your OpenAI GPT account. | `GPT_API_KEY` |
- | `model_name` | Name of the ChatGPT model to use (default: gpt-5.2) | `GPT_MODEL_NAME` |
- | `service_tier` | OpenAI service tier to use (auto, default, flex, or priority). | `GPT_SERVICE_TIER` |
- | `temperature` | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. Defaults to 0.5 | `GPT_TEMPERATURE` |
- | `request_timeout` | Timeout for the request in seconds. Defaults to 30 seconds. If `service_tier` is `flex` and this is lower than 900, the plugin increases it to 900. | `GPT_REQUEST_TIMEOUT` |
- | `master_locale` | Master language/locale for the source texts | `MASTER_LOCALE` |
- | `context` | Context for translation to improve accuracy | `GPT_CONTEXT` |
- | `platform` | Platform for which to translate (ios or android, defaults to ios).| `PLATFORM` |
+ By default, the plugin uses **OpenAI** as the translation provider. This ensures backward compatibility with existing setups.
 
- ## Authentication
+ ### Selecting a Provider
+
+ You can explicitly select a provider using the `provider` parameter:
+
+ ```ruby
+ # Use Anthropic Claude
+ translate_gpt_release_notes(
+ provider: 'anthropic',
+ master_locale: 'en-US',
+ platform: 'ios'
+ )
 
- `translate-gpt-release-notes` supports multiple authentication methods for the OpenAI GPT API:
+ # Use Google Gemini
+ translate_gpt_release_notes(
+ provider: 'gemini',
+ master_locale: 'en-US',
+ platform: 'ios'
+ )
 
- ### API Key
+ # Use DeepL
+ translate_gpt_release_notes(
+ provider: 'deepl',
+ master_locale: 'en-US',
+ platform: 'ios'
+ )
+ ```
 
- You can provide your API key directly as an option to `translate-gpt`:
+ Or set the default provider via environment variable:
+
+ ```bash
+ export TRANSLATION_PROVIDER='anthropic'
+ ```
+
+ ## Usage Examples by Provider
+
+ ### OpenAI (Default)
 
  ```ruby
  translate_gpt_release_notes(
- api_token: 'YOUR_API_KEY',
+ provider: 'openai', # Optional, this is the default
+ openai_api_key: 'sk-...', # Or use OPENAI_API_KEY env var
+ model_name: 'gpt-5.2', # Default model
+ service_tier: 'flex', # Options: auto, default, flex, priority
+ temperature: 0.5, # 0-2, lower = more deterministic
  master_locale: 'en-US',
  platform: 'ios',
- model_name: 'gpt-5.2',
- context: 'This is an app about cute kittens'
+ context: 'Fitness tracking app'
+ )
+ ```
 
+ ### Anthropic Claude
+
+ ```ruby
+ translate_gpt_release_notes(
+ provider: 'anthropic',
+ anthropic_api_key: 'sk-ant-...', # Or use ANTHROPIC_API_KEY env var
+ model_name: 'claude-sonnet-4.5', # Default model
+ temperature: 0.5, # 0-1 for Anthropic
+ master_locale: 'en-US',
+ platform: 'ios',
+ context: 'Finance management app'
+ )
+ ```
+
+ ### Google Gemini
+
+ ```ruby
+ translate_gpt_release_notes(
+ provider: 'gemini',
+ gemini_api_key: '...', # Or use GEMINI_API_KEY env var
+ model_name: 'gemini-2.5-flash', # Default model
+ temperature: 0.5, # 0-1 for Gemini
+ master_locale: 'en-US',
+ platform: 'android',
+ context: 'Social media app'
+ )
+ ```
+
+ ### DeepL
+
+ ```ruby
+ translate_gpt_release_notes(
+ provider: 'deepl',
+ deepl_api_key: '...', # Or use DEEPL_API_KEY env var
+ formality: 'less', # Options: default, more, less
+ master_locale: 'en-US',
+ platform: 'ios',
+ context: 'Casual gaming app'
  )
  ```
 
- ### Environment Variable
+ **Note**: DeepL automatically detects free vs paid API keys (free keys end with `:fx`) and uses the appropriate endpoint.
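As a rough sketch of that detection (the method name is hypothetical; the hosts are DeepL's documented free and paid API endpoints):

```ruby
# Hypothetical sketch: pick the DeepL host based on the API key suffix.
def deepl_endpoint(api_key)
  if api_key.to_s.end_with?(':fx')
    'https://api-free.deepl.com/v2/translate' # free-tier keys end with ":fx"
  else
    'https://api.deepl.com/v2/translate'
  end
end
```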
+
+ ## Options
+
+ ### Core Options
+
+ | Key | Description | Environment Variable | Default |
+ |-----|-------------|---------------------|---------|
+ | `provider` | Translation provider to use (`openai`, `anthropic`, `gemini`, `deepl`) | `TRANSLATION_PROVIDER` | `openai` |
+ | `master_locale` | Master language/locale for the source texts | `MASTER_LOCALE` | `en-US` |
+ | `platform` | Platform (`ios` or `android`) | `PLATFORM` | `ios` |
+ | `context` | Context for translation to improve accuracy | `GPT_CONTEXT` | - |
+
+ ### Provider-Specific API Keys
+
+ | Key | Description | Environment Variable |
+ |-----|-------------|---------------------|
+ | `openai_api_key` | OpenAI API key | `OPENAI_API_KEY` or `GPT_API_KEY` |
+ | `anthropic_api_key` | Anthropic API key | `ANTHROPIC_API_KEY` |
+ | `gemini_api_key` | Google Gemini API key | `GEMINI_API_KEY` |
+ | `deepl_api_key` | DeepL API key | `DEEPL_API_KEY` |
+
+ ### OpenAI-Specific Options
+
+ | Key | Description | Environment Variable | Default |
+ |-----|-------------|---------------------|---------|
+ | `model_name` | OpenAI model to use | `GPT_MODEL_NAME` | `gpt-5.2` |
+ | `service_tier` | Service tier: `auto`, `default`, `flex`, `priority` | `GPT_SERVICE_TIER` | - |
+ | `temperature` | Sampling temperature (0-2) | `GPT_TEMPERATURE` | `0.5` |
+ | `request_timeout` | Timeout in seconds (auto-bumped to 900s for flex) | `GPT_REQUEST_TIMEOUT` | `30` |
+
+ ### Anthropic-Specific Options
+
+ | Key | Description | Environment Variable | Default |
+ |-----|-------------|---------------------|---------|
+ | `model_name` | Anthropic model to use | `ANTHROPIC_MODEL_NAME` | `claude-sonnet-4.5` |
+ | `temperature` | Sampling temperature (0-1) | `ANTHROPIC_TEMPERATURE` | `0.5` |
+ | `request_timeout` | Timeout in seconds | `ANTHROPIC_REQUEST_TIMEOUT` | `60` |
+
+ ### Google Gemini-Specific Options
+
+ | Key | Description | Environment Variable | Default |
+ |-----|-------------|---------------------|---------|
+ | `model_name` | Gemini model to use | `GEMINI_MODEL_NAME` | `gemini-2.5-flash` |
+ | `temperature` | Sampling temperature (0-1) | `GEMINI_TEMPERATURE` | `0.5` |
+ | `request_timeout` | Timeout in seconds | `GEMINI_REQUEST_TIMEOUT` | `60` |
+
+ ### DeepL-Specific Options
 
- Alternatively, you can set the `GPT_API_KEY` environment variable with your API key:
+ | Key | Description | Environment Variable | Default |
+ |-----|-------------|---------------------|---------|
+ | `formality` | Formality level: `default`, `more`, `less` | `DEEPL_FORMALITY` | `default` |
+ | `request_timeout` | Timeout in seconds | `DEEPL_REQUEST_TIMEOUT` | `30` |
+
+ ## Authentication
+
+ ### Environment Variables (Recommended)
+
+ The recommended approach is to set API keys via environment variables:
 
  ```bash
- export GPT_API_KEY='YOUR_API_KEY'
+ export OPENAI_API_KEY='sk-...'
+ export ANTHROPIC_API_KEY='sk-ant-...'
+ export GEMINI_API_KEY='...'
+ export DEEPL_API_KEY='...'
  ```
 
- And then call `translate-gp-release-notes` without specifying an API key:
+ ### Direct Parameters
+
+ Alternatively, pass API keys directly (useful for CI/CD with secrets):
 
  ```ruby
  translate_gpt_release_notes(
+ provider: 'anthropic',
+ anthropic_api_key: ENV['ANTHROPIC_API_KEY'],
  master_locale: 'en-US',
- platform: 'ios',
+ platform: 'ios'
+ )
+ ```
+
+ ### Multiple Providers Configuration
+
+ You can configure multiple providers simultaneously and switch between them:
+
+ ```bash
+ # Set up all providers
+ export OPENAI_API_KEY='sk-...'
+ export ANTHROPIC_API_KEY='sk-ant-...'
+ export GEMINI_API_KEY='...'
+
+ # Default to Gemini for cost savings
+ export TRANSLATION_PROVIDER='gemini'
+ ```
+
+ ## Migration Guide
+
+ ### From Single-Provider Setup (v0.1.x)
+
+ If you're upgrading from a previous version that only supported OpenAI:
+
+ 1. **No breaking changes** - Your existing setup will continue to work
+ 2. **Existing `GPT_API_KEY` still works** - No need to rename your environment variable
+ 3. **Default provider is OpenAI** - All existing configurations work unchanged
+
+ Optional improvements you can make:
+ - Rename `GPT_API_KEY` to `OPENAI_API_KEY` for clarity (both work)
+ - Set `TRANSLATION_PROVIDER` if you want to experiment with other providers
+ - Try different providers for different lanes (e.g., Gemini for development, Claude for production)
+
+ ### Example Migration
+
+ **Before:**
+ ```ruby
+ translate_gpt_release_notes(
+ api_token: ENV['GPT_API_KEY'],
  model_name: 'gpt-5.2',
- context: 'This is an app about cute kittens'
+ master_locale: 'en-US'
  )
  ```
- ## Important notes:
 
- 1. Android has a limit of 500 symbols for changelogs and sometimes translations can exceed this number, which leads to Google API errors when submitting the app. Plugin **tries** to handle this, however errors happen. Reducing the length of master_locale changelog usually helps. iOS has a limit of 4000 symbols, which is plenty.
- 2. OpenAI API usage cost money, keep it in mind.
- 3. If you use `service_tier: 'flex'`, the plugin increases `request_timeout` to 900s when it is set lower.
- 4. Hint: Flex processing trades higher latency for lower prices, which can reduce costs for non-urgent translations.
+ **After** (still works, but cleaner):
+ ```ruby
+ translate_gpt_release_notes(
+ provider: 'openai',
+ master_locale: 'en-US'
+ )
+ ```
+
+ ## Important Notes
+
+ ### Android 500 Character Limit
+
+ Android has a limit of 500 characters for changelogs. The plugin handles this in two ways:
+
+ 1. **AI Providers (OpenAI, Anthropic, Gemini)**: The character limit is included in the translation prompt, asking the AI to stay within the limit
+ 2. **DeepL**: Translations are truncated to 500 characters with a warning if they exceed the limit
+
+ If you frequently hit the limit, consider shortening your master locale changelog.
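The DeepL-side fallback described in item 2 above could be sketched roughly as follows (hypothetical helper, not the plugin's actual method):

```ruby
# Hypothetical sketch: truncate an Android changelog that exceeds the limit.
ANDROID_CHANGELOG_LIMIT = 500

def enforce_android_limit(text)
  return text if text.length <= ANDROID_CHANGELOG_LIMIT

  warn "Translation exceeds #{ANDROID_CHANGELOG_LIMIT} characters and will be truncated"
  text[0...ANDROID_CHANGELOG_LIMIT]
end
```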
+
+ ### iOS Character Limit
+
+ iOS has a limit of 4000 characters, which is rarely an issue for release notes.
+
+ ### Cost Considerations
+
+ All AI translation APIs cost money. Consider these tips:
+
+ - Use `service_tier: 'flex'` with OpenAI for lower prices (trades latency for cost)
+ - Google Gemini is generally the most cost-effective option
+ - DeepL offers competitive pricing for European languages
+ - The plugin skips translation if the source file hasn't changed (tracked via `last_successful_run.txt`)
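The skip-if-unchanged behavior mentioned in the last bullet could work roughly like this sketch (the file name comes from the plugin; the comparison logic here is an assumption):

```ruby
# Hypothetical sketch: only translate when the source changelog is newer
# than the timestamp recorded after the last successful run.
require 'time'

def translation_needed?(changelog_path, last_run_file = 'last_successful_run.txt')
  return true unless File.exist?(last_run_file)

  last_run = Time.parse(File.read(last_run_file).strip)
  File.mtime(changelog_path) > last_run
end
```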
+
+ ### Service Tiers (OpenAI)
+
+ | Tier | Description | Use Case |
+ |------|-------------|----------|
+ | `auto` | Automatic tier selection | General use |
+ | `default` | Standard processing | Urgent translations |
+ | `flex` | Lower cost, higher latency | Non-urgent translations |
+ | `priority` | Premium processing | Critical releases |
+
+ **Note**: When using `flex`, the plugin automatically increases `request_timeout` to 900 seconds if set lower.
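That bump amounts to a one-line guard, sketched here with assumed names:

```ruby
# Hypothetical sketch: never run flex-tier requests with less than a 900 s timeout.
FLEX_MINIMUM_TIMEOUT = 900

def effective_timeout(service_tier, request_timeout)
  return request_timeout unless service_tier == 'flex'

  [request_timeout, FLEX_MINIMUM_TIMEOUT].max
end
```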
334
+
335
+ ## Troubleshooting
336
+
337
+ ### "No translation provider credentials configured"
338
+
339
+ **Cause**: No API keys are set for any provider.
340
+
341
+ **Solution**: Set at least one provider's API key:
342
+ ```bash
343
+ export OPENAI_API_KEY='your-key-here'
344
+ ```
345
+
346
+ ### "Provider 'X' has no credentials"
347
+
348
+ **Cause**: You specified a provider but haven't configured its API key.
349
+
350
+ **Solution**: Either configure the provider's API key or switch to a provider with configured credentials.
351
+
352
+ ### "Invalid provider 'X'"
353
+
354
+ **Cause**: The provider name is not recognized.
355
+
356
+ **Solution**: Use one of the valid provider names: `openai`, `anthropic`, `gemini`, `deepl`.
357
+
358
+ ### Translations Exceed Android Character Limit
359
+
360
+ **Cause**: The translated text is longer than 500 characters.
361
+
362
+ **Solutions**:
363
+ 1. Shorten your source changelog
364
+ 2. For DeepL, translations are automatically truncated
365
+ 3. For AI providers, the prompt includes the limit but compliance isn't guaranteed
366
+
367
+ ### API Timeout Errors
368
+
369
+ **Cause**: The translation request is taking too long.
370
+
371
+ **Solutions**:
372
+ 1. Increase `request_timeout` parameter
373
+ 2. For OpenAI flex tier, timeout is automatically increased to 900s
374
+ 3. Consider using a faster provider (Gemini or DeepL)
375
+
376
+ ### Slow Translations with Flex Tier
377
+
378
+ **Cause**: Flex tier trades latency for lower cost.
379
+
380
+ **Solution**: This is expected behavior. If speed is critical, use `service_tier: 'default'` or `service_tier: 'priority'`.
381
+
382
+ ## Provider Comparison Details
383
+
384
+ ### When to Use Each Provider
385
+
386
+ **OpenAI GPT**
387
+ - ✅ Best for general-purpose translations
388
+ - ✅ Flexible and customizable
389
+ - ✅ Supports service tiers for cost control
390
+ - ❌ Can be expensive for high volume
391
+
392
+ **Anthropic Claude**
393
+ - ✅ Highest quality nuanced translations
394
+ - ✅ Excellent for complex or technical content
395
+ - ✅ Strong reasoning capabilities
396
+ - ❌ Slower than other options
397
+ - ❌ Higher cost
398
+
399
+ **Google Gemini**
400
+ - ✅ Most cost-effective
401
+ - ✅ Fast response times
402
+ - ✅ Good quality for standard content
403
+ - ❌ May struggle with very nuanced content
404
+
405
+ **DeepL**
406
+ - ✅ Best for European languages
407
+ - ✅ Purpose-built for translation
408
+ - ✅ Formality control
409
+ - ❌ Limited language support compared to AI providers
410
+ - ❌ May not handle app-specific context as well
107
411
 
108
412
  ## Issues and Feedback
109
413
 
@@ -1,12 +1,24 @@
  require 'fastlane/action'
- require 'openai'
  require_relative '../helper/translate_gpt_release_notes_helper'
+ require_relative '../helper/credential_resolver'
+ require_relative '../helper/providers/provider_factory'
  require 'fileutils'
 
  module Fastlane
  module Actions
  class TranslateGptReleaseNotesAction < Action
  def self.run(params)
+ provider_name = params[:provider] || 'openai'
+
+ unless Helper::CredentialResolver.credentials_exist?(provider_name, params)
+ available = Helper::CredentialResolver.available_providers(params)
+ if available.empty?
+ UI.user_error!("No translation provider credentials configured. Set one of: OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY, or DEEPL_API_KEY")
+ else
+ UI.user_error!("Provider '#{provider_name}' has no credentials. Available providers: #{available.join(', ')}")
+ end
+ end
+
  # Define the path for the last run time file
  last_run_file = "last_successful_run.txt"
 
@@ -98,11 +110,60 @@ module Fastlane
  end
 
  def self.description
- "Translate release notes or changelogs for iOS and Android apps using OpenAI's GPT API"
+ "Translate release notes using AI providers: OpenAI, Claude, Gemini, or DeepL"
  end
 
  def self.available_options
  [
+ FastlaneCore::ConfigItem.new(
+ key: :provider,
+ env_name: 'TRANSLATION_PROVIDER',
+ description: "Translation provider to use (#{Helper::Providers::ProviderFactory.available_provider_names.join(', ')})",
+ type: String,
+ default_value: 'openai',
+ verify_block: proc do |value|
+ unless Helper::Providers::ProviderFactory.valid_provider?(value)
+ available = Helper::Providers::ProviderFactory.available_provider_names.join(', ')
+ UI.user_error!("Invalid provider '#{value}'. Available: #{available}")
+ end
+ end
+ ),
+ FastlaneCore::ConfigItem.new(
+ key: :openai_api_key,
+ env_name: 'OPENAI_API_KEY',
+ description: 'OpenAI API key (alternative to environment variable)',
+ sensitive: true,
+ code_gen_sensitive: true,
+ optional: true,
+ default_value: nil
+ ),
+ FastlaneCore::ConfigItem.new(
+ key: :anthropic_api_key,
+ env_name: 'ANTHROPIC_API_KEY',
+ description: 'Anthropic API key (alternative to environment variable)',
+ sensitive: true,
+ code_gen_sensitive: true,
+ optional: true,
+ default_value: nil
+ ),
+ FastlaneCore::ConfigItem.new(
+ key: :gemini_api_key,
+ env_name: 'GEMINI_API_KEY',
+ description: 'Google Gemini API key (alternative to environment variable)',
+ sensitive: true,
+ code_gen_sensitive: true,
+ optional: true,
+ default_value: nil
+ ),
+ FastlaneCore::ConfigItem.new(
+ key: :deepl_api_key,
+ env_name: 'DEEPL_API_KEY',
+ description: 'DeepL API key (alternative to environment variable)',
+ sensitive: true,
+ code_gen_sensitive: true,
+ optional: true,
+ default_value: nil
+ ),
  FastlaneCore::ConfigItem.new(
  key: :api_token,
  env_name: "GPT_API_KEY",
@@ -114,7 +175,7 @@ module Fastlane
  FastlaneCore::ConfigItem.new(
  key: :model_name,
  env_name: "GPT_MODEL_NAME",
- description: "Name of the ChatGPT model to use",
+ description: "Name of the AI model to use (provider-specific)",
  default_value: "gpt-5.2"
  ),
  FastlaneCore::ConfigItem.new(
@@ -0,0 +1,118 @@
+ module Fastlane
+ module Helper
+ # CredentialResolver manages multiple provider API keys simultaneously,
+ # allowing users to configure keys for all providers and select which
+ # to use via the provider parameter.
+ #
+ # This class provides a centralized way to resolve API credentials from
+ # various sources (parameters, environment variables) with a defined
+ # priority order.
+ class CredentialResolver
+ # Maps each provider to its credential configuration
+ # Each provider has:
+ # - env_vars: Array of environment variable names to check (in order)
+ # - param_key: Symbol for the parameter key in the params hash
+ PROVIDER_CREDENTIALS = {
+ 'openai' => {
+ env_vars: ['OPENAI_API_KEY', 'GPT_API_KEY'], # GPT_API_KEY for backward compatibility
+ param_key: :openai_api_key
+ },
+ 'anthropic' => {
+ env_vars: ['ANTHROPIC_API_KEY'],
+ param_key: :anthropic_api_key
+ },
+ 'gemini' => {
+ env_vars: ['GEMINI_API_KEY'],
+ param_key: :gemini_api_key
+ },
+ 'deepl' => {
+ env_vars: ['DEEPL_API_KEY'],
+ param_key: :deepl_api_key
+ }
+ }.freeze
+
+ # Resolves the API key for a given provider following priority order:
+ # 1. Direct parameter (e.g., params[:openai_api_key])
+ # 2. Environment variables in order defined in PROVIDER_CREDENTIALS
+ # 3. Legacy fallback for OpenAI (GPT_API_KEY) if defined in env_vars
+ #
+ # @param provider_name [String] The provider identifier (e.g., 'openai', 'anthropic')
+ # @param params [Hash] Hash of parameters that may contain API keys
+ # @return [String, nil] The resolved API key or nil if not found
+ def self.resolve(provider_name, params = {})
+ config = provider_config(provider_name)
+ return nil unless config
+
+ # Priority 1: Check direct parameter
+ param_value = params[config[:param_key]]
+ return param_value.to_s.strip unless param_value.nil? || param_value.to_s.strip.empty?
+
+ # Priority 2: Check environment variables in order
+ config[:env_vars].each do |env_var|
+ env_value = ENV[env_var]
+ return env_value.to_s.strip unless env_value.nil? || env_value.to_s.strip.empty?
+ end
+
+ nil
+ end
+
+ # Checks if credentials exist for a given provider.
+ #
+ # @param provider_name [String] The provider identifier
+ # @param params [Hash] Hash of parameters that may contain API keys
+ # @return [Boolean] true if credentials exist, false otherwise
+ def self.credentials_exist?(provider_name, params = {})
+ !resolve(provider_name, params).nil?
+ end
+
+ # Returns an array of provider names that have configured credentials.
+ #
+ # @param params [Hash] Hash of parameters that may contain API keys
+ # @return [Array<String>] Array of provider names with valid credentials
+ def self.available_providers(params = {})
+ PROVIDER_CREDENTIALS.keys.select do |provider_name|
+ credentials_exist?(provider_name, params)
+ end
+ end
+
+ # Returns help text explaining how to configure credentials for a provider.
+ #
+ # @param provider_name [String] The provider identifier
+ # @return [String] Help text for configuring credentials
+ def self.credential_help(provider_name)
+ config = provider_config(provider_name)
+ return "Unknown provider: #{provider_name}" unless config
+
+ env_vars = config[:env_vars]
+ param_key = config[:param_key]
+
+ if env_vars.length == 1
+ "Set #{env_vars.first} environment variable, or pass :#{param_key} parameter"
+ else
+ env_vars_str = env_vars.join(' or ')
+ "Set #{env_vars_str} environment variable, or pass :#{param_key} parameter"
+ end
+ end
+
+ # Returns an array of all supported provider names.
+ #
+ # @return [Array<String>] Array of all supported provider names
+ def self.all_providers
+ PROVIDER_CREDENTIALS.keys.freeze
+ end
+
+ private
+
+ # Retrieves the credential configuration for a provider.
+ # Handles case-insensitive provider names by downcasing.
+ #
+ # @param provider_name [String] The provider identifier
+ # @return [Hash, nil] The credential configuration or nil if provider not found
+ def self.provider_config(provider_name)
+ return nil if provider_name.nil?
+
+ PROVIDER_CREDENTIALS[provider_name.to_s.downcase]
+ end
+ end
+ end
+ end
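A brief usage illustration of the resolver's priority order (placeholder values; assumes the class is loaded as shown above):

```ruby
# A direct parameter takes precedence over an environment variable
# for the same provider.
ENV['GEMINI_API_KEY'] = 'env-gemini-key'
params = { gemini_api_key: 'param-gemini-key' }

Fastlane::Helper::CredentialResolver.resolve('gemini', params)
# => "param-gemini-key"

Fastlane::Helper::CredentialResolver.resolve('gemini', {})
# => "env-gemini-key"

Fastlane::Helper::CredentialResolver.available_providers(params)
# => ["gemini"] (plus any other providers whose keys are set in the environment)
```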