create-app-release 1.2.0 → 1.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/.prettierignore CHANGED
@@ -4,3 +4,4 @@ build
  coverage
  .next
  package-lock.json
+ pnpm-lock.yaml
package/CHANGELOG.md CHANGED
@@ -5,6 +5,15 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

+ ## [1.3.0] - 2026-01-29
+
+ ### Added
+
+ - Added support for Google Gemini as an AI provider (API key and CLI integration).
+ - Introduced `--ai-provider` option to select between OpenAI, Gemini API, and Gemini CLI.
+ - Added `--gemini-key` and `--gemini-model` options for configuring Gemini API.
+ - Enabled direct integration with `gemini-cli` for summary generation.
+
  ## [1.2.0] - 2025-03-18

  ### Added
@@ -72,6 +81,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - ora - Terminal spinners
  - dotenv - Environment variable management

+ [1.3.0]: https://github.com/jamesgordo/create-app-release/releases/tag/v1.3.0
  [1.2.0]: https://github.com/jamesgordo/create-app-release/releases/tag/v1.2.0
  [1.1.0]: https://github.com/jamesgordo/create-app-release/releases/tag/v1.1.0
  [1.0.0]: https://github.com/jamesgordo/create-app-release/releases/tag/v1.0.0
package/README.md CHANGED
@@ -7,25 +7,24 @@ An AI-powered GitHub release automation tool that helps you create release pull

  ## Features

- - 🤖 AI-powered release notes generation using GPT-4
- - 🔄 Flexible LLM support:
- - OpenAI models (GPT-4o, GPT-3.5-turbo)
- - Deepseek models
- - QwenAI models
- - Local LLM deployments
- - 📦 Zero configuration - works right out of the box
- - 🔑 Secure token management through git config
- - 🎯 Interactive pull request selection
- - Professional markdown formatting
- - 📝 Smart categorization of changes
- - 🌟 User-friendly CLI interface
+ - 🤖 AI-powered release notes generation.
+ - 🔄 **Flexible LLM Support**: Seamlessly switch between OpenAI, Google Gemini, and any OpenAI-compatible API.
+ - **OpenAI**: `gpt-4o`, `gpt-3.5-turbo`.
+ - **Google Gemini**: `gemini-pro` via API key or local `gemini-cli`.
+ - **OpenAI-Compatible**: Supports providers like Deepseek, QwenAI, or local LLMs via a custom base URL.
+ - 📦 Zero configuration - works right out of the box.
+ - 🔑 Secure token management through `git config`.
+ - 🎯 Interactive pull request selection.
+ - Professional markdown formatting.
+ - 📝 Smart categorization of changes.
+ - 🌟 User-friendly CLI interface.

  ## Prerequisites

  - Node.js 14 or higher
  - Git installed and configured
- - GitHub account with repository access
- - OpenAI account (for GPT-4 access)
+ - A GitHub account with repository access
+ - An account with an AI provider (e.g., OpenAI, Google Gemini) if using an API key.

  ## Usage

@@ -35,57 +34,65 @@ Run the tool directly using npx:
  npx create-app-release
  ```

- On first run, the tool will guide you through:
-
- 1. Setting up your GitHub token (stored in git config)
- 2. Configuring your OpenAI API key (stored in git config)
- 3. Selecting pull requests for the release
- 4. Reviewing the AI-generated summary
- 5. Creating the release pull request
+ On the first run, the tool will guide you through setting up the necessary tokens and configurations.

  ### Token Setup

- You'll need two tokens to use this tool:
+ You will need a **GitHub Token** and an API key for your chosen AI provider.

- 1. **GitHub Token** - Create at [GitHub Token Settings](https://github.com/settings/tokens/new)
+ 1. **GitHub Token** - Create at [GitHub Token Settings](https://github.com/settings/tokens/new)
+ - Required scope: `repo`
+ - Stored in git config as `github.token`

- - Required scope: `repo`
- - Will be stored in git config as `github.token`
+ 2. **OpenAI API Key** - Get from [OpenAI Platform](https://platform.openai.com/api-keys)
+ - Required if using the `openai` provider.
+ - Stored in git config as `openai.token`

- 2. **OpenAI API Key** - Get from [OpenAI Platform](https://platform.openai.com/api-keys)
- - Will be stored in git config as `openai.token`
+ 3. **Gemini API Key** - Get from [Google AI Studio](https://makersuite.google.com/app/apikey)
+ - Required if using the `gemini` provider.
+ - Stored in git config as `gemini.token`

  ### Command-Line Options

- Customize the tool's behavior using these command-line options:
+ #### General Options

- ```bash
- # Set OpenAI API key directly (alternative to env/git config)
- --openai-key <key>
+ `--ai-provider <provider>`
+ : Select the AI provider.
+ : **Options**: `openai`, `gemini`, `gemini-cli`.
+ : If not specified, you will be prompted to choose.

- # Choose OpenAI model (default: "gpt-4o")
- --openai-model <model>
- # Examples: gpt-4o, gpt-3.5-turbo, deepseek-r1, qwen2.5
+ ---

- # Set custom OpenAI API base URL
- --openai-base-url <url>
- # Examples:
- # - Deepseek: https://api.deepseek.com/v1
- # - QwenAI: https://api.qwen.ai/v1
- # - Local: http://localhost:8000/v1
- # - Custom: https://custom-openai-endpoint.com/v1
+ #### OpenAI Provider (`--ai-provider openai`)

- # Full example with different providers:
+ `--openai-key <key>`
+ : Set your OpenAI API key directly.

- # Using Deepseek
- npx create-app-release --openai-base-url https://api.deepseek.com/v1 --openai-key your_deepseek_key --openai-model deepseek-chat
+ `--openai-model <model>`
+ : Choose the OpenAI model (default: `"gpt-4o"`).

- # Using QwenAI
- npx create-app-release --openai-base-url https://api.qwen.ai/v1 --openai-key your_qwen_key --openai-model qwen-14b-chat
+ `--openai-base-url <url>`
+ : Set a custom base URL for OpenAI-compatible APIs (e.g., Deepseek, QwenAI, local LLMs).
+ : **Examples**:
+ : - `https://api.deepseek.com/v1`
+ : - `https://api.qwen.ai/v1`
+ : - `http://localhost:8000/v1`

- # Using Local LLM
- npx create-app-release --openai-base-url http://localhost:8000/v1 --openai-model local-model
- ```
+ ---
+
+ #### Gemini Provider (`--ai-provider gemini`)
+
+ `--gemini-key <key>`
+ : Set your Gemini API key directly.
+
+ `--gemini-model <model>`
+ : Set the Gemini model to use (default: `"gemini-pro"`).
+
+ ---
+
+ #### Gemini CLI Provider (`--ai-provider gemini-cli`)
+
+ This option uses a local `gemini` command-line tool, which must be installed and available in your system's `PATH`. The script will execute the `gemini` command, passing the prompt to its standard input. No API key is required for this provider option.

  ### Environment Variables (Optional)

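For orientation, the options documented in the README diff above compose as follows. This is a hedged usage sketch: the flag names are taken directly from this diff, while the key values and model names are placeholders to substitute with your own.

```bash
# OpenAI provider (the key can also come from OPENAI_API_KEY or git config openai.token)
npx create-app-release --ai-provider openai --openai-key "<your-openai-key>"

# Gemini via API key, optionally overriding the default gemini-pro model
npx create-app-release --ai-provider gemini --gemini-key "<your-gemini-key>" --gemini-model gemini-pro

# Gemini via a locally installed `gemini` CLI; no API key flag needed
npx create-app-release --ai-provider gemini-cli

# OpenAI-compatible endpoint (e.g., a local LLM), as in previous releases
npx create-app-release --ai-provider openai --openai-base-url http://localhost:8000/v1 --openai-model local-model
```
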
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "create-app-release",
- "version": "1.2.0",
+ "version": "1.3.0",
  "description": "AI-powered GitHub release automation tool",
  "main": "src/index.js",
  "type": "module",
@@ -45,6 +45,7 @@
  ]
  },
  "dependencies": {
+ "@google/generative-ai": "^0.24.1",
  "@octokit/rest": "^20.0.2",
  "chalk": "^5.3.0",
  "commander": "^11.1.0",
package/src/index.js CHANGED
@@ -6,10 +6,14 @@ import { Octokit } from '@octokit/rest';
  import inquirer from 'inquirer';
  import chalk from 'chalk';
  import ora from 'ora';
+ import { GoogleGenerativeAI } from '@google/generative-ai';
  import OpenAI from 'openai';
  import { promisify } from 'util';
  import { exec as execCallback } from 'child_process';
  import { createRequire } from 'module';
+ import { tmpdir } from 'os';
+ import { join } from 'path';
+ import { writeFile, unlink } from 'fs/promises';

  // Initialize utilities
  const exec = promisify(execCallback);
@@ -98,12 +102,13 @@ async function configureToken({ envKey, gitKey, name, createUrl, additionalInfo
  // Initialize API clients
  let octokit;
  let openai;
+ let gemini;

  /**
- * Initialize GitHub and OpenAI tokens
- * @returns {Promise<Object>} Object containing both tokens
+ * Initialize GitHub token
+ * @returns {Promise<string>} The GitHub token
  */
- async function initializeTokens() {
+ async function initializeGitHubToken() {
  const githubToken = await configureToken({
  envKey: 'GITHUB_TOKEN',
  gitKey: 'github.token',
@@ -111,15 +116,7 @@ async function initializeTokens() {
  createUrl: 'https://github.com/settings/tokens/new',
  additionalInfo: "Make sure to enable the 'repo' scope.",
  });
-
- const openaiToken = await configureToken({
- envKey: 'OPENAI_API_KEY',
- gitKey: 'openai.token',
- name: 'OpenAI',
- createUrl: 'https://platform.openai.com/api-keys',
- });
-
- return { githubToken, openaiToken };
+ return githubToken;
  }

  /**
@@ -349,9 +346,10 @@ async function fetchPullRequests(owner, repo, baseBranch) {
  /**
  * Generate an AI-powered release summary from selected pull requests
  * @param {Array} selectedPRs - List of selected pull requests
+ * @param {string} aiProvider - The AI provider to use ('openai' or 'gemini')
  * @returns {Promise<string>} Generated release summary
  */
- async function generateSummary(selectedPRs) {
+ async function generateSummary(selectedPRs, aiProvider) {
  const spinner = ora('Generating release summary...').start();
  try {
  const prDetails = selectedPRs.map((pr) => ({
@@ -377,22 +375,39 @@ ${JSON.stringify(prDetails, null, 2)}

  Keep the summary concise, clear, and focused on the user impact. Use professional but easy-to-understand language.`;

- const model = program.opts().openaiModel || 'gpt-4o';
- const response = await openai.chat.completions.create({
- model,
- messages: [{ role: 'user', content: prompt }],
- temperature: 0.7,
- });
+ let summaryText;

- // Validate response structure
- if (!response?.choices?.length || !response.choices[0]?.message?.content) {
- throw new Error(
- 'Invalid API response structure. Expected response.choices[0].message.content'
- );
+ if (aiProvider === 'openai') {
+ const model = program.opts().openaiModel || 'gpt-4o';
+ const response = await openai.chat.completions.create({
+ model,
+ messages: [{ role: 'user', content: prompt }],
+ temperature: 0.7,
+ });
+
+ if (!response?.choices?.length || !response.choices[0]?.message?.content) {
+ throw new Error('Invalid API response structure from OpenAI.');
+ }
+ summaryText = response.choices[0].message.content;
+ } else if (aiProvider === 'gemini') {
+ const modelName = program.opts().geminiModel || 'gemini-pro';
+ const model = gemini.getGenerativeModel({ model: modelName });
+ const result = await model.generateContent(prompt);
+ const response = await result.response;
+ summaryText = response.text();
+ } else if (aiProvider === 'gemini-cli') {
+ const tempFilePath = join(tmpdir(), `gemini-prompt-${Date.now()}.txt`);
+ try {
+ await writeFile(tempFilePath, prompt, 'utf-8');
+ const { stdout } = await exec(`gemini < ${tempFilePath}`);
+ summaryText = stdout;
+ } finally {
+ await unlink(tempFilePath);
+ }
  }

  spinner.succeed('Summary generated successfully');
- return response.choices[0].message.content;
+ return summaryText;
  } catch (error) {
  spinner.fail('Failed to generate summary');

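For reference, the `gemini-cli` branch added above boils down to the shell interaction below. This is a sketch under the same assumption the code makes: a `gemini` binary is on `PATH` and reads its prompt from standard input. Note that the promisified `exec` runs its command through a shell, which is what makes the `<` redirection in the hunk work.

```bash
# Rough shell equivalent of the gemini-cli code path: write the prompt to a
# temporary file, pipe it to the local `gemini` binary, then clean up.
PROMPT_FILE="$(mktemp)"
printf '%s' "$PROMPT" > "$PROMPT_FILE"   # $PROMPT stands in for the generated prompt text
gemini < "$PROMPT_FILE"
rm -f "$PROMPT_FILE"
```
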
@@ -483,24 +498,65 @@ async function run() {
  const options = program.opts();

  // Initialize GitHub token
- const { githubToken } = await initializeTokens();
+ const githubToken = await initializeGitHubToken();
+ octokit = new Octokit({ auth: githubToken });

- // Get OpenAI token from command line or fallback to configuration
- let openaiToken = options.openaiKey;
- if (!openaiToken) {
- const tokens = await initializeTokens();
- openaiToken = tokens.openaiToken;
+ // AI Provider selection
+ let aiProvider = options.aiProvider;
+ if (!aiProvider) {
+ const { provider } = await inquirer.prompt([
+ {
+ type: 'list',
+ name: 'provider',
+ message: 'Select an AI provider for generating summaries:',
+ choices: [
+ { name: 'OpenAI', value: 'openai' },
+ { name: 'Gemini (API Key)', value: 'gemini' },
+ { name: 'Gemini (CLI)', value: 'gemini-cli' },
+ ],
+ default: 'openai',
+ },
+ ]);
+ aiProvider = provider;
  }

- // Initialize clients with tokens
- octokit = new Octokit({
- auth: githubToken,
- });
-
- openai = new OpenAI({
- apiKey: openaiToken,
- baseURL: options.openaiBaseUrl,
- });
+ // Initialize AI client
+ if (aiProvider === 'openai') {
+ const openaiToken =
+ options.openaiKey ||
+ (await configureToken({
+ envKey: 'OPENAI_API_KEY',
+ gitKey: 'openai.token',
+ name: 'OpenAI',
+ createUrl: 'https://platform.openai.com/api-keys',
+ }));
+ openai = new OpenAI({
+ apiKey: openaiToken,
+ baseURL: options.openaiBaseUrl,
+ });
+ } else if (aiProvider === 'gemini') {
+ const geminiToken =
+ options.geminiKey ||
+ (await configureToken({
+ envKey: 'GEMINI_API_KEY',
+ gitKey: 'gemini.token',
+ name: 'Gemini',
+ createUrl: 'https://makersuite.google.com/app/apikey',
+ }));
+ gemini = new GoogleGenerativeAI(geminiToken);
+ } else if (aiProvider === 'gemini-cli') {
+ // No initialization needed, but I can check if `gemini` command exists
+ try {
+ await exec('command -v gemini');
+ } catch (error) {
+ console.error(
+ chalk.red(
+ 'Error: `gemini` command not found. Please install the gemini-cli tool and make sure it is in your PATH.'
+ )
+ );
+ process.exit(1);
+ }
+ }

  // Fetch repositories the user has contributed to
  const userRepos = await fetchUserRepositories();
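Because the initialization above falls back to `configureToken` with the environment variables and git config keys shown, credentials can be pre-seeded to skip the interactive prompts. A hedged example follows; the diff does not show whether `configureToken` reads local or global git config, so adjust the scope to match your setup.

```bash
# Store tokens under the git config keys referenced in this hunk
git config --global github.token "<your-github-token>"
git config --global openai.token "<your-openai-key>"   # used with --ai-provider openai
git config --global gemini.token "<your-gemini-key>"   # used with --ai-provider gemini

# Or export the corresponding environment variables
export GITHUB_TOKEN="<your-github-token>"
export OPENAI_API_KEY="<your-openai-key>"
export GEMINI_API_KEY="<your-gemini-key>"

# The gemini-cli provider needs no key, only a binary found by `command -v gemini`
```
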
@@ -616,7 +672,7 @@ async function run() {

  let summary;
  if (summaryType === 'ai') {
- summary = await generateSummary(selectedPRs);
+ summary = await generateSummary(selectedPRs, aiProvider);
  } else {
  summary = selectedPRs
  .map((pr) => {
@@ -668,27 +724,36 @@ async function run() {
  const description = `AI-powered GitHub release automation tool

  Options:
- --openai-key <key> Set OpenAI API key directly (alternative to env/git config)
- --openai-model <model> Set OpenAI model to use (default: "gpt-4")
- Examples: gpt-4, gpt-3.5-turbo
+ --ai-provider <provider> Set AI provider to use ('openai', 'gemini', or 'gemini-cli')
+ --openai-key <key> Set OpenAI API key directly
+ --openai-model <model> Set OpenAI model to use (default: "gpt-4o")
  --openai-base-url <url> Set custom OpenAI API base URL
- Example: https://custom-openai-endpoint.com/v1
+ --gemini-key <key> Set Gemini API key directly (for 'gemini' provider)
+ --gemini-model <model> Set Gemini model to use (default: "gemini-pro")

  Environment Variables:
  GITHUB_TOKEN GitHub personal access token
- OPENAI_API_KEY OpenAI API key (if not using --openai-key)
+ OPENAI_API_KEY OpenAI API key
+ GEMINI_API_KEY Gemini API key

  Git Config:
  github.token GitHub token in git config
- openai.token OpenAI token in git config (if not using --openai-key)
+ openai.token OpenAI token in git config
+ gemini.token Gemini token in git config
  `;

  program
  .name('create-app-release')
  .description(description)
  .version(pkg.version)
+ .option(
+ '--ai-provider <provider>',
+ "Set AI provider to use ('openai', 'gemini', or 'gemini-cli')"
+ )
  .option('--openai-base-url <url>', 'Set custom OpenAI API base URL')
- .option('--openai-model <model>', 'Set OpenAI model to use (default: "gpt-4")')
- .option('--openai-key <key>', 'Set OpenAI API key directly (alternative to env/git config)')
+ .option('--openai-model <model>', 'Set OpenAI model to use (default: "gpt-4o")')
+ .option('--openai-key <key>', 'Set OpenAI API key directly')
+ .option('--gemini-model <model>', 'Set Gemini model to use (default: "gemini-pro")')
+ .option('--gemini-key <key>', "Set Gemini API key directly (for 'gemini' provider)")
  .action(run)
  .parse(process.argv);