aicommit2 2.2.14 → 2.2.16

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +205 -565
  2. package/dist/cli.mjs +85 -85
  3. package/package.json +1 -1
package/README.md CHANGED
@@ -14,10 +14,11 @@
  [![license](https://img.shields.io/badge/license-MIT-211A4C.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIGZpbGw9Im5vbmUiIHN0cm9rZT0iI0ZGRiIgdmlld0JveD0iMCAwIDI0IDI0Ij48cGF0aCBzdHJva2UtbGluZWNhcD0icm91bmQiIHN0cm9rZS1saW5lam9pbj0icm91bmQiIHN0cm9rZS13aWR0aD0iMiIgZD0ibTMgNiAzIDFtMCAwLTMgOWE1IDUgMCAwIDAgNi4wMDEgME02IDdsMyA5TTYgN2w2LTJtNiAyIDMtMW0tMyAxLTMgOWE1IDUgMCAwIDAgNi4wMDEgME0xOCA3bDMgOW0tMy05LTYtMm0wLTJ2Mm0wIDE2VjVtMCAxNkg5bTMgMGgzIi8+PC9zdmc+)](https://github.com/tak-bro/aicommit2/blob/main/LICENSE)
  [![version](https://img.shields.io/npm/v/aicommit2?logo=semanticrelease&label=release&color=A51C2D)](https://www.npmjs.com/package/aicommit2)
  [![downloads](https://img.shields.io/npm/dt/aicommit2?color=F33535&logo=npm)](https://www.npmjs.com/package/aicommit2)
+ [![Nix](https://img.shields.io/badge/Nix-5277C3?logo=nixos&logoColor=fff)](#nix-installation)
 
  </div>
 
- ---
+ ______________________________________________________________________
 
  ## 🚀 Quick Start
 
@@ -37,24 +38,29 @@ _aicommit2_ is a reactive CLI tool that automatically generates Git commit messa
 
  ## ✨ Key Features
 
- - **Multi-AI Support**: Integrates with OpenAI, Anthropic Claude, Google Gemini, Mistral AI, Cohere, Groq, Ollama and more.
- - **OpenAI API Compatibility**: Support for any service that implements the OpenAI API specification.
- - **Reactive CLI**: Enables simultaneous requests to multiple AIs and selection of the best commit message.
- - **Git Hook Integration**: Can be used as a prepare-commit-msg hook.
- - **Custom Prompt**: Supports user-defined system prompt templates.
+ - **[Multi-AI Support](#cloud-ai-services)**: Integrates with OpenAI, Anthropic Claude, Google Gemini, Mistral AI, Cohere, Groq, Ollama and more.
+ - **[OpenAI API Compatibility](docs/providers/compatible.md)**: Support for any service that implements the OpenAI API specification.
+ - **[Reactive CLI](#usage)**: Enables simultaneous requests to multiple AIs and selection of the best commit message.
+ - **[Git Hook Integration](#git-hook)**: Can be used as a prepare-commit-msg hook.
+ - **[Custom Prompt](#custom-prompt-template)**: Supports user-defined system prompt templates.
 
  ## 🤖 Supported Providers
 
- - [OpenAI](https://openai.com/)
- - [Anthropic Claude](https://console.anthropic.com/)
- - [Gemini](https://gemini.google.com/)
- - [Mistral AI](https://mistral.ai/) (including [Codestral](https://mistral.ai/news/codestral/))
- - [Cohere](https://cohere.com/)
- - [Groq](https://groq.com/)
- - [Perplexity](https://docs.perplexity.ai/)
- - [DeepSeek](https://www.deepseek.com/)
- - [Ollama](https://ollama.com/)
- - [OpenAI API Compatibility](#openai-api-compatible-services)
+ ### Cloud AI Services
+
+ - [OpenAI](docs/providers/openai.md)
+ - [Anthropic Claude](docs/providers/anthropic.md)
+ - [Gemini](docs/providers/gemini.md)
+ - [Mistral & Codestral](docs/providers/mistral.md)
+ - [Cohere](docs/providers/cohere.md)
+ - [Groq](docs/providers/groq.md)
+ - [Perplexity](docs/providers/perplexity.md)
+ - [DeepSeek](docs/providers/deepseek.md)
+ - [OpenAI API Compatibility](docs/providers/compatible.md)
+
+ ### Local AI Services
+
+ - [Ollama](docs/providers/ollama.md)
 
  ## Setup
 
@@ -62,19 +68,10 @@ _aicommit2_ is a reactive CLI tool that automatically generates Git commit messa
 
  1. Install _aicommit2_:
 
- **Directly from npm**:
  ```sh
  npm install -g aicommit2
  ```
 
- **Alternatively, from source**:
- ```sh
- git clone https://github.com/tak-bro/aicommit2.git
- cd aicommit2
- npm run build
- npm install -g .
- ```
-
  2. Set up API keys (**at least ONE key must be set**):
 
  ```sh
@@ -84,6 +81,7 @@ aicommit2 config set ANTHROPIC.key=<your key>
  ```
 
  3. Run _aicommit2_ with your staged files in git repository:
+
  ```shell
  git add <files...>
  aicommit2
@@ -91,33 +89,75 @@ aicommit2
 
  > 👉 **Tip:** Use the `aic2` alias if `aicommit2` is too long for you.
 
- ## Using Locally
+ ### Alternative Installation Methods
 
- You can also use your model for free with [Ollama](https://ollama.com/) and it is available to use both Ollama and remote providers **simultaneously**.
+ #### Nix Installation
 
- 1. Install Ollama from [https://ollama.com](https://ollama.com/)
+ If you use the Nix package manager, aicommit2 can be installed directly using the provided flake:
 
- 2. Start it with your model
- ```shell
- ollama run llama3.2 # model you want use. ex) codellama, deepseek-coder
+ ```sh
+ # Install temporarily in your current shell
+ nix run github:tak-bro/aicommit2
+
+ # Install permanently to your profile
+ nix profile install github:tak-bro/aicommit2
+
+ # Use the shorter alias
+ nix run github:tak-bro/aic2 -- --help
  ```
 
- 3. Set the host, model and numCtx. (The default numCtx value in Ollama is 2048. It is recommended to set it to `4096` or higher.)
+ ##### Using in a Flake-based Project
+
+ Add aicommit2 to your flake inputs:
+
+ ```nix
+ {
+ # flake.nix configuration file
+ inputs = {
+ nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
+ aicommit2.url = "github:tak-bro/aicommit2";
+ };
+ # Rest of your flake.nix file
+ }
+
+ # Somewhere where you define your packages
+ {pkgs, inputs, ...}:{
+
+ environment.systemPackages = [inputs.aicommit2.packages.x86_64-linux.default];
+ # Or home packages
+ home.packages = [inputs.aicommit2.packages.x86_64-linux.default];
+ }
+ ```
+
+ ##### Development Environment
+
+ To enter a development shell with all dependencies:
+
  ```sh
- aicommit2 config set OLLAMA.host=<your host>
- aicommit2 config set OLLAMA.model=<your model>
- aicommit2 config set OLLAMA.numCtx=4096
+ nix develop github:tak-bro/aicommit2
  ```
 
- > If you want to use Ollama, you must set **OLLAMA.model**.
+ After setting up with Nix, you'll still need to configure API keys as described in the [Setup](#setup) section.
 
- 4. Run _aicommit2_ with your staged in git repository
- ```shell
- git add <files...>
- aicommit2
+ #### From Source
+
+ ```sh
+ git clone https://github.com/tak-bro/aicommit2.git
+ cd aicommit2
+ npm run build
+ npm install -g .
  ```
 
- > 👉 **Tip:** Ollama can run LLMs **in parallel** from v0.1.33. Please see [this section](#loading-multiple-ollama-models).
+ #### Via VSCode Devcontainer
+
+ Add [feature](https://github.com/kvokka/features/tree/main/src/aicommit2) to
+ your `devcontainer.json` file:
+
+ ```json
+ "features": {
+ "ghcr.io/kvokka/features/aicommit2:1": {}
+ }
+ ```
 
  ## How it works
 
@@ -161,8 +201,9 @@ aicommit2 --all # or -a
  - `--pre-commit`: Run in [pre-commit](https://pre-commit.com/) framework mode (default: **false**)
  - This option is specifically for use with the pre-commit framework
  - See [Integration with pre-commit framework](#integration-with-pre-commit-framework) section for setup instructions
-
+
  Example:
+
  ```sh
  aicommit2 --locale "jp" --all --type "conventional" --generate 3 --clipboard --exclude "*.json" --exclude "*.ts"
  ```
@@ -187,7 +228,7 @@ if you prefer to set up the hook manually, create or edit the `.git/hooks/prepar
  #!/bin/sh
  # your-other-hook "$@"
  aicommit2 --hook-mode "$@"
- ```
+ ```
 
  Make the hook executable:
 
@@ -195,6 +236,30 @@ Make the hook executable:
  chmod +x .git/hooks/prepare-commit-msg
  ```
 
+ #### Integration with pre-commit Framework
+
+ If you're using the [pre-commit](https://pre-commit.com/) framework, you can add _aicommit2_ to your `.pre-commit-config.yaml`:
+
+ ```yaml
+ repos:
+ - repo: local
+ hooks:
+ - id: aicommit2
+ name: AI Commit Message Generator
+ entry: aicommit2 --pre-commit
+ language: node
+ stages: [prepare-commit-msg]
+ always_run: true
+ ```
+
+ Make sure you have:
+
+ 1. Installed pre-commit: `brew install pre-commit`
+ 2. Installed aicommit2 globally: `npm install -g aicommit2`
+ 3. Run `pre-commit install --hook-type prepare-commit-msg` to set up the hook
+
+ > **Note** : The `--pre-commit` flag is specifically designed for use with the pre-commit framework and ensures proper integration with other pre-commit hooks.
+
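The hook flow added above can be seen end to end with a throwaway repository: Git hands the hook the path of the commit-message file as `$1`, and whatever the hook writes there becomes the commit message. A plain `echo` stands in for `aicommit2 --hook-mode` here; the repository path and message text are illustrative only.

```shell
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"
# Install a minimal prepare-commit-msg hook (stand-in for aicommit2 --hook-mode)
cat > .git/hooks/prepare-commit-msg <<'EOF'
#!/bin/sh
# Git passes the commit-message file path as $1; the hook may rewrite it.
echo "chore: message written by hook" > "$1"
EOF
chmod +x .git/hooks/prepare-commit-msg
echo demo > demo.txt
git add demo.txt
git commit -q -m "placeholder"   # the hook overwrites this message
git log -1 --pretty=%s           # -> chore: message written by hook
```

Note that `prepare-commit-msg` runs even when `-m` supplies a message, which is why the hook's rewrite wins.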
  #### Uninstall
 
  In the Git repository you want to uninstall the hook from:
@@ -209,14 +274,27 @@ Or manually delete the `.git/hooks/prepare-commit-msg` file.
 
  #### Reading and Setting Configuration
 
- - READ: `aicommit2 config get <key>`
+ - READ: `aicommit2 config get [<key> [<key> ...]]`
  - SET: `aicommit2 config set <key>=<value>`
+ - DELETE: `aicommit2 config del <config-name>`
 
  Example:
+
  ```sh
+ # Get all configurations
+ aicommit2 config get
+
+ # Get specific configuration
  aicommit2 config get OPENAI
  aicommit2 config get GEMINI.key
+
+ # Set configurations
  aicommit2 config set OPENAI.generate=3 GEMINI.temperature=0.5
+
+ # Delete a configuration setting or section
+ aicommit2 config del OPENAI.key
+ aicommit2 config del GEMINI
+ aicommit2 config del timeout
  ```
 
  #### Environment Variables
@@ -246,17 +324,19 @@ Usage Example:
  OPENAI_API_KEY="your-openai-key" ANTHROPIC_API_KEY="your-anthropic-key" aicommit2
  ```
 
- > **Note**: Environment variables take precedence over configuration file settings.
+ > **Note**: Environment variables take precedence over configuration file settings.
 
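The precedence rule noted in the hunk above (environment variables win over the config file) can be sketched with plain shell parameter expansion; the variable names and key values below are illustrative, not aicommit2 internals.

```shell
# Pretend one value was parsed from ~/.aicommit2 and one came from the environment.
key_from_config="sk-from-config-file"
OPENAI_API_KEY="sk-from-environment"

# Use the environment value if set and non-empty, else fall back to the file value.
effective_key="${OPENAI_API_KEY:-$key_from_config}"
echo "$effective_key"   # -> sk-from-environment
```

Unsetting `OPENAI_API_KEY` would make the same expansion fall back to the config-file value.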
  #### How to Configure in detail
 
  1. Command-line arguments: **use the format** `--[Model].[Key]=value`
+
  ```sh
  aicommit2 --OPENAI.locale="jp" --GEMINI.temperatue="0.5"
  ```
 
  2. Configuration file: **use INI format in the `~/.aicommit2` file or use `set` command**.
  Example `~/.aicommit2`:
+
  ```ini
  # General Settings
  logging=true
@@ -289,7 +369,7 @@ The following settings can be applied to most models, but support may vary.
  Please check the documentation for each specific model to confirm which settings are supported.
 
  | Setting | Description | Default |
- |------------------------|---------------------------------------------------------------------|--------------|
+ | ---------------------- | ------------------------------------------------------------------- | ------------ |
  | `systemPrompt` | System Prompt text | - |
  | `systemPromptPath` | Path to system prompt file | - |
  | `exclude` | Files to exclude from AI analysis | - |
@@ -307,7 +387,8 @@ Please check the documentation for each specific model to confirm which settings
  | `codeReviewPromptPath` | Path to code review prompt file | - |
  | `disabled` | Whether a specific model is enabled or disabled | false |
 
- > 👉 **Tip:** To set the General Settings for each model, use the following command.
+ > 👉 **Tip:** To set the General Settings for each model, use the following command.
+ >
  > ```shell
  > aicommit2 config set OPENAI.locale="jp"
  > aicommit2 config set CODESTRAL.type="gitmoji"
@@ -315,6 +396,7 @@ Please check the documentation for each specific model to confirm which settings
  > ```
 
  ##### systemPrompt
+
  - Allow users to specify a custom system prompt
 
  ```sh
@@ -324,6 +406,7 @@ aicommit2 config set systemPrompt="Generate git commit message."
  > `systemPrompt` takes precedence over `systemPromptPath` and does not apply at the same time.
 
  ##### systemPromptPath
+
  - Allow users to specify a custom file path for their own system prompt template
  - Please see [Custom Prompt Template](#custom-prompt-template)
 
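The `systemPromptPath` workflow described above amounts to writing a template file and pointing the config at it. A minimal sketch, where the file location and prompt text are illustrative rather than aicommit2 defaults:

```shell
# Create a throwaway prompt template file.
template="$(mktemp)"
printf 'Generate a conventional commit message for the staged diff.\n' > "$template"

# The file would then be registered like:
#   aicommit2 config set systemPromptPath="$template"
cat "$template"
```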
@@ -386,7 +469,7 @@ The log files will be stored in the `~/.aicommit2_log` directory(user's home).
 
  ![log-path](https://github.com/tak-bro/aicommit2/blob/main/img/log_path.png?raw=true)
 
- - You can remove all logs below comamnd.
+ - You can remove all logs below command.
 
  ```sh
  aicommit2 log removeAll
@@ -404,7 +487,6 @@ aicommit2 config set includeBody="true"
 
  ![ignore_body_false](https://github.com/tak-bro/aicommit2/blob/main/img/demo_body_min.gif?raw=true)
 
-
  ```sh
  aicommit2 config set includeBody="false"
  ```
@@ -499,6 +581,7 @@ aicommit2 config set codeReview=true
  - **The code review process consumes a large number of tokens.**
 
  ##### codeReviewPromptPath
+
  - Allow users to specify a custom file path for code review
 
  ```sh
@@ -506,486 +589,58 @@ aicommit2 config set codeReviewPromptPath="/path/to/user/prompt.txt"
  ```
 
  ## Available General Settings by Model
- | | timeout | temperature | maxTokens | topP |
- |:---------------------------:|:-------:|:-----------:|:---------:|:------:|
- | **OpenAI** |||||
- | **Anthropic Claude** | ✓ | ✓ | ✓ | |
- | **Gemini** | | ✓ | ✓ | |
- | **Mistral AI** || ✓ | ✓ | |
- | **Codestral** | ✓ | ✓ | ✓ | |
- | **Cohere** | ✓ | ✓ | ✓ | |
- | **Groq** | ✓ | ✓ | ✓ | |
- | **Perplexity** | ✓ | ✓ | ✓ | |
- | **DeepSeek** | ✓ | ✓ | ✓ | |
- | **Ollama** | ✓ | ✓ | | |
- | **OpenAI API-Compatible** | ✓ | ✓ | | |
+
+ | | timeout | temperature | maxTokens | topP |
+ | :-----------------------: | :-----: | :---------: | :-------: | :--: |
+ | **OpenAI** | ✓ | ✓ | ✓ | |
+ | **Anthropic Claude** || ✓ | ✓ | |
+ | **Gemini** | | ✓ | ✓ | |
+ | **Mistral AI** | ✓ | ✓ | ✓ | |
+ | **Codestral** | ✓ | ✓ | ✓ | |
+ | **Cohere** | ✓ | ✓ | ✓ | |
+ | **Groq** | ✓ | ✓ | ✓ | |
+ | **Perplexity** | ✓ | ✓ | ✓ | |
+ | **DeepSeek** | ✓ | ✓ || |
+ | **Ollama** | ✓ | ✓ | | ✓ |
+ | **OpenAI API-Compatible** | ✓ | ✓ | ✓ | ✓ |
 
  > All AI support the following options in General Settings.
+ >
  > - systemPrompt, systemPromptPath, codeReview, codeReviewPromptPath, exclude, type, locale, generate, logging, includeBody, maxLength
 
- ## Model-Specific Settings
-
- > Some models mentioned below are subject to change.
-
- ### OpenAI
-
- | Setting | Description | Default |
- |---------|--------------------|------------------------|
- | `key` | API key | - |
- | `model` | Model to use | `gpt-4o-mini` |
- | `url` | API endpoint URL | https://api.openai.com |
- | `path` | API path | /v1/chat/completions |
- | `proxy` | Proxy settings | - |
-
- ##### OPENAI.key
-
- The OpenAI API key. You can retrieve it from [OpenAI API Keys page](https://platform.openai.com/account/api-keys).
-
- ```sh
- aicommit2 config set OPENAI.key="your api key"
- ```
-
- ##### OPENAI.model
-
- Default: `gpt-4o-mini`
-
- The Chat Completions (`/v1/chat/completions`) model to use. Consult the list of models available in the [OpenAI Documentation](https://platform.openai.com/docs/models/model-endpoint-compatibility).
-
- ```sh
- aicommit2 config set OPENAI.model=gpt-4o
- ```
-
- ##### OPENAI.url
-
- Default: `https://api.openai.com`
-
- The OpenAI URL. Both https and http protocols supported. It allows to run local OpenAI-compatible server.
-
- ```sh
- aicommit2 config set OPENAI.url="<your-host>"
- ```
-
- ##### OPENAI.path
-
- Default: `/v1/chat/completions`
-
- The OpenAI Path.
-
- ##### OPENAI.topP
-
- Default: `0.9`
-
- The `top_p` parameter selects tokens whose combined probability meets a threshold. Please see [detail](https://platform.openai.com/docs/api-reference/chat/create#chat-create-top_p).
-
- ```sh
- aicommit2 config set OPENAI.topP=0.2
- ```
-
- > NOTE: If `topP` is less than 0, it does not deliver the `top_p` parameter to the request.
-
- ### Anthropic
-
- | Setting | Description | Default |
- |-------------|----------------|-----------------------------|
- | `key` | API key | - |
- | `model` | Model to use | `claude-3-5-haiku-20241022` |
-
- ##### ANTHROPIC.key
-
- The Anthropic API key. To get started with Anthropic Claude, request access to their API at [anthropic.com/earlyaccess](https://www.anthropic.com/earlyaccess).
-
- ##### ANTHROPIC.model
-
- Default: `claude-3-5-haiku-20241022`
-
- Supported:
- - `claude-3-7-sonnet-20250219`
- - `claude-3-5-sonnet-20241022`
- - `claude-3-5-haiku-20241022`
- - `claude-3-opus-20240229`
- - `claude-3-sonnet-20240229`
- - `claude-3-haiku-20240307`
-
- ```sh
- aicommit2 config set ANTHROPIC.model="claude-3-5-sonnet-20240620"
- ```
-
- ### Gemini
-
- | Setting | Description | Default |
- |--------------------|------------------------|------------------------|
- | `key` | API key | - |
- | `model` | Model to use | `gemini-2.0-flash` |
-
- ##### GEMINI.key
-
- The Gemini API key. If you don't have one, create a key in [Google AI Studio](https://aistudio.google.com/app/apikey).
-
- ```sh
- aicommit2 config set GEMINI.key="your api key"
- ```
-
- ##### GEMINI.model
-
- Default: `gemini-2.0-flash`
-
- Supported:
- - `gemini-2.0-flash`
- - `gemini-2.0-flash-lite`
- - `gemini-2.0-pro-exp-02-05`
- - `gemini-2.0-flash-thinking-exp-01-21`
- - `gemini-2.0-flash-exp`
- - `gemini-1.5-flash`
- - `gemini-1.5-flash-8b`
- - `gemini-1.5-pro`
-
- ```sh
- aicommit2 config set GEMINI.model="gemini-2.0-flash"
- ```
-
- ##### Unsupported Options
-
- Gemini does not support the following options in General Settings.
-
- - timeout
-
- ### Mistral
-
- | Setting | Description | Default |
- |----------|------------------|--------------------|
- | `key` | API key | - |
- | `model` | Model to use | `pixtral-12b-2409` |
-
- ##### MISTRAL.key
-
- The Mistral API key. If you don't have one, please sign up and subscribe in [Mistral Console](https://console.mistral.ai/).
-
- ##### MISTRAL.model
-
- Default: `pixtral-12b-2409`
-
- Supported:
- - `codestral-latest`
- - `mistral-large-latest`
- - `pixtral-large-latest`
- - `ministral-8b-latest`
- - `mistral-small-latest`
- - `mistral-embed`
- - `mistral-moderation-latest`
-
- ### Codestral
-
- | Setting | Description | Default |
- |---------|------------------|--------------------|
- | `key` | API key | - |
- | `model` | Model to use | `codestral-latest` |
-
- ##### CODESTRAL.key
-
- The Codestral API key. If you don't have one, please sign up and subscribe in [Mistral Console](https://console.mistral.ai/codestral).
-
- ##### CODESTRAL.model
-
- Default: `codestral-latest`
-
- Supported:
- - `codestral-latest`
- - `codestral-2501`
-
- ```sh
- aicommit2 config set CODESTRAL.model="codestral-2501"
- ```
-
- ### Cohere
-
- | Setting | Description | Default |
- |--------------------|--------------|-------------|
- | `key` | API key | - |
- | `model` | Model to use | `command` |
-
- ##### COHERE.key
-
- The Cohere API key. If you don't have one, please sign up and get the API key in [Cohere Dashboard](https://dashboard.cohere.com/).
-
- ##### COHERE.model
-
- Default: `command`
-
- Supported models:
- - `command-r7b-12-2024`
- - `command-r-plus-08-2024`
- - `command-r-plus-04-2024`
- - `command-r-plus`
- - `command-r-08-2024`
- - `command-r-03-2024`
- - `command-r`
- - `command`
- - `command-nightly`
- - `command-light`
- - `command-light-nightly`
- - `c4ai-aya-expanse-8b`
- - `c4ai-aya-expanse-32b`
-
- ```sh
- aicommit2 config set COHERE.model="command-nightly"
- ```
-
- ### Groq
-
- | Setting | Description | Default |
- |--------------------|------------------------|---------------------------------|
- | `key` | API key | - |
- | `model` | Model to use | `deepseek-r1-distill-llama-70b` |
-
- ##### GROQ.key
-
- The Groq API key. If you don't have one, please sign up and get the API key in [Groq Console](https://console.groq.com).
-
- ##### GROQ.model
-
- Default: `deepseek-r1-distill-llama-70b`
-
- Supported:
- - `qwen-2.5-32b`
- - `qwen-2.5-coder-32b`
- - `deepseek-r1-distill-qwen-32b`
- - `deepseek-r1-distill-llama-70b`
- - `distil-whisper-large-v3-en`
- - `gemma2-9b-it`
- - `llama-3.3-70b-versatile`
- - `llama-3.1-8b-instant`
- - `llama-guard-3-8b`
- - `llama3-70b-8192`
- - `llama3-8b-8192`
- - `mixtral-8x7b-32768`
- - `whisper-large-v3`
- - `whisper-large-v3-turbo`
- - `llama-3.3-70b-specdec`
- - `llama-3.2-1b-preview`
- - `llama-3.2-3b-preview`
- - `llama-3.2-11b-vision-preview`
- - `llama-3.2-90b-vision-preview`
-
-
- ```sh
- aicommit2 config set GROQ.model="deepseek-r1-distill-llama-70b"
- ```
-
- ### Perplexity
-
- | Setting | Description | Default |
- |----------|------------------|----------|
- | `key` | API key | - |
- | `model` | Model to use | `sonar` |
-
- ##### PERPLEXITY.key
-
- The Perplexity API key. If you don't have one, please sign up and get the API key in [Perplexity](https://docs.perplexity.ai/)
-
- ##### PERPLEXITY.model
-
- Default: `sonar`
-
- Supported:
- - `sonar-pro`
- - `sonar`
- - `llama-3.1-sonar-small-128k-online`
- - `llama-3.1-sonar-large-128k-online`
- - `llama-3.1-sonar-huge-128k-online`
-
- > The models mentioned above are subject to change.
-
- ```sh
- aicommit2 config set PERPLEXITY.model="sonar-pro"
- ```
-
- ### DeepSeek
-
- | Setting | Description | Default |
- |---------|------------------|--------------------|
- | `key` | API key | - |
- | `model` | Model to use | `deepseek-chat` |
-
- ##### DEEPSEEK.key
-
- The DeepSeek API key. If you don't have one, please sign up and subscribe in [DeepSeek Platform](https://platform.deepseek.com/).
-
- ##### DEEPSEEK.model
-
- Default: `deepseek-chat`
-
- Supported:
- - `deepseek-chat`
- - `deepseek-reasoner`
-
- ```sh
- aicommit2 config set DEEPSEEK.model="deepseek-reasoner"
- ```
-
- ### Ollama
-
- | Setting | Description | Default |
- |------------|-------------------------------------------------------------|------------------------|
- | `model` | Model(s) to use (comma-separated list) | - |
- | `host` | Ollama host URL | http://localhost:11434 |
- | `auth` | Authentication type | Bearer |
- | `key` | Authentication key | - |
- | `numCtx` | The maximum number of tokens the model can process at once | 2048 |
-
- ##### OLLAMA.model
-
- The Ollama Model. Please see [a list of models available](https://ollama.com/library)
-
- ```sh
- aicommit2 config set OLLAMA.model="llama3.1"
- aicommit2 config set OLLAMA.model="llama3,codellama" # for multiple models
-
- aicommit2 config add OLLAMA.model="gemma2" # Only Ollama.model can be added.
- ```
-
- > OLLAMA.model is **string array** type to support multiple Ollama. Please see [this section](#loading-multiple-ollama-models).
-
- ##### OLLAMA.host
-
- Default: `http://localhost:11434`
-
- The Ollama host
-
- ```sh
- aicommit2 config set OLLAMA.host=<host>
- ```
-
- ##### OLLAMA.auth
-
- Not required. Use when your Ollama server requires authentication. Please see [this issue](https://github.com/tak-bro/aicommit2/issues/90).
-
- ```sh
- aicommit2 config set OLLAMA.auth=<auth>
- ```
-
- ##### OLLAMA.key
-
- Not required. Use when your Ollama server requires authentication. Please see [this issue](https://github.com/tak-bro/aicommit2/issues/90).
-
- ```sh
- aicommit2 config set OLLAMA.key=<key>
- ```
-
- Few examples of authentication methods:
-
- | **Authentication Method** | **OLLAMA.auth** | **OLLAMA.key** |
- |---------------------------|------------------------------|---------------------------------------|
- | Bearer | `Bearer` | `<API key>` |
- | Basic | `Basic` | `<Base64 Encoded username:password>` |
- | JWT | `Bearer` | `<JWT Token>` |
- | OAuth 2.0 | `Bearer` | `<Access Token>` |
- | HMAC-SHA256 | `HMAC` | `<Base64 Encoded clientId:signature>` |
-
- ##### OLLAMA.numCtx
-
- The maximum number of tokens the model can process at once, determining its context length and memory usage.
- It is recommended to set it to 4096 or higher.
-
- ```sh
- aicommit2 config set OLLAMA.numCtx=4096
- ```
-
- ##### Unsupported Options
-
- Ollama does not support the following options in General Settings.
-
- - maxTokens
-
- ### OpenAI API-Compatible Services
-
- You can configure any OpenAI API-compatible service by adding a configuration section with the `compatible=true` option. This allows you to use services that implement the OpenAI API specification.
-
- ```sh
- # together
- aicommit2 config set TOGETHER.compatible=true
- aicommit2 config set TOGETHER.url=https://api.together.xyz
- aicommit2 config set TOGETHER.path=/v1
- aicommit2 config set TOGETHER.model=meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo
- aicommit2 config set TOGETHER.key="your-api-key"
- ```
-
- | Setting | Description | Required | Default |
- |--------------|----------------------------------------|----------------------|---------|
- | `compatible` | Enable OpenAI API compatibility mode | ✓ (**must be true**) | false |
- | `url` | Base URL of the API endpoint | ✓ | - |
- | `path` | API path for chat completions | | - |
- | `key` | API key for authentication | ✓ | - |
- | `model` | Model identifier to use | ✓ | - |
-
- Example configuration:
- ```ini
- [TOGETHER]
- compatible=true
- key=<your-api-key>
- url=https://api.together.xyz/v1
- model=meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo
-
- [GEMINI_COMPATIBILITY]
- compatible=true
- key=<your-api-key>
- url=https://generativelanguage.googleapis.com
- path=/v1beta/openai/
- model=gemini-1.5-flash
-
- [OLLAMA_COMPATIBILITY]
- compatible=true
- key=ollama
- url=http://localhost:11434/v1
- model=llama3.2
- ```
-
- ## Watch Commit Mode
-
- ![watch-commit-gif](https://github.com/tak-bro/aicommit2/blob/main/img/watch-commit-min.gif?raw=true)
-
- Watch Commit mode allows you to monitor Git commits in real-time and automatically perform AI code reviews using the `--watch-commit` flag.
-
- ```sh
- aicommit2 --watch-commit
- ```
-
- This feature only works within Git repository directories and automatically triggers whenever a commit event occurs. When a new commit is detected, it automatically:
- 1. Analyzes commit changes
- 2. Performs AI code review
- 3. Displays results in real-time
-
- > For detailed configuration of the code review feature, please refer to the [codeReview](#codereview) section. The settings in that section are shared with this feature.
-
- ⚠️ **CAUTION**
-
- - The Watch Commit feature is currently **experimental**
- - This feature performs AI analysis for each commit, which **consumes a significant number of API tokens**
- - API costs can increase substantially if there are many commits
- - It is recommended to **carefully monitor your token usage** when using this feature
- - To use this feature, you must enable watch mode for at least one AI model:
- ```sh
- aicommit2 config set [MODEL].watchMode="true"
- ```
-
- ## Upgrading
-
- Check the installed version with:
-
- ```
- aicommit2 --version
- ```
-
- If it's not the [latest version](https://github.com/tak-bro/aicommit2/releases/latest), run:
-
- ```sh
- npm update -g aicommit2
- ```
+ ## Configuration Examples
+
+ ```
+ aicommit2 config set \
+ generate=2 \
+ topP=0.8 \
+ maxTokens=1024 \
+ temperature=0.7 \
+ OPENAI.key="sk-..." OPENAI.model="gpt-4o" OPENAI.temperature=0.5 \
+ ANTHROPIC.key="sk-..." ANTHROPIC.model="claude-3-haiku" ANTHROPIC.maxTokens=2000 \
+ MISTRAL.key="your-key" MISTRAL.model="codestral-latest" \
+ OLLAMA.model="llama3.2" OLLAMA.numCtx=4096 OLLAMA.watchMode=true
+ ```
+
+ > 🔍 **Detailed Support Info**: Check each provider's documentation for specific limits and behaviors:
+ >
+ > - [OpenAI](docs/providers/openai.md)
+ > - [Anthropic Claude](docs/providers/anthropic.md)
+ > - [Gemini](docs/providers/gemini.md)
+ > - [Mistral & Codestral](docs/providers/mistral.md)
+ > - [Cohere](docs/providers/cohere.md)
+ > - [Groq](docs/providers/groq.md)
+ > - [Perplexity](docs/providers/perplexity.md)
+ > - [DeepSeek](docs/providers/deepseek.md)
+ > - [OpenAI API Compatibility](docs/providers/compatible.md)
+ > - [Ollama](docs/providers/ollama.md)
 
  ## Custom Prompt Template
 
  _aicommit2_ supports custom prompt templates through the `systemPromptPath` option. This feature allows you to define your own prompt structure, giving you more control over the commit message generation process.
 
  ### Using the systemPromptPath Option
+
  To use a custom prompt template, specify the path to your template file when running the tool:
 
  ```
@@ -1046,69 +701,50 @@ The response should be valid JSON that can be parsed without errors.

This ensures that the output is consistently formatted as a JSON array, regardless of the custom template used.

- ## Integration with pre-commit framework
-
- If you're using the [pre-commit](https://pre-commit.com/) framework, you can add _aicommit2_ to your `.pre-commit-config.yaml`:
-
- ```yaml
- repos:
-   - repo: local
-     hooks:
-       - id: aicommit2
-         name: AI Commit Message Generator
-         entry: aicommit2 --pre-commit
-         language: node
-         stages: [prepare-commit-msg]
-         always_run: true
- ```
-
- Make sure you have:
-
- 1. Installed pre-commit: `brew install pre-commit`
- 2. Installed aicommit2 globally: `npm install -g aicommit2`
- 3. Run `pre-commit install --hook-type prepare-commit-msg` to set up the hook
+ ## Watch Commit Mode

- > **Note** : The `--pre-commit` flag is specifically designed for use with the pre-commit framework and ensures proper integration with other pre-commit hooks.
+ ![watch-commit-gif](https://github.com/tak-bro/aicommit2/blob/main/img/watch-commit-min.gif?raw=true)

- ## Loading Multiple Ollama Models
+ Watch Commit mode allows you to monitor Git commits in real-time and automatically perform AI code reviews using the `--watch-commit` flag.

- <img src="https://github.com/tak-bro/aicommit2/blob/main/img/ollama_parallel.gif?raw=true" alt="OLLAMA_PARALLEL" />
+ ```sh
+ aicommit2 --watch-commit
+ ```

- You can load and make simultaneous requests to multiple models using Ollama's experimental feature, the `OLLAMA_MAX_LOADED_MODELS` option.
- - `OLLAMA_MAX_LOADED_MODELS`: Load multiple models simultaneously
+ This feature only works within Git repository directories and automatically triggers whenever a commit event occurs. When a new commit is detected, it automatically:

- #### Setup Guide
+ 1. Analyzes commit changes
+ 2. Performs AI code review
+ 3. Displays results in real-time

- Follow these steps to set up and utilize multiple models simultaneously:
+ > For detailed configuration of the code review feature, please refer to the [codeReview](#codereview) section. The settings in that section are shared with this feature.

- ##### 1. Running Ollama Server
+ ⚠️ **CAUTION**

- First, launch the Ollama server with the `OLLAMA_MAX_LOADED_MODELS` environment variable set. This variable specifies the maximum number of models to be loaded simultaneously.
- For example, to load up to 3 models, use the following command:
+ - The Watch Commit feature is currently **experimental**
+ - This feature performs AI analysis for each commit, which **consumes a significant number of API tokens**
+ - API costs can increase substantially if there are many commits
+ - It is recommended to **carefully monitor your token usage** when using this feature
+ - To use this feature, you must enable watch mode for at least one AI model:

- ```shell
- OLLAMA_MAX_LOADED_MODELS=3 ollama serve
+ ```sh
+ aicommit2 config set [MODEL].watchMode="true"
  ```
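Putting the pieces of this section together, a typical session might look like the following sketch, using the `OLLAMA` settings shown earlier in this README as the example model:

```shell
# Enable watch mode for at least one configured model (required), e.g. Ollama
aicommit2 config set OLLAMA.watchMode="true"

# Start watching from inside a Git repository; each new commit is analyzed
# and reviewed until the process is interrupted (Ctrl+C)
aicommit2 --watch-commit
```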
- > Refer to [configuration](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server) for detailed instructions.

- ##### 2. Configuring _aicommit2_
+ ## Upgrading

- Next, set up _aicommit2_ to specify multiple models. You can assign a list of models, separated by **commas(`,`)**, to the OLLAMA.model environment variable. Here's how you do it:
+ Check the installed version with:

- ```shell
- aicommit2 config set OLLAMA.model="mistral,dolphin-llama3"
+ ```
+ aicommit2 --version
  ```

- With this command, _aicommit2_ is instructed to utilize both the "mistral" and "dolphin-llama3" models when making requests to the Ollama server.
-
- ##### 3. Run _aicommit2_
+ If it's not the [latest version](https://github.com/tak-bro/aicommit2/releases/latest), run:

- ```shell
- aicommit2
+ ```sh
+ npm update -g aicommit2
  ```

- > Note that this feature is available starting from Ollama version [**0.1.33**](https://github.com/ollama/ollama/releases/tag/v0.1.33) and _aicommit2_ version [**1.9.5**](https://www.npmjs.com/package/aicommit2/v/1.9.5).
-
## Disclaimer and Risks

This project uses functionalities from external APIs but is not officially affiliated with or endorsed by their providers. Users are responsible for complying with API terms, rate limits, and policies.
@@ -1122,8 +758,11 @@ For bug fixes or feature implementations, please check the [Contribution Guide](
Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):

<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
+
<!-- prettier-ignore-start -->
+
<!-- markdownlint-disable -->
+
<table>
  <tr>
    <td align="center"><a href="https://github.com/eltociear"><img src="https://avatars.githubusercontent.com/eltociear" width="100px;" alt=""/><br /><sub><b>@eltociear</b></sub></a><br /><a href="https://github.com/tak-bro/aicommit2/commits?author=eltociear" title="Documentation">📖</a></td>
@@ -1137,13 +776,14 @@ Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/d
  <tr>
    <td align="center"><a href="https://github.com/devxpain"><img src="https://avatars.githubusercontent.com/devxpain" width="100px;" alt=""/><br /><sub><b>@devxpain</b></sub></a><br /><a href="https://github.com/tak-bro/aicommit2/commits?author=devxpain" title="Code">💻</a></td>
    <td align="center"><a href="https://github.com/delenzhang"><img src="https://avatars.githubusercontent.com/delenzhang" width="100px;" alt=""/><br /><sub><b>@delenzhang</b></sub></a><br /><a href="https://github.com/tak-bro/aicommit2/commits?author=delenzhang" title="Code">💻</a></td>
+   <td align="center"><a href="https://github.com/kvokka"><img src="https://avatars.githubusercontent.com/kvokka" width="100px;" alt=""/><br /><sub><b>@kvokka</b></sub></a><br /><a href="https://github.com/tak-bro/aicommit2/commits?author=kvokka" title="Documentation">📖</a></td>
  </tr>
</table>
<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- ALL-CONTRIBUTORS-LIST:END -->

- ---
+ ______________________________________________________________________

If this project has been helpful, please consider giving it a Star ⭐️!