aicommit2 2.2.13 → 2.2.15
This diff shows the content of publicly available package versions as released to their public registries. It is provided for informational purposes only and reflects the changes between package versions as they appear in those registries.
- package/README.md +112 -547
- package/dist/cli.mjs +96 -117
- package/package.json +2 -1
package/README.md
CHANGED
@@ -37,24 +37,29 @@ _aicommit2_ is a reactive CLI tool that automatically generates Git commit messa

## ✨ Key Features

- - **Multi-AI Support**: Integrates with OpenAI, Anthropic Claude, Google Gemini, Mistral AI, Cohere, Groq, Ollama and more.
- - **OpenAI API Compatibility**: Support for any service that implements the OpenAI API specification.
- - **Reactive CLI**: Enables simultaneous requests to multiple AIs and selection of the best commit message.
- - **Git Hook Integration**: Can be used as a prepare-commit-msg hook.
- - **Custom Prompt**: Supports user-defined system prompt templates.
+ - **[Multi-AI Support](#cloud-ai-services)**: Integrates with OpenAI, Anthropic Claude, Google Gemini, Mistral AI, Cohere, Groq, Ollama and more.
+ - **[OpenAI API Compatibility](docs/providers/compatible.md)**: Support for any service that implements the OpenAI API specification.
+ - **[Reactive CLI](#usage)**: Enables simultaneous requests to multiple AIs and selection of the best commit message.
+ - **[Git Hook Integration](#git-hook)**: Can be used as a prepare-commit-msg hook.
+ - **[Custom Prompt](#custom-prompt-template)**: Supports user-defined system prompt templates.

## 🤖 Supported Providers

-
-
- - [
- - [
- - [
- - [
- - [
- - [
- - [
- - [
+ ### Cloud AI Services
+
+ - [OpenAI](docs/providers/openai.md)
+ - [Anthropic Claude](docs/providers/anthropic.md)
+ - [Gemini](docs/providers/gemini.md)
+ - [Mistral & Codestral](docs/providers/mistral.md)
+ - [Cohere](docs/providers/cohere.md)
+ - [Groq](docs/providers/groq.md)
+ - [Perplexity](docs/providers/perplexity.md)
+ - [DeepSeek](docs/providers/deepseek.md)
+ - [OpenAI API Compatibility](docs/providers/compatible.md)
+
+ ### Local AI Services
+
+ - [Ollama](docs/providers/ollama.md)

## Setup

@@ -62,19 +67,10 @@ _aicommit2_ is a reactive CLI tool that automatically generates Git commit messa

1. Install _aicommit2_:

- **Directly from npm**:
```sh
npm install -g aicommit2
```

- **Alternatively, from source**:
- ```sh
- git clone https://github.com/tak-bro/aicommit2.git
- cd aicommit2
- npm run build
- npm install -g .
- ```
-

2. Set up API keys (**at least ONE key must be set**):

```sh
@@ -91,33 +87,27 @@ aicommit2

> 👉 **Tip:** Use the `aic2` alias if `aicommit2` is too long for you.

-
+ ### Alternative Installation Methods

-
+ #### From Source

- 1. Install Ollama from [https://ollama.com](https://ollama.com/)
-
- 2. Start it with your model
- ```shell
- ollama run llama3.2 # the model you want to use, e.g. codellama, deepseek-coder
- ```
-
- 3. Set the host, model and numCtx. (The default numCtx value in Ollama is 2048. It is recommended to set it to `4096` or higher.)
```sh
-
- aicommit2
-
+ git clone https://github.com/tak-bro/aicommit2.git
+ cd aicommit2
+ npm run build
+ npm install -g .
```

-
+ #### Via VSCode Devcontainer

-
-
- git add <files...>
- aicommit2
- ```
+ Add the [feature](https://github.com/kvokka/features/tree/main/src/aicommit2) to
+ your `devcontainer.json` file:

-
+ ```json
+ "features": {
+     "ghcr.io/kvokka/features/aicommit2:1": {}
+ }
+ ```

## How it works

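For orientation on the devcontainer change above: the added `features` snippet sits inside a complete `devcontainer.json` roughly as sketched below. The `name` and `image` values are placeholders for whatever base image your project already uses, not something the feature prescribes:

```jsonc
{
    // "name" and "image" are placeholders -- substitute your project's own base image
    "name": "my-project",
    "image": "mcr.microsoft.com/devcontainers/typescript-node:20",
    "features": {
        "ghcr.io/kvokka/features/aicommit2:1": {}
    }
}
```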
@@ -195,6 +185,31 @@ Make the hook executable:
chmod +x .git/hooks/prepare-commit-msg
```

+ #### Integration with pre-commit Framework
+
+ If you're using the [pre-commit](https://pre-commit.com/) framework, you can add _aicommit2_ to your `.pre-commit-config.yaml`:
+
+ ```yaml
+ repos:
+   - repo: local
+     hooks:
+       - id: aicommit2
+         name: AI Commit Message Generator
+         entry: aicommit2 --pre-commit
+         language: node
+         stages: [prepare-commit-msg]
+         always_run: true
+ ```
+
+ Make sure you have:
+
+ 1. Installed pre-commit: `brew install pre-commit`
+ 2. Installed aicommit2 globally: `npm install -g aicommit2`
+ 3. Set up the hook: `pre-commit install --hook-type prepare-commit-msg`
+
+ > **Note**: The `--pre-commit` flag is specifically designed for use with the pre-commit framework and ensures proper integration with other pre-commit hooks.
+
+
#### Uninstall

In the Git repository you want to uninstall the hook from:
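As a side note on the pre-commit integration added in this hunk: verifying it is plain git and pre-commit usage, nothing aicommit2-specific beyond the hook entry itself:

```sh
# one-time: install the prepare-commit-msg hook defined in .pre-commit-config.yaml
pre-commit install --hook-type prepare-commit-msg

# normal workflow: on `git commit`, pre-commit runs `aicommit2 --pre-commit`
# at the prepare-commit-msg stage and drafts the commit message
git add <files...>
git commit
```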
@@ -523,464 +538,33 @@ aicommit2 config set codeReviewPromptPath="/path/to/user/prompt.txt"

> All AIs support the following options in General Settings.
> - systemPrompt, systemPromptPath, codeReview, codeReviewPromptPath, exclude, type, locale, generate, logging, includeBody, maxLength

- ## Model-Specific Settings
-
- > Some models mentioned below are subject to change.
-
- ### OpenAI
-
- | Setting | Description | Default |
- |---------|--------------------|------------------------|
- | `key` | API key | - |
- | `model` | Model to use | `gpt-4o-mini` |
- | `url` | API endpoint URL | https://api.openai.com |
- | `path` | API path | /v1/chat/completions |
- | `proxy` | Proxy settings | - |
-
- ##### OPENAI.key
-
- The OpenAI API key. You can retrieve it from the [OpenAI API Keys page](https://platform.openai.com/account/api-keys).
-
- ```sh
- aicommit2 config set OPENAI.key="your api key"
- ```
-
- ##### OPENAI.model
-
- Default: `gpt-4o-mini`
-
- The Chat Completions (`/v1/chat/completions`) model to use. Consult the list of models available in the [OpenAI Documentation](https://platform.openai.com/docs/models/model-endpoint-compatibility).
-
- ```sh
- aicommit2 config set OPENAI.model=gpt-4o
- ```
-
- ##### OPENAI.url
-
- Default: `https://api.openai.com`
-
- The OpenAI URL. Both https and http protocols are supported, which allows running a local OpenAI-compatible server.
-
- ```sh
- aicommit2 config set OPENAI.url="<your-host>"
- ```
-
- ##### OPENAI.path
-
- Default: `/v1/chat/completions`
-
- The OpenAI API path.
-
- ##### OPENAI.topP
-
- Default: `0.9`
-
- The `top_p` parameter selects tokens whose combined probability meets a threshold. Please see the [details](https://platform.openai.com/docs/api-reference/chat/create#chat-create-top_p).
-
- ```sh
- aicommit2 config set OPENAI.topP=0.2
- ```
-
- > NOTE: If `topP` is less than 0, the `top_p` parameter is not sent with the request.
-
- ### Anthropic
-
- | Setting | Description | Default |
- |-------------|----------------|-----------------------------|
- | `key` | API key | - |
- | `model` | Model to use | `claude-3-5-haiku-20241022` |
-
- ##### ANTHROPIC.key
-
- The Anthropic API key. To get started with Anthropic Claude, request access to their API at [anthropic.com/earlyaccess](https://www.anthropic.com/earlyaccess).
-
- ##### ANTHROPIC.model
-
- Default: `claude-3-5-haiku-20241022`
-
- Supported:
- - `claude-3-7-sonnet-20250219`
- - `claude-3-5-sonnet-20241022`
- - `claude-3-5-haiku-20241022`
- - `claude-3-opus-20240229`
- - `claude-3-sonnet-20240229`
- - `claude-3-haiku-20240307`
-
- ```sh
- aicommit2 config set ANTHROPIC.model="claude-3-5-sonnet-20240620"
- ```
-
- ### Gemini
-
- | Setting | Description | Default |
- |--------------------|------------------------|------------------------|
- | `key` | API key | - |
- | `model` | Model to use | `gemini-2.0-flash` |
-
- ##### GEMINI.key
-
- The Gemini API key. If you don't have one, create a key in [Google AI Studio](https://aistudio.google.com/app/apikey).
-
- ```sh
- aicommit2 config set GEMINI.key="your api key"
- ```
-
- ##### GEMINI.model
-
- Default: `gemini-2.0-flash`
-
- Supported:
- - `gemini-2.0-flash`
- - `gemini-2.0-flash-lite`
- - `gemini-2.0-pro-exp-02-05`
- - `gemini-2.0-flash-thinking-exp-01-21`
- - `gemini-2.0-flash-exp`
- - `gemini-1.5-flash`
- - `gemini-1.5-flash-8b`
- - `gemini-1.5-pro`
-
- ```sh
- aicommit2 config set GEMINI.model="gemini-2.0-flash"
- ```
-
- ##### Unsupported Options
-
- Gemini does not support the following options in General Settings.
-
- - timeout
-
- ### Mistral
-
- | Setting | Description | Default |
- |----------|------------------|--------------------|
- | `key` | API key | - |
- | `model` | Model to use | `pixtral-12b-2409` |
-
- ##### MISTRAL.key
-
- The Mistral API key. If you don't have one, please sign up and subscribe in the [Mistral Console](https://console.mistral.ai/).
-
- ##### MISTRAL.model
-
- Default: `pixtral-12b-2409`
-
- Supported:
- - `codestral-latest`
- - `mistral-large-latest`
- - `pixtral-large-latest`
- - `ministral-8b-latest`
- - `mistral-small-latest`
- - `mistral-embed`
- - `mistral-moderation-latest`
-
- ### Codestral
-
- | Setting | Description | Default |
- |---------|------------------|--------------------|
- | `key` | API key | - |
- | `model` | Model to use | `codestral-latest` |
-
- ##### CODESTRAL.key
-
- The Codestral API key. If you don't have one, please sign up and subscribe in the [Mistral Console](https://console.mistral.ai/codestral).
-
- ##### CODESTRAL.model
-
- Default: `codestral-latest`
-
- Supported:
- - `codestral-latest`
- - `codestral-2501`
-
- ```sh
- aicommit2 config set CODESTRAL.model="codestral-2501"
- ```
-
- ### Cohere
-
- | Setting | Description | Default |
- |--------------------|--------------|-------------|
- | `key` | API key | - |
- | `model` | Model to use | `command` |
-
- ##### COHERE.key
-
- The Cohere API key. If you don't have one, please sign up and get the API key in the [Cohere Dashboard](https://dashboard.cohere.com/).
-
- ##### COHERE.model
-
- Default: `command`
-
- Supported models:
- - `command-r7b-12-2024`
- - `command-r-plus-08-2024`
- - `command-r-plus-04-2024`
- - `command-r-plus`
- - `command-r-08-2024`
- - `command-r-03-2024`
- - `command-r`
- - `command`
- - `command-nightly`
- - `command-light`
- - `command-light-nightly`
- - `c4ai-aya-expanse-8b`
- - `c4ai-aya-expanse-32b`
-
- ```sh
- aicommit2 config set COHERE.model="command-nightly"
- ```
-
- ### Groq
-
- | Setting | Description | Default |
- |--------------------|------------------------|---------------------------------|
- | `key` | API key | - |
- | `model` | Model to use | `deepseek-r1-distill-llama-70b` |
-
- ##### GROQ.key
-
- The Groq API key. If you don't have one, please sign up and get the API key in the [Groq Console](https://console.groq.com).
-
- ##### GROQ.model
-
- Default: `deepseek-r1-distill-llama-70b`
-
- Supported:
- - `qwen-2.5-32b`
- - `qwen-2.5-coder-32b`
- - `deepseek-r1-distill-qwen-32b`
- - `deepseek-r1-distill-llama-70b`
- - `distil-whisper-large-v3-en`
- - `gemma2-9b-it`
- - `llama-3.3-70b-versatile`
- - `llama-3.1-8b-instant`
- - `llama-guard-3-8b`
- - `llama3-70b-8192`
- - `llama3-8b-8192`
- - `mixtral-8x7b-32768`
- - `whisper-large-v3`
- - `whisper-large-v3-turbo`
- - `llama-3.3-70b-specdec`
- - `llama-3.2-1b-preview`
- - `llama-3.2-3b-preview`
- - `llama-3.2-11b-vision-preview`
- - `llama-3.2-90b-vision-preview`
-
-
- ```sh
- aicommit2 config set GROQ.model="deepseek-r1-distill-llama-70b"
- ```
-
- ### Perplexity
-
- | Setting | Description | Default |
- |----------|------------------|----------|
- | `key` | API key | - |
- | `model` | Model to use | `sonar` |
-
- ##### PERPLEXITY.key
-
- The Perplexity API key. If you don't have one, please sign up and get the API key in [Perplexity](https://docs.perplexity.ai/).
-
- ##### PERPLEXITY.model
-
- Default: `sonar`
-
- Supported:
- - `sonar-pro`
- - `sonar`
- - `llama-3.1-sonar-small-128k-online`
- - `llama-3.1-sonar-large-128k-online`
- - `llama-3.1-sonar-huge-128k-online`
-
- > The models mentioned above are subject to change.
-
- ```sh
- aicommit2 config set PERPLEXITY.model="sonar-pro"
- ```
-
- ### DeepSeek
-
- | Setting | Description | Default |
- |---------|------------------|--------------------|
- | `key` | API key | - |
- | `model` | Model to use | `deepseek-chat` |
-
- ##### DEEPSEEK.key
-
- The DeepSeek API key. If you don't have one, please sign up and subscribe in the [DeepSeek Platform](https://platform.deepseek.com/).
-
- ##### DEEPSEEK.model
-
- Default: `deepseek-chat`
-
- Supported:
- - `deepseek-chat`
- - `deepseek-reasoner`
-
- ```sh
- aicommit2 config set DEEPSEEK.model="deepseek-reasoner"
- ```
-
- ### Ollama
-
- | Setting | Description | Default |
- |------------|-------------------------------------------------------------|------------------------|
- | `model` | Model(s) to use (comma-separated list) | - |
- | `host` | Ollama host URL | http://localhost:11434 |
- | `auth` | Authentication type | Bearer |
- | `key` | Authentication key | - |
- | `numCtx` | The maximum number of tokens the model can process at once | 2048 |
-
- ##### OLLAMA.model
-
- The Ollama model. Please see the [list of available models](https://ollama.com/library).
-
- ```sh
- aicommit2 config set OLLAMA.model="llama3.1"
- aicommit2 config set OLLAMA.model="llama3,codellama" # for multiple models
-
- aicommit2 config add OLLAMA.model="gemma2" # Only OLLAMA.model can be added.
- ```
-
- > OLLAMA.model is a **string array** type in order to support multiple Ollama models. Please see [this section](#loading-multiple-ollama-models).
-
- ##### OLLAMA.host
-
- Default: `http://localhost:11434`
-
- The Ollama host.
-
- ```sh
- aicommit2 config set OLLAMA.host=<host>
- ```
-
- ##### OLLAMA.auth
-
- Not required. Use when your Ollama server requires authentication. Please see [this issue](https://github.com/tak-bro/aicommit2/issues/90).
-
- ```sh
- aicommit2 config set OLLAMA.auth=<auth>
- ```
-
- ##### OLLAMA.key
-
- Not required. Use when your Ollama server requires authentication. Please see [this issue](https://github.com/tak-bro/aicommit2/issues/90).
-
- ```sh
- aicommit2 config set OLLAMA.key=<key>
- ```
-
- A few examples of authentication methods:
-
- | **Authentication Method** | **OLLAMA.auth** | **OLLAMA.key** |
- |---------------------------|------------------------------|---------------------------------------|
- | Bearer | `Bearer` | `<API key>` |
- | Basic | `Basic` | `<Base64 Encoded username:password>` |
- | JWT | `Bearer` | `<JWT Token>` |
- | OAuth 2.0 | `Bearer` | `<Access Token>` |
- | HMAC-SHA256 | `HMAC` | `<Base64 Encoded clientId:signature>` |
-
- ##### OLLAMA.numCtx
-
- The maximum number of tokens the model can process at once, determining its context length and memory usage.
- It is recommended to set it to 4096 or higher.
-
- ```sh
- aicommit2 config set OLLAMA.numCtx=4096
- ```
-
- ##### Unsupported Options
-
- Ollama does not support the following options in General Settings.
-
- - maxTokens
-
- ### OpenAI API-Compatible Services
-
- You can configure any OpenAI API-compatible service by adding a configuration section with the `compatible=true` option. This allows you to use services that implement the OpenAI API specification.
-
- ```sh
- # together
- aicommit2 config set TOGETHER.compatible=true
- aicommit2 config set TOGETHER.url=https://api.together.xyz
- aicommit2 config set TOGETHER.path=/v1
- aicommit2 config set TOGETHER.model=meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo
- aicommit2 config set TOGETHER.key="your-api-key"
- ```
-
- | Setting | Description | Required | Default |
- |--------------|----------------------------------------|----------------------|---------|
- | `compatible` | Enable OpenAI API compatibility mode | ✓ (**must be true**) | false |
- | `url` | Base URL of the API endpoint | ✓ | - |
- | `path` | API path for chat completions | | - |
- | `key` | API key for authentication | ✓ | - |
- | `model` | Model identifier to use | ✓ | - |
-
- Example configuration:
- ```ini
- [TOGETHER]
- compatible=true
- key=<your-api-key>
- url=https://api.together.xyz/v1
- model=meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo
-
- [GEMINI_COMPATIBILITY]
- compatible=true
- key=<your-api-key>
- url=https://generativelanguage.googleapis.com
- path=/v1beta/openai/
- model=gemini-1.5-flash
-
- [OLLAMA_COMPATIBILITY]
- compatible=true
- key=ollama
- url=http://localhost:11434/v1
- model=llama3.2
- ```
-
- ## Watch Commit Mode
-
- 
-
- Watch Commit mode allows you to monitor Git commits in real-time and automatically perform AI code reviews using the `--watch-commit` flag.
-
- ```sh
- aicommit2 --watch-commit
- ```
-
- This feature only works within Git repository directories and automatically triggers whenever a commit event occurs. When a new commit is detected, it automatically:
- 1. Analyzes commit changes
- 2. Performs AI code review
- 3. Displays results in real-time
-
- > For detailed configuration of the code review feature, please refer to the [codeReview](#codereview) section. The settings in that section are shared with this feature.
-
- ⚠️ **CAUTION**
-
- - The Watch Commit feature is currently **experimental**
- - This feature performs AI analysis for each commit, which **consumes a significant number of API tokens**
- - API costs can increase substantially if there are many commits
- - It is recommended to **carefully monitor your token usage** when using this feature
- - To use this feature, you must enable watch mode for at least one AI model:
-   ```sh
-   aicommit2 config set [MODEL].watchMode="true"
-   ```
-
- ## Upgrading
-
- Check the installed version with:
-
- ```
- aicommit2 --version
- ```
-
- If it's not the [latest version](https://github.com/tak-bro/aicommit2/releases/latest), run:
-
- ```sh
- npm update -g aicommit2
- ```

+ ## Configuration Examples
+
+ ```
+ aicommit2 config set \
+     generate=2 \
+     topP=0.8 \
+     maxTokens=1024 \
+     temperature=0.7 \
+     OPENAI.key="sk-..." OPENAI.model="gpt-4o" OPENAI.temperature=0.5 \
+     ANTHROPIC.key="sk-..." ANTHROPIC.model="claude-3-haiku" ANTHROPIC.maxTokens=2000 \
+     MISTRAL.key="your-key" MISTRAL.model="codestral-latest" \
+     OLLAMA.model="llama3.2" OLLAMA.numCtx=4096 OLLAMA.watchMode=true
+ ```
+
+ > 🔍 **Detailed Support Info**: Check each provider's documentation for specific limits and behaviors:
+ > - [OpenAI](docs/providers/openai.md)
+ > - [Anthropic Claude](docs/providers/anthropic.md)
+ > - [Gemini](docs/providers/gemini.md)
+ > - [Mistral & Codestral](docs/providers/mistral.md)
+ > - [Cohere](docs/providers/cohere.md)
+ > - [Groq](docs/providers/groq.md)
+ > - [Perplexity](docs/providers/perplexity.md)
+ > - [DeepSeek](docs/providers/deepseek.md)
+ > - [OpenAI API Compatibility](docs/providers/compatible.md)
+ > - [Ollama](docs/providers/ollama.md)
+

## Custom Prompt Template

_aicommit2_ supports custom prompt templates through the `systemPromptPath` option. This feature allows you to define your own prompt structure, giving you more control over the commit message generation process.
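Assuming the settings land in the same INI layout as the `[TOGETHER]`-style example removed above, with general options at the top level (an assumption; the config file itself is not shown in this diff), the `config set` call in the added section would correspond to roughly:

```ini
; a sketch of the resulting config file, not actual aicommit2 output
generate=2
topP=0.8
maxTokens=1024
temperature=0.7

[OPENAI]
key=sk-...
model=gpt-4o
temperature=0.5

[ANTHROPIC]
key=sk-...
model=claude-3-haiku
maxTokens=2000

[MISTRAL]
key=your-key
model=codestral-latest

[OLLAMA]
model=llama3.2
numCtx=4096
watchMode=true
```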
@@ -1046,69 +630,49 @@ The response should be valid JSON that can be parsed without errors.

This ensures that the output is consistently formatted as a JSON array, regardless of the custom template used.

- ## Integration with pre-commit framework

-
-
- ```yaml
- repos:
-   - repo: local
-     hooks:
-       - id: aicommit2
-         name: AI Commit Message Generator
-         entry: aicommit2 --pre-commit
-         language: node
-         stages: [prepare-commit-msg]
-         always_run: true
- ```
-
- Make sure you have:
+ ## Watch Commit Mode

- 1. Installed pre-commit: `brew install pre-commit`
- 2. Installed aicommit2 globally: `npm install -g aicommit2`
- 3. Run `pre-commit install --hook-type prepare-commit-msg` to set up the hook
+ 

-
+ Watch Commit mode allows you to monitor Git commits in real-time and automatically perform AI code reviews using the `--watch-commit` flag.

-
+ ```sh
+ aicommit2 --watch-commit
+ ```

-
+ This feature only works within Git repository directories and automatically triggers whenever a commit event occurs. When a new commit is detected, it automatically:
+ 1. Analyzes commit changes
+ 2. Performs AI code review
+ 3. Displays results in real-time

-
- - `OLLAMA_MAX_LOADED_MODELS`: Load multiple models simultaneously
+ > For detailed configuration of the code review feature, please refer to the [codeReview](#codereview) section. The settings in that section are shared with this feature.

-
+ ⚠️ **CAUTION**

-
+ - The Watch Commit feature is currently **experimental**
+ - This feature performs AI analysis for each commit, which **consumes a significant number of API tokens**
+ - API costs can increase substantially if there are many commits
+ - It is recommended to **carefully monitor your token usage** when using this feature
+ - To use this feature, you must enable watch mode for at least one AI model:
+   ```sh
+   aicommit2 config set [MODEL].watchMode="true"
+   ```

-
+ ## Upgrading

-
- For example, to load up to 3 models, use the following command:
+ Check the installed version with:

- ```shell
- OLLAMA_MAX_LOADED_MODELS=3 ollama serve
```
-
-
- ##### 2. Configuring _aicommit2_
-
- Next, set up _aicommit2_ to specify multiple models. You can assign a list of models, separated by **commas (`,`)**, to the OLLAMA.model setting. Here's how you do it:
-
- ```shell
- aicommit2 config set OLLAMA.model="mistral,dolphin-llama3"
+ aicommit2 --version
```

-
-
- ##### 3. Run _aicommit2_
+ If it's not the [latest version](https://github.com/tak-bro/aicommit2/releases/latest), run:

- ```
- aicommit2
+ ```sh
+ npm update -g aicommit2
```

- > Note that this feature is available starting from Ollama version [**0.1.33**](https://github.com/ollama/ollama/releases/tag/v0.1.33) and _aicommit2_ version [**1.9.5**](https://www.npmjs.com/package/aicommit2/v/1.9.5).
-
## Disclaimer and Risks

This project uses functionalities from external APIs but is not officially affiliated with or endorsed by their providers. Users are responsible for complying with API terms, rate limits, and policies.
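Tying together the watch-mode pieces moved in the hunk above — `OPENAI` below merely stands in for the `[MODEL]` placeholder; any configured provider works:

```sh
# enable watch mode for at least one configured model (OPENAI as an example)
aicommit2 config set OPENAI.watchMode="true"

# start watching; each new commit in this repository triggers an AI code review
aicommit2 --watch-commit
```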
@@ -1137,6 +701,7 @@ Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/d
<tr>
<td align="center"><a href="https://github.com/devxpain"><img src="https://avatars.githubusercontent.com/devxpain" width="100px;" alt=""/><br /><sub><b>@devxpain</b></sub></a><br /><a href="https://github.com/tak-bro/aicommit2/commits?author=devxpain" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/delenzhang"><img src="https://avatars.githubusercontent.com/delenzhang" width="100px;" alt=""/><br /><sub><b>@delenzhang</b></sub></a><br /><a href="https://github.com/tak-bro/aicommit2/commits?author=delenzhang" title="Code">💻</a></td>
+ <td align="center"><a href="https://github.com/kvokka"><img src="https://avatars.githubusercontent.com/kvokka" width="100px;" alt=""/><br /><sub><b>@kvokka</b></sub></a><br /><a href="https://github.com/tak-bro/aicommit2/commits?author=kvokka" title="Documentation">📖</a></td>
</tr>
</table>
<!-- markdownlint-restore -->