aicommit2 1.12.6 → 2.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +388 -381
- package/dist/cli.mjs +93 -89
- package/package.json +6 -5
package/README.md
CHANGED
@@ -1,6 +1,6 @@
 <div align="center">
 <div>
-<img src="https://github.com/tak-bro/aicommit2/blob/main/img/
+<img src="https://github.com/tak-bro/aicommit2/blob/main/img/demo-min.gif?raw=true" alt="AICommit2"/>
 <h1 align="center">AICommit2</h1>
 </div>
 <p>
@@ -25,11 +25,11 @@ _aicommit2_ is a reactive CLI tool that automatically generates Git commit messa
 
 ## Key Features
 
-- **Multi-AI Support**: Integrates with OpenAI, Anthropic Claude, Google Gemini, Mistral AI, Cohere, Groq
+- **Multi-AI Support**: Integrates with OpenAI, Anthropic Claude, Google Gemini, Mistral AI, Cohere, Groq and more.
 - **Local Model Support**: Use local AI models via Ollama.
 - **Reactive CLI**: Enables simultaneous requests to multiple AIs and selection of the best commit message.
 - **Git Hook Integration**: Can be used as a prepare-commit-msg hook.
-- **Custom
+- **Custom Prompt**: Supports user-defined system prompt templates.
 
 ## Supported Providers
 
@@ -38,12 +38,11 @@ _aicommit2_ is a reactive CLI tool that automatically generates Git commit messa
 - [OpenAI](https://openai.com/)
 - [Anthropic Claude](https://console.anthropic.com/)
 - [Gemini](https://gemini.google.com/)
-- [Mistral AI](https://mistral.ai/)
-- [Codestral **(Free till August 1, 2024)**](https://mistral.ai/news/codestral/)
+- [Mistral AI](https://mistral.ai/) (including [Codestral](https://mistral.ai/news/codestral/))
 - [Cohere](https://cohere.com/)
 - [Groq](https://groq.com/)
-- [Huggingface **(Unofficial)**](https://huggingface.co/chat/)
 - [Perplexity](https://docs.perplexity.ai/)
+- [Huggingface **(Unofficial)**](https://huggingface.co/chat/)
 
 ### Local
 
@@ -59,66 +58,22 @@ _aicommit2_ is a reactive CLI tool that automatically generates Git commit messa
 npm install -g aicommit2
 ```
 
-2.
-
-It is not necessary to set all keys. **But at least one key must be set up.**
-
-- [OpenAI](https://platform.openai.com/account/api-keys)
-```sh
-aicommit2 config set OPENAI_KEY=<your key>
-```
-
-- [Anthropic Claude](https://console.anthropic.com/)
-```sh
-aicommit2 config set ANTHROPIC_KEY=<your key>
-```
-
-- [Gemini](https://aistudio.google.com/app/apikey)
-```sh
-aicommit2 config set GEMINI_KEY=<your key>
-```
-
-- [Mistral AI](https://console.mistral.ai/)
-```sh
-aicommit2 config set MISTRAL_KEY=<your key>
-```
-
-- [Codestral](https://console.mistral.ai/)
-```sh
-aicommit2 config set CODESTRAL_KEY=<your key>
-```
-
-- [Cohere](https://dashboard.cohere.com/)
-```sh
-aicommit2 config set COHERE_KEY=<your key>
-```
-
-- [Groq](https://console.groq.com)
-```sh
-aicommit2 config set GROQ_KEY=<your key>
-```
-
-- [Huggingface **(Unofficial)**](https://github.com/tak-bro/aicommit2?tab=readme-ov-file#how-to-get-cookieunofficial-api)
-```shell
-# Please be cautious of Escape characters(\", \') in browser cookie string
-aicommit2 config set HUGGINGFACE_COOKIE="<your browser cookie>"
-```
+2. Set up API keys (**at least ONE key must be set**):
 
-- [Perplexity](https://docs.perplexity.ai/)
 ```sh
-aicommit2 config set
+aicommit2 config set OPENAI.key=<your key>
+aicommit2 config set OLLAMA.model=<your local model>
+# ... (similar commands for other providers)
 ```
 
-
-
-> You may need to create an account and set up billing.
-
-3. Run aicommit2 with your staged files in git repository:
+3. Run _aicommit2_ with your staged files in git repository:
 ```shell
 git add <files...>
 aicommit2
 ```
 
+> 👉 **Tip:** Use the `aic2` alias if `aicommit2` is too long for you.
+
 ## Using Locally
 
 You can also use your model for free with [Ollama](https://ollama.com/) and it is available to use both Ollama and remote providers **simultaneously**.
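An editor's aside on the key format introduced in the hunk above: 2.0.0 replaces flat keys such as `OPENAI_KEY` with dotted `MODEL.setting` keys. The split is mechanical, as this illustrative POSIX-shell sketch shows (this is not aicommit2's actual parser, and the key value is made up):

```shell
# Illustrative only: split a dotted config entry like "OPENAI.key=<value>"
# into its model, setting, and value parts using parameter expansion.
entry="OPENAI.key=sk-example"   # hypothetical value

key="${entry%%=*}"       # everything before '=' -> OPENAI.key
value="${entry#*=}"      # everything after '='  -> sk-example
model="${key%%.*}"       # before the dot        -> OPENAI
setting="${key#*.}"      # after the dot         -> key

echo "$model / $setting / $value"
```

The same `MODEL.setting` shape is reused by the `config get`/`config set` commands and the `--MODEL.setting=value` CLI arguments shown later in this diff.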
@@ -172,202 +125,165 @@ For example, you can stage all changes in tracked files with as you commit:
 aicommit2 --all # or -a
 ```
 
-> 👉 **Tip:** Use the `aic2` alias if `aicommit2` is too long for you.
-
 #### CLI Options
 
-
--
+- `--locale` or `-l`: Locale to use for the generated commit messages (default: **en**)
+- `--all` or `-a`: Automatically stage changes in tracked files for the commit (default: **false**)
+- `--type` or `-t`: Git commit message format (default: **conventional**). It supports [`conventional`](https://conventionalcommits.org/) and [`gitmoji`](https://gitmoji.dev/)
+- `--confirm` or `-y`: Skip confirmation when committing after message generation (default: **false**)
+- `--clipboard` or `-c`: Copy the selected message to the clipboard (default: **false**).
+  - If you give this option, **_aicommit2_ will not commit**.
+- `--generate` or `-g`: Number of messages to generate (default: **1**)
+  - **Warning**: This uses more tokens, meaning it costs more.
+- `--prompt` or `-p`: System prompt for fine-tuning
+  - **Warning**: This option is **not recommended**. Please use `systemPrompt` or `systemPromptPath` for each model.
 
+Example:
 ```sh
-aicommit2 --locale
+aicommit2 --locale "jp" --all --type "conventional" --generate 3 --clipboard
 ```
 
-
-- Number of messages to generate (Warning: generating multiple costs more) (default: **1**)
-  - Sometimes the recommended commit message isn't the best so you want it to generate a few to pick from. You can generate multiple commit messages at once by passing in the `--generate <i>` flag, where 'i' is the number of generated messages:
+### Git hook
 
-
-aicommit2 --generate <i> # or -g <i>
-```
+You can also integrate _aicommit2_ with Git via the [`prepare-commit-msg`](https://git-scm.com/docs/githooks#_prepare_commit_msg) hook. This lets you use Git like you normally would, and edit the commit message before committing.
 
-
+#### Install
 
-
-- Automatically stage changes in tracked files for the commit (default: **false**)
+In the Git repository you want to install the hook in:
 
 ```sh
-aicommit2
+aicommit2 hook install
 ```
 
-
-
-- Automatically stage changes in tracked files for the commit (default: **conventional**)
-  - it supports [`conventional`](https://conventionalcommits.org/) and [`gitmoji`](https://gitmoji.dev/)
-
-```sh
-aicommit2 --type conventional # or -t conventional
-aicommit2 --type gitmoji # or -t gitmoji
-```
+#### Uninstall
 
-
-- Skip confirmation when committing after message generation (default: **false**)
+In the Git repository you want to uninstall the hook from:
 
 ```sh
-aicommit2
+aicommit2 hook uninstall
 ```
 
-
-- Copy the selected message to the clipboard (default: **false**)
-  - This is a useful option when you don't want to commit through _aicommit2_.
-  - If you give this option, _aicommit2_ will not commit.
+### Configuration
 
-
-aicommit2 --clipboard # or -c
-```
+#### Reading and Setting Configuration
 
-
--
-  - Enable users to define and use their own prompts instead of relying solely on the default prompt
-  - Please see [Custom Prompt Template](#custom-prompt-template)
+- READ: `aicommit2 config get <key>`
+- SET: `aicommit2 config set <key>=<value>`
 
+Example:
 ```sh
-aicommit2
+aicommit2 config get OPENAI
+aicommit2 config get GEMINI.key
+aicommit2 config set OPENAI.generate=3 GEMINI.temperature=0.5
 ```
 
-
-
-You can also integrate _aicommit2_ with Git via the [`prepare-commit-msg`](https://git-scm.com/docs/githooks#_prepare_commit_msg) hook. This lets you use Git like you normally would, and edit the commit message before committing.
-
-#### Install
-
-In the Git repository you want to install the hook in:
+#### How to Configure in detail
 
+1. Command-line arguments: **use the format** `--[ModelName].[SettingKey]=value`
 ```sh
-aicommit2
+aicommit2 --OPENAI.locale="jp" --GEMINI.temperature="0.5"
 ```
 
-
-
-
-
-
-
-
+2. Configuration file: **use INI format in the `~/.aicommit2` file or use `set` command**.
+Example `~/.aicommit2`:
+```ini
+# General Settings
+logging=true
+generate=2
+temperature=1.0
 
-
+# Model-Specific Settings
+[OPENAI]
+key="<your-api-key>"
+temperature=0.8
+generate=1
+systemPromptPath="<your-prompt-path>"
 
-
+[GEMINI]
+key="<your-api-key>"
+generate=5
+ignoreBody=false
 
-
-
-
+[OLLAMA]
+temperature=0.7
+model[]=llama3.1
+model[]=codestral
 ```
 
->
+> The priority of settings is: **Command-line Arguments > Model-Specific Settings > General Settings > Default Values**.
 
-
+## General Settings
 
-
+The following settings can be applied to most models, but support may vary.
+Please check the documentation for each specific model to confirm which settings are supported.
 
-
+| Setting | Description | Default |
+|--------------------|----------------------------------------------------------------------|--------------|
+| `systemPrompt` | System Prompt text | - |
+| `systemPromptPath` | Path to system prompt file | - |
+| `timeout` | Request timeout (milliseconds) | 10000 |
+| `temperature` | Model's creativity (0.0 - 2.0) | 0.7 |
+| `maxTokens` | Maximum number of tokens to generate | 1024 |
+| `locale` | Locale for the generated commit messages | en |
+| `generate` | Number of commit messages to generate | 1 |
+| `type` | Type of commit message to generate | conventional |
+| `maxLength` | Maximum character length of the Subject of generated commit message | 50 |
+| `logging` | Enable logging | true |
+| `ignoreBody` | Whether the commit message includes body | true |
 
-
+> 👉 **Tip:** To set the General Settings for each model, use the following command.
+> ```shell
+> aicommit2 config set OPENAI.locale="jp"
+> aicommit2 config set CODESTRAL.type="gitmoji"
+> aicommit2 config set GEMINI.ignoreBody=false
+> ```
 
-
+##### systemPrompt
+- Allow users to specify a custom system prompt
 
 ```sh
-aicommit2 config
+aicommit2 config set systemPrompt="Generate git commit message."
 ```
 
-
+> `systemPrompt` takes precedence over `systemPromptPath` and does not apply at the same time.
 
-
-
-
-
-You can also retrieve multiple configuration options at once by separating them with spaces:
+##### systemPromptPath
+- Allow users to specify a custom file path for their own system prompt template
+- Please see [Custom Prompt Template](#custom-prompt-template)
 
 ```sh
-aicommit2 config
+aicommit2 config set systemPromptPath="/path/to/user/prompt.txt"
 ```
 
-
+##### timeout
+
+The timeout for network requests in milliseconds.
 
-
+Default: `10_000` (10 seconds)
 
 ```sh
-aicommit2 config set
+aicommit2 config set timeout=20000 # 20s
 ```
 
-
+##### temperature
 
-
-aicommit2 config set OPENAI_KEY=<your-api-key>
-```
+The temperature (0.0-2.0) is used to control the randomness of the output
 
-
+Default: `0.7`
 
 ```sh
-aicommit2 config set
+aicommit2 config set temperature=0.3
 ```
 
-
-
-| Option | Default | Description |
-|----------------------|----------------------------------------|--------------------------------------------------------------------|
-| `OPENAI_KEY` | N/A | The OpenAI API key |
-| `OPENAI_MODEL` | `gpt-3.5-turbo` | The OpenAI Model to use |
-| `OPENAI_URL` | `https://api.openai.com` | The OpenAI URL |
-| `OPENAI_PATH` | `/v1/chat/completions` | The OpenAI request pathname |
-| `ANTHROPIC_KEY` | N/A | The Anthropic API key |
-| `ANTHROPIC_MODEL` | `claude-3-haiku-20240307` | The Anthropic Model to use |
-| `GEMINI_KEY` | N/A | The Gemini API key |
-| `GEMINI_MODEL` | `gemini-1.5-pro-latest` | The Gemini Model |
-| `MISTRAL_KEY` | N/A | The Mistral API key |
-| `MISTRAL_MODEL` | `mistral-tiny` | The Mistral Model to use |
-| `CODESTRAL_KEY` | N/A | The Codestral API key |
-| `CODESTRAL_MODEL` | `codestral-latest` | The Codestral Model to use |
-| `COHERE_KEY` | N/A | The Cohere API Key |
-| `COHERE_MODEL` | `command` | The identifier of the Cohere model |
-| `GROQ_KEY` | N/A | The Groq API Key |
-| `GROQ_MODEL` | `gemma-7b-it` | The Groq model name to use |
-| `HUGGINGFACE_COOKIE` | N/A | The HuggingFace Cookie string |
-| `HUGGINGFACE_MODEL` | `mistralai/Mixtral-8x7B-Instruct-v0.1` | The HuggingFace Model to use |
-| `PERPLEXITY_KEY` | N/A | The Perplexity API key |
-| `PERPLEXITY_MODEL` | `llama-3.1-sonar-small-128k-chat` | The Perplexity Model to use |
-| `OLLAMA_MODEL` | N/A | The Ollama Model. It should be downloaded your local |
-| `OLLAMA_HOST` | `http://localhost:11434` | The Ollama Host |
-| `OLLAMA_TIMEOUT` | `100_000` ms | Request timeout for the Ollama |
-| `locale` | `en` | Locale for the generated commit messages |
-| `generate` | `1` | Number of commit messages to generate |
-| `type` | `conventional` | Type of commit message to generate |
-| `proxy` | N/A | Set a HTTP/HTTPS proxy to use for requests(only **OpenAI**) |
-| `timeout` | `10_000` ms | Network request timeout |
-| `max-length` | `50` | Maximum character length of the generated commit message(Subject) |
-| `max-tokens` | `1024` | The maximum number of tokens that the AI models can generate (for **Open AI, Anthropic, Gemini, Mistral, Codestral**) |
-| `temperature` | `0.7` | The temperature (0.0-2.0) is used to control the randomness of the output (for **Open AI, Anthropic, Gemini, Mistral, Codestral**) |
-| `promptPath` | N/A | Allow users to specify a custom file path for their own prompt template |
-| `logging` | `false` | Whether to log AI responses for debugging (true or false) |
-| `ignoreBody` | `true` | Whether the commit message includes body (true or false) |
-
-> **Currently, options are set universally. However, there are plans to develop the ability to set individual options in the future.**
+##### maxTokens
 
-
-| | locale | generate | type | proxy | timeout | max-length | max-tokens | temperature | prompt |
-|:--------------------:|:------:|:--------:|:----:|:-----:|:-----------------------:|:----------:|:----------:|:-----------:|:------:|
-| **OpenAI** | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
-| **Anthropic Claude** | ✓ | ✓ | ✓ | | | ✓ | ✓ | ✓ | ✓ |
-| **Gemini** | ✓ | ✓ | ✓ | | | ✓ | ✓ | ✓ | ✓ |
-| **Mistral AI** | ✓ | ✓ | ✓ | | ✓ | ✓ | ✓ | ✓ | ✓ |
-| **Codestral** | ✓ | ✓ | ✓ | | ✓ | ✓ | ✓ | ✓ | ✓ |
-| **Cohere** | ✓ | ✓ | ✓ | | | ✓ | ✓ | ✓ | ✓ |
-| **Groq** | ✓ | ✓ | ✓ | | ✓ | ✓ | | | ✓ |
-| **Huggingface** | ✓ | ✓ | ✓ | | | ✓ | | | ✓ |
-| **Perplexity** | ✓ | ✓ | ✓ | | ✓ | ✓ | ✓ | ✓ | ✓ |
-| **Ollama** | ✓ | ✓ | ✓ | | ✓ <br/>(OLLAMA_TIMEOUT) | ✓ | | ✓ | ✓ |
+The maximum number of tokens that the AI models can generate.
 
+Default: `1024`
 
-
+```sh
+aicommit2 config set maxTokens=3000
+```
 
 ##### locale
 
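An editor's aside on the priority note in the hunk above (Command-line Arguments > Model-Specific Settings > General Settings > Default Values): the lookup order can be sketched with POSIX-shell default expansion. This is only an illustration of the documented order with made-up values, not aicommit2's code:

```shell
# Illustrative only: resolve "temperature" using the documented precedence.
cli=""               # value from a --OPENAI.temperature=... argument (unset here)
model_specific="0.8" # value from the [OPENAI] section of ~/.aicommit2
general="1.0"        # top-level value in ~/.aicommit2
default="0.7"        # built-in default

# ${x:-y} falls through to y when x is empty, mirroring the priority chain.
effective="${cli:-${model_specific:-${general:-$default}}}"
echo "effective temperature: $effective"
```

With no CLI argument set, the model-specific `0.8` wins over both the general `1.0` and the default `0.7`.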
@@ -375,6 +291,10 @@ Default: `en`
 
 The locale to use for the generated commit messages. Consult the list of codes in: https://wikipedia.org/wiki/List_of_ISO_639_language_codes.
 
+```sh
+aicommit2 config set locale="jp"
+```
+
 ##### generate
 
 Default: `1`
@@ -383,189 +303,251 @@ The number of commit messages to generate to pick from.
 
 Note, this will use more tokens as it generates more results.
 
-##### proxy
-
-Set a HTTP/HTTPS proxy to use for requests.
-
-To clear the proxy option, you can use the command (note the empty value after the equals sign):
-
-> **Only supported within the OpenAI**
-
 ```sh
-aicommit2 config set
+aicommit2 config set generate=2
 ```
 
-#####
+##### type
 
-
+Default: `conventional`
 
-
+Supported: `conventional`, `gitmoji`
+
+The type of commit message to generate. Set this to "conventional" to generate commit messages that follow the Conventional Commits specification:
 
 ```sh
-aicommit2 config set
+aicommit2 config set type="conventional"
 ```
 
-#####
+##### maxLength
 
-The maximum character length of the generated commit message
+The maximum character length of the Subject of generated commit message
 
 Default: `50`
 
 ```sh
-aicommit2 config set
+aicommit2 config set maxLength=100
 ```
 
-#####
-
-Default: `conventional`
+##### logging
 
-
+Default: `true`
 
-
+Option that allows users to decide whether to generate a log file capturing the responses.
+The log files will be stored in the `~/.aicommit2_log` directory (user's home).
 
-
-aicommit2 config set type=conventional
-```
+
 
-You can
+- You can remove all logs with the command below.
 
 ```sh
-aicommit2
+aicommit2 log removeAll
 ```
 
-#####
-The maximum number of tokens that the AI models can generate.
+##### ignoreBody
 
-Default: `
+Default: `true`
+
+This option determines whether the commit message includes body. If you want to include body in message, you can set it to `false`.
 
 ```sh
-aicommit2 config set
+aicommit2 config set ignoreBody="false"
 ```
 
-
-The temperature (0.0-2.0) is used to control the randomness of the output
+
 
-Default: `0.7`
 
 ```sh
-aicommit2 config set
+aicommit2 config set ignoreBody="true"
 ```
 
-
-- Allow users to specify a custom file path for their own prompt template
-- Enable users to define and use their own prompts instead of relying solely on the default prompt
-- Please see [Custom Prompt Template](#custom-prompt-template)
+
 
-
-aicommit2 config set promptPath="/path/to/user/prompt.txt"
-```
+## Model-Specific Settings
 
-
+> Some models mentioned below are subject to change.
 
-
+### OpenAI
 
-
-
+| Setting | Description | Default |
+|--------------------|---------------------------------------------------------------------|------------------------|
+| `key` | API key | - |
+| `model` | Model to use | `gpt-3.5-turbo` |
+| `url` | API endpoint URL | https://api.openai.com |
+| `path` | API path | /v1/chat/completions |
+| `proxy` | Proxy settings | - |
 
-
+##### OPENAI.key
 
-
-aicommit2 config set logging="true"
-```
+The OpenAI API key. You can retrieve it from [OpenAI API Keys page](https://platform.openai.com/account/api-keys).
 
-- You can remove all logs below comamnd.
-
 ```sh
-aicommit2
+aicommit2 config set OPENAI.key="your api key"
 ```
 
-#####
+##### OPENAI.model
 
-Default: `
+Default: `gpt-3.5-turbo`
 
-
+The Chat Completions (`/v1/chat/completions`) model to use. Consult the list of models available in the [OpenAI Documentation](https://platform.openai.com/docs/models/model-endpoint-compatibility).
+
+> Tip: If you have access, try upgrading to [`gpt-4`](https://platform.openai.com/docs/models/gpt-4) for next-level code analysis. It can handle double the input size, but comes at a higher cost. Check out OpenAI's website to learn more.
 
 ```sh
-aicommit2 config set
+aicommit2 config set OPENAI.model=gpt-4
 ```
 
-
+##### OPENAI.url
 
+Default: `https://api.openai.com`
+
+The OpenAI URL. Both https and http protocols are supported. It allows running a local OpenAI-compatible server.
 
 ```sh
-aicommit2 config set
+aicommit2 config set OPENAI.url="<your-host>"
 ```
 
-
+##### OPENAI.path
+
+Default: `/v1/chat/completions`
+
+The OpenAI Path.
 
 ### Ollama
 
-
+| Setting | Description | Default |
+|--------------------|----------------------------------------|------------------------|
+| `model` | Model(s) to use (comma-separated list) | - |
+| `host` | Ollama host URL | http://localhost:11434 |
+| `timeout` | Request timeout (milliseconds) | 100_000 (100sec) |
+
+##### OLLAMA.model
 
 The Ollama Model. Please see [a list of models available](https://ollama.com/library)
 
 ```sh
-aicommit2 config set
-aicommit2 config set
+aicommit2 config set OLLAMA.model="llama3.1"
+aicommit2 config set OLLAMA.model="llama3,codellama" # for multiple models
+
+aicommit2 config add OLLAMA.model="gemma2" # Only Ollama.model can be added.
 ```
 
-
+> OLLAMA.model is the only **string array** type, to support multiple Ollama models. Please see [this section](#loading-multiple-ollama-models).
+
+##### OLLAMA.host
 
 Default: `http://localhost:11434`
 
 The Ollama host
 
 ```sh
-aicommit2 config set
+aicommit2 config set OLLAMA.host=<host>
 ```
 
-#####
+##### OLLAMA.timeout
 
 Default: `100_000` (100 seconds)
 
-Request timeout for the Ollama.
+Request timeout for the Ollama.
 
 ```sh
-aicommit2 config set
+aicommit2 config set OLLAMA.timeout=<timeout>
 ```
 
-
+##### Unsupported Options
 
-
+Ollama does not support the following options in General Settings.
+
+- maxTokens
 
-
+### HuggingFace
 
-
+| Setting | Description | Default |
+|--------------------|-----------------------|-----------------------------------|
+| `cookie` | Authentication cookie | - |
+| `model` | Model to use | `CohereForAI/c4ai-command-r-plus` |
 
-
+##### HUGGINGFACE.cookie
 
-The Chat
+The [Huggingface Chat](https://huggingface.co/chat/) Cookie. Please check [how to get cookie](https://github.com/tak-bro/aicommit2?tab=readme-ov-file#how-to-get-cookieunofficial-api)
 
-
+```sh
+# Please be cautious of Escape characters(\", \') in browser cookie string
+aicommit2 config set HUGGINGFACE.cookie="your-cookie"
+```
+
+##### HUGGINGFACE.model
+
+Default: `CohereForAI/c4ai-command-r-plus`
+
+Supported:
+- `CohereForAI/c4ai-command-r-plus`
+- `meta-llama/Meta-Llama-3-70B-Instruct`
+- `HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1`
+- `mistralai/Mixtral-8x7B-Instruct-v0.1`
+- `NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO`
+- `01-ai/Yi-1.5-34B-Chat`
+- `mistralai/Mistral-7B-Instruct-v0.2`
+- `microsoft/Phi-3-mini-4k-instruct`
 
 ```sh
-aicommit2 config set
+aicommit2 config set HUGGINGFACE.model="mistralai/Mistral-7B-Instruct-v0.2"
 ```
 
-#####
+##### Unsupported Options
 
-
+Huggingface does not support the following options in General Settings.
 
-
+- maxTokens
+- timeout
+- temperature
 
-
+### Gemini
 
-Default
+| Setting | Description | Default |
+|--------------------|--------------|------------------|
+| `key` | API key | - |
+| `model` | Model to use | `gemini-1.5-pro` |
 
-
+##### GEMINI.key
 
-
+The Gemini API key. If you don't have one, create a key in [Google AI Studio](https://aistudio.google.com/app/apikey).
+
+```sh
+aicommit2 config set GEMINI.key="your api key"
+```
 
-#####
+##### GEMINI.model
+
+Default: `gemini-1.5-pro`
+
+Supported:
+- `gemini-1.5-pro`
+- `gemini-1.5-flash`
+- `gemini-1.5-pro-exp-0801`
+
+```sh
+aicommit2 config set GEMINI.model="gemini-1.5-pro-exp-0801"
+```
+
+##### Unsupported Options
+
+Gemini does not support the following options in General Settings.
+
+- timeout
+
+### Anthropic
+
+| Setting | Description | Default |
+|-------------|----------------|---------------------------|
+| `key` | API key | - |
+| `model` | Model to use | `claude-3-haiku-20240307` |
+
+##### ANTHROPIC.key
 
 The Anthropic API key. To get started with Anthropic Claude, request access to their API at [anthropic.com/earlyaccess](https://www.anthropic.com/earlyaccess).
 
-#####
+##### ANTHROPIC.model
 
 Default: `claude-3-haiku-20240307`
 
@@ -573,37 +555,30 @@ Supported:
 - `claude-3-haiku-20240307`
 - `claude-3-sonnet-20240229`
 - `claude-3-opus-20240229`
-- `claude-
-- `claude-2.0`
-- `claude-instant-1.2`
+- `claude-3-5-sonnet-20240620`
 
 ```sh
-aicommit2 config set
+aicommit2 config set ANTHROPIC.model="claude-3-5-sonnet-20240620"
 ```
 
-
-
-##### GEMINI_KEY
+##### Unsupported Options
 
-
-
-##### GEMINI_MODEL
+Anthropic does not support the following options in General Settings.
 
-
-
-Supported:
-- `gemini-1.5-pro-latest`
-- `gemini-1.5-flash-latest`
+- timeout
 
-
+### Mistral
 
-
+| Setting | Description  | Default        |
+| ------- | ------------ | -------------- |
+| `key`   | API key      | -              |
+| `model` | Model to use | `mistral-tiny` |
 
-#####
+##### MISTRAL.key
 
 The Mistral API key. If you don't have one, please sign up and subscribe in [Mistral Console](https://console.mistral.ai/).
 
-#####
+##### MISTRAL.model
 
 Default: `mistral-tiny`
 
@@ -623,15 +598,18 @@ Supported:
 - `mistral-large-2402`
 - `mistral-embed`
 
-
+### Codestral
 
-
+| Setting | Description  | Default            |
+| ------- | ------------ | ------------------ |
+| `key`   | API key      | -                  |
+| `model` | Model to use | `codestral-latest` |
 
-#####
+##### CODESTRAL.key
 
 The Codestral API key. If you don't have one, please sign up and subscribe in [Mistral Console](https://console.mistral.ai/codestral).
 
-#####
+##### CODESTRAL.model
 
 Default: `codestral-latest`
 
@@ -639,67 +617,82 @@ Supported:
 - `codestral-latest`
 - `codestral-2405`
 
-
+```sh
+aicommit2 config set CODESTRAL.model="codestral-2405"
+```
 
-
+### Cohere
 
-
+| Setting | Description  | Default   |
+| ------- | ------------ | --------- |
+| `key`   | API key      | -         |
+| `model` | Model to use | `command` |
+
+##### COHERE.key
 
 The Cohere API key. If you don't have one, please sign up and get the API key in [Cohere Dashboard](https://dashboard.cohere.com/).
 
-#####
+##### COHERE.model
 
 Default: `command`
 
-Supported:
+Supported models:
 - `command`
 - `command-nightly`
 - `command-light`
 - `command-light-nightly`
 
-
+```sh
+aicommit2 config set COHERE.model="command-nightly"
+```
+
+##### Unsupported Options
+
+Cohere does not support the following options in General Settings.
+
+- timeout
 
 ### Groq
 
-
+| Setting | Description  | Default        |
+| ------- | ------------ | -------------- |
+| `key`   | API key      | -              |
+| `model` | Model to use | `gemma2-9b-it` |
+
+##### GROQ.key
 
 The Groq API key. If you don't have one, please sign up and get the API key in [Groq Console](https://console.groq.com).
 
-#####
+##### GROQ.model
 
-Default: `
+Default: `gemma2-9b-it`
 
 Supported:
-- `
-- `llama3-70b-8192`
-- `mixtral-8x7b-32768`
+- `gemma2-9b-it`
 - `gemma-7b-it`
+- `llama-3.1-70b-versatile`
+- `llama-3.1-8b-instant`
+- `llama3-70b-8192`
+- `llama3-8b-8192`
+- `llama3-groq-70b-8192-tool-use-preview`
+- `llama3-groq-8b-8192-tool-use-preview`
 
-
-
-
-
-##### HUGGINGFACE_COOKIE
-
-The [Huggingface Chat](https://huggingface.co/chat/) Cookie. Please check [how to get cookie](https://github.com/tak-bro/aicommit2?tab=readme-ov-file#how-to-get-cookieunofficial-api)
+```sh
+aicommit2 config set GROQ.model="llama3-8b-8192"
+```
 
-
+### Perplexity
 
-Default
+| Setting | Description  | Default                           |
+| ------- | ------------ | --------------------------------- |
+| `key`   | API key      | -                                 |
+| `model` | Model to use | `llama-3.1-sonar-small-128k-chat` |
 
-
-- `CohereForAI/c4ai-command-r-plus`
-- `meta-llama/Meta-Llama-3-70B-Instruct`
-- `HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1`
-- `mistralai/Mixtral-8x7B-Instruct-v0.1`
-- `NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO`
-- `01-ai/Yi-1.5-34B-Chat`
-- `mistralai/Mistral-7B-Instruct-v0.2`
-- `microsoft/Phi-3-mini-4k-instruct`
+##### PERPLEXITY.key
 
-
+The Perplexity API key. If you don't have one, please sign up and get the API key in [Perplexity](https://docs.perplexity.ai/).
 
-#####
+##### PERPLEXITY.model
 
 Default: `llama-3.1-sonar-small-128k-chat`
 
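All of the provider sections above use the same `PROVIDER.setting=value` shape with `aicommit2 config set`. As a purely illustrative sketch (plain POSIX shell, not aicommit2 internals), such a key path splits into its parts like this:

```shell
# Illustrative only: split a PROVIDER.setting=value entry as written in the
# config examples above. This is not code from aicommit2 itself.
entry='PERPLEXITY.model=llama-3.1-sonar-small-128k-chat'
provider="${entry%%.*}"   # text before the first dot
rest="${entry#*.}"        # text after the first dot
setting="${rest%%=*}"     # text before the equals sign
value="${rest#*=}"        # text after the equals sign
echo "$provider $setting $value"
```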
@@ -715,6 +708,25 @@ Supported:
 
 > The models mentioned above are subject to change.
 
+```sh
+aicommit2 config set PERPLEXITY.model="llama-3.1-70b"
+```
+
+#### Usage
+
+1. Stage your files and commit:
+
+```sh
+git add <files...>
+git commit # Only generates a message when it's not passed in
+```
+
+> If you ever want to write your own message instead of generating one, you can simply pass one in: `git commit -m "My message"`
+
+2. _aicommit2_ will generate the commit message for you and pass it back to Git. Git will open it with the [configured editor](https://docs.github.com/en/get-started/getting-started-with-git/associating-text-editors-with-git) for you to review/edit it.
+
+3. Save and close the editor to commit!
+
 ## Upgrading
 
 Check the installed version with:
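The usage steps above note that a message is only generated when one is not passed in. A minimal sketch of that decision (illustrative shell, not the actual hook installed by aicommit2):

```shell
# Sketch of the prepare-commit-msg decision described above. The argument
# stands in for the hook's commit-source parameter; "message" means the
# user already supplied a message with -m.
decide() {
    if [ -z "$1" ]; then
        echo "generate"
    else
        echo "skip"
    fi
}
decide ""          # a plain `git commit` → generate
decide "message"   # `git commit -m "..."` → skip
```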
@@ -731,23 +743,29 @@ npm update -g aicommit2
 
 ## Custom Prompt Template
 
-_aicommit2_ supports custom prompt templates through the `
+_aicommit2_ supports custom prompt templates through the `systemPromptPath` option. This feature allows you to define your own prompt structure, giving you more control over the commit message generation process.
 
-### Using the
+### Using the systemPromptPath Option
 To use a custom prompt template, specify the path to your template file when running the tool:
+
 ```
-aicommit2 config set
+aicommit2 config set systemPromptPath="/path/to/user/prompt.txt"
+aicommit2 config set OPENAI.systemPromptPath="/path/to/another-prompt.txt"
 ```
 
+For the above commands, OpenAI uses the prompt in the `another-prompt.txt` file, while the rest of the models use `prompt.txt`.
+
+> **NOTE**: For the `systemPromptPath` option, set the **template path**, not the template content.
+
 ### Template Format
 
 Your custom template can include placeholders for various commit options.
 Use curly braces `{}` to denote these placeholders for options. The following placeholders are supported:
 
-- [{locale}](#locale): The language for the commit message (string)
-- [{maxLength}](#max-length): The maximum length for the commit message (number)
-- [{type}](#type): The type of the commit (conventional or gitmoji)
-- [{generate}](#generate): The number of commit messages to generate (number)
+- [{locale}](#locale): The language for the commit message (**string**)
+- [{maxLength}](#max-length): The maximum length for the commit message (**number**)
+- [{type}](#type): The type of the commit message (**conventional** or **gitmoji**)
+- [{generate}](#generate): The number of commit messages to generate (**number**)
 
 ### Example Template
 
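A quick way to try the placeholder syntax above is to write a minimal template file and list the placeholders it contains. The file path and wording here are illustrative, not a template shipped with aicommit2:

```shell
# Hypothetical minimal template exercising the documented placeholders.
cat > /tmp/aicommit2-demo-prompt.txt <<'EOF'
Generate {generate} commit message(s) in {locale}.
Follow the {type} convention and keep the subject under {maxLength} characters.
EOF
# List the placeholders the template uses.
grep -o '{[a-zA-Z]*}' /tmp/aicommit2-demo-prompt.txt
```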
@@ -764,35 +782,31 @@ Remember to follow these guidelines:
 3. Explain the 'why' behind the change
 ```
 
-
+### **Appended Text**
 
-Please note that the following text will always be appended to the end of your custom prompt:
+Please note that the following text will **always** be appended to the end of your custom prompt:
 
 ```
-Provide your response as a JSON array
-
-
+Provide your response as a JSON array containing exactly 1 object, each with the following keys:
+- "subject": The main commit message using the conventional style. It should be a concise summary of the changes.
+- "body": An optional detailed explanation of the changes. If not needed, use an empty string.
+- "footer": An optional footer for metadata like BREAKING CHANGES. If not needed, use an empty string.
+The array must always contain 1 element, no more and no less.
+Example response format:
 [
 {
-"subject": "
-"body": "
-"footer": "
-}
-...
+"subject": "fix: fix bug in user authentication process",
+"body": "- Update login function to handle edge cases\n- Add additional error logging for debugging",
+"footer": ""
+}
 ]
+Ensure you generate exactly 1 commit message, even if it requires creating slightly varied versions for similar changes.
+The response should be valid JSON that can be parsed without errors.
 ```
 
 This ensures that the output is consistently formatted as a JSON array, regardless of the custom template used.
 
-
-
-If the specified file cannot be read or parsed, _aicommit2_ will fall back to using the default prompt generation logic.
-Ensure your template includes all necessary instructions for generating appropriate commit messages.
-You can still use all other command-line options in conjunction with `promptPath`.
-
-By using custom templates, you can tailor the commit message generation to your team's specific needs or coding standards.
-
-> NOTE: For the `promptPath` option, set the **template path**, not the template content
+> NOTE: The template may vary depending on the generate and commit message type.
 
 ## Loading Multiple Ollama Models
 
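Because the appended text above requires a parseable, single-element JSON array, a response can be sanity-checked before use. This sketch (shell plus `python3`, with an invented sample response) mirrors that contract:

```shell
# Invented sample response; the contract is the appended text shown above.
response='[{"subject":"fix: handle auth edge case","body":"","footer":""}]'
echo "$response" | python3 -c '
import json, sys
msgs = json.load(sys.stdin)
assert len(msgs) == 1                                 # exactly one element
assert set(msgs[0]) == {"subject", "body", "footer"}  # required keys
print("valid")
'
```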
@@ -817,10 +831,10 @@ OLLAMA_MAX_LOADED_MODELS=3 ollama serve
 
 ##### 2. Configuring _aicommit2_
 
-Next, set up _aicommit2_ to specify multiple models. You can assign a list of models, separated by **commas (`,`)**, to the
+Next, set up _aicommit2_ to specify multiple models. You can assign a list of models, separated by **commas (`,`)**, to the `OLLAMA.model` setting. Here's how you do it:
 
 ```shell
-aicommit2 config set
+aicommit2 config set OLLAMA.model="mistral,dolphin-llama3"
 ```
 
 With this command, _aicommit2_ is instructed to utilize both the "mistral" and "dolphin-llama3" models when making requests to the Ollama server.
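The comma-separated value above names every model aicommit2 should query; splitting it shows the individual entries (a plain-shell illustration, not aicommit2 internals):

```shell
# Split the comma-separated model list from the example above.
models="mistral,dolphin-llama3"
printf '%s\n' "$models" | tr ',' '\n'
```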
@@ -833,7 +847,6 @@ aicommit2
 
 > Note that this feature is available starting from Ollama version [**0.1.33**](https://github.com/ollama/ollama/releases/tag/v0.1.33) and _aicommit2_ version [**1.9.5**](https://www.npmjs.com/package/aicommit2/v/1.9.5).
 
-
 ## How to get Cookie(**Unofficial API**)
 
 * Login to the site you want
@@ -847,26 +860,13 @@ aicommit2
 
 
 
-## Disclaimer
-
-This project utilizes certain functionalities or data from external APIs, but it is important to note that it is not officially affiliated with or endorsed by the providers of those APIs. The use of external APIs is at the sole discretion and risk of the user.
-
-## Risk Acknowledgment
+## Disclaimer and Risks
 
-
-
-It is recommended that users thoroughly review the API documentation and adhere to best practices to ensure a positive and compliant experience.
-
-## Please Star ⭐️
-If this project has been helpful to you, I would greatly appreciate it if you could click the Star ⭐️ button on this repository!
-
-## Maintainers
-
-- [@tak-bro](https://env-tak.github.io/)
+This project uses functionalities from external APIs but is not officially affiliated with or endorsed by their providers. Users are responsible for complying with API terms, rate limits, and policies.
 
 ## Contributing
 
-
+For bug fixes or feature implementations, please check the [Contribution Guide](CONTRIBUTING.md).
 
 ## Contributors ✨
 
@@ -880,8 +880,15 @@ Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/d
 <td align="center"><a href="https://github.com/eltociear"><img src="https://avatars.githubusercontent.com/eltociear" width="100px;" alt=""/><br /><sub><b>@eltociear</b></sub></a><br /><a href="https://github.com/tak-bro/aicommit2/commits?author=eltociear" title="Documentation">📖</a></td>
 <td align="center"><a href="https://github.com/ubranch"><img src="https://avatars.githubusercontent.com/ubranch" width="100px;" alt=""/><br /><sub><b>@ubranch</b></sub></a><br /><a href="https://github.com/tak-bro/aicommit2/commits?author=ubranch" title="Code">💻</a></td>
 <td align="center"><a href="https://github.com/bhodrolok"><img src="https://avatars.githubusercontent.com/bhodrolok" width="100px;" alt=""/><br /><sub><b>@bhodrolok</b></sub></a><br /><a href="https://github.com/tak-bro/aicommit2/commits?author=bhodrolok" title="Code">💻</a></td>
+<td align="center"><a href="https://github.com/ryicoh"><img src="https://avatars.githubusercontent.com/ryicoh" width="100px;" alt=""/><br /><sub><b>@ryicoh</b></sub></a><br /><a href="https://github.com/tak-bro/aicommit2/commits?author=ryicoh" title="Code">💻</a></td>
 </tr>
 </table>
 <!-- markdownlint-restore -->
 <!-- prettier-ignore-end -->
 <!-- ALL-CONTRIBUTORS-LIST:END -->
+
+---
+
+If this project has been helpful, please consider giving it a Star ⭐️!
+
+Maintainer: [@tak-bro](https://env-tak.github.io/)