aicommit2 2.0.8 → 2.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +184 -158
- package/dist/cli.mjs +81 -71
- package/package.json +6 -6
package/README.md
CHANGED
@@ -179,7 +179,7 @@ aicommit2 config set OPENAI.generate=3 GEMINI.temperature=0.5
 
 #### How to Configure in detail
 
-1. Command-line arguments: **use the format** `--[
+1. Command-line arguments: **use the format** `--[Model].[Key]=value`
 ```sh
 aicommit2 --OPENAI.locale="jp" --GEMINI.temperature="0.5"
 ```
@@ -202,7 +202,7 @@ systemPromptPath="<your-prompt-path>"
 [GEMINI]
 key="<your-api-key>"
 generate=5
-
+includeBody=true
 
 [OLLAMA]
 temperature=0.7
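The hunk above adds `includeBody=true` to the INI-style config example. As a minimal sketch (the scratch path below is a stand-in, not aicommit2's actual config location), the resulting file section can be written and checked from a shell:

```shell
# Write the post-2.1.0 example config section to a scratch file and
# confirm the new includeBody key is present. CONFIG_FILE is a stand-in
# path, not the tool's real config file.
CONFIG_FILE="$(mktemp)"
cat > "$CONFIG_FILE" <<'EOF'
[GEMINI]
key="<your-api-key>"
generate=5
includeBody=true

[OLLAMA]
temperature=0.7
EOF
grep -q '^includeBody=true$' "$CONFIG_FILE" && echo "includeBody set"
```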
@@ -217,27 +217,29 @@ model[]=codestral
 The following settings can be applied to most models, but support may vary.
 Please check the documentation for each specific model to confirm which settings are supported.
 
-| Setting
-
-| `systemPrompt`
-| `systemPromptPath`
-| `exclude`
-| `type`
-| `locale`
-| `generate`
-| `logging`
-| `
-| `maxLength`
-| `timeout`
-| `temperature`
-| `maxTokens`
-| `topP`
+| Setting                | Description                                                         | Default      |
+|------------------------|---------------------------------------------------------------------|--------------|
+| `systemPrompt`         | System Prompt text                                                  | -            |
+| `systemPromptPath`     | Path to system prompt file                                          | -            |
+| `exclude`              | Files to exclude from AI analysis                                   | -            |
+| `type`                 | Type of commit message to generate                                  | conventional |
+| `locale`               | Locale for the generated commit messages                            | en           |
+| `generate`             | Number of commit messages to generate                               | 1            |
+| `logging`              | Enable logging                                                      | true         |
+| `includeBody`          | Whether the commit message includes body                            | false        |
+| `maxLength`            | Maximum character length of the Subject of generated commit message | 50           |
+| `timeout`              | Request timeout (milliseconds)                                      | 10000        |
+| `temperature`          | Model's creativity (0.0 - 2.0)                                      | 0.7          |
+| `maxTokens`            | Maximum number of tokens to generate                                | 1024         |
+| `topP`                 | Nucleus sampling                                                    | 0.9          |
+| `codeReview`           | Whether to include an automated code review in the process          | false        |
+| `codeReviewPromptPath` | Path to code review prompt file                                     | -            |
 
 > 👉 **Tip:** To set the General Settings for each model, use the following command.
 > ```shell
 > aicommit2 config set OPENAI.locale="jp"
 > aicommit2 config set CODESTRAL.type="gitmoji"
-> aicommit2 config set GEMINI.
+> aicommit2 config set GEMINI.includeBody=true
 > ```
 
 ##### systemPrompt
@@ -318,21 +320,21 @@ The log files will be stored in the `~/.aicommit2_log` directory(user's home).
 aicommit2 log removeAll
 ```
 
-#####
+##### includeBody
 
-Default: `
+Default: `false`
 
-This option determines whether the commit message includes body. If you want to include body in message, you can set it to `
+This option determines whether the commit message includes body. If you want to include body in message, you can set it to `true`.
 
 ```sh
-aicommit2 config set
+aicommit2 config set includeBody="true"
 ```
 
 <img src="https://github.com/tak-bro/aicommit2/blob/main/img/include_body_true.png?raw=true" alt="INCLUDE_BODY_TRUE" />
 
 
 ```sh
-aicommit2 config set
+aicommit2 config set includeBody="false"
 ```
 
 <img src="https://github.com/tak-bro/aicommit2/blob/main/img/include_body_false.png?raw=true" alt="INCLUDE_BODY_FALSE" />
@@ -379,7 +381,7 @@ aicommit2 config set maxTokens=3000
 
 ##### topP
 
-Default: `
+Default: `0.9`
 
 Nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
 
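Since `topP` is a nucleus-sampling probability, values are normally expected in the 0.0–1.0 range. A small illustrative guard (not part of aicommit2) before handing a value to `config set`:

```shell
# Validate a candidate topP value before running
# `aicommit2 config set topP=...`; awk handles the float comparison.
topP=0.2
if echo "$topP" | awk '{exit !($1 >= 0 && $1 <= 1)}'; then
  echo "topP $topP is valid"
else
  echo "topP $topP is out of range" >&2
fi
```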
@@ -387,23 +389,53 @@ Nucleus sampling, where the model considers the results of the tokens with top_p
 aicommit2 config set topP=0.2
 ```
 
+##### codeReview
+
+Default: `false`
+
+The `codeReview` parameter determines whether to include an automated code review in the process.
+
+```sh
+aicommit2 config set codeReview=true
+```
+
+> NOTE: When enabled, aicommit2 will perform a code review before generating commit messages.
+
+<img src="https://github.com/tak-bro/aicommit2/blob/main/img/code_review.gif?raw=true" alt="CODE_REVIEW" />
+
+⚠️ **CAUTION**
+
+- The `codeReview` feature is currently experimental.
+- This feature performs a code review before generating commit messages.
+- Using this feature will significantly increase the overall processing time.
+- It may significantly impact performance and cost.
+- **The code review process consumes a large number of tokens, due to the lack of caching for git diff.**
+
+##### codeReviewPromptPath
+
+- Allow users to specify a custom file path for code review
+
+```sh
+aicommit2 config set codeReviewPromptPath="/path/to/user/prompt.txt"
+```
+
+
 ## Available General Settings by Model
-| | timeout | temperature | maxTokens |
-
-| **OpenAI** | ✓ | ✓ | ✓ |
-| **Anthropic Claude** | | ✓ | ✓ |
-| **Gemini** | | ✓ | ✓ |
-| **Mistral AI** | ✓ | ✓ | ✓ |
-| **Codestral** | ✓ | ✓ | ✓ |
-| **Cohere** | | ✓ | ✓ |
-| **Groq** | ✓ | ✓ | ✓ |
-| **Perplexity** | ✓ | ✓ | ✓ |
-| **DeepSeek** | ✓ | ✓ | ✓ |
-| **Huggingface** | | | |
-| **Ollama** | ✓ | ✓ | |
+|                      | timeout | temperature | maxTokens | topP |
+|:--------------------:|:-------:|:-----------:|:---------:|:----:|
+| **OpenAI**           |    ✓    |      ✓      |     ✓     |  ✓   |
+| **Anthropic Claude** |         |      ✓      |     ✓     |  ✓   |
+| **Gemini**           |         |      ✓      |     ✓     |  ✓   |
+| **Mistral AI**       |    ✓    |      ✓      |     ✓     |  ✓   |
+| **Codestral**        |    ✓    |      ✓      |     ✓     |  ✓   |
+| **Cohere**           |         |      ✓      |     ✓     |  ✓   |
+| **Groq**             |    ✓    |      ✓      |     ✓     |  ✓   |
+| **Perplexity**       |    ✓    |      ✓      |     ✓     |  ✓   |
+| **DeepSeek**         |    ✓    |      ✓      |     ✓     |  ✓   |
+| **Huggingface**      |         |             |           |      |
+| **Ollama**           |    ✓    |      ✓      |           |  ✓   |
 
 > All AI support the following options in General Settings.
-> - systemPrompt, systemPromptPath, exclude, type, locale, generate, logging,
+> - systemPrompt, systemPromptPath, codeReview, codeReviewPromptPath, exclude, type, locale, generate, logging, includeBody, maxLength
 
 ## Model-Specific Settings
 
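`codeReviewPromptPath`, added above, simply points at a plain-text prompt file. A hedged sketch of preparing one (the file contents and temp path are illustrative, not a format aicommit2 mandates):

```shell
# Create a custom code-review prompt file of the kind
# codeReviewPromptPath can point to, and verify it is non-empty
# before configuring. PROMPT_FILE is an illustrative temp path.
PROMPT_FILE="$(mktemp)"
cat > "$PROMPT_FILE" <<'EOF'
Review the staged diff for bugs, risky patterns, and missing tests.
EOF
[ -s "$PROMPT_FILE" ] && echo "prompt file ready"
# then: aicommit2 config set codeReviewPromptPath="$PROMPT_FILE"
```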
@@ -455,7 +487,7 @@ The OpenAI Path.
 
 ##### OPENAI.topP
 
-Default: `
+Default: `0.9`
 
 The `top_p` parameter selects tokens whose combined probability meets a threshold. Please see [detail](https://platform.openai.com/docs/api-reference/chat/create#chat-create-top_p).
 
@@ -465,96 +497,36 @@ aicommit2 config set OPENAI.topP=0.2
 
 > NOTE: If `topP` is less than 0, it does not deliver the `top_p` parameter to the request.
 
-###
-
-| Setting | Description | Default |
-|--------------------|----------------------------------------------|------------------------|
-| `model` | Model(s) to use (comma-separated list) | - |
-| `host` | Ollama host URL | http://localhost:11434 |
-| `timeout` | Request timeout (milliseconds) | 100_000 (100sec) |
-
-##### OLLAMA.model
-
-The Ollama Model. Please see [a list of models available](https://ollama.com/library)
-
-```sh
-aicommit2 config set OLLAMA.model="llama3.1"
-aicommit2 config set OLLAMA.model="llama3,codellama" # for multiple models
-
-aicommit2 config add OLLAMA.model="gemma2" # Only Ollama.model can be added.
-```
-
-> OLLAMA.model is **string array** type to support multiple Ollama. Please see [this section](#loading-multiple-ollama-models).
-
-##### OLLAMA.host
-
-Default: `http://localhost:11434`
-
-The Ollama host
-
-```sh
-aicommit2 config set OLLAMA.host=<host>
-```
-
-##### OLLAMA.timeout
-
-Default: `100_000` (100 seconds)
-
-Request timeout for the Ollama.
-
-```sh
-aicommit2 config set OLLAMA.timeout=<timeout>
-```
-
-##### Unsupported Options
-
-Ollama does not support the following options in General Settings.
-
-- maxTokens
-- topP
-
-### HuggingFace
-
-| Setting | Description | Default |
-|--------------------|----------------------------|----------------------------------------|
-| `cookie` | Authentication cookie | - |
-| `model` | Model to use | `CohereForAI/c4ai-command-r-plus` |
+### Anthropic
 
-
+| Setting | Description | Default |
+|-------------|----------------|---------------------------|
+| `key` | API key | - |
+| `model` | Model to use | `claude-3-haiku-20240307` |
 
-
+##### ANTHROPIC.key
 
-
-# Please be cautious of Escape characters(\", \') in browser cookie string
-aicommit2 config set HUGGINGFACE.cookie="your-cooke"
-```
+The Anthropic API key. To get started with Anthropic Claude, request access to their API at [anthropic.com/earlyaccess](https://www.anthropic.com/earlyaccess).
 
-#####
+##### ANTHROPIC.model
 
-Default: `
+Default: `claude-3-haiku-20240307`
 
 Supported:
-- `
-- `
-- `
-- `
-- `NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO`
-- `01-ai/Yi-1.5-34B-Chat`
-- `mistralai/Mistral-7B-Instruct-v0.2`
-- `microsoft/Phi-3-mini-4k-instruct`
+- `claude-3-haiku-20240307`
+- `claude-3-sonnet-20240229`
+- `claude-3-opus-20240229`
+- `claude-3-5-sonnet-20240620`
 
 ```sh
-aicommit2 config set
+aicommit2 config set ANTHROPIC.model="claude-3-5-sonnet-20240620"
 ```
 
 ##### Unsupported Options
 
-
+Anthropic does not support the following options in General Settings.
 
-- maxTokens
 - timeout
-- temperature
-- topP
 
 ### Gemini
 
@@ -589,39 +561,6 @@ aicommit2 config set GEMINI.model="gemini-1.5-pro-exp-0801"
 Gemini does not support the following options in General Settings.
 
 - timeout
-- topP
-
-### Anthropic
-
-| Setting | Description | Default |
-|-------------|----------------|---------------------------|
-| `key` | API key | - |
-| `model` | Model to use | `claude-3-haiku-20240307` |
-
-##### ANTHROPIC.key
-
-The Anthropic API key. To get started with Anthropic Claude, request access to their API at [anthropic.com/earlyaccess](https://www.anthropic.com/earlyaccess).
-
-##### ANTHROPIC.model
-
-Default: `claude-3-haiku-20240307`
-
-Supported:
-- `claude-3-haiku-20240307`
-- `claude-3-sonnet-20240229`
-- `claude-3-opus-20240229`
-- `claude-3-5-sonnet-20240620`
-
-```sh
-aicommit2 config set ANTHROPIC.model="claude-3-5-sonnet-20240620"
-```
-
-##### Unsupported Options
-
-Anthropic does not support the following options in General Settings.
-
-- timeout
-- topP
 
 ### Mistral
 
@@ -677,7 +616,7 @@ Supported:
 aicommit2 config set CODESTRAL.model="codestral-2405"
 ```
 
-
+### Cohere
 
 | Setting | Description | Default |
 |--------------------|--------------|-------------|
@@ -707,7 +646,6 @@ aicommit2 config set COHERE.model="command-nightly"
 Cohere does not support the following options in General Settings.
 
 - timeout
-- topP
 
 ### Groq
 
@@ -794,6 +732,96 @@ Supported:
 aicommit2 config set DEEPSEEK.model="deepseek-chat"
 ```
 
+### HuggingFace
+
+| Setting | Description | Default |
+|--------------------|----------------------------|----------------------------------------|
+| `cookie` | Authentication cookie | - |
+| `model` | Model to use | `CohereForAI/c4ai-command-r-plus` |
+
+##### HUGGINGFACE.cookie
+
+The [Huggingface Chat](https://huggingface.co/chat/) Cookie. Please check [how to get cookie](https://github.com/tak-bro/aicommit2?tab=readme-ov-file#how-to-get-cookieunofficial-api)
+
+```sh
+# Please be cautious of Escape characters(\", \') in browser cookie string
+aicommit2 config set HUGGINGFACE.cookie="your-cookie"
+```
+
+##### HUGGINGFACE.model
+
+Default: `CohereForAI/c4ai-command-r-plus`
+
+Supported:
+- `CohereForAI/c4ai-command-r-plus`
+- `meta-llama/Meta-Llama-3-70B-Instruct`
+- `HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1`
+- `mistralai/Mixtral-8x7B-Instruct-v0.1`
+- `NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO`
+- `01-ai/Yi-1.5-34B-Chat`
+- `mistralai/Mistral-7B-Instruct-v0.2`
+- `microsoft/Phi-3-mini-4k-instruct`
+
+```sh
+aicommit2 config set HUGGINGFACE.model="mistralai/Mistral-7B-Instruct-v0.2"
+```
+
+##### Unsupported Options
+
+Huggingface does not support the following options in General Settings.
+
+- maxTokens
+- timeout
+- temperature
+- topP
+
+### Ollama
+
+| Setting | Description | Default |
+|--------------------|----------------------------------------------|------------------------|
+| `model` | Model(s) to use (comma-separated list) | - |
+| `host` | Ollama host URL | http://localhost:11434 |
+| `timeout` | Request timeout (milliseconds) | 100_000 (100sec) |
+
+##### OLLAMA.model
+
+The Ollama Model. Please see [a list of models available](https://ollama.com/library)
+
+```sh
+aicommit2 config set OLLAMA.model="llama3.1"
+aicommit2 config set OLLAMA.model="llama3,codellama" # for multiple models
+
+aicommit2 config add OLLAMA.model="gemma2" # Only Ollama.model can be added.
+```
+
+> OLLAMA.model is **string array** type to support multiple Ollama. Please see [this section](#loading-multiple-ollama-models).
+
+##### OLLAMA.host
+
+Default: `http://localhost:11434`
+
+The Ollama host
+
+```sh
+aicommit2 config set OLLAMA.host=<host>
+```
+
+##### OLLAMA.timeout
+
+Default: `100_000` (100 seconds)
+
+Request timeout for the Ollama.
+
+```sh
+aicommit2 config set OLLAMA.timeout=<timeout>
+```
+
+##### Unsupported Options
+
+Ollama does not support the following options in General Settings.
+
+- maxTokens
+
 ## Upgrading
 
 Check the installed version with:
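`OLLAMA.model` takes a comma-separated value that aicommit2 stores as a string array. A minimal sketch of splitting such a value the way a consuming script might:

```shell
# Split a comma-separated OLLAMA.model value into one model per line,
# as a script consuming the setting might do.
models="llama3,codellama"
echo "$models" | tr ',' '\n' | while read -r m; do
  echo "model: $m"
done
```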
@@ -834,7 +862,7 @@ Use curly braces `{}` to denote these placeholders for options. The following pl
 - [{type}](#type): The type of the commit message (**conventional** or **gitmoji**)
 - [{generate}](#generate): The number of commit messages to generate (**number**)
 
-
+#### Example Template
 
 Here's an example of how your custom template might look:
 
@@ -849,16 +877,16 @@ Remember to follow these guidelines:
 3. Explain the 'why' behind the change
 ```
 
-
+#### **Appended Text**
 
-Please note that the following text will **
+Please note that the following text will **ALWAYS** be appended to the end of your custom prompt:
 
 ```
-Provide your response as a JSON array containing exactly
-- "subject": The main commit message using the
+Lastly, Provide your response as a JSON array containing exactly {generate} object, each with the following keys:
+- "subject": The main commit message using the {type} style. It should be a concise summary of the changes.
 - "body": An optional detailed explanation of the changes. If not needed, use an empty string.
 - "footer": An optional footer for metadata like BREAKING CHANGES. If not needed, use an empty string.
-The array must always contain
+The array must always contain {generate} element, no more and no less.
 Example response format:
 [
   {
@@ -867,14 +895,12 @@ Example response format:
 "footer": ""
 }
 ]
-Ensure you generate exactly
+Ensure you generate exactly {generate} commit message, even if it requires creating slightly varied versions for similar changes.
 The response should be valid JSON that can be parsed without errors.
 ```
 
 This ensures that the output is consistently formatted as a JSON array, regardless of the custom template used.
 
-> NOTE: The template may vary depending on the generate and commit message type.
-
 ## Loading Multiple Ollama Models
 
 <img src="https://github.com/tak-bro/aicommit2/blob/main/img/ollama_parallel.gif?raw=true" alt="OLLAMA_PARALLEL" />
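The Appended Text section above forces the model to answer with a JSON array of `subject`/`body`/`footer` objects. A sketch of validating a sample response the way a consumer could (`python3` assumed available; the sample message is invented for illustration):

```shell
# Validate a sample model response against the subject/body/footer
# shape that the appended prompt text demands.
response='[{"subject": "feat: add includeBody option", "body": "", "footer": ""}]'
echo "$response" | python3 -c '
import json, sys
msgs = json.load(sys.stdin)
assert isinstance(msgs, list)
assert all({"subject", "body", "footer"} <= set(m) for m in msgs)
print(len(msgs), "valid commit message(s)")
'
```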