aicommit2 2.0.7 → 2.0.9
This diff represents the content of publicly available package versions released to one of the supported registries. It is provided for informational purposes only and reflects the changes between package versions as they appear in their respective public registries.
- package/README.md +144 -150
- package/dist/cli.mjs +67 -65
- package/package.json +5 -5
package/README.md (CHANGED)
````diff
@@ -179,7 +179,7 @@ aicommit2 config set OPENAI.generate=3 GEMINI.temperature=0.5
 
 #### How to Configure in detail
 
-1. Command-line arguments: **use the format** `--[
+1. Command-line arguments: **use the format** `--[Model].[Key]=value`
 ```sh
 aicommit2 --OPENAI.locale="jp" --GEMINI.temperatue="0.5"
 ```
````
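The `--[Model].[Key]=value` flag format documented in the hunk above can be illustrated with a small parser. This is an editorial sketch of the documented syntax only; `parse_model_flag` is a hypothetical helper name and not part of aicommit2's code.

```python
def parse_model_flag(arg: str) -> tuple[str, str, str]:
    """Split a '--MODEL.KEY=value' flag into (model, key, value).

    Hypothetical sketch of the documented flag format, not aicommit2's parser.
    """
    if not arg.startswith("--"):
        raise ValueError(f"not a flag: {arg}")
    # Everything after '--' is 'MODEL.KEY=value'; split on the first '=' and '.'.
    name, _, value = arg[2:].partition("=")
    model, _, key = name.partition(".")
    if not (model and key):
        raise ValueError(f"expected --Model.Key=value, got: {arg}")
    return model, key, value.strip('"')

print(parse_model_flag('--OPENAI.locale="jp"'))     # ('OPENAI', 'locale', 'jp')
print(parse_model_flag('--GEMINI.temperature=0.5')) # ('GEMINI', 'temperature', '0.5')
```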
````diff
@@ -217,21 +217,21 @@ model[]=codestral
 The following settings can be applied to most models, but support may vary.
 Please check the documentation for each specific model to confirm which settings are supported.
 
-| Setting | Description | Default
-
-| `systemPrompt` | System Prompt text | -
-| `systemPromptPath` | Path to system prompt file | -
-| `exclude` | Files to exclude from AI analysis | -
-| `type` | Type of commit message to generate | conventional
-| `locale` | Locale for the generated commit messages | en
-| `generate` | Number of commit messages to generate | 1
-| `logging` | Enable logging | true
-| `ignoreBody` | Whether the commit message includes body | true
-| `maxLength` | Maximum character length of the Subject of generated commit message | 50
-| `timeout` | Request timeout (milliseconds) | 10000
-| `temperature` | Model's creativity (0.0 - 2.0) | 0.7
-| `maxTokens` | Maximum number of tokens to generate | 1024
-| `topP` | Nucleus sampling |
+| Setting            | Description                                                         | Default      |
+|--------------------|---------------------------------------------------------------------|--------------|
+| `systemPrompt`     | System Prompt text                                                  | -            |
+| `systemPromptPath` | Path to system prompt file                                          | -            |
+| `exclude`          | Files to exclude from AI analysis                                   | -            |
+| `type`             | Type of commit message to generate                                  | conventional |
+| `locale`           | Locale for the generated commit messages                            | en           |
+| `generate`         | Number of commit messages to generate                               | 1            |
+| `logging`          | Enable logging                                                      | true         |
+| `ignoreBody`       | Whether the commit message includes body                            | true         |
+| `maxLength`        | Maximum character length of the Subject of generated commit message | 50           |
+| `timeout`          | Request timeout (milliseconds)                                      | 10000        |
+| `temperature`      | Model's creativity (0.0 - 2.0)                                      | 0.7          |
+| `maxTokens`        | Maximum number of tokens to generate                                | 1024         |
+| `topP`             | Nucleus sampling                                                    | 0.9          |
 
 > 👉 **Tip:** To set the General Settings for each model, use the following command.
 > ```shell
````
````diff
@@ -379,7 +379,7 @@ aicommit2 config set maxTokens=3000
 
 ##### topP
 
-Default: `
+Default: `0.9`
 
 Nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
 
````
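The `topP` setting described in the hunk above controls nucleus (top-p) sampling: only the smallest set of tokens whose cumulative probability reaches `top_p` is considered. A minimal sketch of what that selection means (illustrative only, not any model's internals):

```python
def nucleus(probs, top_p):
    """Indices of the smallest set of tokens whose cumulative probability
    mass reaches top_p (nucleus / top-p sampling). Illustrative sketch only.
    """
    # Consider tokens from most to least probable.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:   # stop once the nucleus covers top_p mass
            break
    return kept

print(nucleus([0.5, 0.3, 0.15, 0.05], 0.9))  # [0, 1, 2] — 0.95 mass covers 0.9
```

A lower `topP` (e.g. the `0.2` in the example command below) keeps only the most probable tokens, making output more deterministic.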
````diff
@@ -388,19 +388,19 @@ aicommit2 config set topP=0.2
 ```
 
 ## Available General Settings by Model
-|                      | timeout | temperature | maxTokens |
-
-|      **OpenAI**      |    ✓    |      ✓      |     ✓     |
-| **Anthropic Claude** |         |      ✓      |     ✓     |
-|      **Gemini**      |         |      ✓      |     ✓     |
-|    **Mistral AI**    |    ✓    |      ✓      |     ✓     |
-|    **Codestral**     |    ✓    |      ✓      |     ✓     |
-|      **Cohere**      |         |      ✓      |     ✓     |
-|       **Groq**       |    ✓    |      ✓      |     ✓     |
-|    **Perplexity**    |    ✓    |      ✓      |     ✓     |
-|     **DeepSeek**     |    ✓    |      ✓      |     ✓     |
-|   **Huggingface**    |         |             |           |
-|      **Ollama**      |    ✓    |      ✓      |           |
+|                      | timeout | temperature | maxTokens |  topP  |
+|:--------------------:|:-------:|:-----------:|:---------:|:------:|
+|      **OpenAI**      |    ✓    |      ✓      |     ✓     |   ✓    |
+| **Anthropic Claude** |         |      ✓      |     ✓     |   ✓    |
+|      **Gemini**      |         |      ✓      |     ✓     |   ✓    |
+|    **Mistral AI**    |    ✓    |      ✓      |     ✓     |   ✓    |
+|    **Codestral**     |    ✓    |      ✓      |     ✓     |   ✓    |
+|      **Cohere**      |         |      ✓      |     ✓     |   ✓    |
+|       **Groq**       |    ✓    |      ✓      |     ✓     |   ✓    |
+|    **Perplexity**    |    ✓    |      ✓      |     ✓     |   ✓    |
+|     **DeepSeek**     |    ✓    |      ✓      |     ✓     |   ✓    |
+|   **Huggingface**    |         |             |           |        |
+|      **Ollama**      |    ✓    |      ✓      |           |   ✓    |
 
 > All AI support the following options in General Settings.
 > - systemPrompt, systemPromptPath, exclude, type, locale, generate, logging, ignoreBody, maxLength
````
````diff
@@ -455,7 +455,7 @@ The OpenAI Path.
 
 ##### OPENAI.topP
 
-Default: `
+Default: `0.9`
 
 The `top_p` parameter selects tokens whose combined probability meets a threshold. Please see [detail](https://platform.openai.com/docs/api-reference/chat/create#chat-create-top_p).
 
````
````diff
@@ -465,96 +465,36 @@ aicommit2 config set OPENAI.topP=0.2
 
 > NOTE: If `topP` is less than 0, it does not deliver the `top_p` parameter to the request.
 
-###
-
-| Setting            | Description                                  | Default                |
-|--------------------|----------------------------------------------|------------------------|
-| `model`            | Model(s) to use (comma-separated list)       | -                      |
-| `host`             | Ollama host URL                              | http://localhost:11434 |
-| `timeout`          | Request timeout (milliseconds)               | 100_000 (100sec)       |
-
-##### OLLAMA.model
-
-The Ollama Model. Please see [a list of models available](https://ollama.com/library)
-
-```sh
-aicommit2 config set OLLAMA.model="llama3.1"
-aicommit2 config set OLLAMA.model="llama3,codellama" # for multiple models
-
-aicommit2 config add OLLAMA.model="gemma2" # Only Ollama.model can be added.
-```
-
-> OLLAMA.model is **string array** type to support multiple Ollama. Please see [this section](#loading-multiple-ollama-models).
-
-##### OLLAMA.host
-
-Default: `http://localhost:11434`
-
-The Ollama host
-
-```sh
-aicommit2 config set OLLAMA.host=<host>
-```
-
-##### OLLAMA.timeout
-
-Default: `100_000` (100 seconds)
-
-Request timeout for the Ollama.
-
-```sh
-aicommit2 config set OLLAMA.timeout=<timeout>
-```
-
-##### Unsupported Options
-
-Ollama does not support the following options in General Settings.
-
-- maxTokens
-- topP
-
-### HuggingFace
-
-| Setting            | Description                | Default                                |
-|--------------------|----------------------------|----------------------------------------|
-| `cookie`           | Authentication cookie      | -                                      |
-| `model`            | Model to use               | `CohereForAI/c4ai-command-r-plus`      |
+### Anthropic
 
-
+| Setting     | Description    | Default                   |
+|-------------|----------------|---------------------------|
+| `key`       | API key        | -                         |
+| `model`     | Model to use   | `claude-3-haiku-20240307` |
 
-
+##### ANTHROPIC.key
 
-
-# Please be cautious of Escape characters(\", \') in browser cookie string
-aicommit2 config set HUGGINGFACE.cookie="your-cooke"
-```
+The Anthropic API key. To get started with Anthropic Claude, request access to their API at [anthropic.com/earlyaccess](https://www.anthropic.com/earlyaccess).
 
-#####
+##### ANTHROPIC.model
 
-Default: `
+Default: `claude-3-haiku-20240307`
 
 Supported:
-- `
-- `
-- `
-- `
-- `NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO`
-- `01-ai/Yi-1.5-34B-Chat`
-- `mistralai/Mistral-7B-Instruct-v0.2`
-- `microsoft/Phi-3-mini-4k-instruct`
+- `claude-3-haiku-20240307`
+- `claude-3-sonnet-20240229`
+- `claude-3-opus-20240229`
+- `claude-3-5-sonnet-20240620`
 
 ```sh
-aicommit2 config set
+aicommit2 config set ANTHROPIC.model="claude-3-5-sonnet-20240620"
 ```
 
 ##### Unsupported Options
 
-
+Anthropic does not support the following options in General Settings.
 
-- maxTokens
 - timeout
-- temperature
-- topP
 
 ### Gemini
 
````
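The NOTE carried as context in the hunk above documents an opt-out rule: a negative `topP` means the `top_p` parameter is simply not sent with the request. A minimal sketch of that rule (hypothetical helper name, not aicommit2's code):

```python
def build_request_params(temperature: float, top_p: float) -> dict:
    """Assemble sampling parameters, omitting top_p when it is negative.

    Hypothetical sketch of the documented opt-out rule, not aicommit2's code.
    """
    params = {"temperature": temperature}
    if top_p >= 0:
        # Only a non-negative topP is forwarded to the provider API.
        params["top_p"] = top_p
    return params

print(build_request_params(0.7, 0.9))  # {'temperature': 0.7, 'top_p': 0.9}
print(build_request_params(0.7, -1))   # {'temperature': 0.7}
```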
````diff
@@ -589,39 +529,6 @@ aicommit2 config set GEMINI.model="gemini-1.5-pro-exp-0801"
 Gemini does not support the following options in General Settings.
 
 - timeout
-- topP
-
-### Anthropic
-
-| Setting     | Description    | Default                   |
-|-------------|----------------|---------------------------|
-| `key`       | API key        | -                         |
-| `model`     | Model to use   | `claude-3-haiku-20240307` |
-
-##### ANTHROPIC.key
-
-The Anthropic API key. To get started with Anthropic Claude, request access to their API at [anthropic.com/earlyaccess](https://www.anthropic.com/earlyaccess).
-
-##### ANTHROPIC.model
-
-Default: `claude-3-haiku-20240307`
-
-Supported:
-- `claude-3-haiku-20240307`
-- `claude-3-sonnet-20240229`
-- `claude-3-opus-20240229`
-- `claude-3-5-sonnet-20240620`
-
-```sh
-aicommit2 config set ANTHROPIC.model="claude-3-5-sonnet-20240620"
-```
-
-##### Unsupported Options
-
-Anthropic does not support the following options in General Settings.
-
-- timeout
-- topP
 
 ### Mistral
 
````
````diff
@@ -677,7 +584,7 @@ Supported:
 aicommit2 config set CODESTRAL.model="codestral-2405"
 ```
 
-
+### Cohere
 
 | Setting | Description | Default |
 |--------------------|--------------|-------------|
````
````diff
@@ -707,7 +614,6 @@ aicommit2 config set COHERE.model="command-nightly"
 Cohere does not support the following options in General Settings.
 
 - timeout
-- topP
 
 ### Groq
 
````
````diff
@@ -794,6 +700,96 @@ Supported:
 aicommit2 config set DEEPSEEK.model="deepseek-chat"
 ```
 
+### HuggingFace
+
+| Setting            | Description                | Default                                |
+|--------------------|----------------------------|----------------------------------------|
+| `cookie`           | Authentication cookie      | -                                      |
+| `model`            | Model to use               | `CohereForAI/c4ai-command-r-plus`      |
+
+##### HUGGINGFACE.cookie
+
+The [Huggingface Chat](https://huggingface.co/chat/) Cookie. Please check [how to get cookie](https://github.com/tak-bro/aicommit2?tab=readme-ov-file#how-to-get-cookieunofficial-api)
+
+```sh
+# Please be cautious of Escape characters(\", \') in browser cookie string
+aicommit2 config set HUGGINGFACE.cookie="your-cooke"
+```
+
+##### HUGGINGFACE.model
+
+Default: `CohereForAI/c4ai-command-r-plus`
+
+Supported:
+- `CohereForAI/c4ai-command-r-plus`
+- `meta-llama/Meta-Llama-3-70B-Instruct`
+- `HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1`
+- `mistralai/Mixtral-8x7B-Instruct-v0.1`
+- `NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO`
+- `01-ai/Yi-1.5-34B-Chat`
+- `mistralai/Mistral-7B-Instruct-v0.2`
+- `microsoft/Phi-3-mini-4k-instruct`
+
+```sh
+aicommit2 config set HUGGINGFACE.model="mistralai/Mistral-7B-Instruct-v0.2"
+```
+
+##### Unsupported Options
+
+Huggingface does not support the following options in General Settings.
+
+- maxTokens
+- timeout
+- temperature
+- topP
+
+### Ollama
+
+| Setting            | Description                                  | Default                |
+|--------------------|----------------------------------------------|------------------------|
+| `model`            | Model(s) to use (comma-separated list)       | -                      |
+| `host`             | Ollama host URL                              | http://localhost:11434 |
+| `timeout`          | Request timeout (milliseconds)               | 100_000 (100sec)       |
+
+##### OLLAMA.model
+
+The Ollama Model. Please see [a list of models available](https://ollama.com/library)
+
+```sh
+aicommit2 config set OLLAMA.model="llama3.1"
+aicommit2 config set OLLAMA.model="llama3,codellama" # for multiple models
+
+aicommit2 config add OLLAMA.model="gemma2" # Only Ollama.model can be added.
+```
+
+> OLLAMA.model is **string array** type to support multiple Ollama. Please see [this section](#loading-multiple-ollama-models).
+
+##### OLLAMA.host
+
+Default: `http://localhost:11434`
+
+The Ollama host
+
+```sh
+aicommit2 config set OLLAMA.host=<host>
+```
+
+##### OLLAMA.timeout
+
+Default: `100_000` (100 seconds)
+
+Request timeout for the Ollama.
+
+```sh
+aicommit2 config set OLLAMA.timeout=<timeout>
+```
+
+##### Unsupported Options
+
+Ollama does not support the following options in General Settings.
+
+- maxTokens
+
 ## Upgrading
 
 Check the installed version with:
````
````diff
@@ -834,7 +830,7 @@ Use curly braces `{}` to denote these placeholders for options. The following pl
 - [{type}](#type): The type of the commit message (**conventional** or **gitmoji**)
 - [{generate}](#generate): The number of commit messages to generate (**number**)
 
-
+#### Example Template
 
 Here's an example of how your custom template might look:
 
````
````diff
@@ -849,16 +845,16 @@ Remember to follow these guidelines:
 3. Explain the 'why' behind the change
 ```
 
-
+#### **Appended Text**
 
-Please note that the following text will **
+Please note that the following text will **ALWAYS** be appended to the end of your custom prompt:
 
 ```
-Provide your response as a JSON array containing exactly
-- "subject": The main commit message using the
+Lastly, Provide your response as a JSON array containing exactly {generate} object, each with the following keys:
+- "subject": The main commit message using the {type} style. It should be a concise summary of the changes.
 - "body": An optional detailed explanation of the changes. If not needed, use an empty string.
 - "footer": An optional footer for metadata like BREAKING CHANGES. If not needed, use an empty string.
-The array must always contain
+The array must always contain {generate} element, no more and no less.
 Example response format:
 [
 {
````
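The appended prompt text shown in this hunk pins the model to a fixed output contract: a JSON array of exactly `{generate}` objects, each with `subject`, `body`, and `footer` keys. A minimal validator for that contract can be sketched as follows; `validate_commit_messages` is a hypothetical helper for illustration, not aicommit2's actual parser.

```python
import json

def validate_commit_messages(raw: str, generate: int) -> list:
    """Check a model response against the documented contract: a JSON array
    of exactly `generate` objects with subject/body/footer keys.

    Editorial sketch of the documented format, not aicommit2's code.
    """
    data = json.loads(raw)
    assert isinstance(data, list) and len(data) == generate, "wrong element count"
    for item in data:
        assert set(item) == {"subject", "body", "footer"}, "unexpected keys"
        assert item["subject"], "subject must not be empty"
    return data

raw = '[{"subject": "fix: handle empty diff", "body": "", "footer": ""}]'
print(validate_commit_messages(raw, 1)[0]["subject"])  # fix: handle empty diff
```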
````diff
@@ -867,14 +863,12 @@ Example response format:
 "footer": ""
 }
 ]
-Ensure you generate exactly
+Ensure you generate exactly {generate} commit message, even if it requires creating slightly varied versions for similar changes.
 The response should be valid JSON that can be parsed without errors.
 ```
 
 This ensures that the output is consistently formatted as a JSON array, regardless of the custom template used.
 
-> NOTE: The template may vary depending on the generate and commit message type.
-
 ## Loading Multiple Ollama Models
 
 <img src="https://github.com/tak-bro/aicommit2/blob/main/img/ollama_parallel.gif?raw=true" alt="OLLAMA_PARALLEL" />
````