aicommit2 2.2.0 → 2.2.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +132 -81
  2. package/dist/cli.mjs +62 -66
  3. package/package.json +3 -3
package/README.md CHANGED
@@ -25,16 +25,14 @@ _aicommit2_ is a reactive CLI tool that automatically generates Git commit messa
 
  ## Key Features
 
- - **Multi-AI Support**: Integrates with OpenAI, Anthropic Claude, Google Gemini, Mistral AI, Cohere, Groq and more.
- - **Local Model Support**: Use local AI models via Ollama.
+ - **Multi-AI Support**: Integrates with OpenAI, Anthropic Claude, Google Gemini, Mistral AI, Cohere, Groq, Ollama and more.
+ - **OpenAI API Compatibility**: Support for any service that implements the OpenAI API specification.
  - **Reactive CLI**: Enables simultaneous requests to multiple AIs and selection of the best commit message.
  - **Git Hook Integration**: Can be used as a prepare-commit-msg hook.
  - **Custom Prompt**: Supports user-defined system prompt templates.
 
  ## Supported Providers
 
- ### Remote
-
  - [OpenAI](https://openai.com/)
  - [Anthropic Claude](https://console.anthropic.com/)
  - [Gemini](https://gemini.google.com/)
@@ -44,10 +42,8 @@ _aicommit2_ is a reactive CLI tool that automatically generates Git commit messa
  - [Perplexity](https://docs.perplexity.ai/)
  - [DeepSeek](https://www.deepseek.com/)
  - [Huggingface **(Unofficial)**](https://huggingface.co/chat/)
-
- ### Local
-
  - [Ollama](https://ollama.com/)
+ - [OpenAI API Compatibility](#openai-api-compatible-services)
 
  ## Setup
 
@@ -86,7 +82,7 @@ You can also use your model for free with [Ollama](https://ollama.com/) and it i
  ollama run llama3.2 # model you want use. ex) codellama, deepseek-coder
  ```
 
- 3. Set the host, model and numCtx. (The default numCtx value in Ollama is 2048. It is recommended to set it to 4096 or higher.)
+ 3. Set the host, model and numCtx. (The default numCtx value in Ollama is 2048. It is recommended to set it to `4096` or higher.)
  ```sh
  aicommit2 config set OLLAMA.host=<your host>
  aicommit2 config set OLLAMA.model=<your model>
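The hunk above truncates the settings block at the hunk boundary. Combining it with the Ollama settings documented later in this diff (the `numCtx` default of 2048 and the comma-separated `model` list), the full sequence sketches out as follows — the host and model values are placeholders:

```sh
# Placeholder values; OLLAMA.model accepts a comma-separated list of models
aicommit2 config set OLLAMA.host=http://localhost:11434
aicommit2 config set OLLAMA.model="llama3.2,codellama"
aicommit2 config set OLLAMA.numCtx=4096   # Ollama's default is 2048; 4096 or higher is recommended
```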
@@ -435,19 +431,20 @@ aicommit2 config set codeReviewPromptPath="/path/to/user/prompt.txt"
  ```
 
  ## Available General Settings by Model
- | | timeout | temperature | maxTokens | topP |
- |:--------------------:|:-------:|:-----------:|:---------:|:------:|
- | **OpenAI** | ✓ | ✓ | ✓ | ✓ |
- | **Anthropic Claude** | | ✓ | ✓ | ✓ |
- | **Gemini** | | ✓ | ✓ | ✓ |
- | **Mistral AI** | ✓ | ✓ | ✓ | ✓ |
- | **Codestral** | ✓ | ✓ | ✓ | ✓ |
- | **Cohere** | | ✓ | ✓ | ✓ |
- | **Groq** | ✓ | ✓ | ✓ | ✓ |
- | **Perplexity** | ✓ | ✓ | ✓ | ✓ |
- | **DeepSeek** | ✓ | ✓ | ✓ | ✓ |
- | **Huggingface** | | | | |
- | **Ollama** | ✓ | ✓ | | ✓ |
+ | | timeout | temperature | maxTokens | topP |
+ |:---------------------------:|:-------:|:-----------:|:---------:|:------:|
+ | **OpenAI** | ✓ | ✓ | ✓ | ✓ |
+ | **Anthropic Claude** | | ✓ | ✓ | ✓ |
+ | **Gemini** | | ✓ | ✓ | ✓ |
+ | **Mistral AI** | ✓ | ✓ | ✓ | ✓ |
+ | **Codestral** | ✓ | ✓ | ✓ | ✓ |
+ | **Cohere** | | ✓ | ✓ | ✓ |
+ | **Groq** | ✓ | ✓ | ✓ | ✓ |
+ | **Perplexity** | ✓ | ✓ | ✓ | ✓ |
+ | **DeepSeek** | ✓ | ✓ | ✓ | ✓ |
+ | **Huggingface** | | | | |
+ | **Ollama** | ✓ | ✓ | | ✓ |
+ | **OpenAI API-Compatible** | ✓ | ✓ | ✓ | ✓ |
 
  > All AI support the following options in General Settings.
  > - systemPrompt, systemPromptPath, codeReview, codeReviewPromptPath, exclude, type, locale, generate, logging, includeBody, maxLength
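Per the new table row, an OpenAI API-Compatible section supports all four general options. A sketch of applying them, using the `TOGETHER` section name from the compatible-services example later in this diff — the section name and numeric values are placeholders, not defaults:

```sh
# TOGETHER is a user-defined compatible section; values below are illustrative
aicommit2 config set TOGETHER.timeout=10000
aicommit2 config set TOGETHER.temperature=0.7
aicommit2 config set TOGETHER.maxTokens=1024
aicommit2 config set TOGETHER.topP=0.9
```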
@@ -514,10 +511,10 @@ aicommit2 config set OPENAI.topP=0.2
 
  ### Anthropic
 
- | Setting | Description | Default |
- |-------------|----------------|---------------------------|
- | `key` | API key | - |
- | `model` | Model to use | `claude-3-haiku-20240307` |
+ | Setting | Description | Default |
+ |-------------|----------------|-----------------------------|
+ | `key` | API key | - |
+ | `model` | Model to use | `claude-3-5-haiku-20241022` |
 
  ##### ANTHROPIC.key
 
@@ -525,14 +522,14 @@ The Anthropic API key. To get started with Anthropic Claude, request access to t
 
  ##### ANTHROPIC.model
 
- Default: `claude-3-haiku-20240307`
+ Default: `claude-3-5-haiku-20241022`
 
  Supported:
- - `claude-3-haiku-20240307`
- - `claude-3-sonnet-20240229`
- - `claude-3-opus-20240229`
- - `claude-3-5-sonnet-20240620`
  - `claude-3-5-sonnet-20241022`
+ - `claude-3-5-haiku-20241022`
+ - `claude-3-opus-20240229`
+ - `claude-3-sonnet-20240229`
+ - `claude-3-haiku-20240307`
 
  ```sh
  aicommit2 config set ANTHROPIC.model="claude-3-5-sonnet-20240620"
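Since this release moves the Anthropic default from `claude-3-haiku-20240307` to `claude-3-5-haiku-20241022`, users who pinned the old default can adopt the new one explicitly — a sketch using the same `config set` syntax shown in the hunk:

```sh
aicommit2 config set ANTHROPIC.model="claude-3-5-haiku-20241022"
```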
@@ -546,10 +543,10 @@ Anthropic does not support the following options in General Settings.
 
  ### Gemini
 
- | Setting | Description | Default |
- |--------------------|------------------------|-------------------|
- | `key` | API key | - |
- | `model` | Model to use | `gemini-1.5-pro` |
+ | Setting | Description | Default |
+ |--------------------|------------------------|------------------------|
+ | `key` | API key | - |
+ | `model` | Model to use | `gemini-2.0-flash-exp` |
 
  ##### GEMINI.key
 
@@ -561,15 +558,16 @@ aicommit2 config set GEMINI.key="your api key"
 
  ##### GEMINI.model
 
- Default: `gemini-1.5-pro`
+ Default: `gemini-2.0-flash-exp`
 
  Supported:
- - `gemini-1.5-pro`
+ - `gemini-2.0-flash-exp`
  - `gemini-1.5-flash`
- - `gemini-1.5-pro-exp-0801`
+ - `gemini-1.5-flash-8b`
+ - `gemini-1.5-pro`
 
  ```sh
- aicommit2 config set GEMINI.model="gemini-1.5-pro-exp-0801"
+ aicommit2 config set GEMINI.model="gemini-1.5-flash"
  ```
 
  ##### Unsupported Options
@@ -580,10 +578,10 @@ Gemini does not support the following options in General Settings.
 
  ### Mistral
 
- | Setting | Description | Default |
- |----------|------------------|----------------|
- | `key` | API key | - |
- | `model` | Model to use | `mistral-tiny` |
+ | Setting | Description | Default |
+ |----------|------------------|--------------------|
+ | `key` | API key | - |
+ | `model` | Model to use | `pixtral-12b-2409` |
 
  ##### MISTRAL.key
 
@@ -591,23 +589,16 @@ The Mistral API key. If you don't have one, please sign up and subscribe in [Mis
 
  ##### MISTRAL.model
 
- Default: `mistral-tiny`
+ Default: `pixtral-12b-2409`
 
  Supported:
- - `open-mistral-7b`
- - `mistral-tiny-2312`
- - `mistral-tiny`
- - `open-mixtral-8x7b`
- - `mistral-small-2312`
- - `mistral-small`
- - `mistral-small-2402`
- - `mistral-small-latest`
- - `mistral-medium-latest`
- - `mistral-medium-2312`
- - `mistral-medium`
+ - `codestral-latest`
  - `mistral-large-latest`
- - `mistral-large-2402`
+ - `pixtral-large-latest`
+ - `ministral-8b-latest`
+ - `mistral-small-latest`
  - `mistral-embed`
+ - `mistral-moderation-latest`
 
  ### Codestral
 
@@ -626,10 +617,10 @@ Default: `codestral-latest`
 
  Supported:
  - `codestral-latest`
- - `codestral-2405`
+ - `codestral-2501`
 
  ```sh
- aicommit2 config set CODESTRAL.model="codestral-2405"
+ aicommit2 config set CODESTRAL.model="codestral-2501"
  ```
 
  ### Cohere
@@ -648,10 +639,19 @@ The Cohere API key. If you don't have one, please sign up and get the API key in
  Default: `command`
 
  Supported models:
+ - `command-r7b-12-2024`
+ - `command-r-plus-08-2024`
+ - `command-r-plus-04-2024`
+ - `command-r-plus`
+ - `command-r-08-2024`
+ - `command-r-03-2024`
+ - `command-r`
  - `command`
  - `command-nightly`
  - `command-light`
  - `command-light-nightly`
+ - `c4ai-aya-expanse-8b`
+ - `c4ai-aya-expanse-32b`
 
  ```sh
  aicommit2 config set COHERE.model="command-nightly"
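To switch to one of the Cohere models newly listed in this hunk, the same `config set` pattern applies — a sketch using `command-r-plus` as an example:

```sh
aicommit2 config set COHERE.model="command-r-plus"
```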
@@ -706,10 +706,10 @@ aicommit2 config set GROQ.model="llama3-8b-8192"
 
  ### Perplexity
 
- | Setting | Description | Default |
- |----------|------------------|-----------------------------------|
- | `key` | API key | - |
- | `model` | Model to use | `llama-3.1-sonar-small-128k-chat` |
+ | Setting | Description | Default |
+ |----------|------------------|----------|
+ | `key` | API key | - |
+ | `model` | Model to use | `sonar` |
 
  ##### PERPLEXITY.key
 
@@ -717,22 +717,19 @@ The Perplexity API key. If you don't have one, please sign up and get the API ke
 
  ##### PERPLEXITY.model
 
- Default: `llama-3.1-sonar-small-128k-chat`
+ Default: `sonar`
 
  Supported:
- - `llama-3.1-sonar-small-128k-chat`
- - `llama-3.1-sonar-large-128k-chat`
- - `llama-3.1-sonar-large-128k-online`
+ - `sonar-pro`
+ - `sonar`
  - `llama-3.1-sonar-small-128k-online`
- - `llama-3.1-8b-instruct`
- - `llama-3.1-70b-instruct`
- - `llama-3.1-8b`
- - `llama-3.1-70b`
+ - `llama-3.1-sonar-large-128k-online`
+ - `llama-3.1-sonar-huge-128k-online`
 
  > The models mentioned above are subject to change.
 
  ```sh
- aicommit2 config set PERPLEXITY.model="llama-3.1-70b"
+ aicommit2 config set PERPLEXITY.model="sonar-pro"
  ```
 
  ### DeepSeek
@@ -740,7 +737,7 @@ aicommit2 config set PERPLEXITY.model="llama-3.1-70b"
  | Setting | Description | Default |
  |---------|------------------|--------------------|
  | `key` | API key | - |
- | `model` | Model to use | `deepseek-coder` |
+ | `model` | Model to use | `deepseek-chat` |
 
  ##### DEEPSEEK.key
 
@@ -748,14 +745,14 @@ The DeepSeek API key. If you don't have one, please sign up and subscribe in [De
 
  ##### DEEPSEEK.model
 
- Default: `deepseek-coder`
+ Default: `deepseek-chat`
 
  Supported:
- - `deepseek-coder`
  - `deepseek-chat`
+ - `deepseek-reasoner`
 
  ```sh
- aicommit2 config set DEEPSEEK.model="deepseek-chat"
+ aicommit2 config set DEEPSEEK.model="deepseek-reasoner"
  ```
 
  ### HuggingFace
@@ -803,13 +800,14 @@ Huggingface does not support the following options in General Settings.
 
  ### Ollama
 
- | Setting | Description | Default |
- |-----------|----------------------------------------|------------------------|
- | `model` | Model(s) to use (comma-separated list) | - |
- | `host` | Ollama host URL | http://localhost:11434 |
- | `auth` | Authentication type | Bearer |
- | `key` | Authentication key | - |
- | `timeout` | Request timeout (milliseconds) | 100_000 (100sec) |
+ | Setting | Description | Default |
+ |------------|-------------------------------------------------------------|------------------------|
+ | `model` | Model(s) to use (comma-separated list) | - |
+ | `host` | Ollama host URL | http://localhost:11434 |
+ | `auth` | Authentication type | Bearer |
+ | `key` | Authentication key | - |
+ | `timeout` | Request timeout (milliseconds) | 100_000 (100sec) |
+ | `numCtx` | The maximum number of tokens the model can process at once | 2048 |
 
  ##### OLLAMA.model
 
@@ -851,6 +849,7 @@ aicommit2 config set OLLAMA.key=<key>
  ```
 
  Few examples of authentication methods:
+
  | **Authentication Method** | **OLLAMA.auth** | **OLLAMA.key** |
  |---------------------------|------------------------------|---------------------------------------|
  | Bearer | `Bearer` | `<API key>` |
@@ -869,12 +868,64 @@ Request timeout for the Ollama.
  aicommit2 config set OLLAMA.timeout=<timeout>
  ```
 
+ ##### OLLAMA.numCtx
+
+ The maximum number of tokens the model can process at once, determining its context length and memory usage.
+ It is recommended to set it to 4096 or higher.
+
+ ```sh
+ aicommit2 config set OLLAMA.numCtx=4096
+ ```
+
  ##### Unsupported Options
 
  Ollama does not support the following options in General Settings.
 
  - maxTokens
 
+ ### OpenAI API-Compatible Services
+
+ You can configure any OpenAI API-compatible service by adding a configuration section with the `compatible=true` option. This allows you to use services that implement the OpenAI API specification.
+
+ ```sh
+ # together
+ aicommit2 config set TOGETHER.compatible=true
+ aicommit2 config set TOGETHER.url=https://api.together.xyz
+ aicommit2 config set TOGETHER.path=/v1
+ aicommit2 config set TOGETHER.model=meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo
+ aicommit2 config set TOGETHER.key="your-api-key"
+ ```
+
+ | Setting | Description | Required | Default |
+ |--------------|----------------------------------------|----------------------|---------|
+ | `compatible` | Enable OpenAI API compatibility mode | ✓ (**must be true**) | false |
+ | `url` | Base URL of the API endpoint | ✓ | - |
+ | `path` | API path for chat completions | | - |
+ | `key` | API key for authentication | ✓ | - |
+ | `model` | Model identifier to use | ✓ | - |
+
+ Example configuration:
+ ```ini
+ [TOGETHER]
+ compatible=true
+ key=<your-api-key>
+ url=https://api.together.xyz/v1
+ model=meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo
+
+ [GEMINI_COMPATIBILITY]
+ compatible=true
+ key=<your-api-key>
+ url=https://generativelanguage.googleapis.com
+ path=/v1beta/openai/
+ model=gemini-1.5-flash
+
+ [OLLAMA_COMPATIBILITY]
+ compatible=true
+ key=ollama
+ url=http://localhost:11434/v1
+ model=llama3.2
+ ```
+
  ## Watch Commit Mode
 
  ![watch-commit-gif](https://github.com/tak-bro/aicommit2/blob/main/img/watch-commit-min.gif?raw=true)