aicommit2 1.7.5 → 1.8.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +189 -141
- package/dist/cli.mjs +53 -50
- package/package.json +3 -3
package/README.md
CHANGED
@@ -23,12 +23,14 @@ AICommit2 streamlines interactions with various AI, enabling users to request mu
 The core functionalities and architecture of this project are inspired by [AI Commits](https://github.com/Nutlope/aicommits).

 ## Features
-- **Generate
+- **Generate Messages**: Quickly generate commit messages based on AI predictions.
 - **Multiple AI Support**: Utilize various AI providers simultaneously.
 - **Local Model**: Integrate with the local Ollama model for offline use.

 ## Supported Providers

+### Remote
+
 - [OpenAI](https://openai.com/)
 - [Anthropic Claude](https://console.anthropic.com/)
 - [Gemini](https://gemini.google.com/)
@@ -36,7 +38,7 @@ The core functionalities and architecture of this project are inspired by [AI Co
 - [Huggingface **(Unofficial)**](https://huggingface.co/chat/)
 - [Clova X **(Unofficial)**](https://clova-x.naver.com/)

-
+### Local

 - [Ollama](https://ollama.com/)

@@ -63,20 +65,41 @@ npm install -g aicommit2

 3. Set API keys you intend to use:

+It is not necessary to set all keys. **But at least one key must be set up.**
+
+- OpenAI
+```sh
+aicommit2 config set OPENAI_KEY=<your key>
+```
+
+- Anthropic Claude
+```sh
+aicommit2 config set ANTHROPIC_KEY=<your key>
+```
+
+- Gemini
+```sh
+aicommit2 config set GEMINI_KEY=<your key>
+```
+
+- Mistral AI
 ```sh
-aicommit2 config set
-
-aicommit2 config set GEMINI_KEY=<your key> # Gemini
-aicommit2 config set MISTRAL_KEY=<your key> # Mistral AI
+aicommit2 config set MISTRAL_KEY=<your key>
+```

+- Huggingface Chat
+```shell
 # Please be cautious of Escape characters(\", \') in browser cookie string
-aicommit2 config set HUGGING_COOKIE="<your browser cookie>"
-aicommit2 config set CLOVAX_COOKIE="<your browser cookie>" # Clova X
+aicommit2 config set HUGGING_COOKIE="<your browser cookie>"
 ```

-
+- Clova X
+```shell
+# Please be cautious of Escape characters(\", \') in browser cookie string
+aicommit2 config set CLOVAX_COOKIE="<your browser cookie>"
+```

-
+This will create a `.aicommit2` file in your home directory.

 4. Run aicommits with your staged in git repository:
 ```shell
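The setup hunk above ends by noting that the `config set` commands all write to a `.aicommit2` file in the home directory. A minimal sketch of the kind of key=value contents those commands would produce (the exact file layout is an assumption; a temp file stands in for the real one):

```shell
# Hypothetical ~/.aicommit2 contents after setting two provider keys
# (written to a temp file here so nothing real is touched).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
OPENAI_KEY=<your key>
GEMINI_KEY=<your key>
EOF
# Each provider key is assumed to be a plain key=value line:
grep -c '=' "$cfg"
```

Because the file is plain text, inspecting or backing it up is just ordinary file handling.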
@@ -93,7 +116,7 @@ You can also use your model for free with [Ollama](https://ollama.com/).
 2. Start it with your model

 ```shell
-ollama run
+ollama run codellama # model you want use. ex) llama2, codellama
 ```

 3. Set the model and host
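Step 3 ("Set the model and host") pairs with the `OLLAMA_HOST` default of `http://localhost:11434` from the options reference later in this diff. A tiny sketch of that fallback behavior (the parameter expansion is illustrative, not aicommit2's actual code):

```shell
# Fall back to the documented default when OLLAMA_HOST is not configured.
unset OLLAMA_HOST
host="${OLLAMA_HOST:-http://localhost:11434}"
echo "$host"
```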
@@ -274,39 +297,40 @@ You can also set multiple configuration options at once by separating them with
 aicommit2 config set OPENAI_KEY=<your-api-key> generate=3 locale=en
 ```

-
-
-| Option | Default
-
-| `OPENAI_KEY` | N/A
-| `OPENAI_MODEL` | `gpt-3.5-turbo`
-| `OPENAI_URL` | `https://api.openai.com`
-| `OPENAI_PATH` | `/v1/chat/completions`
-| `ANTHROPIC_KEY` | N/A
-| `ANTHROPIC_MODEL` | `claude-2.1`
-| `GEMINI_KEY` | N/A
-| `GEMINI_MODEL` | `gemini-pro`
-| `MISTRAL_KEY` | N/A
-| `MISTRAL_MODEL` | `mistral-tiny`
-| `HUGGING_COOKIE` | N/A
-| `HUGGING_MODEL` | `mistralai/Mixtral-8x7B-Instruct-v0.1`
-| `CLOVAX_COOKIE` | N/A
-| `OLLAMA_MODEL` | N/A
-| `OLLAMA_HOST` | `http://localhost:11434`
-| `OLLAMA_TIMEOUT` | `100000` ms
-| `
-| `
-| `
-| `
-| `
-| `
-| `max-
-| `
-| `
+## Options
+
+| Option | Default | Description |
+|-------------------|----------------------------------------|-------------------------------------------------------------------------------------------------------------------------|
+| `OPENAI_KEY` | N/A | The OpenAI API key |
+| `OPENAI_MODEL` | `gpt-3.5-turbo` | The OpenAI Model to use |
+| `OPENAI_URL` | `https://api.openai.com` | The OpenAI URL |
+| `OPENAI_PATH` | `/v1/chat/completions` | The OpenAI request pathname |
+| `ANTHROPIC_KEY` | N/A | The Anthropic API key |
+| `ANTHROPIC_MODEL` | `claude-2.1` | The Anthropic Model to use |
+| `GEMINI_KEY` | N/A | The Gemini API key |
+| `GEMINI_MODEL` | `gemini-pro` | The Gemini Model |
+| `MISTRAL_KEY` | N/A | The Mistral API key |
+| `MISTRAL_MODEL` | `mistral-tiny` | The Mistral Model to use |
+| `HUGGING_COOKIE` | N/A | The HuggingFace Cookie string |
+| `HUGGING_MODEL` | `mistralai/Mixtral-8x7B-Instruct-v0.1` | The HuggingFace Model to use |
+| `CLOVAX_COOKIE` | N/A | The Clova X Cookie string |
+| `OLLAMA_MODEL` | N/A | The Ollama Model. It should be downloaded your local |
+| `OLLAMA_HOST` | `http://localhost:11434` | The Ollama Host |
+| `OLLAMA_TIMEOUT` | `100000` ms | Request timeout for the Ollama |
+| `OLLAMA_STREAM` | N/A | Whether to make stream requests (**experimental feature**) |
+| `locale` | `en` | Locale for the generated commit messages |
+| `generate` | `1` | Number of commit messages to generate |
+| `type` | `conventional` | Type of commit message to generate |
+| `proxy` | N/A | Set a HTTP/HTTPS proxy to use for requests(only **OpenAI**) |
+| `timeout` | `10000` ms | Network request timeout |
+| `max-length` | `50` | Maximum character length of the generated commit message |
+| `max-tokens` | `200` | The maximum number of tokens that the AI models can generate (for **Open AI, Anthropic, Gemini, Mistral**) |
+| `temperature` | `0.7` | The temperature (0.0-2.0) is used to control the randomness of the output (for **Open AI, Anthropic, Gemini, Mistral**) |
+| `prompt` | N/A | Additional prompt to let users fine-tune provided prompt |

 > **Currently, options are set universally. However, there are plans to develop the ability to set individual options in the future.**

-
+### Available Options by Model
 | | locale | generate | type | proxy | timeout | max-length | max-tokens | temperature | prompt |
 |:--------------------:|:------:|:--------:|:-----:|:-----:|:----------------------:|:-----------:|:----------:|:-----------:|:------:|
 | **OpenAI** | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
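For a feel of the `max-length=50` default in the new table: 50 characters is roughly one conventional-commit subject line. A sketch that measures a capped message (the truncation below is illustrative; aicommit2 asks the model to respect the limit rather than cutting text itself):

```shell
# A 58-character subject line, capped at the max-length default of 50.
msg="feat: add multi-provider commit message generation support"
capped=$(printf '%.50s' "$msg")   # POSIX printf precision truncates strings
echo "${#capped}"
```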
@@ -317,11 +341,105 @@ aicommit2 config set OPENAI_KEY=<your-api-key> generate=3 locale=en
 | **Clova X** | ✓ | ✓ | ✓ | | ✓ | ✓ | | | ✓ |
 | **Ollama** | ✓ | ✓ | ✓ | | ✓<br/>(OLLAMA_TIMEOUT) | ✓ | | ✓ | ✓ |

-
+
+### Common Options
+
+##### locale
+
+Default: `en`
+
+The locale to use for the generated commit messages. Consult the list of codes in: https://wikipedia.org/wiki/List_of_ISO_639_language_codes.
+
+##### generate
+
+Default: `1`
+
+The number of commit messages to generate to pick from.
+
+Note, this will use more tokens as it generates more results.
+
+##### proxy
+
+Set a HTTP/HTTPS proxy to use for requests.
+
+To clear the proxy option, you can use the command (note the empty value after the equals sign):
+
+> **Only supported within the OpenAI**
+
+```sh
+aicommit2 config set proxy=
+```
+
+##### timeout
+
+The timeout for network requests to the OpenAI API in milliseconds.
+
+Default: `10000` (10 seconds)
+
+```sh
+aicommit2 config set timeout=20000 # 20s
+```
+
+##### max-length
+
+The maximum character length of the generated commit message.
+
+Default: `50`
+
+```sh
+aicommit2 config set max-length=100
+```
+
+##### type
+
+Default: `conventional`
+
+Supported: `conventional`, `gitmoji`
+
+The type of commit message to generate. Set this to "conventional" to generate commit messages that follow the Conventional Commits specification:
+
+```sh
+aicommit2 config set type=conventional
+```
+
+You can clear this option by setting it to an empty string:
+
+```sh
+aicommit2 config set type=
+```
+
+##### max-tokens
+The maximum number of tokens that the AI models can generate.
+
+Default: `200`
+
+```sh
+aicommit2 config set max-tokens=1000
+```
+
+##### temperature
+The temperature (0.0-2.0) is used to control the randomness of the output
+
+Default: `0.7`
+
+```sh
+aicommit2 config set temperature=0
+```
+
+##### prompt
+Additional prompt to let users fine-tune provided prompt. Users provide extra instructions to AI and can guide how commit messages should look like.
+
+```sh
+aicommit2 config set prompt="Do not mention config changes"
+```
+
+### OPEN AI
+
+##### OPENAI_KEY

 The OpenAI API key. You can retrieve it from [OpenAI API Keys page](https://platform.openai.com/account/api-keys).

-
+##### OPENAI_MODEL

 Default: `gpt-3.5-turbo`

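Both `proxy` and `type` in the added section above are cleared by assigning an empty value. A sketch of that clear-by-empty-value idea against a throwaway key=value file (the storage format is an assumption about how `.aicommit2` keeps options):

```shell
# Simulate `aicommit2 config set type=` clearing a previously set option.
cfg=$(mktemp)
printf 'type=gitmoji\nlocale=en\n' > "$cfg"
# An empty value after '=' leaves the key present but unset:
sed 's/^type=.*/type=/' "$cfg" > "$cfg.new" && mv "$cfg.new" "$cfg"
grep '^type=' "$cfg"
```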
@@ -345,11 +463,14 @@ Default: `/v1/chat/completions`

 The OpenAI Path.

-
+
+### Anthropic Claude
+
+##### ANTHROPIC_KEY

 The Anthropic API key. To get started with Anthropic Claude, request access to their API at [anthropic.com/earlyaccess](https://www.anthropic.com/earlyaccess).

-
+##### ANTHROPIC_MODEL

 Default: `claude-2.1`

@@ -362,11 +483,13 @@ Supported:
 aicommit2 config set ANTHROPIC_MODEL=claude-instant-1.2
 ```

-
+### GEMINI
+
+##### GEMINI_KEY

 The Gemini API key. If you don't have one, create a key in [Google AI Studio](https://aistudio.google.com/app/apikey).

-
+##### GEMINI_MODEL

 Default: `gemini-pro`

@@ -375,11 +498,13 @@ Supported:

 > Currently supporting only one model, but as Gemini starts supporting other models, it will be updated.

-
+### MISTRAL
+
+##### MISTRAL_KEY

 The Mistral API key. If you don't have one, please sign up and subscribe in [Mistral Console](https://console.mistral.ai/).

-
+##### MISTRAL_MODEL

 Default: `mistral-tiny`

@@ -401,11 +526,13 @@ Supported:

 > The models mentioned above are subject to change.

-
+### HuggingFace Chat
+
+##### HUGGING_COOKIE

 The [Huggingface Chat](https://huggingface.co/chat/) Cookie. Please check [how to get cookie](https://github.com/tak-bro/aicommit2?tab=readme-ov-file#how-to-get-cookieunofficial-api)

-
+##### HUGGING_MODEL

 Default: `mistralai/Mixtral-8x7B-Instruct-v0.1`

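The cookie-based settings above repeat a warning about escape characters. In practice this is shell quoting: single quotes preserve a cookie's inner double quotes verbatim (generic shell behavior, not specific to aicommit2):

```shell
# A cookie fragment containing double quotes survives single-quoting as-is;
# double-quoting the same string would require backslash-escaping each ".
cookie='token="abc"; session=xyz'
echo "$cookie"
```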
@@ -419,114 +546,35 @@ Supported:

 > The models mentioned above are subject to change.

-
+### Clova X
+
+##### CLOVAX_COOKIE

 The [Clova X](https://clova-x.naver.com/) Cookie. Please check [how to get cookie](https://github.com/tak-bro/aicommit2?tab=readme-ov-file#how-to-get-cookieunofficial-api)

-
+### Ollama
+
+##### OLLAMA_MODEL

 The Ollama Model. Please see [a list of models available](https://ollama.com/library)

-
+##### OLLAMA_HOST

 Default: `http://localhost:11434`

 The Ollama host

-
+##### OLLAMA_TIMEOUT

 Default: `100000` (100 seconds)

 Request timeout for the Ollama. Default OLLAMA_TIMEOUT is **100 seconds** because it can take a long time to run locally.

-
-
-Default: `en`
-
-The locale to use for the generated commit messages. Consult the list of codes in: https://wikipedia.org/wiki/List_of_ISO_639_language_codes.
-
-#### generate
-
-Default: `1`
-
-The number of commit messages to generate to pick from.
-
-Note, this will use more tokens as it generates more results.
-
-#### proxy
-
-Set a HTTP/HTTPS proxy to use for requests.
-
-To clear the proxy option, you can use the command (note the empty value after the equals sign):
-
-> **Only supported within the OpenAI**
-
-```sh
-aicommit2 config set proxy=
-```
-
-#### timeout
-
-The timeout for network requests to the OpenAI API in milliseconds.
-
-Default: `10000` (10 seconds)
-
-```sh
-aicommit2 config set timeout=20000 # 20s
-```
-
-#### max-length
-
-The maximum character length of the generated commit message.
-
-Default: `50`
-
-```sh
-aicommit2 config set max-length=100
-```
-
-#### type
-
-Default: `conventional`
-
-Supported: `conventional`, `gitmoji`
-
-The type of commit message to generate. Set this to "conventional" to generate commit messages that follow the Conventional Commits specification:
-
-```sh
-aicommit2 config set type=conventional
-```
-
-You can clear this option by setting it to an empty string:
-
-```sh
-aicommit2 config set type=
-```
-
-#### max-tokens
-The maximum number of tokens that the AI models can generate.
-
-Default: `200`
-
-```sh
-aicommit2 config set max-tokens=1000
-```
+##### OLLAMA_STREAM

-
-The temperature (0.0-2.0) is used to control the randomness of the output
+Default: `false`

-
-
-```sh
-aicommit2 config set temperature=0
-```
-
-#### prompt
-Additional prompt to let users fine-tune provided prompt. Users provide extra instructions to AI and can guide how commit messages should look like.
-
-```sh
-aicommit2 config set prompt="Do not mention config changes"
-```
+Determines whether the application will make stream requests to Ollama. This feature is experimental and may not be fully stable.
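The `OLLAMA_STREAM` text above corresponds to the `stream` field in Ollama's documented `/api/generate` request body. A sketch of the payload the flag would toggle (how aicommit2 builds the request internally is an assumption; no request is actually sent here):

```shell
# Build the JSON body for Ollama's /api/generate endpoint with
# streaming disabled, mirroring OLLAMA_STREAM's default of false.
stream=false
payload=$(printf '{"model":"codellama","prompt":"Summarize this diff","stream":%s}' "$stream")
echo "$payload"
```

With `"stream": true`, Ollama returns incremental chunks instead of a single response object.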

 ## Upgrading

580
|
|