@negoziator/ai-commit 2.9.2 → 2.13.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -15,8 +15,15 @@
  - [Usage](#usage)
  - [CLI Mode](#cli-mode)
  - [Git Hook Integration](#git-hook-integration)
+ - [LLM Providers](#llm-providers)
+ - [OpenAI (Default)](#openai-default)
+ - [Anthropic Claude](#anthropic-claude)
+ - [Azure OpenAI](#azure-openai)
+ - [Ollama (Local)](#ollama-local)
+ - [Custom/RAG](#customrag)
  - [Configuration](#configuration)
  - [Options](#options)
+ - [Environment Variables (.env)](#environment-variables-env)
  - [Project-Specific Configuration](#project-specific-configuration)
  - [Maintainers](#maintainers)
  - [Contributing](#contributing)
@@ -29,6 +36,7 @@
  - 🌐 **Multiple Languages**: Supports commit messages in different locales
  - ⚙️ **Customizable**: Configure the AI model, message length, and other parameters
  - 📝 **Project Context**: Add project-specific context to improve message relevance
+ - 🔌 **Multiple LLM Providers**: Choose from OpenAI, Anthropic, Azure OpenAI, Ollama, or custom endpoints

  ## Setup
  > A minimum of Node v20 is required. Check your Node.js version with `node --version`.
@@ -39,17 +47,37 @@
  npm install -g @negoziator/ai-commit
  ```

- 2. **Retrieve your API key from [OpenAI](https://platform.openai.com/account/api-keys)**
+ 2. **Choose and configure your LLM provider:**

- > Note: This requires an OpenAI account. If you don't have one, you can sign up for a free trial.
+ **Option A: OpenAI (Default)**
+ ```sh
+ aicommit config set OPENAI_KEY=<your-api-key>
+ ```
+ Get your API key from [OpenAI Platform](https://platform.openai.com/account/api-keys)
+
+ **Option B: Anthropic Claude**
+ ```sh
+ aicommit config set provider=anthropic
+ aicommit config set ANTHROPIC_KEY=<your-api-key>
+ ```
+ Get your API key from [Anthropic Console](https://console.anthropic.com/)

- 3. **Set the key so aicommit can use it:**
+ **Option C: Ollama (Local, Free)**
+ ```sh
+ # Install Ollama from https://ollama.com, then:
+ ollama pull llama3.2
+ aicommit config set provider=ollama
+ aicommit config set model=llama3.2
+ ```

+ **Option D: Azure OpenAI**
  ```sh
- aicommit config set OPENAI_KEY=<your token>
+ aicommit config set provider=azure-openai
+ aicommit config set AZURE_OPENAI_KEY=<your-key>
+ aicommit config set AZURE_ENDPOINT=<your-endpoint>
  ```

- This will create a `.aicommit` file in your home directory.
+ See the [LLM Providers](#llm-providers) section for more options and details.

  ### Upgrading

@@ -98,6 +126,125 @@ To uninstall the hook:
  aicommit hook uninstall
  ```

+ ## LLM Providers
+
+ AI-Commit supports multiple LLM providers, allowing you to choose the best option for your needs.
+
+ ### OpenAI (Default)
+
+ The default provider using OpenAI's GPT models.
+
+ **Setup:**
+ ```sh
+ aicommit config set OPENAI_KEY=<your-api-key>
+ ```
+
+ **Recommended models:**
+ - `gpt-4o-mini` (default, fast and cost-effective)
+ - `gpt-4o` (more capable, higher cost)
+ - `gpt-4-turbo`
+
+ **Get your API key:** [OpenAI Platform](https://platform.openai.com/account/api-keys)
+
+ ---
+
+ ### Anthropic Claude
+
+ Use Anthropic's Claude models as an alternative to OpenAI.
+
+ **Setup:**
+ ```sh
+ aicommit config set provider=anthropic
+ aicommit config set ANTHROPIC_KEY=<your-api-key>
+ ```
+
+ **Recommended models:**
+ - `claude-3-5-sonnet-20241022` (recommended, best balance)
+ - `claude-3-opus-20240229` (most capable)
+ - `claude-3-haiku-20240307` (fastest, most economical)
+
+ **Get your API key:** [Anthropic Console](https://console.anthropic.com/)
+
+ ---
+
+ ### Azure OpenAI
+
+ Use Azure's OpenAI Service for enterprise deployments.
+
+ **Setup:**
+ ```sh
+ aicommit config set provider=azure-openai
+ aicommit config set AZURE_OPENAI_KEY=<your-api-key>
+ aicommit config set AZURE_ENDPOINT=<your-endpoint>
+ ```
+
+ **Example endpoint:** `https://your-resource.openai.azure.com`
+
+ **Note:** The `model` config should match your Azure deployment name.
+
+ **Learn more:** [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/ai-services/openai-service)
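Since a project's `.ai-commit.json` can override provider settings (see [Project-Specific Configuration](#project-specific-configuration)), the Azure setup can also be expressed there. A sketch only, assuming the file accepts the same keys as the command line and a hypothetical deployment named `gpt-4o`:

```json
{
  "provider": "azure-openai",
  "AZURE_OPENAI_KEY": "<your-api-key>",
  "AZURE_ENDPOINT": "https://your-resource.openai.azure.com",
  "model": "gpt-4o"
}
```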
+
+ ---
+
+ ### Ollama (Local)
+
+ Run AI-Commit completely offline using local models via Ollama.
+
+ **Setup:**
+ ```sh
+ # 1. Install Ollama from https://ollama.com
+ # 2. Pull a model
+ ollama pull llama3.2
+
+ # 3. Configure AI-Commit
+ aicommit config set provider=ollama
+ aicommit config set model=llama3.2
+ ```
+
+ **Recommended models:**
+ - `llama3.2` (recommended, good balance)
+ - `codellama` (optimized for code)
+ - `mistral` (fast and capable)
+ - `qwen2.5-coder` (specialized for coding)
+
+ **Default endpoint:** `http://localhost:11434` (automatically configured)
+
+ **Benefits:**
+ - ✅ Completely free
+ - ✅ Works offline
+ - ✅ Privacy-focused (data never leaves your machine)
+ - ✅ No API key required
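If generation fails with Ollama selected, the local server may not be running or the model may not be pulled. These checks use Ollama's own CLI and standard local HTTP API, not anything specific to AI-Commit:

```sh
# List models known to the local Ollama server (default port 11434)
curl -s http://localhost:11434/api/tags

# Equivalent check via the Ollama CLI
ollama list
```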
+
+ ---
+
+ ### Custom/RAG
+
+ Connect to custom LLM endpoints, RAG systems, or OpenAI-compatible APIs.
+
+ **Setup:**
+ ```sh
+ aicommit config set provider=custom
+ aicommit config set CUSTOM_ENDPOINT=<your-endpoint-url>
+ aicommit config set CUSTOM_KEY=<optional-api-key>
+ ```
+
+ **Compatible with:**
+ - Custom RAG implementations
+ - LM Studio
+ - LocalAI
+ - Text Generation WebUI
+ - vLLM
+ - Any OpenAI-compatible API
+
+ **Example:**
+ ```sh
+ aicommit config set provider=custom
+ aicommit config set CUSTOM_ENDPOINT=https://my-rag.example.com/v1/chat/completions
+ aicommit config set model=my-custom-model
+ ```
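Whatever endpoint `CUSTOM_ENDPOINT` points at must accept an OpenAI-style chat-completions request body. Roughly the shape such an endpoint can be expected to receive (a sketch; the field values here are placeholders, not literal payloads):

```json
{
  "model": "my-custom-model",
  "messages": [
    { "role": "system", "content": "<generated commit-message prompt>" },
    { "role": "user", "content": "<staged git diff>" }
  ],
  "temperature": 0.2,
  "max_tokens": 10000
}
```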
+
+ ---
+
  ## Configuration
  Manage configuration using the `aicommit config` command.

@@ -121,33 +268,103 @@ aicommit config set <key>=<value>

  ### Options

+ #### General Options
+
  | Option | Default | Description |
  |---------------------|-----------------|---------------------------------------------------------------------------------------|
- | `OPENAI_KEY` | N/A | The OpenAI API key. |
- | `locale` | `en` | Locale for the generated commit messages. |
- | `generate` | `1` | Number of commit messages to generate. |
- | `model` | `gpt-4o-mini` | The Chat Completions model to use. |
- | `timeout` | `10000` ms | Network request timeout to the OpenAI API. |
- | `max-length` | `50` | Maximum character length of the generated commit message. |
- | `type` | `""` | Type of commit message to generate. |
- | `auto-confirm` | `false` | Automatically confirm the generated commit message without user prompt. |
- | `prepend-reference` | `false` | Prepend issue reference from branch name to commit message. |
- | `temperature` | `0.2` | The temperature (0.0-2.0) is used to control the randomness of the output from OpenAI |
- | `max-completion-tokens` | `10000` | Maximum number of tokens that can be generated in the completion |
+ | `provider` | `openai` | LLM provider to use (`openai`, `anthropic`, `azure-openai`, `ollama`, `custom`) |
+ | `locale` | `en` | Locale for the generated commit messages |
+ | `generate` | `1` | Number of commit messages to generate |
+ | `model` | `gpt-4o-mini` | The model to use (provider-specific) |
+ | `timeout` | `10000` | Network request timeout in milliseconds |
+ | `max-length` | `50` | Maximum character length of the generated commit message |
+ | `type` | `""` | Type of commit message to generate (`conventional` or empty) |
+ | `auto-confirm` | `false` | Automatically confirm the generated commit message without user prompt |
+ | `prepend-reference` | `false` | Prepend issue reference from branch name to commit message |
+ | `temperature` | `0.2` | Temperature (0.0-2.0) to control randomness of the output |
+ | `max-completion-tokens` | `10000` | Maximum number of tokens that can be generated in the completion |
+ | `proxy` | N/A | HTTPS proxy URL (e.g., `http://proxy.example.com:8080`) |
+
+ #### Provider-Specific Options
+
+ | Option | Provider | Description |
+ |---------------------|-----------------|---------------------------------------------------------------------------------------|
+ | `OPENAI_KEY` | OpenAI | OpenAI API key (starts with `sk-`) |
+ | `ANTHROPIC_KEY` | Anthropic | Anthropic API key (starts with `sk-ant-`) |
+ | `AZURE_OPENAI_KEY` | Azure OpenAI | Azure OpenAI API key |
+ | `AZURE_ENDPOINT` | Azure OpenAI | Azure OpenAI endpoint URL (e.g., `https://your-resource.openai.azure.com`) |
+ | `OLLAMA_ENDPOINT` | Ollama | Ollama server endpoint (default: `http://localhost:11434`) |
+ | `CUSTOM_ENDPOINT` | Custom | Custom API endpoint URL |
+ | `CUSTOM_KEY` | Custom | Custom API key (optional, for endpoints requiring authentication) |
+
+ ### Environment Variables (.env)
+
+ For local development, you can create environment files in your project root:
+
+ **For CLI usage** (`.env`):
+ ```bash
+ # .env - Used when running aicommit
+ provider=anthropic
+ ANTHROPIC_KEY=sk-ant-...
+ model=claude-3-5-sonnet-20241022
+ max-length=80
+ ```
+
+ **For testing** (`.env.local`):
+ ```bash
+ # .env.local - Used when running npm test
+ OPENAI_KEY=sk-...
+ # or
+ provider=anthropic
+ ANTHROPIC_KEY=sk-ant-...
+ ```
+
+ **File usage:**
+ - `.env` → Loaded when running `aicommit` (CLI usage)
+ - `.env.local` → Loaded when running `npm test` (local testing only)
+ - Both files are gitignored for security
+
+ **Priority order** (highest to lowest):
+ 1. `.ai-commit.json` (project-specific)
+ 2. CLI arguments
+ 3. Environment variables (`.env`, `.env.local`, or shell)
+ 4. Global config (`~/.aicommit`)
+
+ **Note:** See `.env.example` for all available options. Copy it to `.env` for CLI usage or `.env.local` for testing.

  ### Project-Specific Configuration

  You can add a `.ai-commit.json` file in the root of your project to provide additional context about your project to the AI and to override global configuration settings for the specific project.

- Example `.ai-commit.json`:
+ **Example with OpenAI:**
  ```json
  {
- "projectPrompt": "This is a Node.js CLI tool that uses OpenAI to generate meaningful git commit messages.",
- "model": "gpt-4",
+ "projectPrompt": "This is a Node.js CLI tool that uses AI to generate meaningful git commit messages.",
+ "model": "gpt-4o",
  "locale": "en",
  "max-length": "100",
- "temperature": "0.5",
- "max-completion-tokens": "5000"
+ "temperature": "0.5"
+ }
+ ```
+
+ **Example with Anthropic:**
+ ```json
+ {
+ "provider": "anthropic",
+ "ANTHROPIC_KEY": "sk-ant-...",
+ "model": "claude-3-5-sonnet-20241022",
+ "projectPrompt": "This is a TypeScript library for data validation.",
+ "max-length": "80"
+ }
+ ```
+
+ **Example with Ollama:**
+ ```json
+ {
+ "provider": "ollama",
+ "model": "codellama",
+ "projectPrompt": "This is a Python web application using FastAPI.",
+ "temperature": "0.3"
  }
  ```

@@ -0,0 +1,5 @@
+ var y=Object.defineProperty;var u=(f,t)=>y(f,"name",{value:t,configurable:!0});import l from"https";import{c as w}from"./index-CQ8E6Ymc.mjs";import{L as A,g as C}from"./prompt-BMR92Sng.mjs";import{K as a}from"./cli-D2zun-19.mjs";import"net";import"tls";import"url";import"assert";import"tty";import"util";import"os";import"events";import"node:buffer";import"node:path";import"node:child_process";import"node:process";import"child_process";import"path";import"fs";import"node:url";import"node:os";import"node:fs";import"buffer";import"stream";import"node:util";import"node:readline";import"node:stream";import"fs/promises";const h=class h extends A{get name(){return"anthropic"}validateConfig(){const{apiKey:t}=this.config;if(!t)throw new a("Please set your Anthropic API key via `aicommit config set ANTHROPIC_KEY=<your token>`\nGet your API key from: https://console.anthropic.com/");if(!t.startsWith("sk-ant-"))throw new a('Invalid Anthropic API key: Must start with "sk-ant-"')}async generateCommitMessage(t){try{const i=C(this.config.locale,this.config.maxLength,this.config.type,t.projectConfig),o=[{role:"user",content:t.diff}],e=[];for(let s=0;s<t.completions;s++){const r=(await this.createMessage(i,o)).content[0]?.text;r&&e.push(this.sanitizeMessage(r))}return{messages:this.deduplicateMessages(e)}}catch(i){const o=i;throw o.code==="ENOTFOUND"?new a(`Error connecting to ${o.hostname} (${o.syscall}). Are you connected to the internet?`):o}}async createMessage(t,i){const o={model:this.config.model,max_tokens:this.config.maxCompletionTokens,temperature:this.config.temperature,system:t,messages:i},{response:e,data:n}=await this.httpsPost("api.anthropic.com","/v1/messages",{"x-api-key":this.config.apiKey,"anthropic-version":"2023-06-01"},o);if(!e.statusCode||e.statusCode<200||e.statusCode>299){let s=`Anthropic API Error: ${e.statusCode} - ${e.statusMessage}`;throw n&&(s+=`
+
+ ${n}`),e.statusCode===500&&(s+=`
+
5
+ Check the API status: https://status.anthropic.com`),new a(s)}return JSON.parse(n)}async httpsPost(t,i,o,e){return new Promise((n,s)=>{const c=JSON.stringify(e),r=l.request({hostname:t,path:i,method:"POST",headers:{...o,"Content-Type":"application/json","Content-Length":Buffer.byteLength(c)},timeout:this.config.timeout,agent:this.config.proxy?w(this.config.proxy):void 0},m=>{const g=[];m.on("data",d=>g.push(d)),m.on("end",()=>{n({request:r,response:m,data:Buffer.concat(g).toString()})})});r.on("error",s),r.on("timeout",()=>{r.destroy(),s(new a(`Time out error: request took over ${this.config.timeout}ms. Try increasing the \`timeout\` config, or checking the Anthropic API status https://status.anthropic.com`))}),r.write(c),r.end()})}sanitizeMessage(t){return t.trim().replace(/[\n\r]/g,"").replace(/(\w)\.$/,"$1")}deduplicateMessages(t){return Array.from(new Set(t))}};u(h,"AnthropicProvider");let p=h;export{p as default};
@@ -0,0 +1,3 @@
+ var d=Object.defineProperty;var f=(u,t)=>d(u,"name",{value:t,configurable:!0});import y from"https";import{c as w}from"./index-CQ8E6Ymc.mjs";import{L as C,g as A}from"./prompt-BMR92Sng.mjs";import{K as a}from"./cli-D2zun-19.mjs";import"net";import"tls";import"url";import"assert";import"tty";import"util";import"os";import"events";import"node:buffer";import"node:path";import"node:child_process";import"node:process";import"child_process";import"path";import"fs";import"node:url";import"node:os";import"node:fs";import"buffer";import"stream";import"node:util";import"node:readline";import"node:stream";import"fs/promises";const g=class g extends C{get name(){return"azure-openai"}validateConfig(){const{apiKey:t,endpoint:n}=this.config;if(!t)throw new a("Please set your Azure OpenAI API key via `aicommit config set AZURE_OPENAI_KEY=<your token>`");if(!n)throw new a("Please set your Azure OpenAI endpoint via `aicommit config set AZURE_ENDPOINT=<your endpoint>`")}async generateCommitMessage(t){try{const n=await this.createChatCompletion({model:this.config.model,messages:[{role:"system",content:A(this.config.locale,this.config.maxLength,this.config.type,t.projectConfig)},{role:"user",content:t.diff}],temperature:this.config.temperature,top_p:1,frequency_penalty:0,presence_penalty:0,max_tokens:this.config.maxCompletionTokens,n:t.completions});return{messages:this.deduplicateMessages(n.choices.filter(r=>r.message?.content).map(r=>this.sanitizeMessage(r.message.content)))}}catch(n){const e=n;throw e.code==="ENOTFOUND"?new a(`Error connecting to ${e.hostname} (${e.syscall}). Are you connected to the internet?`):e}}async createChatCompletion(t){const e=new URL(this.config.endpoint).hostname,m=`/openai/deployments/${this.config.deploymentName||this.config.model}/chat/completions?api-version=2024-02-01`,{response:i,data:s}=await this.httpsPost(e,m,{"api-key":this.config.apiKey},t);if(!i.statusCode||i.statusCode<200||i.statusCode>299){let o=`Azure OpenAI API Error: ${i.statusCode} - ${i.statusMessage}`;throw s&&(o+=`
+
+ ${s}`),new a(o)}return JSON.parse(s)}async httpsPost(t,n,e,r){return new Promise((m,i)=>{const s=JSON.stringify(r),o=y.request({hostname:t,path:n,method:"POST",headers:{...e,"Content-Type":"application/json","Content-Length":Buffer.byteLength(s)},timeout:this.config.timeout,agent:this.config.proxy?w(this.config.proxy):void 0},p=>{const h=[];p.on("data",l=>h.push(l)),p.on("end",()=>{m({request:o,response:p,data:Buffer.concat(h).toString()})})});o.on("error",i),o.on("timeout",()=>{o.destroy(),i(new a(`Time out error: request took over ${this.config.timeout}ms. Try increasing the \`timeout\` config`))}),o.write(s),o.end()})}sanitizeMessage(t){return t.trim().replace(/[\n\r]/g,"").replace(/(\w)\.$/,"$1")}deduplicateMessages(t){return Array.from(new Set(t))}};f(g,"AzureOpenAIProvider");let c=g;export{c as default};