devcommit 0.1.5.0__tar.gz → 0.1.5.1__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: devcommit
- Version: 0.1.5.0
+ Version: 0.1.5.1
  Summary: AI-powered git commit message generator
  License: GNU GENERAL PUBLIC LICENSE
  Version 3, 29 June 2007
@@ -711,6 +711,7 @@ A command-line AI tool for autocommits.

  - 🤖 **Multi-AI Provider Support** - Choose from Gemini, Groq, OpenAI, Claude, Ollama, or custom APIs
  - 🚀 Automatic commit generation using AI
+ - 📝 **Changelog Generation** - Automatically generate markdown changelogs from your changes
  - 📁 Directory-based commits - create separate commits for each root directory
  - 🎯 Interactive mode to choose between global or directory-based commits
  - 📄 **Commit specific files or folders** - Stage and commit only selected files/directories
@@ -1115,6 +1116,47 @@ Selecting "Regenerate commit messages" will:

  This works for all commit modes (global, directory, and per-file commits).

+ ### Changelog Generation
+
+ DevCommit can automatically generate markdown changelog files from your changes using AI.
+
+ **Usage:**
+
+ ```bash
+ # Generate changelog after committing
+ devcommit --changelog
+
+ # Generate changelog before staging (recommended)
+ devcommit --stageAll --changelog
+
+ # Short form
+ devcommit -s -c
+
+ # With specific files
+ devcommit --stageAll --changelog --files src/
+ ```
+
+ **How it works:**
+
+ - **With `--stageAll`**: Changelog is generated from unstaged changes **before** staging
+ - **Without `--stageAll`**: Changelog is generated from the last commit **after** committing
+ - Changelogs are saved as markdown files with datetime-based names (e.g., `2026-01-28_00-55-30.md`)
+ - Default directory: `changelogs/` (configurable via `CHANGELOG_DIR` in `.dcommit`)
+ - Uses Keep a Changelog format with AI-generated content
+
+ **Example workflow:**
+
+ ```bash
+ # Make changes to your code
+ # ...
+
+ # Stage all changes and generate changelog before committing
+ devcommit --stageAll --changelog
+
+ # The changelog file is created in changelogs/ directory
+ # Then changes are staged and committed
+ ```
+
  ### Additional Options

  - `--excludeFiles` or `-e`: Exclude specific files from the diff
@@ -1124,6 +1166,7 @@ This works for all commit modes (global, directory, and per-file commits).
  - `--directory` or `-d`: Force directory-based commits
  - `--files` or `-f`: Stage and commit specific files or folders (can specify multiple)
  - `--push` or `-p`: Push commits to remote after committing
+ - `--changelog` or `-c`: Generate changelog file from changes

  ### Examples

@@ -1157,6 +1200,15 @@ devcommit -s -f src/core src/modules/account/ --directory

  # Stage and commit, then push
  devcommit -s -f src/core src/modules/account/ -p
+
+ # Generate changelog before staging and committing
+ devcommit --stageAll --changelog
+
+ # Generate changelog after committing
+ devcommit --changelog
+
+ # Generate changelog with specific files
+ devcommit -s -c -f src/
  ```

  ## AI Provider Support
@@ -1188,28 +1240,19 @@ devcommit
  ```bash
  export AI_PROVIDER=openrouter
  export OPENROUTER_API_KEY='your-openrouter-api-key'
- # Optional: specify model (default: mistralai/devstral-2512:free)
- export OPENROUTER_MODEL='mistralai/devstral-2512:free'
+ # Optional: specify model (default: meta-llama/llama-3.3-70b-instruct:free)
+ export OPENROUTER_MODEL='meta-llama/llama-3.3-70b-instruct:free'
  devcommit
  ```

  **Popular free models on OpenRouter (add `:free` suffix):**

- **Coding & Development:**
- - `mistralai/devstral-2512:free` - Mistral's state-of-the-art coding model (123B params)
- - `kwaipilot/kat-coder-pro:free` - Advanced agentic coding model (73.4% SWE-Bench solve rate)
- - `qwen/qwen3-coder:free` - Qwen's MoE code generation model (480B params, 35B active)
- - `deepseek/deepseek-r1-0528:free` - DeepSeek R1 reasoning model (671B params, open-source)
-
- **Reasoning & General Purpose:**
- - `xiaomi/mimo-v2-flash:free` - Mixture-of-Experts model (309B params, top open-source on SWE-bench)
- - `tngtech/deepseek-r1t2-chimera:free` - DeepSeek R1T2 reasoning model (671B params)
- - `tngtech/deepseek-r1t-chimera:free` - DeepSeek R1T reasoning model
- - `tngtech/tng-r1t-chimera:free` - Creative storytelling and character interaction model
-
- **Lightweight & Fast:**
- - `z-ai/glm-4.5-air:free` - GLM 4.5 Air lightweight variant (MoE architecture)
- - `nvidia/nemotron-3-nano-30b-a3b:free` - NVIDIA's efficient small model (30B params, 3B active)
+ **Recommended Models:**
+ - `meta-llama/llama-3.3-70b-instruct:free` - Llama 3.3 70B Instruct (Powerful & General Purpose)
+ - `google/gemma-3-27b-it:free` - Google Gemma 3 27B Instruct (Efficient & Capable)
+ - `openai/gpt-oss-120b:free` - OpenAI GPT-OSS 120B (Large & Experimental)
+ - `tngtech/deepseek-r1t-chimera:free` - DeepSeek R1T Chimera (Strong Reasoning)
+ - `qwen/qwen3-next-80b-a3b-instruct:free` - Qwen3 Next 80B (Advanced Instruction Following)

  **Important Notes:**
  - **Logging Requirements:** Some free models may log your prompts and responses for model improvement purposes. This means:
@@ -1274,7 +1317,7 @@ All configuration can be set via **environment variables** or **`.dcommit` file*
  | Variable | Description | Default |
  |----------|-------------|---------|
  | `OPENROUTER_API_KEY` | OpenRouter API key ([Get it here](https://openrouter.ai/keys)) | - |
- | `OPENROUTER_MODEL` | Model name (add `:free` suffix for free models) | `mistralai/devstral-2512:free` |
+ | `OPENROUTER_MODEL` | Model name (add `:free` suffix for free models) | `meta-llama/llama-3.3-70b-instruct:free` |

  **Anthropic:**
  | Variable | Description | Default |
@@ -1305,6 +1348,7 @@ All configuration can be set via **environment variables** or **`.dcommit` file*
  | `COMMIT_MODE` | Default commit strategy | `auto` | `auto`, `directory`, `global`, `related` |
  | `EXCLUDE_FILES` | Files to exclude from diff | `package-lock.json, pnpm-lock.yaml, yarn.lock, *.lock` | Comma-separated file patterns |
  | `MAX_TOKENS` | Maximum tokens for AI response | `8192` | Any positive integer |
+ | `CHANGELOG_DIR` | Directory for changelog files | `changelogs` | Any directory path |

  ### Configuration Priority

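The `.dcommit` file referenced in the configuration tables holds env-style `KEY=value` entries read by the config loader. A hypothetical override of the new `CHANGELOG_DIR` key (the directory name `release-notes` is purely illustrative, not from the package):

```ini
# .dcommit — hypothetical example
AI_PROVIDER=openrouter
CHANGELOG_DIR=release-notes
```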
@@ -6,6 +6,7 @@ A command-line AI tool for autocommits.

  - 🤖 **Multi-AI Provider Support** - Choose from Gemini, Groq, OpenAI, Claude, Ollama, or custom APIs
  - 🚀 Automatic commit generation using AI
+ - 📝 **Changelog Generation** - Automatically generate markdown changelogs from your changes
  - 📁 Directory-based commits - create separate commits for each root directory
  - 🎯 Interactive mode to choose between global or directory-based commits
  - 📄 **Commit specific files or folders** - Stage and commit only selected files/directories
@@ -410,6 +411,47 @@ Selecting "Regenerate commit messages" will:

  This works for all commit modes (global, directory, and per-file commits).

+ ### Changelog Generation
+
+ DevCommit can automatically generate markdown changelog files from your changes using AI.
+
+ **Usage:**
+
+ ```bash
+ # Generate changelog after committing
+ devcommit --changelog
+
+ # Generate changelog before staging (recommended)
+ devcommit --stageAll --changelog
+
+ # Short form
+ devcommit -s -c
+
+ # With specific files
+ devcommit --stageAll --changelog --files src/
+ ```
+
+ **How it works:**
+
+ - **With `--stageAll`**: Changelog is generated from unstaged changes **before** staging
+ - **Without `--stageAll`**: Changelog is generated from the last commit **after** committing
+ - Changelogs are saved as markdown files with datetime-based names (e.g., `2026-01-28_00-55-30.md`)
+ - Default directory: `changelogs/` (configurable via `CHANGELOG_DIR` in `.dcommit`)
+ - Uses Keep a Changelog format with AI-generated content
+
+ **Example workflow:**
+
+ ```bash
+ # Make changes to your code
+ # ...
+
+ # Stage all changes and generate changelog before committing
+ devcommit --stageAll --changelog
+
+ # The changelog file is created in changelogs/ directory
+ # Then changes are staged and committed
+ ```
+
  ### Additional Options

  - `--excludeFiles` or `-e`: Exclude specific files from the diff
@@ -419,6 +461,7 @@ This works for all commit modes (global, directory, and per-file commits).
  - `--directory` or `-d`: Force directory-based commits
  - `--files` or `-f`: Stage and commit specific files or folders (can specify multiple)
  - `--push` or `-p`: Push commits to remote after committing
+ - `--changelog` or `-c`: Generate changelog file from changes

  ### Examples

@@ -452,6 +495,15 @@ devcommit -s -f src/core src/modules/account/ --directory

  # Stage and commit, then push
  devcommit -s -f src/core src/modules/account/ -p
+
+ # Generate changelog before staging and committing
+ devcommit --stageAll --changelog
+
+ # Generate changelog after committing
+ devcommit --changelog
+
+ # Generate changelog with specific files
+ devcommit -s -c -f src/
  ```

  ## AI Provider Support
@@ -483,28 +535,19 @@ devcommit
  ```bash
  export AI_PROVIDER=openrouter
  export OPENROUTER_API_KEY='your-openrouter-api-key'
- # Optional: specify model (default: mistralai/devstral-2512:free)
- export OPENROUTER_MODEL='mistralai/devstral-2512:free'
+ # Optional: specify model (default: meta-llama/llama-3.3-70b-instruct:free)
+ export OPENROUTER_MODEL='meta-llama/llama-3.3-70b-instruct:free'
  devcommit
  ```

  **Popular free models on OpenRouter (add `:free` suffix):**

- **Coding & Development:**
- - `mistralai/devstral-2512:free` - Mistral's state-of-the-art coding model (123B params)
- - `kwaipilot/kat-coder-pro:free` - Advanced agentic coding model (73.4% SWE-Bench solve rate)
- - `qwen/qwen3-coder:free` - Qwen's MoE code generation model (480B params, 35B active)
- - `deepseek/deepseek-r1-0528:free` - DeepSeek R1 reasoning model (671B params, open-source)
-
- **Reasoning & General Purpose:**
- - `xiaomi/mimo-v2-flash:free` - Mixture-of-Experts model (309B params, top open-source on SWE-bench)
- - `tngtech/deepseek-r1t2-chimera:free` - DeepSeek R1T2 reasoning model (671B params)
- - `tngtech/deepseek-r1t-chimera:free` - DeepSeek R1T reasoning model
- - `tngtech/tng-r1t-chimera:free` - Creative storytelling and character interaction model
-
- **Lightweight & Fast:**
- - `z-ai/glm-4.5-air:free` - GLM 4.5 Air lightweight variant (MoE architecture)
- - `nvidia/nemotron-3-nano-30b-a3b:free` - NVIDIA's efficient small model (30B params, 3B active)
+ **Recommended Models:**
+ - `meta-llama/llama-3.3-70b-instruct:free` - Llama 3.3 70B Instruct (Powerful & General Purpose)
+ - `google/gemma-3-27b-it:free` - Google Gemma 3 27B Instruct (Efficient & Capable)
+ - `openai/gpt-oss-120b:free` - OpenAI GPT-OSS 120B (Large & Experimental)
+ - `tngtech/deepseek-r1t-chimera:free` - DeepSeek R1T Chimera (Strong Reasoning)
+ - `qwen/qwen3-next-80b-a3b-instruct:free` - Qwen3 Next 80B (Advanced Instruction Following)

  **Important Notes:**
  - **Logging Requirements:** Some free models may log your prompts and responses for model improvement purposes. This means:
@@ -569,7 +612,7 @@ All configuration can be set via **environment variables** or **`.dcommit` file*
  | Variable | Description | Default |
  |----------|-------------|---------|
  | `OPENROUTER_API_KEY` | OpenRouter API key ([Get it here](https://openrouter.ai/keys)) | - |
- | `OPENROUTER_MODEL` | Model name (add `:free` suffix for free models) | `mistralai/devstral-2512:free` |
+ | `OPENROUTER_MODEL` | Model name (add `:free` suffix for free models) | `meta-llama/llama-3.3-70b-instruct:free` |

  **Anthropic:**
  | Variable | Description | Default |
@@ -600,6 +643,7 @@ All configuration can be set via **environment variables** or **`.dcommit` file*
  | `COMMIT_MODE` | Default commit strategy | `auto` | `auto`, `directory`, `global`, `related` |
  | `EXCLUDE_FILES` | Files to exclude from diff | `package-lock.json, pnpm-lock.yaml, yarn.lock, *.lock` | Comma-separated file patterns |
  | `MAX_TOKENS` | Maximum tokens for AI response | `8192` | Any positive integer |
+ | `CHANGELOG_DIR` | Directory for changelog files | `changelogs` | Any directory path |

  ### Configuration Priority

@@ -8,7 +8,7 @@ from typing import Optional

  # Suppress stderr for all AI imports
  _stderr = sys.stderr
- _devnull = open(os.devnull, 'w')
+ _devnull = open(os.devnull, "w")
  sys.stderr = _devnull

  try:
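The hunk above (and the per-call copies later in this file) suppress stderr by hand-swapping `sys.stderr` and manually closing the devnull handle. As a reviewer's note, the same pattern can be wrapped in a context manager so the restore and close always run, even on exceptions; this is a sketch of the technique, not code from the package (`suppressed_stderr` is my name):

```python
import os
import sys
from contextlib import contextmanager


@contextmanager
def suppressed_stderr():
    """Temporarily redirect sys.stderr to os.devnull, restoring it on exit."""
    devnull = open(os.devnull, "w")
    old_stderr = sys.stderr
    sys.stderr = devnull
    try:
        yield
    finally:
        # Runs even if the body raised, so stderr is never left broken
        sys.stderr = old_stderr
        devnull.close()
```

Each `_stderr = sys.stderr … _devnull.close()` block then collapses to `with suppressed_stderr(): …`.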
@@ -37,31 +37,37 @@ _devnull.close()

  class AIProvider(ABC):
      """Base class for AI providers"""
-
+
      @abstractmethod
-     def generate_commit_message(self, diff: str, prompt: str, max_tokens: int) -> str:
+     def generate_commit_message(
+         self, diff: str, prompt: str, max_tokens: int
+     ) -> str:
          """Generate commit message from diff"""
          pass


  class GeminiProvider(AIProvider):
      """Google Gemini AI provider"""
-
+
      def __init__(self, api_key: str, model: str = "gemini-2.0-flash-exp"):
          if not genai:
-             raise ImportError("google-generativeai not installed. Run: pip install google-generativeai")
-
+             raise ImportError(
+                 "google-generativeai not installed. Run: pip install google-generativeai"
+             )
+
          # Suppress stderr during configuration
          _stderr = sys.stderr
-         _devnull = open(os.devnull, 'w')
+         _devnull = open(os.devnull, "w")
          sys.stderr = _devnull
          genai.configure(api_key=api_key)
          sys.stderr = _stderr
          _devnull.close()
-
+
          self.model_name = model
-
-     def generate_commit_message(self, diff: str, prompt: str, max_tokens: int) -> str:
+
+     def generate_commit_message(
+         self, diff: str, prompt: str, max_tokens: int
+     ) -> str:
          generation_config = {
              "response_mime_type": "text/plain",
              "max_output_tokens": max_tokens,
@@ -69,12 +75,12 @@ class GeminiProvider(AIProvider):
              "top_p": 0.9,
              "temperature": 0.7,
          }
-
+
          model = genai.GenerativeModel(
              generation_config=generation_config,
              model_name=self.model_name,
          )
-
+
          chat_session = model.start_chat(
              history=[
                  {
@@ -83,15 +89,15 @@ class GeminiProvider(AIProvider):
                  },
              ]
          )
-
+
          # Suppress stderr during API call
          _stderr = sys.stderr
-         _devnull = open(os.devnull, 'w')
+         _devnull = open(os.devnull, "w")
          sys.stderr = _devnull
          response = chat_session.send_message(diff)
          sys.stderr = _stderr
          _devnull.close()
-
+
          if response and hasattr(response, "text"):
              return response.text.strip()
          return "No valid commit message generated."
@@ -99,232 +105,258 @@ class GeminiProvider(AIProvider):

  class OpenAIProvider(AIProvider):
      """OpenAI GPT provider"""
-
+
      def __init__(self, api_key: str, model: str = "gpt-4o-mini"):
          if not openai:
              raise ImportError("openai not installed. Run: pip install openai")
          self.client = openai.OpenAI(api_key=api_key)
          self.model = model
-
-     def generate_commit_message(self, diff: str, prompt: str, max_tokens: int) -> str:
+
+     def generate_commit_message(
+         self, diff: str, prompt: str, max_tokens: int
+     ) -> str:
          response = self.client.chat.completions.create(
              model=self.model,
              messages=[
                  {"role": "system", "content": prompt},
-                 {"role": "user", "content": diff}
+                 {"role": "user", "content": diff},
              ],
              max_tokens=max_tokens,
-             temperature=0.7
+             temperature=0.7,
          )
          return response.choices[0].message.content.strip()


  class GroqProvider(AIProvider):
      """Groq AI provider (OpenAI-compatible)"""
-
+
      def __init__(self, api_key: str, model: str = "llama-3.3-70b-versatile"):
          if not openai:
              raise ImportError("openai not installed. Run: pip install openai")
          self.client = openai.OpenAI(
-             api_key=api_key,
-             base_url="https://api.groq.com/openai/v1"
+             api_key=api_key, base_url="https://api.groq.com/openai/v1"
          )
          self.model = model
-
-     def generate_commit_message(self, diff: str, prompt: str, max_tokens: int) -> str:
+
+     def generate_commit_message(
+         self, diff: str, prompt: str, max_tokens: int
+     ) -> str:
          response = self.client.chat.completions.create(
              model=self.model,
              messages=[
                  {"role": "system", "content": prompt},
-                 {"role": "user", "content": diff}
+                 {"role": "user", "content": diff},
              ],
              max_tokens=max_tokens,
-             temperature=0.7
+             temperature=0.7,
          )
          return response.choices[0].message.content.strip()


  class OpenRouterProvider(AIProvider):
      """OpenRouter.ai provider (OpenAI-compatible, access to multiple models)"""
-
-     def __init__(self, api_key: str, model: str = "mistralai/devstral-2512:free"):
+
+     def __init__(
+         self,
+         api_key: str,
+         model: str = "meta-llama/llama-3.3-70b-instruct:free",
+     ):
          if not openai:
              raise ImportError("openai not installed. Run: pip install openai")
          self.client = openai.OpenAI(
              api_key=api_key,
-             base_url="https://openrouter.ai/api/v1"
+             base_url="https://openrouter.ai/api/v1",
+             default_headers={
+                 "HTTP-Referer": "https://github.com/opsguild/DevCommit",
+                 "X-Title": "DevCommit",
+             },
          )
          self.model = model
-
-     def generate_commit_message(self, diff: str, prompt: str, max_tokens: int) -> str:
+
+     def generate_commit_message(
+         self, diff: str, prompt: str, max_tokens: int
+     ) -> str:
          response = self.client.chat.completions.create(
              model=self.model,
              messages=[
                  {"role": "system", "content": prompt},
-                 {"role": "user", "content": diff}
+                 {"role": "user", "content": diff},
              ],
              max_tokens=max_tokens,
-             temperature=0.7
+             temperature=0.7,
+             extra_body={"transforms": ["middle-out"]},
+             extra_headers={
+                 "HTTP-Referer": "https://github.com/hordunlarmy/DevCommit",
+                 "X-Title": "DevCommit",
+             },
          )
          return response.choices[0].message.content.strip()


  class AnthropicProvider(AIProvider):
      """Anthropic Claude provider"""
-
+
      def __init__(self, api_key: str, model: str = "claude-3-haiku-20240307"):
          if not anthropic:
-             raise ImportError("anthropic not installed. Run: pip install anthropic")
+             raise ImportError(
+                 "anthropic not installed. Run: pip install anthropic"
+             )
          self.client = anthropic.Anthropic(api_key=api_key)
          self.model = model
-
-     def generate_commit_message(self, diff: str, prompt: str, max_tokens: int) -> str:
+
+     def generate_commit_message(
+         self, diff: str, prompt: str, max_tokens: int
+     ) -> str:
          message = self.client.messages.create(
              model=self.model,
              max_tokens=max_tokens,
              system=prompt,
-             messages=[
-                 {"role": "user", "content": diff}
-             ]
+             messages=[{"role": "user", "content": diff}],
          )
          return message.content[0].text.strip()


  class OllamaProvider(AIProvider):
      """Ollama local model provider"""
-
-     def __init__(self, base_url: str = "http://localhost:11434", model: str = "llama3"):
+
+     def __init__(
+         self, base_url: str = "http://localhost:11434", model: str = "llama3"
+     ):
          if not requests:
-             raise ImportError("requests not installed. Run: pip install requests")
-         self.base_url = base_url.rstrip('/')
+             raise ImportError(
+                 "requests not installed. Run: pip install requests"
+             )
+         self.base_url = base_url.rstrip("/")
          self.model = model
-
-     def generate_commit_message(self, diff: str, prompt: str, max_tokens: int) -> str:
+
+     def generate_commit_message(
+         self, diff: str, prompt: str, max_tokens: int
+     ) -> str:
          url = f"{self.base_url}/api/generate"
          data = {
              "model": self.model,
              "prompt": f"{prompt}\n\n{diff}",
              "stream": False,
-             "options": {
-                 "temperature": 0.7,
-                 "num_predict": max_tokens
-             }
+             "options": {"temperature": 0.7, "num_predict": max_tokens},
          }
-
+
          response = requests.post(url, json=data, timeout=60)
          response.raise_for_status()
          result = response.json()["response"].strip()
-
+
          # Return raw result - normalization is done centrally in gemini_ai.py
          return result


  class CustomProvider(AIProvider):
      """Custom OpenAI-compatible API provider"""
-
-     def __init__(self, api_url: str, api_key: Optional[str] = None, model: str = "default"):
+
+     def __init__(
+         self,
+         api_url: str,
+         api_key: Optional[str] = None,
+         model: str = "default",
+     ):
          if not openai:
              raise ImportError("openai not installed. Run: pip install openai")
-
+
          # Extract base URL (remove /chat/completions if present)
-         base_url = api_url.replace('/chat/completions', '').replace('/v1/chat/completions', '')
-         if not base_url.endswith('/v1'):
-             base_url = base_url.rstrip('/') + '/v1'
-
+         base_url = api_url.replace("/chat/completions", "").replace(
+             "/v1/chat/completions", ""
+         )
+         if not base_url.endswith("/v1"):
+             base_url = base_url.rstrip("/") + "/v1"
+
          self.client = openai.OpenAI(
-             api_key=api_key or "dummy-key",
-             base_url=base_url
+             api_key=api_key or "dummy-key", base_url=base_url
          )
          self.model = model
-
-     def generate_commit_message(self, diff: str, prompt: str, max_tokens: int) -> str:
+
+     def generate_commit_message(
+         self, diff: str, prompt: str, max_tokens: int
+     ) -> str:
          response = self.client.chat.completions.create(
              model=self.model,
              messages=[
                  {"role": "system", "content": prompt},
-                 {"role": "user", "content": diff}
+                 {"role": "user", "content": diff},
              ],
              max_tokens=max_tokens,
-             temperature=0.7
+             temperature=0.7,
          )
          return response.choices[0].message.content.strip()


  def get_ai_provider(config) -> AIProvider:
      """Factory function to get the appropriate AI provider based on config"""
-
+
      provider_name = config("AI_PROVIDER", default="gemini").lower()
-
+
      if provider_name == "gemini":
          api_key = config("GEMINI_API_KEY", default=None)
          if not api_key:
              raise ValueError("GEMINI_API_KEY not set")
          # Support legacy MODEL_NAME for backward compatibility
-         model = config("GEMINI_MODEL", default=None) or config("MODEL_NAME", default="gemini-2.0-flash-exp")
+         model = config("GEMINI_MODEL", default=None) or config(
+             "MODEL_NAME", default="gemini-2.0-flash-exp"
+         )
          return GeminiProvider(api_key, model)
-
+
      elif provider_name == "openai":
          api_key = config("OPENAI_API_KEY", default=None)
          if not api_key:
              raise ValueError("OPENAI_API_KEY not set")
-         model = (
-             config("OPENAI_MODEL", default=None)
-             or config("MODEL_NAME", default="gpt-4o-mini")
+         model = config("OPENAI_MODEL", default=None) or config(
+             "MODEL_NAME", default="gpt-4o-mini"
          )
          return OpenAIProvider(api_key, model)
-
+
      elif provider_name == "groq":
          api_key = config("GROQ_API_KEY", default=None)
          if not api_key:
              raise ValueError("GROQ_API_KEY not set")
-         model = (
-             config("GROQ_MODEL", default=None)
-             or config("MODEL_NAME", default="llama-3.3-70b-versatile")
+         model = config("GROQ_MODEL", default=None) or config(
+             "MODEL_NAME", default="llama-3.3-70b-versatile"
          )
          return GroqProvider(api_key, model)
-
+
      elif provider_name == "openrouter":
          api_key = config("OPENROUTER_API_KEY", default=None)
          if not api_key:
              raise ValueError("OPENROUTER_API_KEY not set")
-         model = (
-             config("OPENROUTER_MODEL", default=None)
-             or config("MODEL_NAME", default="mistralai/devstral-2512:free")
+         model = config("OPENROUTER_MODEL", default=None) or config(
+             "MODEL_NAME", default="meta-llama/llama-3.3-70b-instruct:free"
          )
          return OpenRouterProvider(api_key, model)
-
+
      elif provider_name == "anthropic":
          api_key = config("ANTHROPIC_API_KEY", default=None)
          if not api_key:
              raise ValueError("ANTHROPIC_API_KEY not set")
-         model = (
-             config("ANTHROPIC_MODEL", default=None)
-             or config("MODEL_NAME", default="claude-3-haiku-20240307")
+         model = config("ANTHROPIC_MODEL", default=None) or config(
+             "MODEL_NAME", default="claude-3-haiku-20240307"
          )
          return AnthropicProvider(api_key, model)
-
+
      elif provider_name == "ollama":
          base_url = config("OLLAMA_BASE_URL", default="http://localhost:11434")
-         model = (
-             config("OLLAMA_MODEL", default=None)
-             or config("MODEL_NAME", default="llama3")
+         model = config("OLLAMA_MODEL", default=None) or config(
+             "MODEL_NAME", default="llama3"
          )
          return OllamaProvider(base_url, model)
-
+
      elif provider_name == "custom":
          api_url = config("CUSTOM_API_URL", default=None)
          if not api_url:
              raise ValueError("CUSTOM_API_URL not set for custom provider")
          api_key = config("CUSTOM_API_KEY", default=None)
-         model = (
-             config("CUSTOM_MODEL", default=None)
-             or config("MODEL_NAME", default="default")
+         model = config("CUSTOM_MODEL", default=None) or config(
+             "MODEL_NAME", default="default"
          )
          return CustomProvider(api_url, api_key, model)
-
+
      else:
          raise ValueError(
              f"Unknown AI provider: {provider_name}. "
              f"Supported: gemini, openai, groq, openrouter, anthropic, ollama, custom"
          )
-
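The `CustomProvider.__init__` reformat above restructures a small URL-normalization rule: any `/chat/completions` suffix is stripped and the base URL is forced to end in `/v1`. Extracted as a standalone sketch for clarity (the function name `normalize_base_url` is mine, not from the package):

```python
def normalize_base_url(api_url: str) -> str:
    """Reduce a full chat-completions URL to an OpenAI-style /v1 base URL."""
    # Strip a trailing /chat/completions (with or without a /v1 segment)
    base = api_url.replace("/chat/completions", "").replace(
        "/v1/chat/completions", ""
    )
    # Ensure the base ends with /v1 exactly once
    if not base.endswith("/v1"):
        base = base.rstrip("/") + "/v1"
    return base
```

So `https://api.example.com/v1/chat/completions` and `http://localhost:8000/` both normalize to a `…/v1` base the OpenAI client can use directly.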
@@ -0,0 +1,89 @@
+ #!/usr/bin/env python
+ """Generate changelog files from git diffs using AI"""
+
+ import os
+ from datetime import datetime
+ from devcommit.utils.logger import config
+ from devcommit.app.ai_providers import get_ai_provider
+
+
+ def generate_changelog_prompt() -> str:
+     """Generate the prompt for changelog creation"""
+     return """You are a changelog generator. Analyze the git diff and create a structured changelog in Keep a Changelog format.
+
+ Follow these guidelines:
+ 1. Use markdown format with clear sections
+ 2. Categorize changes into: Added, Changed, Fixed, Removed, Deprecated, Security
+ 3. Write clear, user-friendly descriptions (not implementation details)
+ 4. Group related changes together
+ 5. Focus on what changed from a user/developer perspective
+ 6. Be concise but informative
+
+ Format:
+ # Changelog
+
+ ## [Unreleased]
+
+ ### Added
+ - List new features
+
+ ### Changed
+ - List changes to existing functionality
+
+ ### Fixed
+ - List bug fixes
+
+ ### Removed
+ - List removed features
+
+ Only include sections that have changes. Do not add empty sections."""
+
+
+ def generate_changelog(diff: str) -> str:
+     """Generate changelog content from git diff using AI.
+
+     Args:
+         diff: Git diff string
+
+     Returns:
+         Formatted markdown changelog content
+     """
+     prompt = generate_changelog_prompt()
+
+     # Get AI provider from config
+     provider = get_ai_provider(config)
+
+     # Generate changelog using AI
+     max_tokens = config("MAX_TOKENS", default=8192, cast=int)
+     changelog_content = provider.generate_commit_message(diff, prompt, max_tokens)
+
+     return changelog_content
+
+
+ def save_changelog(content: str, directory: str = None) -> str:
+     """Save changelog content to a file.
+
+     Args:
+         content: Changelog markdown content
+         directory: Directory to save changelog (default from config)
+
+     Returns:
+         Path to the saved changelog file
+     """
+     # Get directory from config if not provided
+     if directory is None:
+         directory = config("CHANGELOG_DIR", default="changelogs")
+
+     # Create directory if it doesn't exist
+     os.makedirs(directory, exist_ok=True)
+
+     # Generate filename with current datetime
+     now = datetime.now()
+     filename = now.strftime("%Y-%m-%d_%H-%M-%S.md")
+     filepath = os.path.join(directory, filename)
+
+     # Write content to file
+     with open(filepath, 'w', encoding='utf-8') as f:
+         f.write(content)
+
+     return filepath
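The file-saving half of the new module can be exercised in isolation. This is a standalone re-sketch of the `save_changelog` logic above (config lookup replaced by an explicit `directory` argument so it runs without the package):

```python
import os
import tempfile
from datetime import datetime


def save_changelog(content: str, directory: str) -> str:
    """Mirror of the new save_changelog: write content to a timestamped .md file."""
    os.makedirs(directory, exist_ok=True)
    # Datetime-based name, e.g. 2026-01-28_00-55-30.md
    filename = datetime.now().strftime("%Y-%m-%d_%H-%M-%S.md")
    filepath = os.path.join(directory, filename)
    with open(filepath, "w", encoding="utf-8") as f:
        f.write(content)
    return filepath


# Usage sketch: write a minimal Keep a Changelog stub into a temp directory
target = os.path.join(tempfile.mkdtemp(), "changelogs")
path = save_changelog("# Changelog\n\n## [Unreleased]\n", target)
```

`os.makedirs(..., exist_ok=True)` makes repeated runs idempotent; only the timestamped filename changes between calls.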
@@ -16,6 +16,7 @@ warnings.filterwarnings('ignore', message='.*ALTS.*')
  warnings.filterwarnings('ignore', category=UserWarning)

  from devcommit.utils.logger import Logger, config
+ from devcommit.utils.git import KnownError
  from .ai_providers import get_ai_provider
  from .prompt import generate_prompt

@@ -83,8 +84,16 @@ def generateCommitMessage(diff: str) -> str:
          return normalized_response

      except Exception as e:
-         logger.error(f"Error generating commit message: {e}")
-         return f"Error generating commit message: {str(e)}"
+         error_msg = str(e)
+         logger.error(f"Error generating commit message: {error_msg}")
+
+         # Raise KnownError with user-friendly message that includes error details
+         # This prevents error messages from being shown as commit options
+         # while still informing the user what went wrong
+         raise KnownError(
+             f"Failed to generate commit message: {error_msg}. "
+             f"Please check your API configuration and try again."
+         )
      finally:
          # Restore stderr and close devnull
          sys.stderr = _stderr
@@ -10,6 +10,7 @@ from InquirerPy import get_style, inquirer
 from rich.console import Console
 
 from devcommit.app.gemini_ai import generateCommitMessage
+from devcommit.app.changelog import generate_changelog, save_changelog
 from devcommit.utils.git import (KnownError, assert_git_repo,
                                  get_detected_message, get_diff_for_files,
                                  get_files_from_paths, get_staged_diff,
@@ -34,6 +35,36 @@ def has_commits() -> bool:
     return result.returncode == 0
 
 
+def sanitize_commit_messages(messages):
+    """Filter out error messages from commit message list.
+
+    Args:
+        messages: String or list of commit messages
+
+    Returns:
+        List of valid commit messages with error messages filtered out
+    """
+    # Convert string to list if needed
+    if isinstance(messages, str):
+        if not messages or messages.strip() == "":
+            return []
+        messages = messages.split("|")
+
+    # Filter out error messages and empty strings
+    valid_messages = []
+    for msg in messages:
+        if not msg:
+            continue
+        msg = msg.strip()
+        # Skip if it's an error message
+        if msg.startswith("Error generating commit message:"):
+            continue
+        if msg and msg.strip():
+            valid_messages.append(msg)
+
+    return valid_messages
+
+
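To see what the new `sanitize_commit_messages` helper accepts and rejects, here is a standalone copy of the same logic (duplicated only so the example is self-contained) fed a pipe-delimited string of the kind `generateCommitMessage` returns:

```python
def sanitize_commit_messages(messages):
    """Standalone copy of the helper above, for illustration only."""
    if isinstance(messages, str):
        if not messages or messages.strip() == "":
            return []
        messages = messages.split("|")
    valid_messages = []
    for msg in messages:
        if not msg:
            continue
        msg = msg.strip()
        # Skip AI error strings so they never appear as commit options
        if msg.startswith("Error generating commit message:"):
            continue
        if msg and msg.strip():
            valid_messages.append(msg)
    return valid_messages


raw = "feat: add changelog|Error generating commit message: timeout| fix: typo "
print(sanitize_commit_messages(raw))  # → ['feat: add changelog', 'fix: typo']
```

The error-string check exists because the old `generateCommitMessage` returned `"Error generating commit message: ..."` on failure; the helper keeps such strings out of the interactive commit-message picker even if one slips through.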
 # Main function
 def main(flags: CommitFlag = None):
     if flags is None:
@@ -99,6 +130,42 @@ def main(flags: CommitFlag = None):
     except Exception as e:
         raise KnownError(f"Failed to get files from paths: {str(e)}")
 
+
+    # Generate changelog before staging if --changelog and --stageAll are both set
+    if flags["stageAll"] and flags["changelog"]:
+        console.print()
+        console.print("[bold cyan]📝 Generating changelog...[/bold cyan]")
+
+        with console.status(
+            "[magenta]🤖 AI generating changelog from changes...[/magenta]",
+            spinner="dots",
+            spinner_style="magenta"
+        ):
+            # Get diff of unstaged changes
+            if push_files_list:
+                # Get diff for specific files
+                diff = get_diff_for_files(push_files_list, flags["excludeFiles"])
+            else:
+                # Get diff for all changes
+                import subprocess
+                diff = subprocess.run(
+                    ["git", "diff"],
+                    stdout=subprocess.PIPE,
+                    text=True,
+                ).stdout
+
+        if diff:
+            try:
+                changelog_content = generate_changelog(diff)
+                changelog_path = save_changelog(changelog_content)
+                console.print(f"[bold green]✅ Changelog saved to:[/bold green] [cyan]{changelog_path}[/cyan]")
+            except Exception as e:
+                logger.error(f"Failed to generate changelog: {e}")
+                console.print(f"[bold yellow]⚠️ Failed to generate changelog: {e}[/bold yellow]")
+        else:
+            console.print("[bold yellow]⚠️ No changes to generate changelog from[/bold yellow]")
+        console.print()
+
     if flags["stageAll"]:
         if push_files_list:
             # Stage specific files/folders only
@@ -258,6 +325,36 @@ def main(flags: CommitFlag = None):
     elif flags.get("push", False) and not commit_made:
         console.print("\n[bold yellow]⚠️ No commits were made, skipping push[/bold yellow]\n")
 
+    # Generate changelog after commit if --changelog is used without --stageAll
+    if flags.get("changelog", False) and not flags.get("stageAll", False) and commit_made:
+        console.print()
+        console.print("[bold cyan]📝 Generating changelog from committed changes...[/bold cyan]")
+
+        with console.status(
+            "[magenta]🤖 AI generating changelog...[/magenta]",
+            spinner="dots",
+            spinner_style="magenta"
+        ):
+            # Get diff from last commit
+            import subprocess
+            diff = subprocess.run(
+                ["git", "diff", "HEAD~1", "HEAD"],
+                stdout=subprocess.PIPE,
+                text=True,
+            ).stdout
+
+        if diff:
+            try:
+                changelog_content = generate_changelog(diff)
+                changelog_path = save_changelog(changelog_content)
+                console.print(f"[bold green]✅ Changelog saved to:[/bold green] [cyan]{changelog_path}[/cyan]")
+            except Exception as e:
+                logger.error(f"Failed to generate changelog: {e}")
+                console.print(f"[bold yellow]⚠️ Failed to generate changelog: {e}[/bold yellow]")
+        else:
+            console.print("[bold yellow]⚠️ No changes to generate changelog from[/bold yellow]")
+        console.print()
+
     # Print stylish completion message only if commits were made
     if commit_made:
         console.print()
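The post-commit path above diffs the last two revisions with `git diff HEAD~1 HEAD`. That subprocess call can be factored into a small sketch; `last_commit_diff` and the `cwd` parameter are illustrative additions (the block above runs the command in the current directory):

```python
import subprocess


def last_commit_diff(repo_dir: str = ".") -> str:
    """Return the diff between the last two commits (HEAD~1..HEAD).

    Mirrors how the post-commit changelog block collects its input.
    Returns "" when there is no previous commit to diff against.
    """
    result = subprocess.run(
        ["git", "diff", "HEAD~1", "HEAD"],
        cwd=repo_dir,
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL,  # silence "unknown revision" on 1-commit repos
        text=True,
    )
    return result.stdout
```

Note that without `check=True`, a repository with fewer than two commits simply yields an empty string, which the caller treats as "no changes to generate changelog from".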
@@ -273,7 +370,8 @@ def main(flags: CommitFlag = None):
         return
     except KnownError as error:
         logger.error(str(error))
-        console.print(f"\n[bold red]❌ Error:[/bold red] [red]{error}[/red]\n")
+        # Don't print here - Rich status context already displays the error
+        pass
     except subprocess.CalledProcessError as error:
         logger.error(str(error))
         console.print(f"\n[bold red]❌ Git command failed:[/bold red] [red]{error}[/red]\n")
@@ -322,6 +420,9 @@ def analyze_changes(console, files=None):
     """
     import sys
 
+    # Store any exception to re-raise after status context exits
+    caught_exception = None
+
     with console.status(
         "[magenta]🤖 AI analyzing changes...[/magenta]",
         spinner="dots",
@@ -350,17 +451,28 @@ def analyze_changes(console, files=None):
 
         try:
             commit_message = generateCommitMessage(diff)
+        except KnownError as e:
+            # Catch KnownError to prevent Rich status from printing it
+            caught_exception = e
+            commit_message = None
         finally:
             sys.stderr = _stderr
             _devnull.close()
 
-        if isinstance(commit_message, str):
-            commit_message = commit_message.split("|")
-
-        if not commit_message:
-            raise KnownError("No commit messages were generated. Try again.")
+        # If we caught an exception, we'll re-raise it after the status context exits
+        if caught_exception:
+            pass  # Will be raised below
+        else:
+            commit_message = sanitize_commit_messages(commit_message)
 
-    return commit_message
+            if not commit_message:
+                raise KnownError("No commit messages were generated. Try again.")
+
+    # Re-raise the exception outside the status context to avoid duplicate printing
+    if caught_exception:
+        raise caught_exception
+
+    return commit_message
 
 
 def prompt_commit_message(console, commit_message, regenerate_callback=None):
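The defer-and-re-raise pattern introduced in `analyze_changes` above can be distilled to a few lines: an error raised inside a live status display is stored, the context manager is allowed to exit cleanly, and only then is the error re-raised. This sketch uses stand-ins (`status_spinner` for Rich's `console.status`, a local `KnownError`) rather than devcommit's real objects:

```python
from contextlib import contextmanager


class KnownError(Exception):
    """Stand-in for devcommit's KnownError."""


@contextmanager
def status_spinner():
    """Stand-in for console.status(): rendering runs until the block exits."""
    try:
        yield
    finally:
        pass  # the real spinner stops rendering here


def analyze():
    caught_exception = None
    with status_spinner():
        try:
            raise KnownError("API failure")  # simulated provider error
        except KnownError as e:
            # Store instead of raising, so the spinner doesn't render the error
            caught_exception = e
    # Re-raise only after the status context has exited
    if caught_exception:
        raise caught_exception


try:
    analyze()
except KnownError as err:
    print(f"handled cleanly: {err}")  # → handled cleanly: API failure
```

The payoff is that the caller's normal `except KnownError` handler sees the error exactly once, instead of the live display printing it a second time as it tears down.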
@@ -703,8 +815,7 @@ def process_per_directory_commits(console, staged, flags):
         sys.stderr = _stderr
         _devnull.close()
 
-    if isinstance(commit_message, str):
-        commit_message = commit_message.split("|")
+    commit_message = sanitize_commit_messages(commit_message)
 
     if not commit_message:
         console.print(f"\n[bold yellow]⚠️ No commit message generated for {directory}, skipping[/bold yellow]\n")
@@ -722,9 +833,7 @@ def process_per_directory_commits(console, staged, flags):
         sys.stderr = _devnull
         try:
             msg = generateCommitMessage(diff)
-            if isinstance(msg, str):
-                msg = msg.split("|")
-            return msg
+            return sanitize_commit_messages(msg)
         finally:
             sys.stderr = _stderr
             _devnull.close()
@@ -865,8 +974,7 @@ def process_per_file_commits(console, staged, flags):
         sys.stderr = _stderr
         _devnull.close()
 
-    if isinstance(commit_message, str):
-        commit_message = commit_message.split("|")
+    commit_message = sanitize_commit_messages(commit_message)
 
     if not commit_message:
         console.print(f"\n[bold yellow]⚠️ No commit message generated for {file}, skipping[/bold yellow]\n")
@@ -884,9 +992,7 @@ def process_per_file_commits(console, staged, flags):
         sys.stderr = _devnull
         try:
             msg = generateCommitMessage(diff)
-            if isinstance(msg, str):
-                msg = msg.split("|")
-            return msg
+            return sanitize_commit_messages(msg)
         finally:
             sys.stderr = _stderr
             _devnull.close()
@@ -997,8 +1103,7 @@ def process_per_directory_commits_from_paths(console, staged, flags, original_pa
         sys.stderr = _stderr
         _devnull.close()
 
-    if isinstance(commit_message, str):
-        commit_message = commit_message.split("|")
+    commit_message = sanitize_commit_messages(commit_message)
 
     if not commit_message:
         console.print(f"\n[bold yellow]⚠️ No commit message generated for {path}, skipping[/bold yellow]\n")
@@ -1016,9 +1121,7 @@ def process_per_directory_commits_from_paths(console, staged, flags, original_pa
         sys.stderr = _devnull
         try:
             msg = generateCommitMessage(diff)
-            if isinstance(msg, str):
-                msg = msg.split("|")
-            return msg
+            return sanitize_commit_messages(msg)
         finally:
             sys.stderr = _stderr
             _devnull.close()
@@ -1140,7 +1243,7 @@ def process_per_related_commits(console, staged, flags):
     # Fallback mode detected - AI parsing failed, so commit_messages are empty
     # Generate commit messages for all groups (one call per group, but done upfront)
     # This is still extra calls, but at least they're done before user interaction
-    from devcommit.app.gemini_ai import generateCommitMessage
+    # Note: generateCommitMessage is already imported at the top of the file
 
     console.print("[dim]⚠️ AI grouping response had no commit messages. Generating now...[/dim]")
     with console.status(
@@ -1160,9 +1263,7 @@ def process_per_related_commits(console, staged, flags):
                 diff = get_diff_for_files(group_files, flags["excludeFiles"])
                 if diff:
                     commit_msgs = generateCommitMessage(diff)
-                    if isinstance(commit_msgs, str):
-                        commit_msgs = commit_msgs.split("|")
-                    group_data['commit_messages'] = [msg.strip() for msg in commit_msgs if msg and msg.strip()]
+                    group_data['commit_messages'] = sanitize_commit_messages(commit_msgs)
             finally:
                 sys.stderr = _stderr
                 _devnull.close()
@@ -1345,8 +1446,7 @@ def process_per_related_commits(console, staged, flags):
         sys.stderr = _stderr
         _devnull.close()
 
-    if isinstance(commit_message, str):
-        commit_message = commit_message.split("|")
+    commit_message = sanitize_commit_messages(commit_message)
 
     if not commit_message:
         console.print(f"\n[bold yellow]⚠️ No commit message generated for {group_name}, skipping[/bold yellow]\n")
@@ -1364,9 +1464,7 @@ def process_per_related_commits(console, staged, flags):
         sys.stderr = _devnull
         try:
             msg = generateCommitMessage(diff)
-            if isinstance(msg, str):
-                msg = msg.split("|")
-            return msg
+            return sanitize_commit_messages(msg)
         finally:
             sys.stderr = _stderr
             _devnull.close()
@@ -11,6 +11,7 @@ class CommitFlag(TypedDict):
     directory: bool
     files: List[str]
     push: bool
+    changelog: bool
     rawArgv: List[str]
 
 
@@ -52,6 +53,10 @@ def parse_arguments() -> CommitFlag:
         "--push", "-p", action="store_true",
         help="Push commits to remote after committing"
     )
+    parser.add_argument(
+        "--changelog", "-c", action="store_true",
+        help="Generate changelog file from changes"
+    )
     parser.add_argument(
         "rawArgv", nargs="*", help="Additional arguments for git commit"
     )
@@ -66,5 +71,6 @@ def parse_arguments() -> CommitFlag:
         directory=args.directory,
         files=args.files or [],
         push=args.push,
+        changelog=args.changelog,
         rawArgv=args.rawArgv,
     )
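The new flag wiring can be checked with a pared-down parser containing just the two flags relevant to changelog generation. The `--stageAll`/`-s` pairing here is inferred from the README's `devcommit -s -c` examples rather than shown in this hunk, so treat it as an assumption:

```python
import argparse

parser = argparse.ArgumentParser(prog="devcommit")
parser.add_argument("--stageAll", "-s", action="store_true",
                    help="Stage all changes before committing")
parser.add_argument("--changelog", "-c", action="store_true",
                    help="Generate changelog file from changes")

args = parser.parse_args(["-s", "-c"])
print(args.stageAll, args.changelog)  # → True True
```

With `action="store_true"` both flags default to `False`, so `main()` only takes the changelog branches when the user opts in.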
@@ -1,6 +1,6 @@
 [project]
 name = "devcommit"
-version = "0.1.5.0"
+version = "0.1.5.1"
 description = "AI-powered git commit message generator"
 readme = "README.md"
 license = {file = "COPYING"}
File without changes