git-commit-message 0.7.0__tar.gz → 0.8.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (20)
  1. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/PKG-INFO +95 -45
  2. git_commit_message-0.8.0/README.md +198 -0
  3. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/pyproject.toml +2 -1
  4. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/src/git_commit_message/_cli.py +59 -14
  5. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/src/git_commit_message/_gemini.py +6 -1
  6. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/src/git_commit_message/_gpt.py +7 -1
  7. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/src/git_commit_message/_llm.py +23 -5
  8. git_commit_message-0.8.0/src/git_commit_message/_ollama.py +122 -0
  9. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/src/git_commit_message.egg-info/PKG-INFO +95 -45
  10. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/src/git_commit_message.egg-info/SOURCES.txt +1 -0
  11. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/src/git_commit_message.egg-info/requires.txt +1 -0
  12. git_commit_message-0.7.0/README.md +0 -149
  13. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/UNLICENSE +0 -0
  14. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/setup.cfg +0 -0
  15. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/src/git_commit_message/__init__.py +0 -0
  16. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/src/git_commit_message/__main__.py +0 -0
  17. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/src/git_commit_message/_git.py +0 -0
  18. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/src/git_commit_message.egg-info/dependency_links.txt +0 -0
  19. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/src/git_commit_message.egg-info/entry_points.txt +0 -0
  20. {git_commit_message-0.7.0 → git_commit_message-0.8.0}/src/git_commit_message.egg-info/top_level.txt +0 -0
--- git_commit_message-0.7.0/PKG-INFO
+++ git_commit_message-0.8.0/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: git-commit-message
-Version: 0.7.0
+Version: 0.8.0
 Summary: Generate Git commit messages from staged changes using LLM
 Maintainer-email: Mina Her <minacle@live.com>
 License: This is free and unencumbered software released into the public domain.
@@ -45,16 +45,22 @@ Requires-Python: >=3.13
 Description-Content-Type: text/markdown
 Requires-Dist: babel>=2.17.0
 Requires-Dist: google-genai>=1.56.0
+Requires-Dist: ollama>=0.4.0
 Requires-Dist: openai>=2.6.1
 Requires-Dist: tiktoken>=0.12.0
 
 # git-commit-message
 
-Staged changes -> GPT commit message generator.
+Generate a commit message from your staged changes using OpenAI, Google Gemini, or Ollama.
 
 [![asciicast](https://asciinema.org/a/jk0phFqNnc5vaCiIZEYBwZOyN.svg)](https://asciinema.org/a/jk0phFqNnc5vaCiIZEYBwZOyN)
 
-## Install (PyPI)
+## Requirements
+
+- Python 3.13+
+- A Git repo with staged changes (`git add ...`)
+
+## Install
 
 Install the latest released version from PyPI:
 
@@ -78,19 +84,43 @@ Quick check:
 git-commit-message --help
 ```
 
-Set your API key (POSIX sh):
+## Setup
+
+### OpenAI
 
 ```sh
 export OPENAI_API_KEY="sk-..."
 ```
 
-Or for the Google provider:
+### Google Gemini
 
 ```sh
 export GOOGLE_API_KEY="..."
 ```
 
-Note (fish): In fish, set it as follows.
+### Ollama (local models)
+
+1. Install Ollama: https://ollama.ai
+2. Start the server:
+
+```sh
+ollama serve
+```
+
+3. Pull a model:
+
+```sh
+ollama pull mistral
+```
+
+Optional: set defaults:
+
+```sh
+export GIT_COMMIT_MESSAGE_PROVIDER=ollama
+export OLLAMA_MODEL=mistral
+```
+
+Note (fish):
 
 ```fish
 set -x OPENAI_API_KEY "sk-..."
@@ -104,96 +134,116 @@ python -m pip install -e .
 
 ## Usage
 
-- Print commit message only:
+Generate and print a commit message:
 
 ```sh
 git add -A
 git-commit-message "optional extra context about the change"
 ```
 
-- Force single-line subject only:
+Generate a single-line subject only:
 
 ```sh
 git-commit-message --one-line "optional context"
 ```
 
-- Select provider (default: openai):
+Select provider:
 
 ```sh
-git-commit-message --provider openai "optional context"
+# OpenAI (default)
+git-commit-message --provider openai
+
+# Google Gemini (via google-genai)
+git-commit-message --provider google
+
+# Ollama
+git-commit-message --provider ollama
 ```
 
-- Select provider (Google Gemini via google-genai):
+Commit immediately (optionally open editor):
 
 ```sh
-git-commit-message --provider google "optional context"
+git-commit-message --commit "refactor parser for speed"
+git-commit-message --commit --edit "refactor parser for speed"
 ```
 
-- Limit subject length (default 72):
+Limit subject length:
 
 ```sh
-git-commit-message --one-line --max-length 50 "optional context"
+git-commit-message --one-line --max-length 50
 ```
 
-- Chunk long diffs by token budget (0 = single chunk + summary, -1 = disable chunking):
+Chunk/summarise long diffs by token budget:
 
 ```sh
 # force a single summary pass over the whole diff (default)
-git-commit-message --chunk-tokens 0 "optional context"
+git-commit-message --chunk-tokens 0
 
 # chunk the diff into ~4000-token pieces before summarising
-git-commit-message --chunk-tokens 4000 "optional context"
+git-commit-message --chunk-tokens 4000
 
 # disable summarisation and use the legacy one-shot prompt
-git-commit-message --chunk-tokens -1 "optional context"
+git-commit-message --chunk-tokens -1
 ```
 
-- Commit immediately with editor:
+Select output language/locale (IETF language tag):
 
 ```sh
-git-commit-message --commit --edit "refactor parser for speed"
+git-commit-message --language en-US
+git-commit-message --language ko-KR
+git-commit-message --language ja-JP
 ```
 
-- Print debug info (prompt/response + token usage):
+Print debug info:
 
 ```sh
-git-commit-message --debug "optional context"
+git-commit-message --debug
 ```
 
-- Select output language/locale (default: en-GB):
+Configure Ollama host (if running on a different machine):
 
 ```sh
-# American English
-git-commit-message --language en-US "optional context"
+git-commit-message --provider ollama --host http://192.168.1.100:11434
+```
 
-# Korean
-git-commit-message --language ko-KR
+## Options
 
-# Japanese
-git-commit-message --language ja-JP
-```
+- `--provider {openai,google,ollama}`: provider to use (default: `openai`)
+- `--model MODEL`: model override (provider-specific)
+- `--language TAG`: output language/locale (default: `en-GB`)
+- `--one-line`: output subject only
+- `--max-length N`: max subject length (default: 72)
+- `--chunk-tokens N`: token budget per diff chunk (`0` = single summary pass, `-1` disables summarisation)
+- `--debug`: print request/response details
+- `--commit`: run `git commit -m <message>`
+- `--edit`: with `--commit`, open editor for final message
+- `--host URL`: host URL for providers like Ollama (default: `http://localhost:11434`)
+
+## Environment variables
 
-Notes:
+Required:
 
-- The model is instructed to write using the selected language/locale.
-- In multi-line mode, the only allowed label ("Rationale:") is also translated into the target language.
+- `OPENAI_API_KEY`: when provider is `openai`
+- `GOOGLE_API_KEY`: when provider is `google`
 
-Environment:
+Optional:
 
-- `OPENAI_API_KEY`: required when provider is `openai`
-- `GOOGLE_API_KEY`: required when provider is `google`
-- `GIT_COMMIT_MESSAGE_PROVIDER`: optional (default: `openai`). `--provider` overrides this value.
-- `GIT_COMMIT_MESSAGE_MODEL`: optional model override (defaults: `openai` -> `gpt-5-mini`, `google` -> `gemini-2.5-flash`)
-- `OPENAI_MODEL`: optional OpenAI-only model override
-- `GIT_COMMIT_MESSAGE_LANGUAGE`: optional (default: `en-GB`)
-- `GIT_COMMIT_MESSAGE_CHUNK_TOKENS`: optional token budget per diff chunk (default: 0 = single chunk + summary; -1 disables summarisation)
+- `GIT_COMMIT_MESSAGE_PROVIDER`: default provider (`openai` by default). `--provider` overrides this.
+- `GIT_COMMIT_MESSAGE_MODEL`: model override for any provider. `--model` overrides this.
+- `OPENAI_MODEL`: OpenAI-only model override (used if `--model`/`GIT_COMMIT_MESSAGE_MODEL` are not set)
+- `OLLAMA_MODEL`: Ollama-only model override (used if `--model`/`GIT_COMMIT_MESSAGE_MODEL` are not set)
+- `OLLAMA_HOST`: Ollama server URL (default: `http://localhost:11434`)
+- `GIT_COMMIT_MESSAGE_LANGUAGE`: default language/locale (default: `en-GB`)
+- `GIT_COMMIT_MESSAGE_CHUNK_TOKENS`: default chunk token budget (default: `0`)
 
-Notes:
+Default models (if not overridden):
 
-- If token counting fails for your provider while chunking, try `--chunk-tokens 0` (default) or `--chunk-tokens -1`.
+- OpenAI: `gpt-5-mini`
+- Google: `gemini-2.5-flash`
+- Ollama: `gpt-oss:20b`
 
-## AIgenerated code notice
+## AI-generated code notice
 
 Parts of this project were created with assistance from AI tools (e.g. large language models).
-All AIassisted contributions were reviewed and adapted by maintainers before inclusion.
+All AI-assisted contributions were reviewed and adapted by maintainers before inclusion.
 If you need provenance for specific changes, please refer to the Git history and commit messages.
--- /dev/null
+++ git_commit_message-0.8.0/README.md
@@ -0,0 +1,198 @@
+# git-commit-message
+
+Generate a commit message from your staged changes using OpenAI, Google Gemini, or Ollama.
+
+[![asciicast](https://asciinema.org/a/jk0phFqNnc5vaCiIZEYBwZOyN.svg)](https://asciinema.org/a/jk0phFqNnc5vaCiIZEYBwZOyN)
+
+## Requirements
+
+- Python 3.13+
+- A Git repo with staged changes (`git add ...`)
+
+## Install
+
+Install the latest released version from PyPI:
+
+```sh
+# User environment (recommended)
+python -m pip install --user git-commit-message
+
+# Or system/virtualenv as appropriate
+python -m pip install git-commit-message
+
+# Or with pipx for isolated CLI installs
+pipx install git-commit-message
+
+# Upgrade to the newest version
+python -m pip install --upgrade git-commit-message
+```
+
+Quick check:
+
+```sh
+git-commit-message --help
+```
+
+## Setup
+
+### OpenAI
+
+```sh
+export OPENAI_API_KEY="sk-..."
+```
+
+### Google Gemini
+
+```sh
+export GOOGLE_API_KEY="..."
+```
+
+### Ollama (local models)
+
+1. Install Ollama: https://ollama.ai
+2. Start the server:
+
+```sh
+ollama serve
+```
+
+3. Pull a model:
+
+```sh
+ollama pull mistral
+```
+
+Optional: set defaults:
+
+```sh
+export GIT_COMMIT_MESSAGE_PROVIDER=ollama
+export OLLAMA_MODEL=mistral
+```
+
+Note (fish):
+
+```fish
+set -x OPENAI_API_KEY "sk-..."
+```
+
+## Install (editable)
+
+```sh
+python -m pip install -e .
+```
+
+## Usage
+
+Generate and print a commit message:
+
+```sh
+git add -A
+git-commit-message "optional extra context about the change"
+```
+
+Generate a single-line subject only:
+
+```sh
+git-commit-message --one-line "optional context"
+```
+
+Select provider:
+
+```sh
+# OpenAI (default)
+git-commit-message --provider openai
+
+# Google Gemini (via google-genai)
+git-commit-message --provider google
+
+# Ollama
+git-commit-message --provider ollama
+```
+
+Commit immediately (optionally open editor):
+
+```sh
+git-commit-message --commit "refactor parser for speed"
+git-commit-message --commit --edit "refactor parser for speed"
+```
+
+Limit subject length:
+
+```sh
+git-commit-message --one-line --max-length 50
+```
+
+Chunk/summarise long diffs by token budget:
+
+```sh
+# force a single summary pass over the whole diff (default)
+git-commit-message --chunk-tokens 0
+
+# chunk the diff into ~4000-token pieces before summarising
+git-commit-message --chunk-tokens 4000
+
+# disable summarisation and use the legacy one-shot prompt
+git-commit-message --chunk-tokens -1
+```
+
+Select output language/locale (IETF language tag):
+
+```sh
+git-commit-message --language en-US
+git-commit-message --language ko-KR
+git-commit-message --language ja-JP
+```
+
+Print debug info:
+
+```sh
+git-commit-message --debug
+```
+
+Configure Ollama host (if running on a different machine):
+
+```sh
+git-commit-message --provider ollama --host http://192.168.1.100:11434
+```
+
+## Options
+
+- `--provider {openai,google,ollama}`: provider to use (default: `openai`)
+- `--model MODEL`: model override (provider-specific)
+- `--language TAG`: output language/locale (default: `en-GB`)
+- `--one-line`: output subject only
+- `--max-length N`: max subject length (default: 72)
+- `--chunk-tokens N`: token budget per diff chunk (`0` = single summary pass, `-1` disables summarisation)
+- `--debug`: print request/response details
+- `--commit`: run `git commit -m <message>`
+- `--edit`: with `--commit`, open editor for final message
+- `--host URL`: host URL for providers like Ollama (default: `http://localhost:11434`)
+
+## Environment variables
+
+Required:
+
+- `OPENAI_API_KEY`: when provider is `openai`
+- `GOOGLE_API_KEY`: when provider is `google`
+
+Optional:
+
+- `GIT_COMMIT_MESSAGE_PROVIDER`: default provider (`openai` by default). `--provider` overrides this.
+- `GIT_COMMIT_MESSAGE_MODEL`: model override for any provider. `--model` overrides this.
+- `OPENAI_MODEL`: OpenAI-only model override (used if `--model`/`GIT_COMMIT_MESSAGE_MODEL` are not set)
+- `OLLAMA_MODEL`: Ollama-only model override (used if `--model`/`GIT_COMMIT_MESSAGE_MODEL` are not set)
+- `OLLAMA_HOST`: Ollama server URL (default: `http://localhost:11434`)
+- `GIT_COMMIT_MESSAGE_LANGUAGE`: default language/locale (default: `en-GB`)
+- `GIT_COMMIT_MESSAGE_CHUNK_TOKENS`: default chunk token budget (default: `0`)
+
+Default models (if not overridden):
+
+- OpenAI: `gpt-5-mini`
+- Google: `gemini-2.5-flash`
+- Ollama: `gpt-oss:20b`
+
+## AI-generated code notice
+
+Parts of this project were created with assistance from AI tools (e.g. large language models).
+All AI-assisted contributions were reviewed and adapted by maintainers before inclusion.
+If you need provenance for specific changes, please refer to the Git history and commit messages.
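The `--chunk-tokens` budget in the README above splits a long diff into roughly budget-sized pieces before summarising each one. A minimal line-based sketch of that idea — this is an illustrative stand-in, not the package's splitter (which lives in `_llm.py` and counts tokens with the active provider); the token counter here is pluggable, and a plain whitespace count works for demonstration:

```python
from collections.abc import Callable


def chunk_by_token_budget(
    text: str,
    budget: int,
    count_tokens: Callable[[str], int],
) -> list[str]:
    """Split text on line boundaries so each chunk stays near `budget` tokens.

    A non-positive budget (the tool's 0/-1 modes) means no splitting here;
    the real tool additionally distinguishes 0 (summary pass) from -1
    (legacy one-shot prompt) upstream.
    """
    if budget <= 0:
        return [text]
    chunks: list[str] = []
    current: list[str] = []
    current_tokens = 0
    for line in text.splitlines(keepends=True):
        line_tokens = count_tokens(line)
        if current and current_tokens + line_tokens > budget:
            chunks.append("".join(current))
            current, current_tokens = [], 0
        current.append(line)
        current_tokens += line_tokens
    if current:
        chunks.append("".join(current))
    return chunks
```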
--- git_commit_message-0.7.0/pyproject.toml
+++ git_commit_message-0.8.0/pyproject.toml
@@ -1,12 +1,13 @@
 [project]
 name = "git-commit-message"
-version = "0.7.0"
+version = "0.8.0"
 description = "Generate Git commit messages from staged changes using LLM"
 readme = "README.md"
 requires-python = ">=3.13"
 dependencies = [
     "babel>=2.17.0",
     "google-genai>=1.56.0",
+    "ollama>=0.4.0",
     "openai>=2.6.1",
     "tiktoken>=0.12.0",
 ]
--- git_commit_message-0.7.0/src/git_commit_message/_cli.py
+++ git_commit_message-0.8.0/src/git_commit_message/_cli.py
@@ -27,6 +27,38 @@ from ._llm import (
 )
 
 
+class CliArgs(Namespace):
+    __slots__ = (
+        "description",
+        "commit",
+        "edit",
+        "provider",
+        "model",
+        "language",
+        "debug",
+        "one_line",
+        "max_length",
+        "chunk_tokens",
+        "host",
+    )
+
+    def __init__(
+        self,
+        /,
+    ) -> None:
+        self.description: str | None = None
+        self.commit: bool = False
+        self.edit: bool = False
+        self.provider: str | None = None
+        self.model: str | None = None
+        self.language: str | None = None
+        self.debug: bool = False
+        self.one_line: bool = False
+        self.max_length: int | None = None
+        self.chunk_tokens: int | None = None
+        self.host: str | None = None
+
+
 def _env_chunk_tokens_default() -> int | None:
     """Return chunk token default from env if valid, else None."""
 
@@ -87,7 +119,7 @@ def _build_parser() -> ArgumentParser:
         "--model",
         default=None,
         help=(
-            "Model name to use. If unspecified, uses GIT_COMMIT_MESSAGE_MODEL or a provider-specific default (openai: gpt-5-mini; google: gemini-2.5-flash)."
+            "Model name to use. If unspecified, uses GIT_COMMIT_MESSAGE_MODEL or a provider-specific default (openai: gpt-5-mini; google: gemini-2.5-flash; ollama: gpt-oss:20b)."
         ),
     )
 
@@ -134,11 +166,21 @@ def _build_parser() -> ArgumentParser:
         ),
     )
 
+    parser.add_argument(
+        "--host",
+        dest="host",
+        default=None,
+        help=(
+            "Host URL for API providers like Ollama (default: http://localhost:11434). "
+            "You may also set OLLAMA_HOST for Ollama."
+        ),
+    )
+
     return parser
 
 
 def _run(
-    args: Namespace,
+    args: CliArgs,
     /,
 ) -> int:
     """Main execution logic.
@@ -177,11 +219,12 @@ def _run(
                 diff_text,
                 hint,
                 args.model,
-                getattr(args, "one_line", False),
-                getattr(args, "max_length", None),
-                getattr(args, "language", None),
+                args.one_line,
+                args.max_length,
+                args.language,
                 chunk_tokens,
-                getattr(args, "provider", None),
+                args.provider,
+                args.host,
             )
             message = result.message
         else:
@@ -189,11 +232,12 @@ def _run(
                 diff_text,
                 hint,
                 args.model,
-                getattr(args, "one_line", False),
-                getattr(args, "max_length", None),
-                getattr(args, "language", None),
+                args.one_line,
+                args.max_length,
+                args.language,
                 chunk_tokens,
-                getattr(args, "provider", None),
+                args.provider,
+                args.host,
             )
     except UnsupportedProviderError as exc:
         print(str(exc), file=stderr)
@@ -203,7 +247,7 @@ def _run(
         return 3
 
     # Option: force single-line message
-    if getattr(args, "one_line", False):
+    if args.one_line:
         # Use the first non-empty line only
         for line in (ln.strip() for ln in message.splitlines()):
             if line:
@@ -218,7 +262,7 @@ def _run(
         print(f"==== {result.provider} Usage ====")
         print(f"provider: {result.provider}")
         print(f"model: {result.model}")
-        print(f"response_id: {getattr(result, 'response_id', '(n/a)')}")
+        print(f"response_id: {result.response_id or '(n/a)'}")
         if result.total_tokens is not None:
             print(
                 f"tokens: prompt={result.prompt_tokens} completion={result.completion_tokens} total={result.total_tokens}"
@@ -240,7 +284,7 @@ def _run(
         print(f"==== {result.provider} Usage ====")
         print(f"provider: {result.provider}")
         print(f"model: {result.model}")
-        print(f"response_id: {getattr(result, 'response_id', '(n/a)')}")
+        print(f"response_id: {result.response_id or '(n/a)'}")
         if result.total_tokens is not None:
             print(
                 f"tokens: prompt={result.prompt_tokens} completion={result.completion_tokens} total={result.total_tokens}"
@@ -269,7 +313,8 @@ def main() -> None:
     """
 
     parser: Final[ArgumentParser] = _build_parser()
-    args: Namespace = parser.parse_args()
+    args = CliArgs()
+    parser.parse_args(namespace=args)
 
     if args.edit and not args.commit:
         print("'--edit' must be used together with '--commit'.", file=stderr)
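The `CliArgs` change above replaces `getattr(args, ..., default)` lookups with a typed `Namespace` subclass whose attributes are declared up front and filled in via `parse_args(namespace=...)`. A trimmed-down sketch of the pattern, showing only three of the tool's flags:

```python
from argparse import ArgumentParser, Namespace


class Args(Namespace):
    def __init__(self) -> None:
        # Declared defaults; argparse fills in whatever flags are given.
        self.provider: str | None = None
        self.one_line: bool = False
        self.max_length: int | None = None


parser = ArgumentParser()
parser.add_argument("--provider", default=None)
parser.add_argument("--one-line", dest="one_line", action="store_true")
parser.add_argument("--max-length", dest="max_length", type=int, default=None)

args = Args()
parser.parse_args(["--provider", "ollama", "--one-line"], namespace=args)
```

Because every attribute exists before parsing, downstream code can use plain attribute access (`args.one_line`) instead of defensive `getattr` calls.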
--- git_commit_message-0.7.0/src/git_commit_message/_gemini.py
+++ git_commit_message-0.8.0/src/git_commit_message/_gemini.py
@@ -7,6 +7,7 @@ Provider-agnostic orchestration/prompt logic lives in `_llm.py`.
 from __future__ import annotations
 
 from os import environ
+from typing import ClassVar
 
 from google import genai
 from google.genai import types
@@ -15,7 +16,11 @@ from ._llm import LLMTextResult, LLMUsage
 
 
 class GoogleGenAIProvider:
-    name = "google"
+    __slots__ = (
+        "_client",
+    )
+
+    name: ClassVar[str] = "google"
 
     def __init__(
         self,
--- git_commit_message-0.7.0/src/git_commit_message/_gpt.py
+++ git_commit_message-0.8.0/src/git_commit_message/_gpt.py
@@ -9,6 +9,8 @@ from __future__ import annotations
 from openai import OpenAI
 from openai.types.responses import Response
 from os import environ
+from typing import ClassVar
+
 from tiktoken import Encoding, encoding_for_model, get_encoding
 from ._llm import LLMTextResult, LLMUsage
 
@@ -24,7 +26,11 @@ def _encoding_for_model(
 
 
 class OpenAIResponsesProvider:
-    name = "openai"
+    __slots__ = (
+        "_client",
+    )
+
+    name: ClassVar[str] = "openai"
 
     def __init__(
         self,
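Both providers now pin `name` as a `ClassVar` and restrict instance attributes with `__slots__`: the class-level `name` stays readable on instances, while accidental instance attributes are rejected. A minimal sketch of the pattern (`Provider` and `_client` are placeholder names here):

```python
from typing import ClassVar


class Provider:
    __slots__ = ("_client",)

    name: ClassVar[str] = "example"

    def __init__(self) -> None:
        # Only names listed in __slots__ may be set on instances.
        self._client = object()
```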
--- git_commit_message-0.7.0/src/git_commit_message/_llm.py
+++ git_commit_message-0.8.0/src/git_commit_message/_llm.py
@@ -12,16 +12,19 @@ from __future__ import annotations
 
 from babel import Locale
 from os import environ
-from typing import Final, Protocol
+from typing import ClassVar, Final, Protocol
 
 
 _DEFAULT_PROVIDER: Final[str] = "openai"
 _DEFAULT_MODEL_OPENAI: Final[str] = "gpt-5-mini"
 _DEFAULT_MODEL_GOOGLE: Final[str] = "gemini-2.5-flash"
+_DEFAULT_MODEL_OLLAMA: Final[str] = "gpt-oss:20b"
 _DEFAULT_LANGUAGE: Final[str] = "en-GB"
 
 
 class UnsupportedProviderError(RuntimeError):
+    __slots__ = ()
+
     pass
 
 
@@ -66,7 +69,9 @@ class LLMTextResult:
 
 
 class CommitMessageProvider(Protocol):
-    name: str
+    __slots__ = ()
+
+    name: ClassVar[str]
 
     def generate_text(
         self,
@@ -147,6 +152,9 @@ def _resolve_model(
     if provider_name == "google":
         default_model = _DEFAULT_MODEL_GOOGLE
        provider_model = None
+    elif provider_name == "ollama":
+        default_model = _DEFAULT_MODEL_OLLAMA
+        provider_model = environ.get("OLLAMA_MODEL")
     else:
         default_model = _DEFAULT_MODEL_OPENAI
         provider_model = environ.get("OPENAI_MODEL")
@@ -164,6 +172,8 @@ def _resolve_language(
 def get_provider(
     provider: str | None,
     /,
+    *,
+    host: str | None = None,
 ) -> CommitMessageProvider:
     name = _resolve_provider(provider)
 
@@ -179,8 +189,14 @@ def get_provider(
 
         return GoogleGenAIProvider()
 
+    if name == "ollama":
+        # Local import to avoid import cycles: providers may import shared types from this module.
+        from ._ollama import OllamaProvider
+
+        return OllamaProvider(host=host)
+
     raise UnsupportedProviderError(
-        f"Unsupported provider: {name}. Supported providers: openai, google"
+        f"Unsupported provider: {name}. Supported providers: openai, google, ollama"
     )
 
 
@@ -459,13 +475,14 @@ def generate_commit_message(
     language: str | None = None,
     chunk_tokens: int | None = 0,
     provider: str | None = None,
+    host: str | None = None,
     /,
 ) -> str:
     chosen_provider = _resolve_provider(provider)
     chosen_model = _resolve_model(model, chosen_provider)
     chosen_language = _resolve_language(language)
 
-    llm = get_provider(chosen_provider)
+    llm = get_provider(chosen_provider, host=host)
 
     normalized_chunk_tokens = 0 if chunk_tokens is None else chunk_tokens
 
@@ -513,13 +530,14 @@ def generate_commit_message_with_info(
     language: str | None = None,
     chunk_tokens: int | None = 0,
     provider: str | None = None,
+    host: str | None = None,
     /,
 ) -> CommitMessageResult:
     chosen_provider = _resolve_provider(provider)
     chosen_model = _resolve_model(model, chosen_provider)
     chosen_language = _resolve_language(language)
 
-    llm = get_provider(chosen_provider)
+    llm = get_provider(chosen_provider, host=host)
 
     normalized_chunk_tokens = 0 if chunk_tokens is None else chunk_tokens
 
--- /dev/null
+++ git_commit_message-0.8.0/src/git_commit_message/_ollama.py
@@ -0,0 +1,122 @@
+"""Ollama provider implementation.
+
+Mirrors the Gemini provider structure: a single provider class that exposes
+`generate_text` and `count_tokens`. Provider-agnostic orchestration lives in
+`_llm.py`.
+"""
+
+from __future__ import annotations
+
+from os import environ
+from typing import ClassVar, Final
+
+from ollama import Client, ResponseError
+from tiktoken import Encoding, get_encoding
+
+from ._llm import LLMTextResult, LLMUsage
+
+
+_DEFAULT_OLLAMA_HOST: Final[str] = "http://localhost:11434"
+
+
+def _resolve_ollama_host(
+    host: str | None,
+    /,
+) -> str:
+    """Resolve the Ollama host URL from arg, env, or default."""
+
+    return host or environ.get("OLLAMA_HOST") or _DEFAULT_OLLAMA_HOST
+
+
+def _get_encoding() -> Encoding:
+    """Get a fallback encoding for token counting."""
+
+    try:
+        return get_encoding("cl100k_base")
+    except Exception:
+        return get_encoding("gpt2")
+
+
+class OllamaProvider:
+    """Ollama provider implementation for the LLM protocol."""
+
+    __slots__ = (
+        "_host",
+        "_client",
+    )
+
+    name: ClassVar[str] = "ollama"
+
+    def __init__(
+        self,
+        /,
+        *,
+        host: str | None = None,
+    ) -> None:
+        self._host = _resolve_ollama_host(host)
+        self._client = Client(host=self._host)
+
+    def generate_text(
+        self,
+        /,
+        *,
+        model: str,
+        instructions: str,
+        user_text: str,
+    ) -> LLMTextResult:
+        """Generate text using Ollama (non-streaming)."""
+
+        messages = [
+            {"role": "system", "content": instructions},
+            {"role": "user", "content": user_text},
+        ]
+
+        try:
+            response = self._client.chat(model=model, messages=messages)
+        except ResponseError as exc:
+            raise RuntimeError(
+                f"Ollama API error: {exc}"
+            ) from exc
+        except Exception as exc:
+            raise RuntimeError(
+                f"Failed to connect to Ollama at {self._host}. Make sure Ollama is running: {exc}"
+            ) from exc
+
+        response_text = response.message.content or ""
+
+        # Extract token usage if available
+        prompt_tokens: int | None = None
+        completion_tokens: int | None = None
+        total_tokens: int | None = None
+
+        if hasattr(response, "prompt_eval_count") and response.prompt_eval_count:
+            prompt_tokens = response.prompt_eval_count
+        if hasattr(response, "eval_count") and response.eval_count:
+            completion_tokens = response.eval_count
+        if prompt_tokens is not None and completion_tokens is not None:
+            total_tokens = prompt_tokens + completion_tokens
+
+        return LLMTextResult(
+            text=response_text.strip(),
+            response_id=None,
+            usage=LLMUsage(
+                prompt_tokens=prompt_tokens,
+                completion_tokens=completion_tokens,
+                total_tokens=total_tokens,
+            ),
+        )
+
+    def count_tokens(
+        self,
+        /,
+        *,
+        model: str,
+        text: str,
+    ) -> int:
+        """Approximate token count using tiktoken; fallback to whitespace split."""
+
+        try:
+            encoding = _get_encoding()
+            return len(encoding.encode(text))
+        except Exception:
+            return len(text.split())
@@ -1,6 +1,6 @@
1
1
  Metadata-Version: 2.4
2
2
  Name: git-commit-message
3
- Version: 0.7.0
3
+ Version: 0.8.0
4
4
  Summary: Generate Git commit messages from staged changes using LLM
5
5
  Maintainer-email: Mina Her <minacle@live.com>
6
6
  License: This is free and unencumbered software released into the public domain.
@@ -45,16 +45,22 @@ Requires-Python: >=3.13
45
45
  Description-Content-Type: text/markdown
46
46
  Requires-Dist: babel>=2.17.0
47
47
  Requires-Dist: google-genai>=1.56.0
48
+ Requires-Dist: ollama>=0.4.0
48
49
  Requires-Dist: openai>=2.6.1
49
50
  Requires-Dist: tiktoken>=0.12.0
50
51
 
51
52
  # git-commit-message
52
53
 
53
- Staged changes -> GPT commit message generator.
54
+ Generate a commit message from your staged changes using OpenAI, Google Gemini, or Ollama.
54
55
 
55
56
  [![asciicast](https://asciinema.org/a/jk0phFqNnc5vaCiIZEYBwZOyN.svg)](https://asciinema.org/a/jk0phFqNnc5vaCiIZEYBwZOyN)
56
57
 
57
- ## Install (PyPI)
58
+ ## Requirements
59
+
60
+ - Python 3.13+
61
+ - A Git repo with staged changes (`git add ...`)
62
+
63
+ ## Install
58
64
 
59
65
  Install the latest released version from PyPI:
60
66
 
@@ -78,19 +84,43 @@ Quick check:
78
84
  git-commit-message --help
79
85
  ```
80
86
 
81
- Set your API key (POSIX sh):
87
+ ## Setup
88
+
89
+ ### OpenAI
82
90
 
83
91
  ```sh
84
92
  export OPENAI_API_KEY="sk-..."
85
93
  ```
86
94
 
87
- Or for the Google provider:
95
+ ### Google Gemini
88
96
 
89
97
  ```sh
90
98
  export GOOGLE_API_KEY="..."
91
99
  ```
92
100
 
93
- Note (fish): In fish, set it as follows.
101
+ ### Ollama (local models)
102
+
103
+ 1. Install Ollama: https://ollama.ai
104
+ 2. Start the server:
105
+
106
+ ```sh
107
+ ollama serve
108
+ ```
109
+
110
+ 3. Pull a model:
111
+
112
+ ```sh
113
+ ollama pull mistral
114
+ ```
115
+
116
+ Optional: set defaults:
117
+
118
+ ```sh
119
+ export GIT_COMMIT_MESSAGE_PROVIDER=ollama
120
+ export OLLAMA_MODEL=mistral
121
+ ```
122
+
123
+ Note (fish):
94
124
 
95
125
  ```fish
96
126
  set -x OPENAI_API_KEY "sk-..."
@@ -104,96 +134,116 @@ python -m pip install -e .

  ## Usage

- - Print commit message only:
+ Generate and print a commit message:

  ```sh
  git add -A
  git-commit-message "optional extra context about the change"
  ```

- - Force single-line subject only:
+ Generate a single-line subject only:

  ```sh
  git-commit-message --one-line "optional context"
  ```

- - Select provider (default: openai):
+ Select provider:

  ```sh
- git-commit-message --provider openai "optional context"
+ # OpenAI (default)
+ git-commit-message --provider openai
+
+ # Google Gemini (via google-genai)
+ git-commit-message --provider google
+
+ # Ollama
+ git-commit-message --provider ollama
  ```

- - Select provider (Google Gemini via google-genai):
+ Commit immediately (optionally open editor):

  ```sh
- git-commit-message --provider google "optional context"
+ git-commit-message --commit "refactor parser for speed"
+ git-commit-message --commit --edit "refactor parser for speed"
  ```
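As a sketch of what the `--commit` flow amounts to (illustrative only, not the project's actual code): the generated message is passed to `git commit -m`, and `--edit` additionally asks git to open the configured editor with that message pre-filled.

```python
import subprocess

def build_commit_argv(message: str, edit: bool = False) -> list[str]:
    """Build the git invocation used by --commit (and optional --edit)."""
    argv = ["git", "commit", "-m", message]
    if edit:
        # git's --edit opens $GIT_EDITOR with the -m message pre-filled.
        argv.append("--edit")
    return argv

def run_commit(message: str, edit: bool = False) -> int:
    """Run the commit and return git's exit code."""
    return subprocess.run(build_commit_argv(message, edit)).returncode
```

Because the message is passed as a single argv element, quoting and multi-line bodies survive intact without shell escaping.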

- - Limit subject length (default 72):
+ Limit subject length:

  ```sh
- git-commit-message --one-line --max-length 50 "optional context"
+ git-commit-message --one-line --max-length 50
  ```

- - Chunk long diffs by token budget (0 = single chunk + summary, -1 = disable chunking):
+ Chunk/summarise long diffs by token budget:

  ```sh
  # force a single summary pass over the whole diff (default)
- git-commit-message --chunk-tokens 0 "optional context"
+ git-commit-message --chunk-tokens 0

  # chunk the diff into ~4000-token pieces before summarising
- git-commit-message --chunk-tokens 4000 "optional context"
+ git-commit-message --chunk-tokens 4000

  # disable summarisation and use the legacy one-shot prompt
- git-commit-message --chunk-tokens -1 "optional context"
+ git-commit-message --chunk-tokens -1
  ```
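The chunking step above can be sketched as greedy packing of diff lines into pieces that fit a token budget. This is a minimal illustration, not the project's implementation: token counting here is approximated by whitespace splitting, whereas the real tool uses provider tokenizers (tiktoken is a declared dependency).

```python
def chunk_diff(diff: str, chunk_tokens: int) -> list[str]:
    """Greedily pack diff lines into chunks of at most ~chunk_tokens tokens.

    chunk_tokens <= 0 mirrors the CLI semantics: no splitting at this stage
    (0 = single summary pass, -1 = no summarisation at all).
    """
    if chunk_tokens <= 0:
        return [diff]
    chunks: list[str] = []
    current: list[str] = []
    count = 0
    for line in diff.splitlines():
        tokens = max(1, len(line.split()))  # crude token estimate
        if current and count + tokens > chunk_tokens:
            chunks.append("\n".join(current))
            current, count = [], 0
        current.append(line)
        count += tokens
    if current:
        chunks.append("\n".join(current))
    return chunks
```

Each chunk would then be summarised independently, and the summaries combined into the final prompt.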

- - Commit immediately with editor:
+ Select output language/locale (IETF language tag):

  ```sh
- git-commit-message --commit --edit "refactor parser for speed"
+ git-commit-message --language en-US
+ git-commit-message --language ko-KR
+ git-commit-message --language ja-JP
  ```

- - Print debug info (prompt/response + token usage):
+ Print debug info:

  ```sh
- git-commit-message --debug "optional context"
+ git-commit-message --debug
  ```

- - Select output language/locale (default: en-GB):
+ Configure Ollama host (if running on a different machine):

  ```sh
- # American English
- git-commit-message --language en-US "optional context"
+ git-commit-message --provider ollama --host http://192.168.1.100:11434
+ ```

- # Korean
- git-commit-message --language ko-KR
+ ## Options

- # Japanese
- git-commit-message --language ja-JP
- ```
+ - `--provider {openai,google,ollama}`: provider to use (default: `openai`)
+ - `--model MODEL`: model override (provider-specific)
+ - `--language TAG`: output language/locale (default: `en-GB`)
+ - `--one-line`: output subject only
+ - `--max-length N`: max subject length (default: 72)
+ - `--chunk-tokens N`: token budget per diff chunk (`0` = single summary pass, `-1` disables summarisation)
+ - `--debug`: print request/response details
+ - `--commit`: run `git commit -m <message>`
+ - `--edit`: with `--commit`, open editor for final message
+ - `--host URL`: host URL for providers like Ollama (default: `http://localhost:11434`)
+
+ ## Environment variables

- Notes:
+ Required:

- - The model is instructed to write using the selected language/locale.
- - In multi-line mode, the only allowed label ("Rationale:") is also translated into the target language.
+ - `OPENAI_API_KEY`: when provider is `openai`
+ - `GOOGLE_API_KEY`: when provider is `google`

- Environment:
+ Optional:

- - `OPENAI_API_KEY`: required when provider is `openai`
- - `GOOGLE_API_KEY`: required when provider is `google`
- - `GIT_COMMIT_MESSAGE_PROVIDER`: optional (default: `openai`). `--provider` overrides this value.
- - `GIT_COMMIT_MESSAGE_MODEL`: optional model override (defaults: `openai` -> `gpt-5-mini`, `google` -> `gemini-2.5-flash`)
- - `OPENAI_MODEL`: optional OpenAI-only model override
- - `GIT_COMMIT_MESSAGE_LANGUAGE`: optional (default: `en-GB`)
- - `GIT_COMMIT_MESSAGE_CHUNK_TOKENS`: optional token budget per diff chunk (default: 0 = single chunk + summary; -1 disables summarisation)
+ - `GIT_COMMIT_MESSAGE_PROVIDER`: default provider (`openai` by default). `--provider` overrides this.
+ - `GIT_COMMIT_MESSAGE_MODEL`: model override for any provider. `--model` overrides this.
+ - `OPENAI_MODEL`: OpenAI-only model override (used if `--model`/`GIT_COMMIT_MESSAGE_MODEL` are not set)
+ - `OLLAMA_MODEL`: Ollama-only model override (used if `--model`/`GIT_COMMIT_MESSAGE_MODEL` are not set)
+ - `OLLAMA_HOST`: Ollama server URL (default: `http://localhost:11434`)
+ - `GIT_COMMIT_MESSAGE_LANGUAGE`: default language/locale (default: `en-GB`)
+ - `GIT_COMMIT_MESSAGE_CHUNK_TOKENS`: default chunk token budget (default: `0`)
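The precedence implied by these variables can be sketched as a small resolver (illustrative only; the env mapping is passed in explicitly so the rule is easy to see): `--model` wins, then `GIT_COMMIT_MESSAGE_MODEL`, then the provider-specific variable, then the built-in default.

```python
DEFAULT_MODELS = {
    "openai": "gpt-5-mini",
    "google": "gemini-2.5-flash",
    "ollama": "gpt-oss:20b",
}
PROVIDER_ENV = {"openai": "OPENAI_MODEL", "ollama": "OLLAMA_MODEL"}

def resolve_model(provider: str, cli_model: "str | None", env: dict) -> str:
    """Resolve the model name: CLI flag > generic env > provider env > default."""
    if cli_model:
        return cli_model
    if env.get("GIT_COMMIT_MESSAGE_MODEL"):
        return env["GIT_COMMIT_MESSAGE_MODEL"]
    provider_var = PROVIDER_ENV.get(provider)
    if provider_var and env.get(provider_var):
        return env[provider_var]
    return DEFAULT_MODELS[provider]
```

In the real tool the mapping would come from `os.environ`; passing it as a parameter keeps the sketch testable.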

- Notes:
+ Default models (if not overridden):

- - If token counting fails for your provider while chunking, try `--chunk-tokens 0` (default) or `--chunk-tokens -1`.
+ - OpenAI: `gpt-5-mini`
+ - Google: `gemini-2.5-flash`
+ - Ollama: `gpt-oss:20b`

- ## AI‑generated code notice
+ ## AI-generated code notice

  Parts of this project were created with assistance from AI tools (e.g. large language models).
- All AI‑assisted contributions were reviewed and adapted by maintainers before inclusion.
+ All AI-assisted contributions were reviewed and adapted by maintainers before inclusion.
  If you need provenance for specific changes, please refer to the Git history and commit messages.
@@ -8,6 +8,7 @@ src/git_commit_message/_gemini.py
  src/git_commit_message/_git.py
  src/git_commit_message/_gpt.py
  src/git_commit_message/_llm.py
+ src/git_commit_message/_ollama.py
  src/git_commit_message.egg-info/PKG-INFO
  src/git_commit_message.egg-info/SOURCES.txt
  src/git_commit_message.egg-info/dependency_links.txt
@@ -1,4 +1,5 @@
  babel>=2.17.0
  google-genai>=1.56.0
+ ollama>=0.4.0
  openai>=2.6.1
  tiktoken>=0.12.0
@@ -1,149 +0,0 @@
- # git-commit-message
-
- Staged changes -> GPT commit message generator.
-
- [![asciicast](https://asciinema.org/a/jk0phFqNnc5vaCiIZEYBwZOyN.svg)](https://asciinema.org/a/jk0phFqNnc5vaCiIZEYBwZOyN)
-
- ## Install (PyPI)
-
- Install the latest released version from PyPI:
-
- ```sh
- # User environment (recommended)
- python -m pip install --user git-commit-message
-
- # Or system/virtualenv as appropriate
- python -m pip install git-commit-message
-
- # Or with pipx for isolated CLI installs
- pipx install git-commit-message
-
- # Upgrade to the newest version
- python -m pip install --upgrade git-commit-message
- ```
-
- Quick check:
-
- ```sh
- git-commit-message --help
- ```
-
- Set your API key (POSIX sh):
-
- ```sh
- export OPENAI_API_KEY="sk-..."
- ```
-
- Or for the Google provider:
-
- ```sh
- export GOOGLE_API_KEY="..."
- ```
-
- Note (fish): In fish, set it as follows.
-
- ```fish
- set -x OPENAI_API_KEY "sk-..."
- ```
-
- ## Install (editable)
-
- ```sh
- python -m pip install -e .
- ```
-
- ## Usage
-
- - Print commit message only:
-
- ```sh
- git add -A
- git-commit-message "optional extra context about the change"
- ```
-
- - Force single-line subject only:
-
- ```sh
- git-commit-message --one-line "optional context"
- ```
-
- - Select provider (default: openai):
-
- ```sh
- git-commit-message --provider openai "optional context"
- ```
-
- - Select provider (Google Gemini via google-genai):
-
- ```sh
- git-commit-message --provider google "optional context"
- ```
-
- - Limit subject length (default 72):
-
- ```sh
- git-commit-message --one-line --max-length 50 "optional context"
- ```
-
- - Chunk long diffs by token budget (0 = single chunk + summary, -1 = disable chunking):
-
- ```sh
- # force a single summary pass over the whole diff (default)
- git-commit-message --chunk-tokens 0 "optional context"
-
- # chunk the diff into ~4000-token pieces before summarising
- git-commit-message --chunk-tokens 4000 "optional context"
-
- # disable summarisation and use the legacy one-shot prompt
- git-commit-message --chunk-tokens -1 "optional context"
- ```
-
- - Commit immediately with editor:
-
- ```sh
- git-commit-message --commit --edit "refactor parser for speed"
- ```
-
- - Print debug info (prompt/response + token usage):
-
- ```sh
- git-commit-message --debug "optional context"
- ```
-
- - Select output language/locale (default: en-GB):
-
- ```sh
- # American English
- git-commit-message --language en-US "optional context"
-
- # Korean
- git-commit-message --language ko-KR
-
- # Japanese
- git-commit-message --language ja-JP
- ```
-
- Notes:
-
- - The model is instructed to write using the selected language/locale.
- - In multi-line mode, the only allowed label ("Rationale:") is also translated into the target language.
-
- Environment:
-
- - `OPENAI_API_KEY`: required when provider is `openai`
- - `GOOGLE_API_KEY`: required when provider is `google`
- - `GIT_COMMIT_MESSAGE_PROVIDER`: optional (default: `openai`). `--provider` overrides this value.
- - `GIT_COMMIT_MESSAGE_MODEL`: optional model override (defaults: `openai` -> `gpt-5-mini`, `google` -> `gemini-2.5-flash`)
- - `OPENAI_MODEL`: optional OpenAI-only model override
- - `GIT_COMMIT_MESSAGE_LANGUAGE`: optional (default: `en-GB`)
- - `GIT_COMMIT_MESSAGE_CHUNK_TOKENS`: optional token budget per diff chunk (default: 0 = single chunk + summary; -1 disables summarisation)
-
- Notes:
-
- - If token counting fails for your provider while chunking, try `--chunk-tokens 0` (default) or `--chunk-tokens -1`.
-
- ## AI‑generated code notice
-
- Parts of this project were created with assistance from AI tools (e.g. large language models).
- All AI‑assisted contributions were reviewed and adapted by maintainers before inclusion.
- If you need provenance for specific changes, please refer to the Git history and commit messages.