git-commit-message 0.8.1.tar.gz → 0.8.2.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (20)
  1. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/PKG-INFO +44 -7
  2. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/README.md +43 -6
  3. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/pyproject.toml +1 -1
  4. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/src/git_commit_message/_cli.py +118 -4
  5. git_commit_message-0.8.2/src/git_commit_message/_llamacpp.py +145 -0
  6. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/src/git_commit_message/_llm.py +11 -1
  7. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/src/git_commit_message.egg-info/PKG-INFO +44 -7
  8. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/src/git_commit_message.egg-info/SOURCES.txt +1 -0
  9. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/UNLICENSE +0 -0
  10. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/setup.cfg +0 -0
  11. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/src/git_commit_message/__init__.py +0 -0
  12. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/src/git_commit_message/__main__.py +0 -0
  13. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/src/git_commit_message/_gemini.py +0 -0
  14. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/src/git_commit_message/_git.py +0 -0
  15. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/src/git_commit_message/_gpt.py +0 -0
  16. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/src/git_commit_message/_ollama.py +0 -0
  17. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/src/git_commit_message.egg-info/dependency_links.txt +0 -0
  18. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/src/git_commit_message.egg-info/entry_points.txt +0 -0
  19. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/src/git_commit_message.egg-info/requires.txt +0 -0
  20. {git_commit_message-0.8.1 → git_commit_message-0.8.2}/src/git_commit_message.egg-info/top_level.txt +0 -0
PKG-INFO
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: git-commit-message
- Version: 0.8.1
+ Version: 0.8.2
  Summary: Generate Git commit messages from staged changes using LLM
  Maintainer-email: Mina Her <minacle@live.com>
  License: This is free and unencumbered software released into the public domain.
@@ -51,7 +51,7 @@ Requires-Dist: tiktoken>=0.12.0

  # git-commit-message

- Generate a commit message from your staged changes using OpenAI, Google Gemini, or Ollama.
+ Generate a commit message from your staged changes using OpenAI, Google Gemini, Ollama, or llama.cpp.

  [![asciicast](https://asciinema.org/a/jk0phFqNnc5vaCiIZEYBwZOyN.svg)](https://asciinema.org/a/jk0phFqNnc5vaCiIZEYBwZOyN)

@@ -120,6 +120,23 @@ export GIT_COMMIT_MESSAGE_PROVIDER=ollama
  export OLLAMA_MODEL=mistral
  ```

+ ### llama.cpp (local models)
+
+ 1. Build and run llama.cpp server with your model:
+
+ ```sh
+ llama-server -hf ggml-org/gpt-oss-20b-GGUF --host 0.0.0.0 --port 8080
+ ```
+
+ 2. The server runs on `http://localhost:8080` by default.
+
+ Optional: set defaults:
+
+ ```sh
+ export GIT_COMMIT_MESSAGE_PROVIDER=llamacpp
+ export LLAMACPP_HOST=http://localhost:8080
+ ```
+
  Note (fish):

  ```fish
@@ -141,10 +158,13 @@ git add -A
  git-commit-message "optional extra context about the change"
  ```

- Generate a single-line subject only:
+ Generate a single-line subject only (when no trailers are appended):

  ```sh
  git-commit-message --one-line "optional context"
+
+ # with trailers, output is subject plus trailer lines
+ git-commit-message --one-line --co-author 'John Doe <john.doe@example.com>'
  ```

  Select provider:
@@ -158,6 +178,9 @@ git-commit-message --provider google

  # Ollama
  git-commit-message --provider ollama
+
+ # llama.cpp
+ git-commit-message --provider llamacpp
  ```

  Commit immediately (optionally open editor):
@@ -165,6 +188,11 @@ Commit immediately (optionally open editor):
  ```sh
  git-commit-message --commit "refactor parser for speed"
  git-commit-message --commit --edit "refactor parser for speed"
+
+ # add co-author trailers
+ git-commit-message --commit --co-author 'John Doe <john.doe@example.com>'
+ git-commit-message --commit --co-author 'John Doe <john.doe@example.com>' --co-author 'Jane Doe <jane.doe@example.com>'
+ git-commit-message --commit --co-author copilot
  ```

  Amend the previous commit:
@@ -219,19 +247,26 @@ Configure Ollama host (if running on a different machine):
  git-commit-message --provider ollama --host http://192.168.1.100:11434
  ```

+ Configure llama.cpp host:
+
+ ```sh
+ git-commit-message --provider llamacpp --host http://192.168.1.100:8080
+ ```
+
  ## Options

- - `--provider {openai,google,ollama}`: provider to use (default: `openai`)
- - `--model MODEL`: model override (provider-specific)
+ - `--provider {openai,google,ollama,llamacpp}`: provider to use (default: `openai`)
+ - `--model MODEL`: model override (provider-specific; ignored for llama.cpp)
  - `--language TAG`: output language/locale (default: `en-GB`)
- - `--one-line`: output subject only
+ - `--one-line`: output subject only when no trailers are appended; with `--co-author`, output is a single-line subject plus `Co-authored-by:` trailer lines
  - `--max-length N`: max subject length (default: 72)
  - `--chunk-tokens N`: token budget per diff chunk (`0` = single summary pass, `-1` disables summarisation)
  - `--debug`: print request/response details
  - `--commit`: run `git commit -m <message>`
  - `--amend`: generate a message suitable for amending the previous commit (diff is from the amended commit's parent to the staged index; if nothing is staged, this effectively becomes the diff introduced by `HEAD`)
  - `--edit`: with `--commit`, open editor for final message
- - `--host URL`: host URL for providers like Ollama (default: `http://localhost:11434`)
+ - `--host URL`: host URL for providers like Ollama or llama.cpp (default: `http://localhost:11434` for Ollama, `http://localhost:8080` for llama.cpp)
+ - `--co-author VALUE`: append `Co-authored-by:` trailer(s). Repeat to add multiple values. Accepted forms: `Name <email@example.com>` or `copilot` (alias, case-insensitive).

  ## Environment variables

@@ -247,6 +282,7 @@ Optional:
  - `OPENAI_MODEL`: OpenAI-only model override (used if `--model`/`GIT_COMMIT_MESSAGE_MODEL` are not set)
  - `OLLAMA_MODEL`: Ollama-only model override (used if `--model`/`GIT_COMMIT_MESSAGE_MODEL` are not set)
  - `OLLAMA_HOST`: Ollama server URL (default: `http://localhost:11434`)
+ - `LLAMACPP_HOST`: llama.cpp server URL (default: `http://localhost:8080`)
  - `GIT_COMMIT_MESSAGE_LANGUAGE`: default language/locale (default: `en-GB`)
  - `GIT_COMMIT_MESSAGE_CHUNK_TOKENS`: default chunk token budget (default: `0`)

@@ -255,6 +291,7 @@ Default models (if not overridden):
  - OpenAI: `gpt-5-mini`
  - Google: `gemini-2.5-flash`
  - Ollama: `gpt-oss:20b`
+ - llama.cpp: uses pre-loaded model (model parameter is ignored)

  ## AI-generated code notice

README.md
@@ -1,6 +1,6 @@
  # git-commit-message

- Generate a commit message from your staged changes using OpenAI, Google Gemini, or Ollama.
+ Generate a commit message from your staged changes using OpenAI, Google Gemini, Ollama, or llama.cpp.

  [![asciicast](https://asciinema.org/a/jk0phFqNnc5vaCiIZEYBwZOyN.svg)](https://asciinema.org/a/jk0phFqNnc5vaCiIZEYBwZOyN)

@@ -69,6 +69,23 @@ export GIT_COMMIT_MESSAGE_PROVIDER=ollama
  export OLLAMA_MODEL=mistral
  ```

+ ### llama.cpp (local models)
+
+ 1. Build and run llama.cpp server with your model:
+
+ ```sh
+ llama-server -hf ggml-org/gpt-oss-20b-GGUF --host 0.0.0.0 --port 8080
+ ```
+
+ 2. The server runs on `http://localhost:8080` by default.
+
+ Optional: set defaults:
+
+ ```sh
+ export GIT_COMMIT_MESSAGE_PROVIDER=llamacpp
+ export LLAMACPP_HOST=http://localhost:8080
+ ```
+
  Note (fish):

  ```fish
@@ -90,10 +107,13 @@ git add -A
  git-commit-message "optional extra context about the change"
  ```

- Generate a single-line subject only:
+ Generate a single-line subject only (when no trailers are appended):

  ```sh
  git-commit-message --one-line "optional context"
+
+ # with trailers, output is subject plus trailer lines
+ git-commit-message --one-line --co-author 'John Doe <john.doe@example.com>'
  ```

  Select provider:
@@ -107,6 +127,9 @@ git-commit-message --provider google

  # Ollama
  git-commit-message --provider ollama
+
+ # llama.cpp
+ git-commit-message --provider llamacpp
  ```

  Commit immediately (optionally open editor):
@@ -114,6 +137,11 @@ Commit immediately (optionally open editor):
  ```sh
  git-commit-message --commit "refactor parser for speed"
  git-commit-message --commit --edit "refactor parser for speed"
+
+ # add co-author trailers
+ git-commit-message --commit --co-author 'John Doe <john.doe@example.com>'
+ git-commit-message --commit --co-author 'John Doe <john.doe@example.com>' --co-author 'Jane Doe <jane.doe@example.com>'
+ git-commit-message --commit --co-author copilot
  ```

  Amend the previous commit:
@@ -168,19 +196,26 @@ Configure Ollama host (if running on a different machine):
  git-commit-message --provider ollama --host http://192.168.1.100:11434
  ```

+ Configure llama.cpp host:
+
+ ```sh
+ git-commit-message --provider llamacpp --host http://192.168.1.100:8080
+ ```
+
  ## Options

- - `--provider {openai,google,ollama}`: provider to use (default: `openai`)
- - `--model MODEL`: model override (provider-specific)
+ - `--provider {openai,google,ollama,llamacpp}`: provider to use (default: `openai`)
+ - `--model MODEL`: model override (provider-specific; ignored for llama.cpp)
  - `--language TAG`: output language/locale (default: `en-GB`)
- - `--one-line`: output subject only
+ - `--one-line`: output subject only when no trailers are appended; with `--co-author`, output is a single-line subject plus `Co-authored-by:` trailer lines
  - `--max-length N`: max subject length (default: 72)
  - `--chunk-tokens N`: token budget per diff chunk (`0` = single summary pass, `-1` disables summarisation)
  - `--debug`: print request/response details
  - `--commit`: run `git commit -m <message>`
  - `--amend`: generate a message suitable for amending the previous commit (diff is from the amended commit's parent to the staged index; if nothing is staged, this effectively becomes the diff introduced by `HEAD`)
  - `--edit`: with `--commit`, open editor for final message
- - `--host URL`: host URL for providers like Ollama (default: `http://localhost:11434`)
+ - `--host URL`: host URL for providers like Ollama or llama.cpp (default: `http://localhost:11434` for Ollama, `http://localhost:8080` for llama.cpp)
+ - `--co-author VALUE`: append `Co-authored-by:` trailer(s). Repeat to add multiple values. Accepted forms: `Name <email@example.com>` or `copilot` (alias, case-insensitive).

  ## Environment variables

@@ -196,6 +231,7 @@ Optional:
  - `OPENAI_MODEL`: OpenAI-only model override (used if `--model`/`GIT_COMMIT_MESSAGE_MODEL` are not set)
  - `OLLAMA_MODEL`: Ollama-only model override (used if `--model`/`GIT_COMMIT_MESSAGE_MODEL` are not set)
  - `OLLAMA_HOST`: Ollama server URL (default: `http://localhost:11434`)
+ - `LLAMACPP_HOST`: llama.cpp server URL (default: `http://localhost:8080`)
  - `GIT_COMMIT_MESSAGE_LANGUAGE`: default language/locale (default: `en-GB`)
  - `GIT_COMMIT_MESSAGE_CHUNK_TOKENS`: default chunk token budget (default: `0`)

@@ -204,6 +240,7 @@ Default models (if not overridden):
  - OpenAI: `gpt-5-mini`
  - Google: `gemini-2.5-flash`
  - Ollama: `gpt-oss:20b`
+ - llama.cpp: uses pre-loaded model (model parameter is ignored)

  ## AI-generated code notice

pyproject.toml
@@ -1,6 +1,6 @@
  [project]
  name = "git-commit-message"
- version = "0.8.1"
+ version = "0.8.2"
  description = "Generate Git commit messages from staged changes using LLM"
  readme = "README.md"
  requires-python = ">=3.13"
src/git_commit_message/_cli.py
@@ -9,6 +9,8 @@ from __future__ import annotations
  from argparse import ArgumentParser, Namespace
  from os import environ
  from pathlib import Path
+ import re
+ from re import Pattern
  from sys import exit as sys_exit
  from sys import stderr
  from typing import Final
@@ -43,6 +45,7 @@ class CliArgs(Namespace):
          "max_length",
          "chunk_tokens",
          "host",
+         "co_authors",
      )

      def __init__(
@@ -61,6 +64,83 @@ class CliArgs(Namespace):
          self.max_length: int | None = None
          self.chunk_tokens: int | None = None
          self.host: str | None = None
+         self.co_authors: list[str] | None = None
+
+
+ _CO_AUTHOR_LINE_RE: Final[Pattern[str]] = re.compile(
+     r"^\s*([^<>\s\n][^<>\n]*?)\s*<([^<>\s\n]+@[^<>\s\n]+)>\s*$"
+ )
+ _CO_AUTHOR_ALIASES: Final[dict[str, str]] = {
+     "copilot": "Copilot <copilot@github.com>",
+ }
+
+
+ def _co_author_alias_keywords_text() -> str:
+     """Return a readable list of accepted co-author alias keywords."""
+
+     keywords: list[str] = sorted(_CO_AUTHOR_ALIASES.keys())
+     return ", ".join(f"'{keyword}'" for keyword in keywords)
+
+
+ def _normalize_co_author(
+     raw: str,
+     /,
+ ) -> str:
+     """Normalize one co-author input into ``Name <email>`` form."""
+
+     value: str = raw.strip()
+     if not value:
+         raise ValueError("Co-author cannot be empty.")
+
+     alias: str | None = _CO_AUTHOR_ALIASES.get(value.lower())
+     if alias is not None:
+         return alias
+
+     match = _CO_AUTHOR_LINE_RE.match(value)
+     if match is None:
+         raise ValueError(
+             "Invalid co-author format: use 'Name <email@example.com>' "
+             f"or an alias keyword ({_co_author_alias_keywords_text()})."
+         )
+
+     name: str = match.group(1).strip()
+     email: str = match.group(2).strip()
+     return f"{name} <{email}>"
+
+
+ def _append_co_author_footers(
+     message: str,
+     normalized_co_authors: list[str],
+     /,
+ ) -> str:
+     """Append Git co-author trailers to a commit message."""
+
+     if not normalized_co_authors:
+         return message
+
+     base: str = message.rstrip()
+     footer_lines: list[str] = [
+         f"Co-authored-by: {author}" for author in normalized_co_authors
+     ]
+     return f"{base}\n\n" + "\n".join(footer_lines)
+
+
+ def _normalize_co_authors(
+     co_authors: list[str],
+     /,
+ ) -> list[str]:
+     """Normalize and deduplicate co-author values in insertion order."""
+
+     seen: set[str] = set()
+     normalized: list[str] = []
+     for raw in co_authors:
+         author = _normalize_co_author(raw)
+         key = author.lower()
+         if key in seen:
+             continue
+         seen.add(key)
+         normalized.append(author)
+     return normalized


  def _env_chunk_tokens_default() -> int | None:
@@ -125,7 +205,8 @@ def _build_parser() -> ArgumentParser:
          help=(
              "LLM provider to use (default: openai). "
              "You may also set GIT_COMMIT_MESSAGE_PROVIDER. "
-             "The CLI flag overrides the environment variable."
+             "The CLI flag overrides the environment variable. "
+             "Supported providers: openai, google, ollama, llamacpp."
          ),
      )
@@ -133,7 +214,8 @@ def _build_parser() -> ArgumentParser:
          "--model",
          default=None,
          help=(
-             "Model name to use. If unspecified, uses GIT_COMMIT_MESSAGE_MODEL or a provider-specific default (openai: gpt-5-mini; google: gemini-2.5-flash; ollama: gpt-oss:20b)."
+             "Model name to use. If unspecified, uses GIT_COMMIT_MESSAGE_MODEL or a provider-specific default "
+             "(openai: gpt-5-mini; google: gemini-2.5-flash; ollama: gpt-oss:20b; llamacpp: default)."
          ),
      )
@@ -185,8 +267,24 @@ def _build_parser() -> ArgumentParser:
          dest="host",
          default=None,
          help=(
-             "Host URL for API providers like Ollama (default: http://localhost:11434). "
-             "You may also set OLLAMA_HOST for Ollama."
+             "Host URL for API providers like Ollama or llama.cpp "
+             "(default: http://localhost:11434 for Ollama, http://localhost:8080 for llama.cpp). "
+             "You may also set OLLAMA_HOST for Ollama or LLAMACPP_HOST for llama.cpp."
+         ),
+     )
+
+     parser.add_argument(
+         "--co-author",
+         dest="co_authors",
+         action="append",
+         default=None,
+         help=(
+             "Add Co-authored-by trailer(s) to the generated message. "
+             "Repeat for multiple co-authors. "
+             "Use 'Name <email@example.com>' or an alias keyword "
+             f"({_co_author_alias_keywords_text()}). "
+             "When used with --one-line, the subject line remains single-line and these "
+             "trailers are appended on separate lines (i.e., the overall output is multi-line)."
          ),
      )
@@ -234,6 +332,14 @@ def _run(
      if chunk_tokens is None:
          chunk_tokens = 0

+     normalized_co_authors: list[str] | None = None
+     if args.co_authors:
+         try:
+             normalized_co_authors = _normalize_co_authors(args.co_authors)
+         except ValueError as exc:
+             print(str(exc), file=stderr)
+             return 2
+
      result: CommitMessageResult | None = None
      try:
          if args.debug:
@@ -278,6 +384,14 @@ def _run(
      else:
          message = ""

+     # Defensive check: one-line normalization can result in an empty message.
+     if not message.strip():
+         print("Failed to generate commit message: generated message is empty.", file=stderr)
+         return 3
+
+     if normalized_co_authors:
+         message = _append_co_author_footers(message, normalized_co_authors)
+
      if not args.commit:
          if args.debug and result is not None:
              # Print debug information
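The co-author handling added to `_cli.py` above can be exercised in isolation. The sketch below mirrors the normalization, aliasing, deduplication, and trailer-appending logic from the diff; the names are illustrative stand-ins, not imports from the package:

```python
import re

# Same pattern and alias table as the diff above (illustrative copies).
CO_AUTHOR_LINE_RE = re.compile(
    r"^\s*([^<>\s\n][^<>\n]*?)\s*<([^<>\s\n]+@[^<>\s\n]+)>\s*$"
)
CO_AUTHOR_ALIASES = {"copilot": "Copilot <copilot@github.com>"}


def normalize_co_author(raw: str) -> str:
    """Normalize one --co-author value into 'Name <email>' form."""
    value = raw.strip()
    if not value:
        raise ValueError("Co-author cannot be empty.")
    # Alias keywords are matched case-insensitively.
    alias = CO_AUTHOR_ALIASES.get(value.lower())
    if alias is not None:
        return alias
    match = CO_AUTHOR_LINE_RE.match(value)
    if match is None:
        raise ValueError("Use 'Name <email@example.com>' or an alias keyword.")
    return f"{match.group(1).strip()} <{match.group(2).strip()}>"


def append_co_author_trailers(message: str, authors: list[str]) -> str:
    """Append Co-authored-by trailers, deduplicating case-insensitively."""
    seen: set[str] = set()
    unique: list[str] = []
    for raw in authors:
        author = normalize_co_author(raw)
        if author.lower() in seen:
            continue
        seen.add(author.lower())
        unique.append(author)
    if not unique:
        return message
    trailers = "\n".join(f"Co-authored-by: {a}" for a in unique)
    return f"{message.rstrip()}\n\n{trailers}"
```

As documented in the README changes, with `--one-line` the subject itself stays single-line; trailers land after a blank line, so the overall output becomes multi-line.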
src/git_commit_message/_llamacpp.py (new file)
@@ -0,0 +1,145 @@
+ """llama.cpp provider implementation.
+
+ This module contains llama.cpp server-specific API calls and token counting.
+ llama.cpp server provides an OpenAI-compatible API, so we use the openai library.
+ Provider-agnostic orchestration/prompt logic lives in `_llm.py`.
+ """
+
+ from __future__ import annotations
+
+ from os import environ
+ from typing import ClassVar, Final
+
+ from openai import OpenAI
+ from openai.types.chat import ChatCompletionMessageParam
+ from tiktoken import Encoding, get_encoding
+
+ from ._llm import LLMTextResult, LLMUsage
+
+
+ _DEFAULT_LLAMACPP_HOST: Final[str] = "http://localhost:8080"
+
+
+ def _resolve_llamacpp_host(
+     host: str | None,
+     /,
+ ) -> str:
+     """Resolve the llama.cpp server host URL from arg, env, or default."""
+
+     return host or environ.get("LLAMACPP_HOST") or _DEFAULT_LLAMACPP_HOST
+
+
+ def _get_encoding() -> Encoding:
+     """Get a fallback encoding for token counting."""
+
+     try:
+         return get_encoding("cl100k_base")
+     except Exception:
+         return get_encoding("gpt2")
+
+
+ class LlamaCppProvider:
+     """llama.cpp provider implementation for the LLM protocol.
+
+     Uses the OpenAI-compatible API provided by llama.cpp server.
+     """
+
+     __slots__ = (
+         "_host",
+         "_client",
+     )
+
+     name: ClassVar[str] = "llamacpp"
+
+     def __init__(
+         self,
+         /,
+         *,
+         host: str | None = None,
+     ) -> None:
+         self._host = _resolve_llamacpp_host(host)
+         # llama.cpp server uses OpenAI-compatible API
+         # api_key is not required but openai library needs a placeholder
+         self._client = OpenAI(
+             base_url=f"{self._host}/v1",
+             api_key="llamacpp",  # Placeholder, llama.cpp doesn't require auth by default
+         )
+
+     def generate_text(
+         self,
+         /,
+         *,
+         model: str,
+         instructions: str,
+         user_text: str,
+     ) -> LLMTextResult:
+         """Generate text using llama.cpp server (OpenAI-compatible chat/completions API)."""
+
+         messages: list[ChatCompletionMessageParam] = [
+             {"role": "system", "content": instructions},
+             {"role": "user", "content": user_text},
+         ]
+
+         try:
+             resp = self._client.chat.completions.create(
+                 model=model,
+                 messages=messages,
+             )
+         except Exception as exc:
+             raise RuntimeError(
+                 f"Failed to connect to llama.cpp server at {self._host}. "
+                 f"Make sure llama.cpp server is running: {exc}"
+             ) from exc
+
+         text: str = ""
+         if resp.choices and len(resp.choices) > 0:
+             choice = resp.choices[0]
+             if choice.message and choice.message.content:
+                 text = choice.message.content.strip()
+
+         if not text:
+             raise RuntimeError("An empty response text was generated by the provider.")
+
+         usage: LLMUsage | None = None
+         if resp.usage is not None:
+             usage = LLMUsage(
+                 prompt_tokens=resp.usage.prompt_tokens,
+                 completion_tokens=resp.usage.completion_tokens,
+                 total_tokens=resp.usage.total_tokens,
+             )
+
+         return LLMTextResult(
+             text=text,
+             response_id=resp.id,
+             usage=usage,
+         )
+
+     def count_tokens(
+         self,
+         /,
+         *,
+         model: str,
+         text: str,
+     ) -> int:
+         """Count tokens using llama.cpp's official token counting API."""
+
+         try:
+             # Use llama.cpp's official token counting endpoint via OpenAI client's internal HTTP client
+             response = self._client.post(
+                 "/messages/count_tokens",
+                 body={
+                     "model": model,
+                     "messages": [
+                         {"role": "user", "content": text}
+                     ]
+                 },
+                 cast_to=dict,
+             )
+             return response.get("total", 0)
+         except Exception:
+             # Fallback to tiktoken approximation
+             try:
+                 encoding = _get_encoding()
+                 return len(encoding.encode(text))
+             except Exception:
+                 return len(text.split())
src/git_commit_message/_llm.py
@@ -19,6 +19,7 @@ _DEFAULT_PROVIDER: Final[str] = "openai"
  _DEFAULT_MODEL_OPENAI: Final[str] = "gpt-5-mini"
  _DEFAULT_MODEL_GOOGLE: Final[str] = "gemini-2.5-flash"
  _DEFAULT_MODEL_OLLAMA: Final[str] = "gpt-oss:20b"
+ _DEFAULT_MODEL_LLAMACPP: Final[str] = "default"
  _DEFAULT_LANGUAGE: Final[str] = "en-GB"


@@ -155,6 +156,9 @@ def _resolve_model(
      elif provider_name == "ollama":
          default_model = _DEFAULT_MODEL_OLLAMA
          provider_model = environ.get("OLLAMA_MODEL")
+     elif provider_name == "llamacpp":
+         default_model = _DEFAULT_MODEL_LLAMACPP
+         provider_model = environ.get("LLAMACPP_MODEL")
      else:
          default_model = _DEFAULT_MODEL_OPENAI
          provider_model = environ.get("OPENAI_MODEL")
@@ -195,8 +199,14 @@ def get_provider(

          return OllamaProvider(host=host)

+     if name == "llamacpp":
+         # Local import to avoid import cycles: providers may import shared types from this module.
+         from ._llamacpp import LlamaCppProvider
+
+         return LlamaCppProvider(host=host)
+
      raise UnsupportedProviderError(
-         f"Unsupported provider: {name}. Supported providers: openai, google, ollama"
+         f"Unsupported provider: {name}. Supported providers: openai, google, ollama, llamacpp"
      )
src/git_commit_message.egg-info/PKG-INFO
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: git-commit-message
- Version: 0.8.1
+ Version: 0.8.2
  Summary: Generate Git commit messages from staged changes using LLM
  Maintainer-email: Mina Her <minacle@live.com>
  License: This is free and unencumbered software released into the public domain.
@@ -51,7 +51,7 @@ Requires-Dist: tiktoken>=0.12.0

  # git-commit-message

- Generate a commit message from your staged changes using OpenAI, Google Gemini, or Ollama.
+ Generate a commit message from your staged changes using OpenAI, Google Gemini, Ollama, or llama.cpp.

  [![asciicast](https://asciinema.org/a/jk0phFqNnc5vaCiIZEYBwZOyN.svg)](https://asciinema.org/a/jk0phFqNnc5vaCiIZEYBwZOyN)

@@ -120,6 +120,23 @@ export GIT_COMMIT_MESSAGE_PROVIDER=ollama
  export OLLAMA_MODEL=mistral
  ```

+ ### llama.cpp (local models)
+
+ 1. Build and run llama.cpp server with your model:
+
+ ```sh
+ llama-server -hf ggml-org/gpt-oss-20b-GGUF --host 0.0.0.0 --port 8080
+ ```
+
+ 2. The server runs on `http://localhost:8080` by default.
+
+ Optional: set defaults:
+
+ ```sh
+ export GIT_COMMIT_MESSAGE_PROVIDER=llamacpp
+ export LLAMACPP_HOST=http://localhost:8080
+ ```
+
  Note (fish):

  ```fish
@@ -141,10 +158,13 @@ git add -A
  git-commit-message "optional extra context about the change"
  ```

- Generate a single-line subject only:
+ Generate a single-line subject only (when no trailers are appended):

  ```sh
  git-commit-message --one-line "optional context"
+
+ # with trailers, output is subject plus trailer lines
+ git-commit-message --one-line --co-author 'John Doe <john.doe@example.com>'
  ```

  Select provider:
@@ -158,6 +178,9 @@ git-commit-message --provider google

  # Ollama
  git-commit-message --provider ollama
+
+ # llama.cpp
+ git-commit-message --provider llamacpp
  ```

  Commit immediately (optionally open editor):
@@ -165,6 +188,11 @@ Commit immediately (optionally open editor):
  ```sh
  git-commit-message --commit "refactor parser for speed"
  git-commit-message --commit --edit "refactor parser for speed"
+
+ # add co-author trailers
+ git-commit-message --commit --co-author 'John Doe <john.doe@example.com>'
+ git-commit-message --commit --co-author 'John Doe <john.doe@example.com>' --co-author 'Jane Doe <jane.doe@example.com>'
+ git-commit-message --commit --co-author copilot
  ```

  Amend the previous commit:
@@ -219,19 +247,26 @@ Configure Ollama host (if running on a different machine):
  git-commit-message --provider ollama --host http://192.168.1.100:11434
  ```

+ Configure llama.cpp host:
+
+ ```sh
+ git-commit-message --provider llamacpp --host http://192.168.1.100:8080
+ ```
+
  ## Options

- - `--provider {openai,google,ollama}`: provider to use (default: `openai`)
- - `--model MODEL`: model override (provider-specific)
+ - `--provider {openai,google,ollama,llamacpp}`: provider to use (default: `openai`)
+ - `--model MODEL`: model override (provider-specific; ignored for llama.cpp)
  - `--language TAG`: output language/locale (default: `en-GB`)
- - `--one-line`: output subject only
+ - `--one-line`: output subject only when no trailers are appended; with `--co-author`, output is a single-line subject plus `Co-authored-by:` trailer lines
  - `--max-length N`: max subject length (default: 72)
  - `--chunk-tokens N`: token budget per diff chunk (`0` = single summary pass, `-1` disables summarisation)
  - `--debug`: print request/response details
  - `--commit`: run `git commit -m <message>`
  - `--amend`: generate a message suitable for amending the previous commit (diff is from the amended commit's parent to the staged index; if nothing is staged, this effectively becomes the diff introduced by `HEAD`)
  - `--edit`: with `--commit`, open editor for final message
- - `--host URL`: host URL for providers like Ollama (default: `http://localhost:11434`)
+ - `--host URL`: host URL for providers like Ollama or llama.cpp (default: `http://localhost:11434` for Ollama, `http://localhost:8080` for llama.cpp)
+ - `--co-author VALUE`: append `Co-authored-by:` trailer(s). Repeat to add multiple values. Accepted forms: `Name <email@example.com>` or `copilot` (alias, case-insensitive).

  ## Environment variables

@@ -247,6 +282,7 @@ Optional:
  - `OPENAI_MODEL`: OpenAI-only model override (used if `--model`/`GIT_COMMIT_MESSAGE_MODEL` are not set)
  - `OLLAMA_MODEL`: Ollama-only model override (used if `--model`/`GIT_COMMIT_MESSAGE_MODEL` are not set)
  - `OLLAMA_HOST`: Ollama server URL (default: `http://localhost:11434`)
+ - `LLAMACPP_HOST`: llama.cpp server URL (default: `http://localhost:8080`)
  - `GIT_COMMIT_MESSAGE_LANGUAGE`: default language/locale (default: `en-GB`)
  - `GIT_COMMIT_MESSAGE_CHUNK_TOKENS`: default chunk token budget (default: `0`)

@@ -255,6 +291,7 @@ Default models (if not overridden):
  - OpenAI: `gpt-5-mini`
  - Google: `gemini-2.5-flash`
  - Ollama: `gpt-oss:20b`
+ - llama.cpp: uses pre-loaded model (model parameter is ignored)

  ## AI-generated code notice

src/git_commit_message.egg-info/SOURCES.txt
@@ -7,6 +7,7 @@ src/git_commit_message/_cli.py
  src/git_commit_message/_gemini.py
  src/git_commit_message/_git.py
  src/git_commit_message/_gpt.py
+ src/git_commit_message/_llamacpp.py
  src/git_commit_message/_llm.py
  src/git_commit_message/_ollama.py
  src/git_commit_message.egg-info/PKG-INFO