git-copilot-commit 0.6.1__tar.gz → 0.7.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (33)
  1. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/PKG-INFO +85 -50
  2. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/README.md +84 -49
  3. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/src/git_copilot_commit/cli.py +72 -6
  4. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/src/git_copilot_commit/llms/copilot.py +15 -1
  5. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/src/git_copilot_commit/llms/core.py +194 -16
  6. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/src/git_copilot_commit/llms/openai_api.py +67 -7
  7. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/src/git_copilot_commit/llms/providers.py +10 -9
  8. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/tests/test_cli.py +145 -6
  9. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/tests/test_llm_and_copilot.py +191 -2
  10. git_copilot_commit-0.7.0/tests/test_providers.py +303 -0
  11. git_copilot_commit-0.6.1/tests/test_providers.py +0 -178
  12. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/.config/mise.toml +0 -0
  13. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/.github/dependabot.yml +0 -0
  14. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/.github/workflows/ci.yml +0 -0
  15. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/.gitignore +0 -0
  16. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/.python-version +0 -0
  17. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/LICENSE +0 -0
  18. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/pyproject.toml +0 -0
  19. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/src/git_copilot_commit/__init__.py +0 -0
  20. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/src/git_copilot_commit/git.py +0 -0
  21. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/src/git_copilot_commit/llms/__init__.py +0 -0
  22. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/src/git_copilot_commit/prompts/commit-message-generator-prompt.md +0 -0
  23. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/src/git_copilot_commit/prompts/split-commit-planner-prompt.md +0 -0
  24. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/src/git_copilot_commit/py.typed +0 -0
  25. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/src/git_copilot_commit/settings.py +0 -0
  26. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/src/git_copilot_commit/split_commits.py +0 -0
  27. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/src/git_copilot_commit/version.py +0 -0
  28. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/tests/conftest.py +0 -0
  29. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/tests/test_git.py +0 -0
  30. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/tests/test_settings.py +0 -0
  31. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/tests/test_split_commits.py +0 -0
  32. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/uv.lock +0 -0
  33. {git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/vhs/demo.vhs +0 -0
{git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: git-copilot-commit
-Version: 0.6.1
+Version: 0.7.0
 Summary: Automatically generate and commit changes using GitHub Copilot or OpenAI-compatible LLMs
 Author-email: Dheepak Krishnamurthy <1813121+kdheepak@users.noreply.github.com>
 License-File: LICENSE
@@ -18,14 +18,16 @@ Description-Content-Type: text/markdown
 [![PyPI](https://img.shields.io/pypi/v/git-copilot-commit)](https://pypi.org/project/git-copilot-commit/)
 [![License](https://img.shields.io/github/license/kdheepak/git-copilot-commit)](https://github.com/kdheepak/git-copilot-commit/blob/main/LICENSE)
 
-AI-powered Git commit assistant that generates conventional commit messages using GitHub Copilot or any OpenAI-compatible LLM.
+AI-powered Git commit assistant that generates conventional commit messages using GitHub Copilot or
+any OpenAI-compatible LLM.
 
 ![Screenshot of git-copilot-commit in action](https://github.com/user-attachments/assets/6a6d70a6-6060-44e6-8cf4-a6532e9e9142)
 
 ## Features
 
 - Generates commit messages based on your staged changes
-- Supports GitHub Copilot and OpenAI-compatible `/v1/models` + `/v1/chat/completions` APIs
+- Supports GitHub Copilot and OpenAI-compatible `/v1/chat/completions`, `/v1/responses`,
+  and `/v1/models` endpoints
 - Supports multiple LLM models: GPT, Claude, Gemini, local models, and more
 - Allows editing of generated messages before committing
 - Follows the [Conventional Commits](https://www.conventionalcommits.org/) standard
@@ -51,8 +53,8 @@ You can run the latest version of tool directly every time by invoking this one
 uvx git-copilot-commit --help
 ```
 
-Alternatively, you can install the tool once into a global isolated environment
-and run `git-copilot-commit` to invoke it:
+Alternatively, you can install the tool once into a global isolated environment and run
+`git-copilot-commit` to invoke it:
 
 ```bash
 # Install into global isolated environment
@@ -96,39 +98,27 @@ git-copilot-commit --help
 
 ### OpenAI-compatible provider
 
-1. Point the CLI at your server.
-
-   The base URL can be either the provider root such as `http://127.0.0.1:11434/v1`
-   or the full chat completions endpoint such as
-   `http://127.0.0.1:11434/v1/chat/completions`.
+1. List models by pointing the CLI at your server's `/models` endpoint.
 
    ```bash
   uvx git-copilot-commit models \
     --provider openai \
-    --base-url http://127.0.0.1:11434/v1
+    --base-url http://127.0.0.1:11434/v1/models
   ```
 
-2. Generate and commit.
+2. Generate and commit by pointing the CLI at the generation endpoint you want to use.
 
   ```bash
   uvx git-copilot-commit commit \
     --provider openai \
-    --base-url http://127.0.0.1:11434/v1 \
+    --base-url http://127.0.0.1:11434/v1/chat/completions \
    --model your-model-id
  ```
 
 If your server requires an API key, also pass `--api-key ...` or set `OPENAI_API_KEY`.
 
-3. Example: use a self-hosted GPT-OSS model:
-
-   ```bash
-   uvx git-copilot-commit commit \
-     --provider openai \
-     --base-url http://example.com:8001/v1/chat/completions \
-     --model openai/gpt-oss-120b
-   ```
-
-   Model ids with slashes such as `openai/gpt-oss-120b` are supported.
+OpenAI-compatible generation URLs must end with `/chat/completions` or `/responses`.
+Model listing URLs must end with `/models`.
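The suffix rule added in this hunk could be enforced with a small validator. The following is an illustrative sketch only, not the package's actual `providers.py` code; the helper name `classify_endpoint` is hypothetical:

```python
from urllib.parse import urlparse

# Generation endpoints the README says are accepted in 0.7.0.
GENERATION_SUFFIXES = ("/chat/completions", "/responses")


def classify_endpoint(base_url: str) -> str:
    """Classify an OpenAI-compatible URL by its path suffix.

    Returns "generation" or "models"; raises ValueError otherwise.
    Hypothetical helper; the real CLI's validation may differ.
    """
    path = urlparse(base_url).path.rstrip("/")
    if any(path.endswith(suffix) for suffix in GENERATION_SUFFIXES):
        return "generation"
    if path.endswith("/models"):
        return "models"
    raise ValueError(
        f"{base_url!r} must end with /chat/completions, /responses, or /models"
    )


print(classify_endpoint("http://127.0.0.1:11434/v1/chat/completions"))  # generation
print(classify_endpoint("http://127.0.0.1:11434/v1/models"))            # models
```

Note that under this rule a bare provider root such as `http://127.0.0.1:11434/v1`, which 0.6.1 accepted, is rejected.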
 
 ## Usage
 
@@ -137,28 +127,39 @@ git-copilot-commit --help
 ```bash
 $ uvx git-copilot-commit commit --help
 
-Usage: git-copilot-commit commit [OPTIONS]
+Usage: git-copilot-commit commit [ARGS]
 
 Generate commit message based on changes in the current git repository and commit them.
 
-╭─ Options ────────────────────────────────────────────────────────────────────────────────────────────────╮
-│ --all -a Stage all files before committing
-│ --split Split staged hunks into multiple commits automatically.
-  Pass `--split=N` to express a preference for N commits.
-  --model -m MODEL_ID Model to use for generating commit message
-│ --yes -y Automatically accept the generated commit message │
-│ --context -c TEXT Optional user-provided context to guide commit message
-  --provider TEXT LLM provider to use: copilot or openai
-│ --base-url URL Base URL for an OpenAI-compatible provider
-  --api-key TEXT API key for an OpenAI-compatible provider
-  --ca-bundle PATH Path to a custom CA bundle (PEM)
-  --insecure Disable SSL certificate verification.
-  --native-tls --no-native-tls Use the OS's native certificate store via 'truststore'
-  for httpx instead of the Python bundle. Ignored if
-  --ca-bundle or --insecure is used.
-  [default: no-native-tls]
-│ --help Show this message and exit.
-╰──────────────────────────────────────────────────────────────────────────────────────────────────────────╯
+╭─ Parameters ─────────────────────────────────────────────────────────────────╮
+ALL --all -a --no-all Stage all files before committing [default: False]
+SPLIT --split --no-split Split staged hunks into multiple commits
+  automatically. Pass --split=N to express a
+  preference for N commits. [default: False]
+MODEL --model -m Model to use for generating commit message │
+YES --yes -y --no-yes Automatically accept the generated commit message
+  [default: False]
+CONTEXT --context -c Optional user-provided context to guide commit
+  message [default: ""]
+DISABLE-THINKING Disable or minimize reasoning/thinking tokens for
+  --disable-thinking commit-message requests. [default: True]
+  --enable-thinking
+MAX-TOKENS --max-tokens Maximum output tokens for LLM generation.
+  [default: 1024]
+PROVIDER --provider LLM provider to use: copilot or openai.
+BASE-URL --base-url Endpoint URL for an OpenAI-compatible provider,
+│ for example │
+│ http://127.0.0.1:11434/v1/chat/completions. │
+│ API-KEY --api-key API key for an OpenAI-compatible provider. Omit │
+│ when the server does not require one. │
+│ CA-BUNDLE --ca-bundle Path to a custom CA bundle (PEM) │
+│ INSECURE --insecure Disable SSL certificate verification. [default: │
+│ --no-insecure False] │
+│ NATIVE-TLS --native-tls Use the OS's native certificate store via │
+│ --no-native-tls 'truststore' for httpx instead of the Python │
+│ bundle. Ignored if --ca-bundle or --insecure is │
+│ used. [default: True] │
+╰──────────────────────────────────────────────────────────────────────────────╯
 ```
 
 ## Examples
@@ -186,17 +187,51 @@ Use a local OpenAI-compatible server:
 ```bash
 uvx git-copilot-commit commit \
   --provider openai \
-  --base-url http://127.0.0.1:11434/v1 \
+  --base-url http://127.0.0.1:11434/v1/chat/completions \
   --model your-model-id
 ```
 
-Use a self-hosted GPT-OSS endpoint:
+Example with `openai/gpt-oss-120b` and `Qwen/Qwen3.6-35B-A3B`:
 
 ```bash
 uvx git-copilot-commit commit \
   --provider openai \
   --base-url http://example.com:8001/v1/chat/completions \
   --model openai/gpt-oss-120b
+
+uvx git-copilot-commit commit \
+  --provider openai \
+  --base-url http://example.com:8002/v1/chat/completions \
+  --model Qwen/Qwen3.6-35B-A3B
+```
+
+Use the Responses API endpoint:
+
+```bash
+uvx git-copilot-commit commit \
+  --provider openai \
+  --base-url http://example.com:8002/v1/responses \
+  --model your-model-id
+```
+
+Increase the output token budget:
+
+```bash
+uvx git-copilot-commit commit --max-tokens 4096
+```
+
+Thinking/reasoning is disabled or minimized by default for commit-message requests. To let the
+selected model use its default thinking behavior, pass:
+
+```bash
+uvx git-copilot-commit commit --enable-thinking
+```
+
+TLS uses the operating system's native certificate store by default. To use Python's default
+certificate bundle instead, pass:
+
+```bash
+uvx git-copilot-commit commit --no-native-tls
 ```
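The new `--max-tokens` and `--disable-thinking` flags presumably map to fields in the request body sent to the server. A rough sketch of that mapping, with the caveat that the field names (especially for suppressing reasoning) vary between OpenAI-compatible servers and are assumptions here, not the tool's verified payload:

```python
def build_request(model: str, prompt: str, max_tokens: int = 1024,
                  disable_thinking: bool = True) -> dict:
    """Sketch of a /chat/completions request body.

    Illustrative only: some servers accept reasoning_effort, others a
    chat-template flag, and git-copilot-commit's real payload may differ.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,  # the --max-tokens budget
    }
    if disable_thinking:
        # Hypothetical: one common convention for minimizing reasoning tokens.
        body["reasoning_effort"] = "low"
    return body


req = build_request("openai/gpt-oss-120b", "Write a commit message",
                    max_tokens=4096)
```

The defaults mirror the help text above: 1024 output tokens and thinking disabled unless `--enable-thinking` is passed.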
 
 Split staged hunks into separate commits:
@@ -246,8 +281,8 @@ Now you can run to review the message before committing:
 git ai-commit
 ```
 
-Alternatively, you can stage all files and auto accept the commit message and
-specify which model should be used to generate the commit in one CLI invocation.
+Alternatively, you can stage all files and auto accept the commit message and specify which model
+should be used to generate the commit in one CLI invocation.
 
 ```bash
 git ai-commit --all --yes --model claude-3.5-sonnet
@@ -257,7 +292,7 @@ You can also set provider defaults with environment variables:
 ```bash
 export GIT_COPILOT_COMMIT_PROVIDER=openai
-export GIT_COPILOT_COMMIT_BASE_URL=http://127.0.0.1:11434/v1
+export GIT_COPILOT_COMMIT_BASE_URL=http://127.0.0.1:11434/v1/chat/completions
 export GIT_COPILOT_COMMIT_API_KEY=...
 export OPENAI_API_KEY=...
 git ai-commit --provider openai --model your-model-id
@@ -267,7 +302,7 @@ For example:
 ```bash
 export GIT_COPILOT_COMMIT_PROVIDER=openai
-export GIT_COPILOT_COMMIT_BASE_URL=http://example.com:8001/v1
+export GIT_COPILOT_COMMIT_BASE_URL=http://example.com:8001/v1/chat/completions
 git ai-commit --model openai/gpt-oss-120b
 ```
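The `GIT_COPILOT_COMMIT_*` variables in these hunks act as defaults that an explicit flag presumably overrides. A hedged sketch of that precedence for `--base-url`; the CLI's actual resolution order is not shown in this diff, so treat this as illustrative:

```python
import os


def resolve_base_url(flag_value, environ=os.environ):
    """Return the endpoint URL: explicit flag wins over the env default.

    Illustrative only; git-copilot-commit's real resolution may differ.
    """
    if flag_value is not None:
        return flag_value
    return environ.get("GIT_COPILOT_COMMIT_BASE_URL")


env = {"GIT_COPILOT_COMMIT_BASE_URL": "http://127.0.0.1:11434/v1/chat/completions"}
print(resolve_base_url(None, env))  # falls back to the env default
print(resolve_base_url("http://example.com:8001/v1/chat/completions", env))
```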
 
@@ -279,5 +314,5 @@ git ai-commit --model openai/gpt-oss-120b
 > git config --global diff.context 3
 > ```
 >
-> This may be useful because this tool sends the diffs with surrounding context
-> to the LLM for generating a commit message
+> This may be useful because this tool sends the diffs with surrounding context to the LLM for
+> generating a commit message
{git_copilot_commit-0.6.1 → git_copilot_commit-0.7.0}/README.md

@@ -4,14 +4,16 @@
 [![PyPI](https://img.shields.io/pypi/v/git-copilot-commit)](https://pypi.org/project/git-copilot-commit/)
 [![License](https://img.shields.io/github/license/kdheepak/git-copilot-commit)](https://github.com/kdheepak/git-copilot-commit/blob/main/LICENSE)
 
-AI-powered Git commit assistant that generates conventional commit messages using GitHub Copilot or any OpenAI-compatible LLM.
+AI-powered Git commit assistant that generates conventional commit messages using GitHub Copilot or
+any OpenAI-compatible LLM.
 
 ![Screenshot of git-copilot-commit in action](https://github.com/user-attachments/assets/6a6d70a6-6060-44e6-8cf4-a6532e9e9142)
 
 ## Features
 
 - Generates commit messages based on your staged changes
-- Supports GitHub Copilot and OpenAI-compatible `/v1/models` + `/v1/chat/completions` APIs
+- Supports GitHub Copilot and OpenAI-compatible `/v1/chat/completions`, `/v1/responses`,
+  and `/v1/models` endpoints
 - Supports multiple LLM models: GPT, Claude, Gemini, local models, and more
 - Allows editing of generated messages before committing
 - Follows the [Conventional Commits](https://www.conventionalcommits.org/) standard
@@ -37,8 +39,8 @@ You can run the latest version of tool directly every time by invoking this one
 uvx git-copilot-commit --help
 ```
 
-Alternatively, you can install the tool once into a global isolated environment
-and run `git-copilot-commit` to invoke it:
+Alternatively, you can install the tool once into a global isolated environment and run
+`git-copilot-commit` to invoke it:
 
 ```bash
 # Install into global isolated environment
@@ -82,39 +84,27 @@ git-copilot-commit --help
 
 ### OpenAI-compatible provider
 
-1. Point the CLI at your server.
-
-   The base URL can be either the provider root such as `http://127.0.0.1:11434/v1`
-   or the full chat completions endpoint such as
-   `http://127.0.0.1:11434/v1/chat/completions`.
+1. List models by pointing the CLI at your server's `/models` endpoint.
 
   ```bash
   uvx git-copilot-commit models \
     --provider openai \
-    --base-url http://127.0.0.1:11434/v1
+    --base-url http://127.0.0.1:11434/v1/models
   ```
 
-2. Generate and commit.
+2. Generate and commit by pointing the CLI at the generation endpoint you want to use.
 
   ```bash
   uvx git-copilot-commit commit \
    --provider openai \
-   --base-url http://127.0.0.1:11434/v1 \
+   --base-url http://127.0.0.1:11434/v1/chat/completions \
    --model your-model-id
  ```
 
 If your server requires an API key, also pass `--api-key ...` or set `OPENAI_API_KEY`.
 
-3. Example: use a self-hosted GPT-OSS model:
-
-   ```bash
-   uvx git-copilot-commit commit \
-     --provider openai \
-     --base-url http://example.com:8001/v1/chat/completions \
-     --model openai/gpt-oss-120b
-   ```
-
-   Model ids with slashes such as `openai/gpt-oss-120b` are supported.
+OpenAI-compatible generation URLs must end with `/chat/completions` or `/responses`.
+Model listing URLs must end with `/models`.
 
 ## Usage
 
@@ -123,28 +113,39 @@ git-copilot-commit --help
 ```bash
 $ uvx git-copilot-commit commit --help
 
-Usage: git-copilot-commit commit [OPTIONS]
+Usage: git-copilot-commit commit [ARGS]
 
 Generate commit message based on changes in the current git repository and commit them.
 
-╭─ Options ────────────────────────────────────────────────────────────────────────────────────────────────╮
-│ --all -a Stage all files before committing
-│ --split Split staged hunks into multiple commits automatically.
-  Pass `--split=N` to express a preference for N commits.
-  --model -m MODEL_ID Model to use for generating commit message
-│ --yes -y Automatically accept the generated commit message │
-│ --context -c TEXT Optional user-provided context to guide commit message
-  --provider TEXT LLM provider to use: copilot or openai
-│ --base-url URL Base URL for an OpenAI-compatible provider
-  --api-key TEXT API key for an OpenAI-compatible provider
-  --ca-bundle PATH Path to a custom CA bundle (PEM)
-  --insecure Disable SSL certificate verification.
-  --native-tls --no-native-tls Use the OS's native certificate store via 'truststore'
-  for httpx instead of the Python bundle. Ignored if
-  --ca-bundle or --insecure is used.
-  [default: no-native-tls]
-│ --help Show this message and exit.
-╰──────────────────────────────────────────────────────────────────────────────────────────────────────────╯
+╭─ Parameters ─────────────────────────────────────────────────────────────────╮
+ALL --all -a --no-all Stage all files before committing [default: False]
+SPLIT --split --no-split Split staged hunks into multiple commits
+  automatically. Pass --split=N to express a
+  preference for N commits. [default: False]
+MODEL --model -m Model to use for generating commit message │
+YES --yes -y --no-yes Automatically accept the generated commit message
+  [default: False]
+CONTEXT --context -c Optional user-provided context to guide commit
+  message [default: ""]
+DISABLE-THINKING Disable or minimize reasoning/thinking tokens for
+  --disable-thinking commit-message requests. [default: True]
+  --enable-thinking
+MAX-TOKENS --max-tokens Maximum output tokens for LLM generation.
+  [default: 1024]
+PROVIDER --provider LLM provider to use: copilot or openai.
+BASE-URL --base-url Endpoint URL for an OpenAI-compatible provider,
+│ for example │
+│ http://127.0.0.1:11434/v1/chat/completions. │
+│ API-KEY --api-key API key for an OpenAI-compatible provider. Omit │
+│ when the server does not require one. │
+│ CA-BUNDLE --ca-bundle Path to a custom CA bundle (PEM) │
+│ INSECURE --insecure Disable SSL certificate verification. [default: │
+│ --no-insecure False] │
+│ NATIVE-TLS --native-tls Use the OS's native certificate store via │
+│ --no-native-tls 'truststore' for httpx instead of the Python │
+│ bundle. Ignored if --ca-bundle or --insecure is │
+│ used. [default: True] │
+╰──────────────────────────────────────────────────────────────────────────────╯
 ```
 
 ## Examples
@@ -172,17 +173,51 @@ Use a local OpenAI-compatible server:
 ```bash
 uvx git-copilot-commit commit \
   --provider openai \
-  --base-url http://127.0.0.1:11434/v1 \
+  --base-url http://127.0.0.1:11434/v1/chat/completions \
   --model your-model-id
 ```
 
-Use a self-hosted GPT-OSS endpoint:
+Example with `openai/gpt-oss-120b` and `Qwen/Qwen3.6-35B-A3B`:
 
 ```bash
 uvx git-copilot-commit commit \
   --provider openai \
   --base-url http://example.com:8001/v1/chat/completions \
   --model openai/gpt-oss-120b
+
+uvx git-copilot-commit commit \
+  --provider openai \
+  --base-url http://example.com:8002/v1/chat/completions \
+  --model Qwen/Qwen3.6-35B-A3B
+```
+
+Use the Responses API endpoint:
+
+```bash
+uvx git-copilot-commit commit \
+  --provider openai \
+  --base-url http://example.com:8002/v1/responses \
+  --model your-model-id
+```
+
+Increase the output token budget:
+
+```bash
+uvx git-copilot-commit commit --max-tokens 4096
+```
+
+Thinking/reasoning is disabled or minimized by default for commit-message requests. To let the
+selected model use its default thinking behavior, pass:
+
+```bash
+uvx git-copilot-commit commit --enable-thinking
+```
+
+TLS uses the operating system's native certificate store by default. To use Python's default
+certificate bundle instead, pass:
+
+```bash
+uvx git-copilot-commit commit --no-native-tls
 ```
 
 Split staged hunks into separate commits:
@@ -232,8 +267,8 @@ Now you can run to review the message before committing:
 git ai-commit
 ```
 
-Alternatively, you can stage all files and auto accept the commit message and
-specify which model should be used to generate the commit in one CLI invocation.
+Alternatively, you can stage all files and auto accept the commit message and specify which model
+should be used to generate the commit in one CLI invocation.
 
 ```bash
 git ai-commit --all --yes --model claude-3.5-sonnet
@@ -243,7 +278,7 @@ You can also set provider defaults with environment variables:
 ```bash
 export GIT_COPILOT_COMMIT_PROVIDER=openai
-export GIT_COPILOT_COMMIT_BASE_URL=http://127.0.0.1:11434/v1
+export GIT_COPILOT_COMMIT_BASE_URL=http://127.0.0.1:11434/v1/chat/completions
 export GIT_COPILOT_COMMIT_API_KEY=...
 export OPENAI_API_KEY=...
 git ai-commit --provider openai --model your-model-id
@@ -253,7 +288,7 @@ For example:
 ```bash
 export GIT_COPILOT_COMMIT_PROVIDER=openai
-export GIT_COPILOT_COMMIT_BASE_URL=http://example.com:8001/v1
+export GIT_COPILOT_COMMIT_BASE_URL=http://example.com:8001/v1/chat/completions
 git ai-commit --model openai/gpt-oss-120b
 ```
 
@@ -265,5 +300,5 @@ git ai-commit --model openai/gpt-oss-120b
 > git config --global diff.context 3
 > ```
 >
-> This may be useful because this tool sends the diffs with surrounding context
-> to the LLM for generating a commit message
+> This may be useful because this tool sends the diffs with surrounding context to the LLM for
+> generating a commit message