git-llm-tool 0.1.0.tar.gz → 0.1.16.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (34)
  1. {git_llm_tool-0.1.0 → git_llm_tool-0.1.16}/PKG-INFO +238 -8
  2. {git_llm_tool-0.1.0 → git_llm_tool-0.1.16}/README.md +229 -6
  3. {git_llm_tool-0.1.0 → git_llm_tool-0.1.16}/git_llm_tool/__init__.py +1 -1
  4. {git_llm_tool-0.1.0 → git_llm_tool-0.1.16}/git_llm_tool/cli.py +20 -18
  5. git_llm_tool-0.1.16/git_llm_tool/commands/changelog_cmd.py +189 -0
  6. {git_llm_tool-0.1.0 → git_llm_tool-0.1.16}/git_llm_tool/commands/commit_cmd.py +22 -15
  7. {git_llm_tool-0.1.0 → git_llm_tool-0.1.16}/git_llm_tool/core/config.py +77 -13
  8. git_llm_tool-0.1.16/git_llm_tool/core/diff_optimizer.py +206 -0
  9. git_llm_tool-0.1.16/git_llm_tool/core/jira_helper.py +238 -0
  10. git_llm_tool-0.1.16/git_llm_tool/core/rate_limiter.py +136 -0
  11. git_llm_tool-0.1.16/git_llm_tool/core/smart_chunker.py +343 -0
  12. git_llm_tool-0.1.16/git_llm_tool/core/token_counter.py +169 -0
  13. git_llm_tool-0.1.16/git_llm_tool/providers/__init__.py +21 -0
  14. git_llm_tool-0.1.16/git_llm_tool/providers/anthropic_langchain.py +42 -0
  15. git_llm_tool-0.1.16/git_llm_tool/providers/azure_openai_langchain.py +59 -0
  16. {git_llm_tool-0.1.0 → git_llm_tool-0.1.16}/git_llm_tool/providers/base.py +9 -8
  17. {git_llm_tool-0.1.0 → git_llm_tool-0.1.16}/git_llm_tool/providers/factory.py +24 -16
  18. git_llm_tool-0.1.16/git_llm_tool/providers/gemini_langchain.py +57 -0
  19. git_llm_tool-0.1.16/git_llm_tool/providers/langchain_base.py +640 -0
  20. git_llm_tool-0.1.16/git_llm_tool/providers/ollama_langchain.py +45 -0
  21. git_llm_tool-0.1.16/git_llm_tool/providers/openai_langchain.py +42 -0
  22. {git_llm_tool-0.1.0 → git_llm_tool-0.1.16}/pyproject.toml +9 -2
  23. git_llm_tool-0.1.0/git_llm_tool/core/jira_helper.py +0 -168
  24. git_llm_tool-0.1.0/git_llm_tool/providers/__init__.py +0 -17
  25. git_llm_tool-0.1.0/git_llm_tool/providers/anthropic.py +0 -90
  26. git_llm_tool-0.1.0/git_llm_tool/providers/azure_openai.py +0 -112
  27. git_llm_tool-0.1.0/git_llm_tool/providers/gemini.py +0 -83
  28. git_llm_tool-0.1.0/git_llm_tool/providers/openai.py +0 -93
  29. {git_llm_tool-0.1.0 → git_llm_tool-0.1.16}/LICENSE +0 -0
  30. {git_llm_tool-0.1.0 → git_llm_tool-0.1.16}/git_llm_tool/__main__.py +0 -0
  31. {git_llm_tool-0.1.0 → git_llm_tool-0.1.16}/git_llm_tool/commands/__init__.py +0 -0
  32. {git_llm_tool-0.1.0 → git_llm_tool-0.1.16}/git_llm_tool/core/__init__.py +0 -0
  33. {git_llm_tool-0.1.0 → git_llm_tool-0.1.16}/git_llm_tool/core/exceptions.py +0 -0
  34. {git_llm_tool-0.1.0 → git_llm_tool-0.1.16}/git_llm_tool/core/git_helper.py +0 -0
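The per-file changes below are standard unified diffs. As a minimal sketch (with an illustrative, abbreviated PKG-INFO excerpt, not the full file contents), Python's `difflib` emits the same format for any two file versions:

```python
import difflib

# Abbreviated old/new PKG-INFO contents (illustrative excerpt only)
old = """Metadata-Version: 2.3
Name: git-llm-tool
Version: 0.1.0
License: MIT
""".splitlines(keepends=True)

new = """Metadata-Version: 2.3
Name: git-llm-tool
Version: 0.1.16
License: MIT
""".splitlines(keepends=True)

# Produce a unified diff with the conventional ---/+++/@@ headers
text = "".join(difflib.unified_diff(
    old, new,
    fromfile="git_llm_tool-0.1.0/PKG-INFO",
    tofile="git_llm_tool-0.1.16/PKG-INFO",
))
print(text)
```

Lines prefixed with a space are unchanged context, `-` lines exist only in 0.1.0, and `+` lines only in 0.1.16 — the same convention used throughout the hunks below.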
--- git_llm_tool-0.1.0/PKG-INFO
+++ git_llm_tool-0.1.16/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.3
 Name: git-llm-tool
-Version: 0.1.0
+Version: 0.1.16
 Summary: AI-powered git commit message and changelog generator
 License: MIT
 Keywords: git,commit,llm,ai,automation,jira,conventional-commits
@@ -17,11 +17,18 @@ Classifier: Programming Language :: Python :: 3.13
 Classifier: Topic :: Software Development :: Libraries :: Python Modules
 Classifier: Topic :: Software Development :: Version Control :: Git
 Classifier: Topic :: Utilities
-Requires-Dist: anthropic (>=0.20.0,<0.21.0)
+Requires-Dist: anthropic (>=0.30.0,<0.31.0)
 Requires-Dist: click (>=8.1.0,<9.0.0)
 Requires-Dist: google-generativeai (>=0.5.0,<0.6.0)
+Requires-Dist: halo (>=0.0.31,<0.0.32)
+Requires-Dist: langchain (>=0.2.0,<0.3.0)
+Requires-Dist: langchain-anthropic (>=0.1.23,<0.2.0)
+Requires-Dist: langchain-google-genai (>=1.0.0,<2.0.0)
+Requires-Dist: langchain-ollama (>=0.1.0,<0.2.0)
+Requires-Dist: langchain-openai (>=0.1.0,<0.2.0)
 Requires-Dist: openai (>=1.0.0,<2.0.0)
 Requires-Dist: pyyaml (>=6.0,<7.0)
+Requires-Dist: tiktoken (>=0.7.0,<0.8.0)
 Project-URL: Documentation, https://github.com/z0890142/git-llm-tool#readme
 Project-URL: Homepage, https://github.com/z0890142/git-llm-tool
 Project-URL: Repository, https://github.com/z0890142/git-llm-tool
@@ -41,12 +48,14 @@ AI-powered git commit message and changelog generator using LLM APIs.
 - [Installation](#installation)
 - [Quick Start](#quick-start)
 - [Configuration](#configuration)
+- [Advanced Features](#advanced-features)
 - [CLI Commands Reference](#cli-commands-reference)
 - [Environment Variables](#environment-variables)
 - [Usage Examples](#usage-examples)
 - [Supported Models](#supported-models)
 - [Development](#development)
 - [Contributing](#contributing)
+- [Git Custom Command Integration](#git-custom-command-integration)
 - [Troubleshooting](#troubleshooting)
 - [License](#license)
 
@@ -54,12 +63,15 @@ AI-powered git commit message and changelog generator using LLM APIs.
 
 - 🤖 **Smart Commit Messages**: Automatically generate commit messages from git diff using AI
 - 📝 **Changelog Generation**: Generate structured changelogs from git history
-- 🔧 **Multiple LLM Providers**: Support for OpenAI, Anthropic Claude, Google Gemini, and Azure OpenAI
+- 🔧 **Multiple LLM Providers**: Support for OpenAI, Anthropic Claude, Google Gemini, Azure OpenAI, and Ollama
+- 🚀 **Intelligent Chunking**: Automatic diff splitting with parallel processing for large changes
+- 🔄 **Hybrid Processing**: Use local Ollama for chunk processing + cloud LLM for final quality
+- 📊 **Progress Indicators**: Beautiful progress bars with Halo for long-running operations
 - ⚙️ **Hierarchical Configuration**: Project-level and global configuration support
 - 🎯 **Jira Integration**: Automatic ticket detection and work hours tracking
 - 🌐 **Multi-language Support**: Generate messages in different languages
 - ✏️ **Editor Integration**: Configurable editor support for reviewing commit messages
-- 🚀 **Easy Setup**: Simple installation and configuration
+- 🛠️ **Easy Setup**: Simple installation and configuration
 
 ## Installation
 
@@ -70,7 +82,7 @@ pip install git-llm-tool
 
 ### From Source
 ```bash
-git clone https://github.com/your-username/git-llm-tool.git
+git clone https://github.com/z0890142/git-llm-tool.git
 cd git-llm-tool
 poetry install
 ```
@@ -152,6 +164,11 @@ git-llm config set llm.api_keys.google your-key
 git-llm config set llm.azure_openai.endpoint https://your-resource.openai.azure.com/
 git-llm config set llm.azure_openai.api_version 2024-12-01-preview
 git-llm config set llm.azure_openai.deployment_name gpt-4o
+
+# Hybrid Ollama processing (optional)
+git-llm config set llm.use_ollama_for_chunks true
+git-llm config set llm.ollama_model llama3:8b
+git-llm config set llm.ollama_base_url http://localhost:11434
 ```
 
 #### Editor Configuration
@@ -194,12 +211,21 @@ llm:
 api_version: '2024-12-01-preview'
 deployment_name: 'gpt-4o'
 
+# LangChain and intelligent processing
+use_langchain: true
+chunking_threshold: 12000 # Enable chunking for diffs larger than 12k tokens
+
+# Hybrid Ollama processing (optional)
+use_ollama_for_chunks: false # Set to true to enable
+ollama_model: "llama3:8b" # Local model for chunk processing
+ollama_base_url: "http://localhost:11434"
+
 editor:
 preferred_editor: 'vi'
 
 jira:
 enabled: true
-branch_regex: '^(feat|fix|chore)\/([A-Z]+-\d+)\/.+$'
+ticket_pattern: '^(feat|fix|chore)\/([A-Z]+-\d+)\/.+$'
 ```
 
 ### View Configuration
@@ -212,6 +238,86 @@ git-llm config get llm.default_model
 git-llm config get editor.preferred_editor
 ```
 
+## Advanced Features
+
+### Intelligent Chunking & Parallel Processing
+
+For large diffs, git-llm-tool automatically uses intelligent chunking to break down changes into manageable pieces:
+
+- **Automatic Threshold Detection**: Diffs larger than 12,000 tokens are automatically chunked
+- **Smart Splitting**: Prioritizes file-based splitting, then hunks, then size-based splitting
+- **Parallel Processing**: Multiple chunks processed simultaneously for faster results
+- **Progress Indicators**: Beautiful progress bars show real-time processing status
+
+```bash
+# Enable verbose mode to see chunking details
+git-llm commit --verbose
+
+# Example output:
+# 🔄 Analyzing diff and creating intelligent chunks...
+# ✅ Created 4 intelligent chunks
+# 📄 Smart chunking stats:
+#    Total chunks: 4
+#    File chunks: 2
+#    Hunk chunks: 2
+#    Complete files: 2
+# 🚀 Processing 4 chunks in parallel (4/4 completed)...
+# ✅ Parallel processing completed: 4/4 chunks successful
+# 🔄 Combining 4 summaries into final commit message...
+# ✅ Final commit message generated successfully
+```
+
+### Hybrid Ollama Processing
+
+Use local Ollama for chunk processing combined with cloud LLM for final quality:
+
+#### Setup Ollama
+```bash
+# Install Ollama (macOS/Linux)
+curl -fsSL https://ollama.ai/install.sh | sh
+
+# Pull a model
+ollama pull llama3:8b
+# or
+ollama pull llama3.1:8b
+ollama pull codellama:7b
+```
+
+#### Enable Hybrid Mode
+```bash
+# Enable hybrid processing
+git-llm config set llm.use_ollama_for_chunks true
+git-llm config set llm.ollama_model llama3:8b
+
+# Verify Ollama is running
+curl http://localhost:11434/api/version
+```
+
+#### How It Works
+1. **Map Phase**: Each chunk processed locally with Ollama (fast, private)
+2. **Reduce Phase**: Final combination using cloud LLM (high quality)
+3. **Cost Efficient**: Reduces cloud API usage while maintaining quality
+4. **Privacy**: Sensitive code chunks processed locally
+
+```bash
+# With verbose mode, you'll see:
+git-llm commit --verbose
+
+# 🔄 Hybrid processing mode:
+#    Map phase (chunks): Ollama (llama3:8b)
+#    Reduce phase (final): gpt-4o
+# 🚀 Processing 4 chunks in parallel (4/4 completed)...
+```
+
+### LangChain Integration
+
+Advanced LLM provider management with automatic model selection:
+
+- **Automatic Provider Detection**: Based on model name
+- **Retry Logic**: Exponential backoff for failed requests
+- **Rate Limiting**: Prevents API quota exhaustion
+- **Error Recovery**: Graceful fallbacks
+
 ## CLI Commands Reference
 
 ### Commit Command
@@ -326,12 +432,24 @@ jira:
 ### Azure OpenAI
 - Any deployment of the above OpenAI models
 
+### Ollama (Local)
+For hybrid processing (chunk processing only):
+- `llama3:8b` (recommended)
+- `llama3.1:8b`
+- `llama3:70b`
+- `codellama:7b`
+- `codellama:13b`
+- `mistral:7b`
+- `qwen2:7b`
+
+**Note**: Ollama models are used only for chunk processing in hybrid mode. Final commit message generation still uses cloud LLMs for optimal quality.
+
 ## Development
 
 ### Setup Development Environment
 ```bash
 # Clone repository
-git clone https://github.com/your-username/git-llm-tool.git
+git clone https://github.com/z0890142/git-llm-tool.git
 cd git-llm-tool
 
 # Install dependencies
@@ -385,11 +503,107 @@ poetry publish
 8. Push to the branch (`git push origin feature/amazing-feature`)
 9. Open a Pull Request
 
+## Git Custom Command Integration
+
+You can integrate git-llm as a native git subcommand, allowing you to use `git llm` instead of `git-llm`.
+
+### Method 1: Git Aliases (Recommended)
+
+Add aliases to your git configuration:
+
+```bash
+# Add git aliases for all commands
+git config --global alias.llm-commit '!git-llm commit'
+git config --global alias.llm-changelog '!git-llm changelog'
+git config --global alias.llm-config '!git-llm config'
+
+# Or create a general alias
+git config --global alias.llm '!git-llm'
+```
+
+Now you can use:
+```bash
+git llm commit      # Instead of git-llm commit
+git llm changelog   # Instead of git-llm changelog
+git llm config get  # Instead of git-llm config get
+
+# Or with specific aliases
+git llm-commit      # Direct alias to git-llm commit
+git llm-changelog   # Direct alias to git-llm changelog
+```
+
+### Method 2: Shell Aliases
+
+Add to your shell profile (`.bashrc`, `.zshrc`, etc.):
+
+```bash
+# Simple alias
+alias gllm='git-llm'
+
+# Or git-style aliases
+alias gllmc='git-llm commit'
+alias gllmcl='git-llm changelog'
+alias gllmcfg='git-llm config'
+```
+
+Usage:
+```bash
+gllm commit   # git-llm commit
+gllmc         # git-llm commit
+gllmcl        # git-llm changelog
+```
+
+### Method 3: Custom Git Script
+
+Create a custom git command script:
+
+```bash
+# Create git-llm script in your PATH
+sudo tee /usr/local/bin/git-llm > /dev/null << 'EOF'
+#!/bin/bash
+# Git-LLM integration script
+exec git-llm "$@"
+EOF
+
+sudo chmod +x /usr/local/bin/git-llm
+```
+
+Now you can use:
+```bash
+git llm commit     # Calls git-llm commit
+git llm changelog  # Calls git-llm changelog
+```
+
+### Recommended Git Workflow
+
+With git aliases configured, your workflow becomes:
+
+```bash
+# Make changes
+echo "console.log('Hello');" > app.js
+
+# Stage changes
+git add .
+
+# Generate AI commit message (opens editor)
+git llm commit
+
+# Or commit directly
+git llm commit --apply
+
+# Generate changelog
+git llm changelog
+
+# Check configuration
+git llm config get
+```
+
 ## Requirements
 
 - Python 3.12+
 - Git
-- At least one LLM provider API key
+- At least one LLM provider API key (OpenAI, Anthropic, Google, or Azure OpenAI)
+- **Optional**: Ollama for hybrid processing (local chunk processing)
 
 ## Troubleshooting
 
@@ -410,6 +624,22 @@ poetry publish
 - Make sure you have commits in the specified range
 - Check git log: `git log --oneline`
 
+**"Ollama not available, using main LLM for chunks"**
+- Make sure Ollama is installed and running: `ollama serve`
+- Check Ollama is accessible: `curl http://localhost:11434/api/version`
+- Verify the model is pulled: `ollama list`
+- Pull the model if needed: `ollama pull llama3:8b`
+
+**"Processing is slower than expected"**
+- For large diffs, enable hybrid mode with Ollama for faster chunk processing
+- Check your `chunking_threshold` setting - lower values use chunking sooner
+- Use `--verbose` to see processing details and bottlenecks
+
+**"Chunk processing failed"**
+- If using Ollama, ensure sufficient system resources (RAM)
+- Try a smaller model like `llama3:8b` instead of larger models
+- Check Ollama logs: `ollama logs`
+
 ## License
 
 MIT License
--- git_llm_tool-0.1.0/README.md
+++ git_llm_tool-0.1.16/README.md
@@ -12,12 +12,14 @@ AI-powered git commit message and changelog generator using LLM APIs.
 - [Installation](#installation)
 - [Quick Start](#quick-start)
 - [Configuration](#configuration)
+- [Advanced Features](#advanced-features)
 - [CLI Commands Reference](#cli-commands-reference)
 - [Environment Variables](#environment-variables)
 - [Usage Examples](#usage-examples)
 - [Supported Models](#supported-models)
 - [Development](#development)
 - [Contributing](#contributing)
+- [Git Custom Command Integration](#git-custom-command-integration)
 - [Troubleshooting](#troubleshooting)
 - [License](#license)
 
@@ -25,12 +27,15 @@ AI-powered git commit message and changelog generator using LLM APIs.
 
 - 🤖 **Smart Commit Messages**: Automatically generate commit messages from git diff using AI
 - 📝 **Changelog Generation**: Generate structured changelogs from git history
-- 🔧 **Multiple LLM Providers**: Support for OpenAI, Anthropic Claude, Google Gemini, and Azure OpenAI
+- 🔧 **Multiple LLM Providers**: Support for OpenAI, Anthropic Claude, Google Gemini, Azure OpenAI, and Ollama
+- 🚀 **Intelligent Chunking**: Automatic diff splitting with parallel processing for large changes
+- 🔄 **Hybrid Processing**: Use local Ollama for chunk processing + cloud LLM for final quality
+- 📊 **Progress Indicators**: Beautiful progress bars with Halo for long-running operations
 - ⚙️ **Hierarchical Configuration**: Project-level and global configuration support
 - 🎯 **Jira Integration**: Automatic ticket detection and work hours tracking
 - 🌐 **Multi-language Support**: Generate messages in different languages
 - ✏️ **Editor Integration**: Configurable editor support for reviewing commit messages
-- 🚀 **Easy Setup**: Simple installation and configuration
+- 🛠️ **Easy Setup**: Simple installation and configuration
 
 ## Installation
 
@@ -41,7 +46,7 @@ pip install git-llm-tool
 
 ### From Source
 ```bash
-git clone https://github.com/your-username/git-llm-tool.git
+git clone https://github.com/z0890142/git-llm-tool.git
 cd git-llm-tool
 poetry install
 ```
@@ -123,6 +128,11 @@ git-llm config set llm.api_keys.google your-key
 git-llm config set llm.azure_openai.endpoint https://your-resource.openai.azure.com/
 git-llm config set llm.azure_openai.api_version 2024-12-01-preview
 git-llm config set llm.azure_openai.deployment_name gpt-4o
+
+# Hybrid Ollama processing (optional)
+git-llm config set llm.use_ollama_for_chunks true
+git-llm config set llm.ollama_model llama3:8b
+git-llm config set llm.ollama_base_url http://localhost:11434
 ```
 
 #### Editor Configuration
@@ -165,12 +175,21 @@ llm:
 api_version: '2024-12-01-preview'
 deployment_name: 'gpt-4o'
 
+# LangChain and intelligent processing
+use_langchain: true
+chunking_threshold: 12000 # Enable chunking for diffs larger than 12k tokens
+
+# Hybrid Ollama processing (optional)
+use_ollama_for_chunks: false # Set to true to enable
+ollama_model: "llama3:8b" # Local model for chunk processing
+ollama_base_url: "http://localhost:11434"
+
 editor:
 preferred_editor: 'vi'
 
 jira:
 enabled: true
-branch_regex: '^(feat|fix|chore)\/([A-Z]+-\d+)\/.+$'
+ticket_pattern: '^(feat|fix|chore)\/([A-Z]+-\d+)\/.+$'
 ```
 
 ### View Configuration
@@ -183,6 +202,86 @@ git-llm config get llm.default_model
 git-llm config get editor.preferred_editor
 ```
 
+## Advanced Features
+
+### Intelligent Chunking & Parallel Processing
+
+For large diffs, git-llm-tool automatically uses intelligent chunking to break down changes into manageable pieces:
+
+- **Automatic Threshold Detection**: Diffs larger than 12,000 tokens are automatically chunked
+- **Smart Splitting**: Prioritizes file-based splitting, then hunks, then size-based splitting
+- **Parallel Processing**: Multiple chunks processed simultaneously for faster results
+- **Progress Indicators**: Beautiful progress bars show real-time processing status
+
+```bash
+# Enable verbose mode to see chunking details
+git-llm commit --verbose
+
+# Example output:
+# 🔄 Analyzing diff and creating intelligent chunks...
+# ✅ Created 4 intelligent chunks
+# 📄 Smart chunking stats:
+#    Total chunks: 4
+#    File chunks: 2
+#    Hunk chunks: 2
+#    Complete files: 2
+# 🚀 Processing 4 chunks in parallel (4/4 completed)...
+# ✅ Parallel processing completed: 4/4 chunks successful
+# 🔄 Combining 4 summaries into final commit message...
+# ✅ Final commit message generated successfully
+```
+
+### Hybrid Ollama Processing
+
+Use local Ollama for chunk processing combined with cloud LLM for final quality:
+
+#### Setup Ollama
+```bash
+# Install Ollama (macOS/Linux)
+curl -fsSL https://ollama.ai/install.sh | sh
+
+# Pull a model
+ollama pull llama3:8b
+# or
+ollama pull llama3.1:8b
+ollama pull codellama:7b
+```
+
+#### Enable Hybrid Mode
+```bash
+# Enable hybrid processing
+git-llm config set llm.use_ollama_for_chunks true
+git-llm config set llm.ollama_model llama3:8b
+
+# Verify Ollama is running
+curl http://localhost:11434/api/version
+```
+
+#### How It Works
+1. **Map Phase**: Each chunk processed locally with Ollama (fast, private)
+2. **Reduce Phase**: Final combination using cloud LLM (high quality)
+3. **Cost Efficient**: Reduces cloud API usage while maintaining quality
+4. **Privacy**: Sensitive code chunks processed locally
+
+```bash
+# With verbose mode, you'll see:
+git-llm commit --verbose
+
+# 🔄 Hybrid processing mode:
+#    Map phase (chunks): Ollama (llama3:8b)
+#    Reduce phase (final): gpt-4o
+# 🚀 Processing 4 chunks in parallel (4/4 completed)...
```
+
+### LangChain Integration
+
+Advanced LLM provider management with automatic model selection:
+
+- **Automatic Provider Detection**: Based on model name
+- **Retry Logic**: Exponential backoff for failed requests
+- **Rate Limiting**: Prevents API quota exhaustion
+- **Error Recovery**: Graceful fallbacks
+
 ## CLI Commands Reference
 
 ### Commit Command
@@ -297,12 +396,24 @@ jira:
 ### Azure OpenAI
 - Any deployment of the above OpenAI models
 
+### Ollama (Local)
+For hybrid processing (chunk processing only):
+- `llama3:8b` (recommended)
+- `llama3.1:8b`
+- `llama3:70b`
+- `codellama:7b`
+- `codellama:13b`
+- `mistral:7b`
+- `qwen2:7b`
+
+**Note**: Ollama models are used only for chunk processing in hybrid mode. Final commit message generation still uses cloud LLMs for optimal quality.
+
 ## Development
 
 ### Setup Development Environment
 ```bash
 # Clone repository
-git clone https://github.com/your-username/git-llm-tool.git
+git clone https://github.com/z0890142/git-llm-tool.git
 cd git-llm-tool
 
 # Install dependencies
@@ -356,11 +467,107 @@ poetry publish
 8. Push to the branch (`git push origin feature/amazing-feature`)
 9. Open a Pull Request
 
+## Git Custom Command Integration
+
+You can integrate git-llm as a native git subcommand, allowing you to use `git llm` instead of `git-llm`.
+
+### Method 1: Git Aliases (Recommended)
+
+Add aliases to your git configuration:
+
+```bash
+# Add git aliases for all commands
+git config --global alias.llm-commit '!git-llm commit'
+git config --global alias.llm-changelog '!git-llm changelog'
+git config --global alias.llm-config '!git-llm config'
+
+# Or create a general alias
+git config --global alias.llm '!git-llm'
+```
+
+Now you can use:
+```bash
+git llm commit      # Instead of git-llm commit
+git llm changelog   # Instead of git-llm changelog
+git llm config get  # Instead of git-llm config get
+
+# Or with specific aliases
+git llm-commit      # Direct alias to git-llm commit
+git llm-changelog   # Direct alias to git-llm changelog
+```
+
+### Method 2: Shell Aliases
+
+Add to your shell profile (`.bashrc`, `.zshrc`, etc.):
+
+```bash
+# Simple alias
+alias gllm='git-llm'
+
+# Or git-style aliases
+alias gllmc='git-llm commit'
+alias gllmcl='git-llm changelog'
+alias gllmcfg='git-llm config'
+```
+
+Usage:
+```bash
+gllm commit   # git-llm commit
+gllmc         # git-llm commit
+gllmcl        # git-llm changelog
+```
+
+### Method 3: Custom Git Script
+
+Create a custom git command script:
+
+```bash
+# Create git-llm script in your PATH
+sudo tee /usr/local/bin/git-llm > /dev/null << 'EOF'
+#!/bin/bash
+# Git-LLM integration script
+exec git-llm "$@"
+EOF
+
+sudo chmod +x /usr/local/bin/git-llm
+```
+
+Now you can use:
+```bash
+git llm commit     # Calls git-llm commit
+git llm changelog  # Calls git-llm changelog
+```
+
+### Recommended Git Workflow
+
+With git aliases configured, your workflow becomes:
+
+```bash
+# Make changes
+echo "console.log('Hello');" > app.js
+
+# Stage changes
+git add .
+
+# Generate AI commit message (opens editor)
+git llm commit
+
+# Or commit directly
+git llm commit --apply
+
+# Generate changelog
+git llm changelog
+
+# Check configuration
+git llm config get
+```
+
 ## Requirements
 
 - Python 3.12+
 - Git
-- At least one LLM provider API key
+- At least one LLM provider API key (OpenAI, Anthropic, Google, or Azure OpenAI)
+- **Optional**: Ollama for hybrid processing (local chunk processing)
 
 ## Troubleshooting
 
@@ -381,6 +588,22 @@ poetry publish
 - Make sure you have commits in the specified range
 - Check git log: `git log --oneline`
 
+**"Ollama not available, using main LLM for chunks"**
+- Make sure Ollama is installed and running: `ollama serve`
+- Check Ollama is accessible: `curl http://localhost:11434/api/version`
+- Verify the model is pulled: `ollama list`
+- Pull the model if needed: `ollama pull llama3:8b`
+
+**"Processing is slower than expected"**
+- For large diffs, enable hybrid mode with Ollama for faster chunk processing
+- Check your `chunking_threshold` setting - lower values use chunking sooner
+- Use `--verbose` to see processing details and bottlenecks
+
+**"Chunk processing failed"**
+- If using Ollama, ensure sufficient system resources (RAM)
+- Try a smaller model like `llama3:8b` instead of larger models
+- Check Ollama logs: `ollama logs`
+
 ## License
 
 MIT License
--- git_llm_tool-0.1.0/git_llm_tool/__init__.py
+++ git_llm_tool-0.1.16/git_llm_tool/__init__.py
@@ -1,5 +1,5 @@
 """Git-LLM-Tool: AI-powered git commit message and changelog generator."""
 
-__version__ = "0.1.0"
+__version__ = "0.1.5"
 __author__ = "skyler-gogolook"
 __email__ = "skyler.lo@gogolook.com"