glance-cli 0.14.0 → 0.15.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +33 -0
- package/README.md +13 -1
- package/dist/cli.js +42 -42
- package/package.json +1 -1
- package/src/cli/config.ts +1 -1
- package/src/core/summarizer.ts +106 -51
package/CHANGELOG.md
CHANGED
@@ -5,6 +5,39 @@ All notable changes to glance-cli will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [0.15.0] - 2026-01-06
+
+### 🚀 Added: Comprehensive Local Model Support
+- **Fixed gpt-oss model compatibility**: Added special handling for gpt-oss models that use "thinking" field in responses
+- **Tested and verified 7 Ollama models**:
+  - ✅ llama3:latest - Fast, reliable general-purpose model
+  - ✅ gemma3:4b - Lightweight, efficient for quick summaries
+  - ✅ mistral:7b / mistral:latest - Excellent quality responses
+  - ✅ gpt-oss:20b - Advanced local model with reasoning capabilities
+  - ✅ gpt-oss:120b-cloud - Largest model for complex tasks
+  - ✅ deepseek-r1:latest - Strong reasoning and analysis
+- **100% test success rate**: All models work with --tldr, --key-points, and --eli5 modes
+- **Improved response parsing**: Better extraction of meaningful content from various model response formats
+
+### 🐛 Fixed
+- **gpt-oss models now properly route to Ollama**: Previously failing gpt-oss models now correctly use local Ollama endpoint
+- **Response extraction for thinking-based models**: Models that output reasoning in "thinking" field now have their responses properly extracted
+- **Consistent model detection**: Enhanced provider detection to correctly identify local vs cloud models
+
+### 📝 Documentation
+- Added comprehensive list of tested and supported local models in README
+- Included model-specific notes for optimal usage
+- Updated examples with various model options
+
+## [0.14.0] - 2026-01-06
+
+### ✨ Added: Clipboard Copy Feature
+- **New --copy/-c flag**: Copy summaries directly to clipboard
+- **Cross-platform support**: Works on macOS, Windows, and Linux via clipboardy package
+- **Smart formatting**: Copies raw text for terminal output, formatted for JSON/markdown
+- **Updated documentation**: Added examples and help text for copy feature
+- **Fixed metadata display**: Now shows actual extracted values instead of placeholders
+
 ## [0.13.1] - 2026-01-05
 
 ### 📦 Optimized: Package Size (Breaking: Major Size Reduction)
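The 0.15.0 notes above describe special handling for gpt-oss models that return their text in a "thinking" field. The body of package/src/core/summarizer.ts is not included in this diff, so the TypeScript below is only a sketch of how that extraction could look; the interface shapes and the extractResponseText helper are assumptions, not glance-cli's actual code.

```ts
// Hypothetical sketch - not the actual code in package/src/core/summarizer.ts.
// Assumes an Ollama /api/chat-style response in which gpt-oss models may put
// their answer in message.thinking instead of message.content.
interface OllamaMessage {
  content?: string;
  thinking?: string; // populated by reasoning models such as gpt-oss
}

interface OllamaChatResponse {
  message?: OllamaMessage;
  response?: string; // /api/generate-style responses use this field instead
}

function extractResponseText(res: OllamaChatResponse): string {
  // Prefer ordinary content, fall back to the "thinking" field, then to the
  // /api/generate "response" field, so thinking-only models still produce output.
  const text =
    res.message?.content?.trim() ||
    res.message?.thinking?.trim() ||
    res.response?.trim() ||
    '';
  if (!text) {
    throw new Error('Model returned an empty response');
  }
  return text;
}
```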
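The 0.14.0 entry mentions a --copy/-c flag built on the clipboardy package. clipboardy's write() is a real API; the surrounding helper below is a hypothetical sketch of how the flag might hand a summary to it, with formatOutput and the flag plumbing assumed rather than taken from the package.

```ts
// Hypothetical sketch of the --copy/-c flag; clipboardy's write() is a real API,
// the remaining names here are assumed for illustration.
import clipboard from 'clipboardy';

type OutputFormat = 'terminal' | 'json' | 'markdown';

// Assumed helper: renders the summary in the requested output format.
function formatOutput(summary: string, format: 'json' | 'markdown'): string {
  return format === 'json'
    ? JSON.stringify({ summary }, null, 2)
    : `## Summary\n\n${summary}`;
}

async function maybeCopy(summary: string, format: OutputFormat, copy: boolean): Promise<void> {
  if (!copy) return;
  // Per the changelog: raw text for terminal output, formatted output for JSON/markdown.
  const text = format === 'terminal' ? summary : formatOutput(summary, format);
  await clipboard.write(text); // cross-platform: macOS, Windows, Linux
}
```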
package/README.md
CHANGED
@@ -81,9 +81,21 @@ glance <url> --full -l es --voice isabella --read # Override to Spanish + voic
 ```
 
 ### 🤖 **AI Models**
+
+#### Tested & Supported Local Models (via Ollama)
+- ✅ **llama3:latest** - Fast, reliable, great for general use
+- ✅ **gemma3:4b** - Lightweight, efficient for quick summaries
+- ✅ **mistral:7b** / **mistral:latest** - Excellent quality responses
+- ✅ **gpt-oss:20b** - Advanced local model with reasoning capabilities
+- ✅ **gpt-oss:120b-cloud** - Largest model for complex tasks
+- ✅ **deepseek-r1:latest** - Strong reasoning and analysis
+
 ```bash
 # Free local AI (recommended)
-glance <url> --model llama3
+glance <url> --model llama3:latest
+glance <url> --model gemma3:4b
+glance <url> --model mistral:7b
+glance <url> --model gpt-oss:20b
 
 # Premium cloud AI (optional, requires API keys)
 glance <url> --model gpt-4o-mini # OpenAI