@tecet/ollm 0.1.4 → 0.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (86)
  1. package/dist/cli.js +20 -14
  2. package/dist/cli.js.map +3 -3
  3. package/dist/services/documentService.d.ts.map +1 -1
  4. package/dist/services/documentService.js +12 -2
  5. package/dist/services/documentService.js.map +1 -1
  6. package/dist/ui/components/docs/DocsPanel.d.ts.map +1 -1
  7. package/dist/ui/components/docs/DocsPanel.js +1 -1
  8. package/dist/ui/components/docs/DocsPanel.js.map +1 -1
  9. package/dist/ui/components/launch/VersionBanner.js +1 -1
  10. package/dist/ui/components/launch/VersionBanner.js.map +1 -1
  11. package/dist/ui/components/layout/KeybindsLegend.d.ts.map +1 -1
  12. package/dist/ui/components/layout/KeybindsLegend.js +1 -1
  13. package/dist/ui/components/layout/KeybindsLegend.js.map +1 -1
  14. package/dist/ui/components/tabs/BugReportTab.js +1 -1
  15. package/dist/ui/components/tabs/BugReportTab.js.map +1 -1
  16. package/dist/ui/services/docsService.d.ts +12 -27
  17. package/dist/ui/services/docsService.d.ts.map +1 -1
  18. package/dist/ui/services/docsService.js +40 -67
  19. package/dist/ui/services/docsService.js.map +1 -1
  20. package/docs/README.md +3 -410
  21. package/package.json +10 -7
  22. package/scripts/copy-docs-to-user.cjs +34 -0
  23. package/docs/Context/CheckpointFlowDiagram.md +0 -673
  24. package/docs/Context/ContextArchitecture.md +0 -898
  25. package/docs/Context/ContextCompression.md +0 -1102
  26. package/docs/Context/ContextManagment.md +0 -750
  27. package/docs/Context/Index.md +0 -209
  28. package/docs/Context/README.md +0 -390
  29. package/docs/DevelopmentRoadmap/Index.md +0 -238
  30. package/docs/DevelopmentRoadmap/OLLM-CLI_Releases.md +0 -419
  31. package/docs/DevelopmentRoadmap/PlanedFeatures.md +0 -448
  32. package/docs/DevelopmentRoadmap/README.md +0 -174
  33. package/docs/DevelopmentRoadmap/Roadmap.md +0 -572
  34. package/docs/DevelopmentRoadmap/RoadmapVisual.md +0 -372
  35. package/docs/Hooks/Architecture.md +0 -885
  36. package/docs/Hooks/Index.md +0 -244
  37. package/docs/Hooks/KeyboardShortcuts.md +0 -248
  38. package/docs/Hooks/Protocol.md +0 -817
  39. package/docs/Hooks/README.md +0 -403
  40. package/docs/Hooks/UserGuide.md +0 -1483
  41. package/docs/Hooks/VisualGuide.md +0 -598
  42. package/docs/Index.md +0 -506
  43. package/docs/Installation.md +0 -586
  44. package/docs/Introduction.md +0 -367
  45. package/docs/LLM Models/Index.md +0 -239
  46. package/docs/LLM Models/LLM_GettingStarted.md +0 -748
  47. package/docs/LLM Models/LLM_Index.md +0 -701
  48. package/docs/LLM Models/LLM_MemorySystem.md +0 -337
  49. package/docs/LLM Models/LLM_ModelCompatibility.md +0 -499
  50. package/docs/LLM Models/LLM_ModelsArchitecture.md +0 -933
  51. package/docs/LLM Models/LLM_ModelsCommands.md +0 -839
  52. package/docs/LLM Models/LLM_ModelsConfiguration.md +0 -1094
  53. package/docs/LLM Models/LLM_ModelsList.md +0 -1071
  54. package/docs/LLM Models/LLM_ModelsList.md.backup +0 -400
  55. package/docs/LLM Models/README.md +0 -355
  56. package/docs/MCP/MCP_Architecture.md +0 -1086
  57. package/docs/MCP/MCP_Commands.md +0 -1111
  58. package/docs/MCP/MCP_GettingStarted.md +0 -590
  59. package/docs/MCP/MCP_Index.md +0 -524
  60. package/docs/MCP/MCP_Integration.md +0 -866
  61. package/docs/MCP/MCP_Marketplace.md +0 -160
  62. package/docs/MCP/README.md +0 -415
  63. package/docs/Prompts System/Architecture.md +0 -760
  64. package/docs/Prompts System/Index.md +0 -223
  65. package/docs/Prompts System/PromptsRouting.md +0 -1047
  66. package/docs/Prompts System/PromptsTemplates.md +0 -1102
  67. package/docs/Prompts System/README.md +0 -389
  68. package/docs/Prompts System/SystemPrompts.md +0 -856
  69. package/docs/Quickstart.md +0 -535
  70. package/docs/Tools/Architecture.md +0 -884
  71. package/docs/Tools/GettingStarted.md +0 -624
  72. package/docs/Tools/Index.md +0 -216
  73. package/docs/Tools/ManifestReference.md +0 -141
  74. package/docs/Tools/README.md +0 -440
  75. package/docs/Tools/UserGuide.md +0 -773
  76. package/docs/Troubleshooting.md +0 -1265
  77. package/docs/UI&Settings/Architecture.md +0 -729
  78. package/docs/UI&Settings/ColorASCII.md +0 -34
  79. package/docs/UI&Settings/Commands.md +0 -755
  80. package/docs/UI&Settings/Configuration.md +0 -872
  81. package/docs/UI&Settings/Index.md +0 -293
  82. package/docs/UI&Settings/Keybinds.md +0 -372
  83. package/docs/UI&Settings/README.md +0 -278
  84. package/docs/UI&Settings/Terminal.md +0 -637
  85. package/docs/UI&Settings/Themes.md +0 -604
  86. package/docs/UI&Settings/UIGuide.md +0 -550
package/docs/Introduction.md
@@ -1,367 +0,0 @@
- # Welcome to OLLM CLI
-
- **Your Local AI Assistant in the Terminal**
-
- OLLM CLI is a friendly command-line tool that brings the power of AI language models right to your terminal. Think of it as having a knowledgeable assistant who can help you code, write, research, and solve problems—all while keeping your data private and secure on your own computer.
-
- ---
-
- ## What is OLLM CLI?
-
- Imagine having a conversation with an AI that can:
-
- - Help you write and debug code
- - Answer questions about your project
- - Search the web for information
- - Read and edit files in your workspace
- - Remember important details across conversations
- - Run commands and automate tasks
-
- That's OLLM CLI. It's like ChatGPT, but it runs entirely on your computer using open-source AI models through [Ollama](https://ollama.ai).
-
- ---
-
- ## Why Use OLLM CLI?
-
- ### 🔒 Privacy First
-
- Your conversations, code, and data never leave your computer. No cloud services, no data collection, no privacy concerns.
-
- ### 💰 Completely Free
-
- No subscriptions, no API costs, no hidden fees. Once installed, it's yours to use as much as you want.
-
- ### 🚀 Powerful Features
-
- - **Smart Tools** - The AI can read files, search the web, and run commands
- - **Memory System** - Remembers important information across sessions
- - **Multiple Models** - Choose from dozens of open-source AI models
- - **Customizable** - Themes, modes, and settings to match your workflow
-
- ### 🎯 Built for Developers
-
- - Syntax highlighting for code
- - Git integration
- - File explorer
- - Terminal integration
- - Keyboard-driven interface
-
- ---
-
- ## How Does It Work?
-
- OLLM CLI is like a bridge between you and AI models:
-
- ```
- You → OLLM CLI → Ollama → AI Model → Response → OLLM CLI → You
- ```
-
- 1. **You type a message** in the terminal
- 2. **OLLM CLI** processes your request and adds context
- 3. **Ollama** runs the AI model on your computer
- 4. **The AI** generates a response
- 5. **OLLM CLI** displays the response with nice formatting
-
- The AI can also use "tools" to help you:
-
- - Read files in your project
- - Search the internet
- - Run shell commands
- - Edit code
- - And more!
-
- ---
-
- ## What Can You Do With It?
-
- ### For Coding
-
- ```
- You: "Read the main.ts file and explain what it does"
- AI: [Reads file] "This file is the entry point for your application..."
-
- You: "Add error handling to the login function"
- AI: [Edits file] "I've added try-catch blocks and proper error messages..."
- ```
-
- ### For Learning
-
- ```
- You: "Explain how async/await works in JavaScript"
- AI: "Async/await is a way to handle asynchronous operations..."
-
- You: "Show me an example"
- AI: "Here's a practical example: [code example]"
- ```
-
- ### For Research
-
- ```
- You: "Search for the latest React 19 features"
- AI: [Searches web] "React 19 introduces several new features..."
-
- You: "How do I use the new 'use' hook?"
- AI: "The 'use' hook allows you to..."
- ```
-
- ### For Automation
-
- ```
- You: "Run the tests and tell me if they pass"
- AI: [Runs npm test] "All 42 tests passed! Here's the summary..."
-
- You: "If they failed, fix the issues"
- AI: [Analyzes errors, edits files] "I've fixed the failing tests..."
- ```
-
- ---
-
- ## Key Features Explained
-
- ### 🎨 Beautiful Terminal Interface
-
- OLLM CLI has a modern, colorful interface that makes conversations easy to follow:
-
- - **Header Bar** - Shows your current model, context usage, and mode
- - **Chat Area** - Your conversation with the AI
- - **Side Panel** - Quick access to tools, files, and settings
- - **Status Bar** - Helpful keyboard shortcuts and status
-
- ### 🧠 Smart Context Management
-
- The AI needs to remember your conversation, but it has limits. OLLM CLI automatically:
-
- - Tracks how much "memory" is being used
- - Compresses old messages when needed
- - Keeps important information
- - Lets you save and restore conversation states
-
- ### 🛠️ Powerful Tools
-
- The AI can use tools to help you:
-
- - **File Tools** - Read, write, and edit files
- - **Web Tools** - Search the internet and fetch web pages
- - **Shell Tool** - Run commands in your terminal
- - **Memory Tools** - Remember important facts across sessions
- - **MCP Tools** - Connect to external services (GitHub, databases, etc.)
-
- ### 🎯 Multiple Modes
-
- Switch between different "personalities" for different tasks:
-
- - **Assistant Mode** - General help and conversation
- - **Developer Mode** - Focused on coding and technical tasks
- - **Planning Mode** - Help with project planning and architecture
- - **Debugger Mode** - Systematic problem-solving
-
- ### 🎨 Customizable Themes
-
- Choose from beautiful color schemes:
-
- - Solarized Dark (default)
- - Neon Dark
- - Dracula
- - Nord
- - Monokai
-
- ---
-
- ## Who Is It For?
-
- ### Developers
-
- - Write code faster with AI assistance
- - Debug issues with intelligent help
- - Learn new technologies
- - Automate repetitive tasks
-
- ### Students
-
- - Get explanations of complex topics
- - Help with homework and projects
- - Learn programming concepts
- - Practice coding with feedback
-
- ### Researchers
-
- - Search and summarize information
- - Analyze documents
- - Generate reports
- - Organize research notes
-
- ### Anyone Who Uses a Terminal
-
- - Automate tasks
- - Get quick answers
- - Manage files and projects
- - Learn command-line tools
-
- ---
-
- ## What Makes It Different?
-
- ### vs. ChatGPT
-
- - ✅ Runs on your computer (private)
- - ✅ Free to use (no subscription)
- - ✅ Can access your files and run commands
- - ✅ Customizable and extensible
- - ❌ Requires more setup
- - ❌ Needs a decent computer
-
- ### vs. GitHub Copilot
-
- - ✅ Works in the terminal, not just editors
- - ✅ Can search the web and run commands
- - ✅ Free and open-source
- - ✅ More conversational
- - ❌ Not integrated into your code editor
- - ❌ Requires manual setup
-
- ### vs. Other CLI Tools
-
- - ✅ Full conversation interface
- - ✅ Rich terminal UI with colors and formatting
- - ✅ Tool system for file operations
- - ✅ Memory across sessions
- - ✅ Multiple AI models to choose from
-
- ---
-
- ## System Requirements
-
- ### Minimum Requirements
-
- - **Computer:** Modern laptop or desktop
- - **RAM:** 8GB (16GB recommended)
- - **Storage:** 10GB free space
- - **OS:** Windows 10+, macOS 11+, or Linux
- - **Node.js:** Version 20 or higher
-
- ### Recommended Setup
-
- - **RAM:** 16GB or more
- - **GPU:** NVIDIA, AMD, or Apple Silicon (for faster AI)
- - **Storage:** SSD with 20GB+ free space
- - **Terminal:** Modern terminal emulator (Windows Terminal, iTerm2, Alacritty)
-
- ---
-
- ## Quick Example
-
- Here's what a typical conversation looks like:
-
- ```
- $ ollm
-
- ┌─────────────────────────────────────────────────────────────────┐
- │ 🦙 llama3.2:3b │ 1.2K/8K │ 🎯 Assistant │ 🕐 14:32:15 │
- ├─────────────────────────────────────────────────────────────────┤
- │ │
- │ You: Hello! Can you help me with my project? │
- │ │
- │ Assistant: Of course! I'd be happy to help. What are you │
- │ working on? │
- │ │
- │ You: I'm building a web app with React. Can you read my │
- │ package.json and tell me what dependencies I have? │
- │ │
- │ Assistant: [Uses read_file tool] │
- │ I can see you're using React 18.2.0, TypeScript 5.0, and │
- │ several other libraries. Here's a summary: │
- │ │
- │ - React 18.2.0 (UI framework) │
- │ - TypeScript 5.0 (type safety) │
- │ - Vite 4.3 (build tool) │
- │ - React Router 6.11 (routing) │
- │ │
- │ Would you like me to suggest any updates or improvements? │
- │ │
- │ You: Yes, please check if there are newer versions │
- │ │
- │ Assistant: [Uses web_search tool] │
- │ I found that React 19 was recently released with some great │
- │ new features. Would you like me to help you upgrade? │
- │ │
- │ > _ │
- ├─────────────────────────────────────────────────────────────────┤
- │ ● Ready | Ctrl+K: Commands | Ctrl+P: Panel | Ctrl+/: Help │
- └─────────────────────────────────────────────────────────────────┘
- ```
-
- ---
-
- ## Getting Started
-
- Ready to try OLLM CLI? Here's what to do next:
-
- 1. **[Install OLLM CLI](Installation.md)** - Step-by-step installation guide
- 2. **[Quick Start](Quickstart.md)** - Get up and running in 5 minutes
- 3. **[User Guide](UI&Settings/UIGuide.md)** - Learn the interface
- 4. **[Commands Reference](UI&Settings/Commands.md)** - All available commands
-
- ---
-
- ## Need Help?
-
- - **[Troubleshooting Guide](Troubleshooting.md)** - Common issues and solutions
- - **[Documentation](README.md)** - Complete documentation
- - **[GitHub Issues](https://github.com/tecet/ollm/issues)** - Report bugs or request features
- - **[Discussions](https://github.com/tecet/ollm/discussions)** - Ask questions and share ideas
-
- ---
-
- ## Philosophy
-
- OLLM CLI is built on these principles:
-
- **🔒 Privacy First**
- Your data stays on your computer. No telemetry, no tracking, no cloud services.
-
- **💡 User-Friendly**
- Powerful features with a simple, intuitive interface. You shouldn't need a PhD to use AI.
-
- **🎯 Practical**
- Built for real work, not demos. Tools that actually help you get things done.
-
- **🌟 Open Source**
- Free forever, community-driven, transparent development.
-
- **⚡ Fast & Efficient**
- Optimized for performance, minimal resource usage, quick responses.
-
- ---
-
- ## What's Next?
-
- OLLM CLI is actively developed with exciting features coming soon:
-
- - **v0.2.0** - Enhanced context management
- - **v0.3.0** - Advanced compression strategies
- - **v0.4.0** - Better reasoning model support
- - **v0.5.0** - Improved session management
- - **v0.6.0** - Multiple AI providers (Claude, GPT, Gemini)
-
- See the [Roadmap](DevelopmentRoadmap/Roadmap.md) for details.
-
- ---
-
- ## Join the Community
-
- OLLM CLI is open source and welcomes contributions:
-
- - **GitHub:** [github.com/tecet/ollm](https://github.com/tecet/ollm)
- - **Issues:** Report bugs and request features
- - **Discussions:** Share ideas and get help
- - **Pull Requests:** Contribute code and documentation
-
- ---
-
- **Ready to get started?** Head to the [Installation Guide](Installation.md) to install OLLM CLI on your computer!
-
- ---
-
- **Last Updated:** January 26, 2026
- **Version:** 0.1.0
- **Author:** tecet
package/docs/LLM Models/Index.md
@@ -1,239 +0,0 @@
- # Model Management Documentation Index
-
- **Quick Reference with Links**
-
- This index provides quick navigation to all Model Management documentation with brief descriptions.
-
- ---
-
- ## 📚 Core Documentation
-
- ### [README.md](README.md)
-
- **Overview and Navigation Guide**
-
- Main entry point for Model Management documentation. Provides overview of model lifecycle, provider integration, context configuration, and tool support.
-
- **Topics:** Overview, Models, Providers, Context, Tools
- **Audience:** All users
-
- ---
-
- ### [LLM Index](LLM_Index.md)
-
- **Complete Documentation Index**
-
- Comprehensive index with detailed summaries of all Model Management documentation. Includes line counts, navigation by audience and topic, and documentation status.
-
- **Topics:** Complete Index, Summaries, Navigation
- **Audience:** All users
-
- ---
-
- ### [Models List](LLM_ModelsList.md)
-
- **Ollama Models Reference**
-
- Complete reference for Ollama-compatible models. Includes context windows, VRAM requirements, tool calling support, quantization guide, and performance benchmarks.
-
- **Topics:** Models, VRAM, Quantization, Performance
- **Audience:** All users
-
- **Key Sections:**
-
- - Context window fundamentals
- - VRAM requirements and calculations
- - Model selection matrix
- - Tool calling support tiers
- - Quantization guide
- - Configuration examples
- - Performance benchmarks
-
- ---
-
- ### [Model Compatibility](LLM_ModelCompatibility.md)
-
- **Tested Models and Compatibility Matrix**
-
- Comprehensive compatibility information for various LLM models tested with OLLM CLI. Documents which features work with which models, known issues, and workarounds.
-
- **Topics:** Compatibility, Testing, Known Issues
- **Audience:** All users
-
- **Key Sections:**
-
- - Model categories
- - Compatibility results per model
- - Model selection guide
- - Known issues and workarounds
- - Testing methodology
-
- ---
-
- ### [Memory System](LLM_MemorySystem.md)
-
- **Cross-Session Memory Guide**
-
- Complete guide to the cross-session memory system. Covers memory storage, injection, management, and best practices.
-
- **Topics:** Memory, Persistence, Management
- **Audience:** All users
-
- **Key Sections:**
-
- - What is memory
- - Memory storage and persistence
- - Memory injection into prompts
- - Memory management commands
- - LLM-initiated memory
- - Token budget management
- - Best practices
-
- ---
-
- ## 📖 Documentation by Topic
-
- ### Model Management
-
- - [README](README.md#model-management) - Overview
- - [Models List](LLM_ModelsList.md) - Model reference
- - [Model Compatibility](LLM_ModelCompatibility.md) - Compatibility
-
- ### Provider System
-
- - [README](README.md#provider-integration) - Provider overview
- - [LLM Index](LLM_Index.md#provider-system) - Provider details
-
- ### Context Windows
-
- - [README](README.md#context-window-configuration) - Context overview
- - [Models List](LLM_ModelsList.md#context-windows) - Context sizes
- - [Model Compatibility](LLM_ModelCompatibility.md#context) - Context compatibility
-
- ### Tool Support
-
- - [README](README.md#tool-support-detection) - Tool detection
- - [Models List](LLM_ModelsList.md#tool-calling) - Tool support tiers
- - [Model Compatibility](LLM_ModelCompatibility.md#tools) - Tool compatibility
-
- ### Memory System
-
- - [Memory System](LLM_MemorySystem.md) - Complete guide
- - [README](README.md#memory-system) - Overview
- - [LLM Index](LLM_Index.md#memory-system) - Memory docs
-
- ### Reasoning Models
-
- - [README](README.md#reasoning-model-support) - Reasoning overview
- - [Model Compatibility](LLM_ModelCompatibility.md#reasoning) - Reasoning models
-
- ---
-
- ## 📖 Documentation by Audience
-
- ### For New Users
-
- 1. [README](README.md) - Start here
- 2. [Models List](LLM_ModelsList.md) - Model reference
- 3. [Memory System](LLM_MemorySystem.md) - Using memory
-
- ### For Regular Users
-
- 1. [Model Compatibility](LLM_ModelCompatibility.md) - Model selection
- 2. [Models List](LLM_ModelsList.md#quantization) - Quantization guide
- 3. [README](README.md#configuration) - Configuration
-
- ### For Developers
-
- 1. [LLM Index](LLM_Index.md) - Complete index
- 2. [Model Compatibility](LLM_ModelCompatibility.md#testing) - Testing methodology
- 3. Knowledge DB: `dev_ModelManagement.md` - Architecture
-
- ---
-
- ## 🔗 Related Documentation
-
- ### Core Systems
-
- - [Context Management](../Context/ContextManagment.md) - Context system
- - [Prompts System](../Prompts%20System/README.md) - System prompts
- - [Tools System](../Tools/README.md) - Tool execution
-
- ### Commands
-
- - [Model Commands](../UI&Settings/Commands.md#model-management) - CLI commands
- - [Memory Commands](../UI&Settings/Commands.md#memory-management) - Memory commands
-
- ### Developer Resources
-
- - Knowledge DB: `dev_ModelManagement.md` - Architecture details
- - Knowledge DB: `dev_ProviderSystem.md` - Provider system
- - Knowledge DB: `dev_ReasoningModels.md` - Reasoning models
-
- ---
-
- ## 📊 Documentation Status
-
- ### Completed ✅
-
- | Document | Status |
- | ------------------------- | ----------- |
- | README.md | ✅ Complete |
- | Index.md | ✅ Complete |
- | LLM_Index.md | ✅ Complete |
- | LLM_ModelsList.md | ✅ Complete |
- | LLM_ModelCompatibility.md | ✅ Complete |
- | LLM_MemorySystem.md | ✅ Complete |
-
- **Overall Progress:** 100% complete (6/6 files)
-
- ---
-
- ## 🎯 Quick Links
-
- ### Common Tasks
-
- - List models → [README](README.md#manage-models)
- - Select model → [Models List](LLM_ModelsList.md#model-selection)
- - Configure context → [README](README.md#configure-context-size)
- - Use memory → [Memory System](LLM_MemorySystem.md)
-
- ### Understanding Systems
-
- - How models work → [README](README.md)
- - Model selection → [Model Compatibility](LLM_ModelCompatibility.md)
- - Context windows → [Models List](LLM_ModelsList.md#context-windows)
- - Memory system → [Memory System](LLM_MemorySystem.md)
-
- ### Reference
-
- - Model list → [Models List](LLM_ModelsList.md)
- - Compatibility matrix → [Model Compatibility](LLM_ModelCompatibility.md)
- - Complete index → [LLM Index](LLM_Index.md)
-
- ---
-
- ## 🎓 Model Selection Guide
-
- ### By VRAM
-
- - **<4GB** → [Models List](LLM_ModelsList.md#low-vram)
- - **4-8GB** → [Models List](LLM_ModelsList.md#medium-vram)
- - **8GB+** → [Models List](LLM_ModelsList.md#high-vram)
-
- ### By Use Case
-
- - **General Chat** → [Model Compatibility](LLM_ModelCompatibility.md#general-purpose)
- - **Code** → [Model Compatibility](LLM_ModelCompatibility.md#code-specialized)
- - **Fast** → [Model Compatibility](LLM_ModelCompatibility.md#small-fast)
-
- ### By Features
-
- - **Tool Calling** → [Models List](LLM_ModelsList.md#tool-calling)
- - **Large Context** → [Models List](LLM_ModelsList.md#context-windows)
- - **Reasoning** → [Model Compatibility](LLM_ModelCompatibility.md#reasoning)
-
- ---
-
- **Last Updated:** January 26, 2026
- **Version:** 0.1.0