@tecet/ollm 0.1.4 → 0.1.5
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/cli.js +20 -14
- package/dist/cli.js.map +3 -3
- package/dist/services/documentService.d.ts.map +1 -1
- package/dist/services/documentService.js +12 -2
- package/dist/services/documentService.js.map +1 -1
- package/dist/ui/components/docs/DocsPanel.d.ts.map +1 -1
- package/dist/ui/components/docs/DocsPanel.js +1 -1
- package/dist/ui/components/docs/DocsPanel.js.map +1 -1
- package/dist/ui/components/launch/VersionBanner.js +1 -1
- package/dist/ui/components/launch/VersionBanner.js.map +1 -1
- package/dist/ui/components/layout/KeybindsLegend.d.ts.map +1 -1
- package/dist/ui/components/layout/KeybindsLegend.js +1 -1
- package/dist/ui/components/layout/KeybindsLegend.js.map +1 -1
- package/dist/ui/components/tabs/BugReportTab.js +1 -1
- package/dist/ui/components/tabs/BugReportTab.js.map +1 -1
- package/dist/ui/services/docsService.d.ts +12 -27
- package/dist/ui/services/docsService.d.ts.map +1 -1
- package/dist/ui/services/docsService.js +40 -67
- package/dist/ui/services/docsService.js.map +1 -1
- package/docs/README.md +3 -410
- package/package.json +10 -7
- package/scripts/copy-docs-to-user.cjs +34 -0
- package/docs/Context/CheckpointFlowDiagram.md +0 -673
- package/docs/Context/ContextArchitecture.md +0 -898
- package/docs/Context/ContextCompression.md +0 -1102
- package/docs/Context/ContextManagment.md +0 -750
- package/docs/Context/Index.md +0 -209
- package/docs/Context/README.md +0 -390
- package/docs/DevelopmentRoadmap/Index.md +0 -238
- package/docs/DevelopmentRoadmap/OLLM-CLI_Releases.md +0 -419
- package/docs/DevelopmentRoadmap/PlanedFeatures.md +0 -448
- package/docs/DevelopmentRoadmap/README.md +0 -174
- package/docs/DevelopmentRoadmap/Roadmap.md +0 -572
- package/docs/DevelopmentRoadmap/RoadmapVisual.md +0 -372
- package/docs/Hooks/Architecture.md +0 -885
- package/docs/Hooks/Index.md +0 -244
- package/docs/Hooks/KeyboardShortcuts.md +0 -248
- package/docs/Hooks/Protocol.md +0 -817
- package/docs/Hooks/README.md +0 -403
- package/docs/Hooks/UserGuide.md +0 -1483
- package/docs/Hooks/VisualGuide.md +0 -598
- package/docs/Index.md +0 -506
- package/docs/Installation.md +0 -586
- package/docs/Introduction.md +0 -367
- package/docs/LLM Models/Index.md +0 -239
- package/docs/LLM Models/LLM_GettingStarted.md +0 -748
- package/docs/LLM Models/LLM_Index.md +0 -701
- package/docs/LLM Models/LLM_MemorySystem.md +0 -337
- package/docs/LLM Models/LLM_ModelCompatibility.md +0 -499
- package/docs/LLM Models/LLM_ModelsArchitecture.md +0 -933
- package/docs/LLM Models/LLM_ModelsCommands.md +0 -839
- package/docs/LLM Models/LLM_ModelsConfiguration.md +0 -1094
- package/docs/LLM Models/LLM_ModelsList.md +0 -1071
- package/docs/LLM Models/LLM_ModelsList.md.backup +0 -400
- package/docs/LLM Models/README.md +0 -355
- package/docs/MCP/MCP_Architecture.md +0 -1086
- package/docs/MCP/MCP_Commands.md +0 -1111
- package/docs/MCP/MCP_GettingStarted.md +0 -590
- package/docs/MCP/MCP_Index.md +0 -524
- package/docs/MCP/MCP_Integration.md +0 -866
- package/docs/MCP/MCP_Marketplace.md +0 -160
- package/docs/MCP/README.md +0 -415
- package/docs/Prompts System/Architecture.md +0 -760
- package/docs/Prompts System/Index.md +0 -223
- package/docs/Prompts System/PromptsRouting.md +0 -1047
- package/docs/Prompts System/PromptsTemplates.md +0 -1102
- package/docs/Prompts System/README.md +0 -389
- package/docs/Prompts System/SystemPrompts.md +0 -856
- package/docs/Quickstart.md +0 -535
- package/docs/Tools/Architecture.md +0 -884
- package/docs/Tools/GettingStarted.md +0 -624
- package/docs/Tools/Index.md +0 -216
- package/docs/Tools/ManifestReference.md +0 -141
- package/docs/Tools/README.md +0 -440
- package/docs/Tools/UserGuide.md +0 -773
- package/docs/Troubleshooting.md +0 -1265
- package/docs/UI&Settings/Architecture.md +0 -729
- package/docs/UI&Settings/ColorASCII.md +0 -34
- package/docs/UI&Settings/Commands.md +0 -755
- package/docs/UI&Settings/Configuration.md +0 -872
- package/docs/UI&Settings/Index.md +0 -293
- package/docs/UI&Settings/Keybinds.md +0 -372
- package/docs/UI&Settings/README.md +0 -278
- package/docs/UI&Settings/Terminal.md +0 -637
- package/docs/UI&Settings/Themes.md +0 -604
- package/docs/UI&Settings/UIGuide.md +0 -550

package/docs/UI&Settings/Configuration.md

@@ -1,872 +0,0 @@

# Configuration Reference

## Overview

OLLM CLI uses a layered configuration system that allows you to customize behavior at multiple levels. Configuration can be specified through config files, environment variables, and command-line flags, with a clear precedence order.

## Configuration File Locations

OLLM CLI looks for configuration files in two locations:

### User Configuration

**Location:** `~/.ollm/config.yaml`

This is your personal configuration that applies to all OLLM CLI sessions across all projects. Use this for your preferred defaults like model selection, UI preferences, and provider endpoints.

**Example paths:**

- Linux/macOS: `/home/username/.ollm/config.yaml`
- Windows: `C:\Users\username\.ollm\config.yaml`

### Workspace Configuration

**Location:** `.ollm/config.yaml` (in your project directory)

This is project-specific configuration that applies only when running OLLM CLI from within that project directory. Use this for project-specific settings like custom models, tool policies, or provider configurations.

**Example paths:**

- Linux/macOS: `/path/to/project/.ollm/config.yaml`
- Windows: `C:\path\to\project\.ollm\config.yaml`

## Configuration Precedence

When multiple configuration sources are present, they are merged with the following precedence order (highest priority first):

1. **CLI Flags** - Command-line arguments (e.g., `--model llama3.1`)
2. **Environment Variables** - Environment variables (e.g., `OLLAMA_HOST`)
3. **Workspace Config** - Project-specific `.ollm/config.yaml`
4. **User Config** - Personal `~/.ollm/config.yaml`
5. **System Defaults** - Built-in default values

Settings from higher-priority sources override settings from lower-priority sources. For nested objects, the merge is performed at the field level, not the object level.

### Precedence Example

```yaml
# User config (~/.ollm/config.yaml)
model:
  default: llama3.2:3b
  temperature: 0.7

# Workspace config (.ollm/config.yaml)
model:
  default: codellama:13b

# Result: workspace model overrides user model, but temperature is inherited
# Final config:
# model.default: codellama:13b
# model.temperature: 0.7
```

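Field-level merging can be pictured as a recursive merge of plain objects. Here is a minimal TypeScript sketch of that behavior (a hypothetical helper, not the actual OLLM CLI implementation):

```typescript
type ConfigValue = string | number | boolean | ConfigObject | ConfigValue[];
interface ConfigObject {
  [key: string]: ConfigValue;
}

// Merge `higher` over `lower`, field by field: nested objects are merged
// recursively, while scalars and arrays from the higher-priority source win.
function mergeConfigs(lower: ConfigObject, higher: ConfigObject): ConfigObject {
  const result: ConfigObject = { ...lower };
  for (const [key, value] of Object.entries(higher)) {
    const existing = result[key];
    if (
      typeof value === "object" && value !== null && !Array.isArray(value) &&
      typeof existing === "object" && existing !== null && !Array.isArray(existing)
    ) {
      result[key] = mergeConfigs(existing as ConfigObject, value as ConfigObject);
    } else {
      result[key] = value;
    }
  }
  return result;
}

// mergeConfigs({ model: { default: "llama3.2:3b", temperature: 0.7 } },
//              { model: { default: "codellama:13b" } })
// => { model: { default: "codellama:13b", temperature: 0.7 } }
```
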
### CLI Flag Precedence

```bash
# User config has model: llama3.2:3b
# This command overrides it:
ollm --model llama3.1:8b

# Final model used: llama3.1:8b
```

### Environment Variable Precedence

```bash
# User config has provider.ollama.host: http://localhost:11434
# This environment variable overrides it:
export OLLAMA_HOST=http://192.168.1.100:11434
ollm

# Final host used: http://192.168.1.100:11434
```

## Configuration File Format

Configuration files use YAML format. Here's the basic structure:

```yaml
# Provider configuration
provider:
  default: ollama
  ollama:
    host: http://localhost:11434
    timeout: 30000

# Model configuration
model:
  default: llama3.2:3b
  temperature: 0.7
  maxTokens: 4096

# UI configuration
ui:
  layout: hybrid
  sidePanel: true
  showGpuStats: true
  showCost: true

# Status monitoring
status:
  pollInterval: 5000
  highTempThreshold: 80
  lowVramThreshold: 512

# Diff review
review:
  enabled: true
  inlineThreshold: 5

# Session management
session:
  autoSave: true
  saveInterval: 60000
```

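As a rough sketch of how such a file might be loaded and layered, assuming the `js-yaml` package (the paths match the locations above, but the helper itself is hypothetical and uses a shallow merge for brevity):

```typescript
import { existsSync, readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";
import { load } from "js-yaml";

// Read one YAML config file, returning an empty object if it does not exist.
function readConfig(path: string): Record<string, unknown> {
  if (!existsSync(path)) return {};
  return (load(readFileSync(path, "utf8")) ?? {}) as Record<string, unknown>;
}

const userConfig = readConfig(join(homedir(), ".ollm", "config.yaml"));
const workspaceConfig = readConfig(join(process.cwd(), ".ollm", "config.yaml"));

// Workspace settings take precedence over user settings; a real loader
// would merge field by field, as in the merge sketch above.
const config = { ...userConfig, ...workspaceConfig };
```
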
## Settings Reference

### Provider Settings

Controls which LLM provider to use and how to connect to it.

| Setting                            | Type         | Default                    | Description                                                     |
| ---------------------------------- | ------------ | -------------------------- | --------------------------------------------------------------- |
| `provider.default`                 | string       | `"ollama"`                 | Default provider to use (`ollama`, `vllm`, `openai-compatible`) |
| `provider.ollama.host`             | string (URI) | `"http://localhost:11434"` | Ollama server endpoint URL                                      |
| `provider.ollama.timeout`          | number       | `30000`                    | Request timeout in milliseconds (min: 0)                        |
| `provider.vllm.host`               | string (URI) | -                          | vLLM server endpoint URL                                        |
| `provider.vllm.apiKey`             | string       | -                          | Optional API key for vLLM authentication                        |
| `provider.openaiCompatible.host`   | string (URI) | -                          | OpenAI-compatible server endpoint URL                           |
| `provider.openaiCompatible.apiKey` | string       | -                          | Optional API key for OpenAI-compatible authentication           |

**Configuration Methods:**

```yaml
# Config file
provider:
  default: ollama
  ollama:
    host: http://localhost:11434
    timeout: 30000
```

```bash
# Environment variables
export OLLAMA_HOST=http://localhost:11434
export VLLM_HOST=http://localhost:8000
export VLLM_API_KEY=your-api-key
export OPENAI_COMPATIBLE_HOST=http://localhost:1234
export OPENAI_COMPATIBLE_API_KEY=your-api-key

# CLI flags
ollm --provider ollama --host http://localhost:11434
```

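To make the precedence concrete for a single setting, here is a hedged TypeScript sketch of resolving the Ollama host; the function name and the config parameters are illustrative, not the CLI's actual internals:

```typescript
// Resolve provider.ollama.host using the documented precedence:
// CLI flag > environment variable > workspace config > user config > default.
function resolveOllamaHost(
  flagHost: string | undefined,
  workspaceHost: string | undefined,
  userHost: string | undefined,
): string {
  return (
    flagHost ??
    process.env.OLLAMA_HOST ??
    workspaceHost ??
    userHost ??
    "http://localhost:11434"
  );
}
```
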
### Model Settings

Controls model selection and generation parameters.

| Setting             | Type   | Default         | Description                                            |
| ------------------- | ------ | --------------- | ------------------------------------------------------ |
| `model.default`     | string | `"llama3.2:3b"` | Default model name to use                              |
| `model.temperature` | number | `0.7`           | Sampling temperature (0.0-2.0, higher = more creative) |
| `model.maxTokens`   | number | `4096`          | Maximum tokens to generate (min: 1)                    |

**Configuration Methods:**

```yaml
# Config file
model:
  default: llama3.2:3b
  temperature: 0.7
  maxTokens: 4096
```

```bash
# Environment variables
export OLLM_DEFAULT_MODEL=llama3.1:8b

# CLI flags
ollm --model llama3.1:8b
```

### Context Settings

Controls context management and VRAM-aware sizing (optional, uses defaults if not specified).

| Setting                        | Type    | Default      | Description                                   |
| ------------------------------ | ------- | ------------ | --------------------------------------------- |
| `context.targetSize`           | number  | `8192`       | Target context size in tokens                 |
| `context.minSize`              | number  | `2048`       | Minimum context size in tokens                |
| `context.maxSize`              | number  | `32768`      | Maximum context size in tokens                |
| `context.autoSize`             | boolean | `true`       | Enable automatic context sizing based on VRAM |
| `context.vramBuffer`           | number  | `1073741824` | VRAM buffer to reserve in bytes (1GB default) |
| `context.compressionEnabled`   | boolean | `true`       | Enable automatic context compression          |
| `context.compressionThreshold` | number  | `0.8`        | Threshold (0-1) to trigger compression        |
| `context.snapshotsEnabled`     | boolean | `true`       | Enable snapshots for context restoration      |
| `context.maxSnapshots`         | number  | `5`          | Maximum number of snapshots to keep           |

**Configuration Methods:**

```yaml
# Config file
context:
  targetSize: 8192
  autoSize: true
  compressionEnabled: true
  compressionThreshold: 0.8
```

```bash
# Environment variables
# (Context settings are primarily configured via config file)

# CLI flags
# (Context settings are not available as CLI flags)
```

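The interplay of these knobs can be sketched as follows. This is an illustrative guess at how VRAM-aware sizing might work, not the CLI's actual algorithm; the bytes-per-token figure in particular is a made-up assumption:

```typescript
interface ContextConfig {
  targetSize: number; // tokens
  minSize: number;    // tokens
  maxSize: number;    // tokens
  autoSize: boolean;
  vramBuffer: number; // bytes reserved for the system
}

// Pick a context size: start from the target, then clamp it to what the
// remaining VRAM can hold and to the configured min/max bounds.
function chooseContextSize(cfg: ContextConfig, freeVramBytes: number): number {
  if (!cfg.autoSize) return cfg.targetSize;
  const BYTES_PER_TOKEN = 131072; // assumption: ~128 KiB of KV cache per token
  const usable = Math.max(0, freeVramBytes - cfg.vramBuffer);
  const fitsInVram = Math.floor(usable / BYTES_PER_TOKEN);
  const desired = Math.min(cfg.targetSize, fitsInVram);
  return Math.min(cfg.maxSize, Math.max(cfg.minSize, desired));
}
```
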
### UI Settings

Controls the terminal user interface appearance and behavior.

| Setting                               | Type    | Default    | Description                                         |
| ------------------------------------- | ------- | ---------- | --------------------------------------------------- |
| `ui.layout`                           | string  | `"hybrid"` | UI layout mode (`hybrid` or `simple`)               |
| `ui.sidePanel`                        | boolean | `true`     | Show side panel with context info                   |
| `ui.showGpuStats`                     | boolean | `true`     | Display GPU statistics in status bar                |
| `ui.showCost`                         | boolean | `true`     | Display token cost estimates                        |
| `ui.metrics.enabled`                  | boolean | `true`     | Enable performance metrics display                  |
| `ui.metrics.compactMode`              | boolean | `false`    | Use compact metrics display                         |
| `ui.metrics.showPromptTokens`         | boolean | `true`     | Show prompt token count                             |
| `ui.metrics.showTTFT`                 | boolean | `true`     | Show time to first token                            |
| `ui.metrics.showInStatusBar`          | boolean | `true`     | Display metrics in status bar                       |
| `ui.reasoning.enabled`                | boolean | `true`     | Enable reasoning display for models that support it |
| `ui.reasoning.maxVisibleLines`        | number  | `8`        | Maximum lines to show for reasoning (min: 1)        |
| `ui.reasoning.autoCollapseOnComplete` | boolean | `true`     | Auto-collapse reasoning when complete               |

**Configuration Methods:**

```yaml
# Config file
ui:
  layout: hybrid
  sidePanel: true
  showGpuStats: true
  showCost: true
  metrics:
    enabled: true
    compactMode: false
    showPromptTokens: true
    showTTFT: true
    showInStatusBar: true
  reasoning:
    enabled: true
    maxVisibleLines: 8
    autoCollapseOnComplete: true
```

```bash
# Environment variables
export NO_COLOR=1 # Disable colored output

# CLI flags
ollm --no-color # Disable colored output
```

### Status Settings

Controls GPU and system status monitoring.

| Setting                    | Type   | Default | Description                                         |
| -------------------------- | ------ | ------- | --------------------------------------------------- |
| `status.pollInterval`      | number | `5000`  | Status polling interval in milliseconds (min: 1000) |
| `status.highTempThreshold` | number | `80`    | Temperature threshold for warnings in °C (min: 0)   |
| `status.lowVramThreshold`  | number | `512`   | Low VRAM warning threshold in MB (min: 0)           |

**Configuration Methods:**

```yaml
# Config file
status:
  pollInterval: 5000
  highTempThreshold: 80
  lowVramThreshold: 512
```

```bash
# Environment variables
# (Status settings are configured via config file)

# CLI flags
# (Status settings are not available as CLI flags)
```

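Conceptually, these three values drive a periodic check along the following lines. This is a sketch with an assumed `readGpuStats` helper (the real monitor presumably wraps `nvidia-smi`, per the `NVIDIA_SMI_PATH` variable below):

```typescript
interface GpuStats {
  temperatureC: number;
  freeVramMb: number;
}

interface StatusConfig {
  pollInterval: number;      // ms between polls
  highTempThreshold: number; // °C
  lowVramThreshold: number;  // MB
}

// Poll GPU stats on an interval and surface warnings when thresholds trip.
function startStatusMonitor(
  cfg: StatusConfig,
  readGpuStats: () => GpuStats, // assumed to wrap nvidia-smi or similar
  warn: (message: string) => void,
): NodeJS.Timeout {
  return setInterval(() => {
    const stats = readGpuStats();
    if (stats.temperatureC >= cfg.highTempThreshold) {
      warn(`GPU temperature high: ${stats.temperatureC}°C`);
    }
    if (stats.freeVramMb <= cfg.lowVramThreshold) {
      warn(`Low VRAM: ${stats.freeVramMb} MB free`);
    }
  }, Math.max(1000, cfg.pollInterval)); // enforce the documented 1000 ms minimum
}
```
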
### Review Settings

Controls diff review behavior for file modifications.

| Setting                  | Type    | Default | Description                                                   |
| ------------------------ | ------- | ------- | ------------------------------------------------------------- |
| `review.enabled`         | boolean | `true`  | Enable diff review for file changes                           |
| `review.inlineThreshold` | number  | `5`     | Max lines to show inline (larger diffs open in pager, min: 0) |

**Configuration Methods:**

```yaml
# Config file
review:
  enabled: true
  inlineThreshold: 5
```

```bash
# Environment variables
# (Review settings are configured via config file)

# CLI flags
ollm --review-diffs # Enable diff review
ollm --no-review    # Disable diff review
```

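The threshold's effect amounts to a one-line decision, sketched here with a hypothetical helper:

```typescript
// Decide whether a diff is shown inline or handed to the pager.
// Diffs longer than review.inlineThreshold lines open in the pager.
function diffDisplayMode(
  diffLineCount: number,
  inlineThreshold: number,
): "inline" | "pager" {
  return diffLineCount <= inlineThreshold ? "inline" : "pager";
}
```
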
### Session Settings

Controls session recording and auto-save behavior.

| Setting                | Type    | Default | Description                                    |
| ---------------------- | ------- | ------- | ---------------------------------------------- |
| `session.autoSave`     | boolean | `true`  | Automatically save session state               |
| `session.saveInterval` | number  | `60000` | Auto-save interval in milliseconds (min: 1000) |

**Configuration Methods:**

```yaml
# Config file
session:
  autoSave: true
  saveInterval: 60000
```

```bash
# Environment variables
# (Session settings are configured via config file)

# CLI flags
ollm --session <id> # Resume a specific session
```

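A minimal sketch of how these two settings might cooperate; the `saveSession` callback is an assumed stand-in for the real persistence layer:

```typescript
interface SessionConfig {
  autoSave: boolean;
  saveInterval: number; // ms, minimum 1000
}

// Schedule periodic session saves when autoSave is enabled.
function scheduleAutoSave(
  cfg: SessionConfig,
  saveSession: () => Promise<void>, // assumed persistence hook
): NodeJS.Timeout | undefined {
  if (!cfg.autoSave) return undefined;
  return setInterval(() => {
    void saveSession().catch((err) => console.error("auto-save failed:", err));
  }, Math.max(1000, cfg.saveInterval));
}
```
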
## Environment Variables

OLLM CLI supports the following environment variables for configuration:

### Provider Configuration

| Variable                    | Type         | Default                  | Description                                  |
| --------------------------- | ------------ | ------------------------ | -------------------------------------------- |
| `OLLAMA_HOST`               | string (URI) | `http://localhost:11434` | Ollama server endpoint URL                   |
| `VLLM_HOST`                 | string (URI) | -                        | vLLM server endpoint URL                     |
| `VLLM_API_KEY`              | string       | -                        | API key for vLLM authentication              |
| `OPENAI_COMPATIBLE_HOST`    | string (URI) | -                        | OpenAI-compatible server endpoint URL        |
| `OPENAI_COMPATIBLE_API_KEY` | string       | -                        | API key for OpenAI-compatible authentication |

### Model Configuration

| Variable             | Type   | Default       | Description               |
| -------------------- | ------ | ------------- | ------------------------- |
| `OLLM_DEFAULT_MODEL` | string | `llama3.2:3b` | Default model name to use |

### Logging and Debugging

| Variable         | Type   | Default | Description                                          |
| ---------------- | ------ | ------- | ---------------------------------------------------- |
| `OLLM_LOG_LEVEL` | string | `info`  | Logging verbosity (`debug`, `info`, `warn`, `error`) |

### System Paths

| Variable           | Type   | Default          | Description                                           |
| ------------------ | ------ | ---------------- | ----------------------------------------------------- |
| `OLLM_CONFIG_PATH` | string | -                | Custom config file path (overrides default locations) |
| `OLLM_INDEX_PATH`  | string | -                | Custom codebase index location                        |
| `XDG_CONFIG_HOME`  | string | `~/.config`      | Linux config directory (affects `~/.ollm` location)   |
| `XDG_DATA_HOME`    | string | `~/.local/share` | Linux data directory                                  |
| `XDG_CACHE_HOME`   | string | `~/.cache`       | Linux cache directory                                 |

### Feature Flags

| Variable                 | Type    | Default | Description                                  |
| ------------------------ | ------- | ------- | -------------------------------------------- |
| `OLLM_DISABLE_INDEXING`  | boolean | `false` | Disable automatic codebase indexing          |
| `OLLM_DISABLE_EXECUTION` | boolean | `false` | Disable code execution sandbox               |
| `NO_COLOR`               | boolean | `false` | Disable colored output (standard convention) |

### GPU Monitoring

| Variable          | Type   | Default | Description                          |
| ----------------- | ------ | ------- | ------------------------------------ |
| `NVIDIA_SMI_PATH` | string | -       | Custom path to nvidia-smi executable |

### Usage Examples

```bash
# Set Ollama host
export OLLAMA_HOST=http://192.168.1.100:11434

# Enable debug logging
export OLLM_LOG_LEVEL=debug

# Use custom config file
export OLLM_CONFIG_PATH=/path/to/custom/config.yaml

# Disable colored output
export NO_COLOR=1

# Run with environment variables
OLLAMA_HOST=http://localhost:11434 OLLM_LOG_LEVEL=debug ollm
```

## Configuration Examples

### Minimal Configuration

The simplest configuration just changes the default model:

```yaml
# ~/.ollm/config.yaml or .ollm/config.yaml
model:
  default: llama3.1:8b
```

### Basic Configuration

A typical configuration for everyday use:

```yaml
# ~/.ollm/config.yaml
provider:
  default: ollama
  ollama:
    host: http://localhost:11434
    timeout: 30000

model:
  default: llama3.2:3b
  temperature: 0.7
  maxTokens: 4096

ui:
  layout: hybrid
  sidePanel: true
  showGpuStats: true

session:
  autoSave: true
  saveInterval: 60000
```

### Advanced Configuration

A comprehensive configuration with all major settings:

```yaml
# ~/.ollm/config.yaml - Advanced configuration example

# Provider configuration
provider:
  default: ollama
  ollama:
    host: http://localhost:11434
    timeout: 30000
  vllm:
    host: http://localhost:8000
    apiKey: your-api-key-here
  openaiCompatible:
    host: http://localhost:1234
    apiKey: your-api-key-here

# Model configuration
model:
  default: llama3.2:3b
  temperature: 0.7
  maxTokens: 4096

# Context management (optional - uses smart defaults if omitted)
context:
  targetSize: 8192
  minSize: 2048
  maxSize: 32768
  autoSize: true
  vramBuffer: 1073741824 # 1GB in bytes
  compressionEnabled: true
  compressionThreshold: 0.8
  snapshotsEnabled: true
  maxSnapshots: 5

# UI configuration
ui:
  layout: hybrid
  sidePanel: true
  showGpuStats: true
  showCost: true
  metrics:
    enabled: true
    compactMode: false
    showPromptTokens: true
    showTTFT: true
    showInStatusBar: true
  reasoning:
    enabled: true
    maxVisibleLines: 8
    autoCollapseOnComplete: true

# Status monitoring
status:
  pollInterval: 5000
  highTempThreshold: 80
  lowVramThreshold: 512

# Diff review
review:
  enabled: true
  inlineThreshold: 5

# Session management
session:
  autoSave: true
  saveInterval: 60000
```

### Project-Specific Configuration

Example workspace configuration for a coding project:

```yaml
# .ollm/config.yaml - Project-specific configuration

# Use a code-specialized model for this project
model:
  default: codellama:13b
  temperature: 0.3 # Lower temperature for more deterministic code generation
  maxTokens: 8192

# Enable diff review for code changes
review:
  enabled: true
  inlineThreshold: 10 # Show larger diffs inline for code review

# Optimize UI for coding
ui:
  layout: simple
  sidePanel: false
  showGpuStats: false
  metrics:
    enabled: true
    compactMode: true
```

### Remote Server Configuration

Example configuration for connecting to a remote Ollama server:

```yaml
# ~/.ollm/config.yaml - Remote server configuration

provider:
  default: ollama
  ollama:
    host: http://192.168.1.100:11434
    timeout: 60000 # Longer timeout for remote connections

model:
  default: llama3.1:70b # Use larger model on powerful remote server
  temperature: 0.7
  maxTokens: 8192
```

### Multi-Provider Configuration

Example configuration with multiple providers configured:

```yaml
# ~/.ollm/config.yaml - Multi-provider setup

provider:
  default: ollama # Default to local Ollama
  ollama:
    host: http://localhost:11434
    timeout: 30000
  vllm:
    host: http://gpu-server:8000
    apiKey: vllm-secret-key
  openaiCompatible:
    host: http://localhost:1234
    # No API key needed for local LM Studio

model:
  default: llama3.2:3b
  temperature: 0.7
  maxTokens: 4096

# Switch providers at runtime:
# ollm --provider vllm --model mistral-7b
# ollm --provider openai-compatible --model local-model
```

### Performance-Optimized Configuration

Example configuration optimized for performance on limited hardware:

```yaml
# ~/.ollm/config.yaml - Performance-optimized

model:
  default: llama3.2:3b # Smaller model for faster inference
  temperature: 0.7
  maxTokens: 2048 # Limit output length

context:
  targetSize: 4096 # Smaller context window
  autoSize: true
  compressionEnabled: true
  compressionThreshold: 0.7 # Compress earlier

ui:
  layout: simple
  sidePanel: false
  showGpuStats: true
  metrics:
    enabled: true
    compactMode: true

status:
  pollInterval: 10000 # Poll less frequently
```

## Validation

OLLM CLI validates your configuration on startup. If there are errors, you'll see detailed messages indicating:

- Which file contains the error
- The line and column number (for YAML syntax errors)
- What's wrong and how to fix it
- Suggestions for common mistakes

Example validation error:

```
Configuration Error: Expected number, got string
File: /home/user/.ollm/config.yaml
Line: 5, Column: 15

  5 | temperature: "high"
                   ^

Tip - Check for missing or unmatched quotes
```

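For a flavor of what such a check involves, here is a minimal hand-rolled sketch of validating one field. The real CLI presumably uses a schema library; this helper is illustrative only:

```typescript
interface ValidationError {
  path: string;
  message: string;
}

// Validate model.temperature: must be a number in [0, 2] when present.
function validateTemperature(config: Record<string, unknown>): ValidationError[] {
  const model = config["model"];
  if (typeof model !== "object" || model === null) return [];
  const temp = (model as Record<string, unknown>)["temperature"];
  if (temp === undefined) return []; // optional: the default applies
  if (typeof temp !== "number") {
    return [{ path: "model.temperature", message: `Expected number, got ${typeof temp}` }];
  }
  if (temp < 0 || temp > 2) {
    return [{ path: "model.temperature", message: "Must be between 0.0 and 2.0" }];
  }
  return [];
}
```
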
## Tips

1. **Start Simple**: Begin with a minimal configuration and add settings as needed
2. **Use Workspace Configs**: Keep project-specific settings in `.ollm/config.yaml`
3. **Environment Variables**: Use environment variables for temporary overrides
4. **CLI Flags**: Use CLI flags for one-off changes without modifying config files
5. **Validation**: Run `ollm --help` to verify your configuration loads correctly
6. **Comments**: YAML supports comments with `#` - use them to document your settings

---

## Planned Future Configuration Options

The following configuration options are **planned for future development** and are not yet available in the current version. These will be added as new features are implemented according to the roadmap (../DevelopmentRoadmap/Roadmap.md).

### 🔮 Kraken Integration (Planned)

External LLM provider configuration for accessing cloud models:

```yaml
# PLANNED - Not yet available
# kraken:
#   enabled: true
#   confirmBeforeRelease: true
#   autoEscalation:
#     enabled: false
#     triggers:
#       - contextOverflow
#       - localModelError
#   providers:
#     geminiCli:
#       enabled: true
#       executable: gemini
#       timeout: 120000
#       defaultModel: gemini-pro
#     claudeCode:
#       enabled: true
#       executable: claude-code
#       timeout: 120000
#     openai:
#       enabled: true
#       apiKey: ${OPENAI_API_KEY}
#       model: gpt-4
#       maxTokens: 8192
#     anthropic:
#       enabled: true
#       apiKey: ${ANTHROPIC_API_KEY}
#       model: claude-3-opus
#   costTracking:
#     enabled: true
#     sessionBudget: 5.00 # USD
#     warnThreshold: 1.00
```

**Status:** Planned for Stage 10
**Documentation:** Kraken Integration Spec (.kiro/specs/stage-10-kraken-integration-future-dev/)

### 🔮 Developer Productivity Tools (Planned)

Git integration and diff review configuration:

```yaml
# PLANNED - Not yet available
# git:
#   enabled: true
#   includeInSystemPrompt: true
#   autoCommit:
#     enabled: false
#     messageStyle: semantic # semantic, descriptive, conventional
#     groupChanges: true

# mentions:
#   enabled: true
#   maxFilesPerGlob: 50
#   maxTokensPerMention: 4096
#   warnOnLargeContext: true

# diffReview:
#   enabled: true
#   autoApprove:
#     readOperations: true
#     smallChanges: true
#     smallChangeThreshold: 10 # lines
#   showFullContext: true
#   contextLines: 3
```

**Status:** Planned for Stage 11
**Documentation:** Developer Productivity Spec (.kiro/specs/stage-11-developer-productivity-future-dev/)

### 🔮 Cross-Platform Support (Planned)

Platform-specific configuration options:

```yaml
# PLANNED - Not yet available
# platform:
#   shell:
#     windows: cmd.exe
#     unix: /bin/sh
#   python:
#     windows: python
#     unix: python3
#   paths:
#     normalizeDisplay: true # Show forward slashes in UI
#   terminal:
#     forceUnicode: false
#     forceColor: false
```

**Status:** Planned for Stage 12
**Documentation:** Cross-Platform Spec (.kiro/specs/stage-12-cross-platform-future-dev/)

### 🔮 File Upload System (Planned)

File upload and storage configuration:

```yaml
# PLANNED - Not yet available
# uploads:
#   enabled: true
#   methods:
#     slashCommand: true
#     clipboard: true
#     dragDrop: true
#     mentions: true
#   storage:
#     perFileLimit: 10485760 # 10MB in bytes
#     perSessionLimit: 104857600 # 100MB in bytes
#     retentionDays: 7
#   images:
#     maxWidth: 2048
#     maxHeight: 2048
#     quality: 85
#   allowedTypes:
#     - image/*
#     - text/*
#     - application/json
```

**Status:** Planned for Stage 14
**Documentation:** File Upload Spec (.kiro/specs/stage-14-file-upload-future-dev/)

### 🔮 Intelligence Layer (Planned)

Advanced AI capabilities configuration:

```yaml
# PLANNED - Not yet available
# intelligence:
#   codebaseIndex:
#     enabled: true
#     autoIndex: true
#     extensions:
#       - .ts
#       - .js
#       - .py
#       - .java
#       - .go
#     excludePatterns:
#       - node_modules/**
#       - dist/**
#       - .git/**
#     maxFileSize: 1048576 # 1MB

#   structuredOutput:
#     enabled: true
#     maxRetries: 3
#     useGuidedDecoding: true

#   sandbox:
#     enabled: true
#     timeout: 30000
#     allowedLanguages:
#       - javascript
#       - python
#       - bash
#     restrictions:
#       networkAccess: false
#       filesystemAccess: false
#       memoryLimit: 536870912 # 512MB

#   vision:
#     enabled: true
#     defaultModel: llava:13b
#     maxImageSize: 5242880 # 5MB

#   costTracking:
#     enabled: true
#     monthlyBudget: 50.00 # USD
#     warnThreshold: 40.00
```

**Status:** Planned for Stage 15
**Documentation:** Intelligence Layer Spec (.kiro/specs/stage-15-intelligence-layer-future-dev/)

---

### Notes on Planned Features

- **Not Yet Available**: These configuration options are documented for planning purposes and will be implemented in future releases
- **Subject to Change**: Exact configuration structure may change during implementation based on technical requirements and user feedback
- **Roadmap**: See the full roadmap (../DevelopmentRoadmap/Roadmap.md) for timeline and priority information
- **Contributions Welcome**: If you're interested in implementing any of these features, please review the detailed specifications and open a discussion

---

**Last Updated:** January 15, 2026