loclaude 0.0.1-alpha.2 → 0.0.1-alpha.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -7,6 +7,18 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

  ## [Unreleased]

+ ## [0.0.1-alpha.3] - 2025-01-21
+
+ ### Added
+
+ - Adds support for CPU Only Ollama Hosts
+ - Adds `auto-start` model enablement to `x` command
+
+ ### Changed
+
+ - Bumps `@loclaude-internal/cli` dependency reference from `v0.0.1-alpha.1` to pinned version `v0.0.1-alpha.2`
+ - Modifies documentation on output files from `init` command
+
  ## [0.0.1-alpha.2] - 2025-01-20

  ### Changed
package/README.md CHANGED
@@ -7,6 +7,7 @@ loclaude provides a CLI to:
  - Manage Ollama + Open WebUI Docker containers
  - Pull and manage Ollama models
  - Scaffold new projects with opinionated Docker configs
+ - **Supports both GPU and CPU-only modes**

  ## Installation

@@ -21,9 +22,17 @@ bun install -g loclaude
  ## Prerequisites

  - [Docker](https://docs.docker.com/get-docker/) with Docker Compose v2
- - [NVIDIA GPU](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with drivers and container toolkit
  - [Claude Code CLI](https://docs.anthropic.com/en/docs/claude-code) installed (`npm install -g @anthropic-ai/claude-code`)

+ ### For GPU Mode (Recommended)
+
+ - [NVIDIA GPU](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with drivers
+ - [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
+
+ ### CPU-Only Mode
+
+ No GPU required! Use `--no-gpu` flag during init for systems without NVIDIA GPUs.
+
  Check your setup with:

  ```bash
@@ -32,8 +41,10 @@ loclaude doctor

  ## Quick Start

+ ### With GPU (Auto-detected)
+
  ```bash
- # Initialize a new project with Docker configs
+ # Initialize a new project (auto-detects GPU)
  loclaude init

  # Start Ollama + Open WebUI containers
@@ -42,10 +53,48 @@ loclaude docker-up
  # Pull a model
  loclaude models-pull qwen3-coder:30b

- # Run Claude Code with local LLM (interactive model selection)
+ # Run Claude Code with local LLM
+ loclaude run
+ ```
+
+ ### CPU-Only Mode
+
+ ```bash
+ # Initialize without GPU support
+ loclaude init --no-gpu
+
+ # Start containers
+ loclaude docker-up
+
+ # Pull a CPU-optimized model
+ loclaude models-pull qwen2.5-coder:7b
+
+ # Run Claude Code
  loclaude run
  ```

+ ## Features
+
+ ### Automatic Model Loading
+
+ When you run `loclaude run`, it automatically:
+ 1. Checks if your selected model is loaded in Ollama
+ 2. If not loaded, warms up the model with a 10-minute keep-alive
+ 3. Shows `[loaded]` indicator in model selection for running models
+
+ ### Colorful CLI Output
+
+ All commands feature colorful, themed output for better readability:
+ - Status indicators with colors (green/yellow/red)
+ - Model sizes color-coded by magnitude
+ - Clear headers and structured output
+
+ ### GPU Auto-Detection
+
+ `loclaude init` automatically detects NVIDIA GPUs and configures the appropriate Docker setup:
+ - **GPU detected**: Uses `runtime: nvidia` and CUDA-enabled images
+ - **No GPU**: Uses CPU-only configuration with smaller default models
+
  ## Commands

  ### Running Claude Code
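The "Automatic Model Loading" steps introduced in this hunk map cleanly onto Ollama's public HTTP API. Below is a minimal sketch of that flow, assuming the documented `/api/ps` and `/api/generate` endpoints on `http://localhost:11434`; the helper names and the `OLLAMA_URL` fallback are illustrative, not loclaude's actual internals.

```ts
// Hedged sketch: check whether a model is already loaded in Ollama and,
// if it is not, warm it up with a 10-minute keep-alive. Endpoint shapes
// follow the public Ollama API; the helper names are assumptions.
const OLLAMA_URL = process.env.OLLAMA_URL ?? "http://localhost:11434";

interface RunningModel {
  name: string; // e.g. "qwen3-coder:30b"
}

async function isModelLoaded(model: string): Promise<boolean> {
  // GET /api/ps lists models currently loaded into memory
  const res = await fetch(`${OLLAMA_URL}/api/ps`);
  const data = (await res.json()) as { models?: RunningModel[] };
  return (data.models ?? []).some((m) => m.name === model);
}

async function warmUpModel(model: string): Promise<void> {
  // An empty prompt with keep_alive loads the model and keeps it resident
  await fetch(`${OLLAMA_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt: "", keep_alive: "10m", stream: false }),
  });
}

async function ensureModelReady(model: string): Promise<void> {
  if (!(await isModelLoaded(model))) {
    await warmUpModel(model);
  }
}
```

The `[loaded]` indicator mentioned in step 3 then reduces to cross-referencing `/api/tags` (installed models) against `/api/ps` (currently loaded models).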
@@ -59,7 +108,9 @@ loclaude run -- --help # Pass args to claude
  ### Project Setup

  ```bash
- loclaude init # Scaffold docker-compose.yml, config, mise.toml
+ loclaude init # Auto-detect GPU, scaffold project
+ loclaude init --gpu # Force GPU mode
+ loclaude init --no-gpu # Force CPU-only mode
  loclaude init --force # Overwrite existing files
  loclaude init --no-webui # Skip Open WebUI in compose file
  ```
@@ -94,6 +145,24 @@ loclaude config # Show current configuration
  loclaude config-paths # Show config file search paths
  ```

+ ## Recommended Models
+
+ ### For GPU (16GB+ VRAM)
+
+ | Model | Size | Use Case |
+ |-------|------|----------|
+ | `qwen3-coder:30b` | ~17 GB | Best coding performance |
+ | `deepseek-coder:33b` | ~18 GB | Code understanding |
+ | `gpt-oss:20b` | ~13 GB | General purpose |
+
+ ### For CPU or Limited VRAM
+
+ | Model | Size | Use Case |
+ |-------|------|----------|
+ | `qwen2.5-coder:7b` | ~4 GB | Coding on CPU |
+ | `llama3.2:3b` | ~2 GB | Fast, simple tasks |
+ | `gemma2:9b` | ~5 GB | General purpose |
+
  ## Configuration

  loclaude supports configuration via files and environment variables.
@@ -162,7 +231,7 @@ After running `loclaude init`:
  ├── .loclaude/
  │   └── config.json # Loclaude configuration
  ├── models/ # Ollama model storage (gitignored)
- ├── docker-compose.yml # Container definitions
+ ├── docker-compose.yml # Container definitions (GPU or CPU mode)
  ├── mise.toml # Task runner configuration
  └── README.md
  ```
@@ -189,8 +258,8 @@ loclaude doctor

  This verifies:
  - Docker and Docker Compose installation
- - NVIDIA GPU detection
- - NVIDIA Container Toolkit
+ - NVIDIA GPU detection (optional)
+ - NVIDIA Container Toolkit (optional)
  - Claude Code CLI
  - Ollama API connectivity
@@ -215,6 +284,23 @@ If Claude Code can't connect to Ollama:
  2. Check the API: `curl http://localhost:11434/api/tags`
  3. Verify your config: `loclaude config`

+ ### GPU Not Detected
+
+ If you have a GPU but it's not detected:
+
+ 1. Check NVIDIA drivers: `nvidia-smi`
+ 2. Test Docker GPU access: `docker run --rm --gpus all nvidia/cuda:12.0-base nvidia-smi`
+ 3. Install NVIDIA Container Toolkit if missing
+ 4. Re-run `loclaude init --gpu` to force GPU mode
+
+ ### Running on CPU
+
+ If inference is slow on CPU:
+
+ 1. Use smaller, quantized models: `qwen2.5-coder:7b`, `llama3.2:3b`
+ 2. Expect ~10-20 tokens/sec on modern CPUs
+ 3. Consider cloud models via Ollama: `glm-4.7:cloud`
+
  ## Development

  ### Building from Source
@@ -1,78 +1,159 @@
+ # =============================================================================
+ # LOCLAUDE DOCKER COMPOSE - GPU MODE
+ # =============================================================================
+ # This configuration runs Ollama with NVIDIA GPU acceleration for fast inference.
+ # Bundled with loclaude package for use as a fallback when no local compose exists.
+ #
+ # Prerequisites:
+ # - NVIDIA GPU with CUDA support
+ # - NVIDIA drivers installed on host
+ # - NVIDIA Container Toolkit: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit
+ #
+ # Quick test for GPU support:
+ # docker run --rm --gpus all nvidia/cuda:12.0-base nvidia-smi
+ #
+ # =============================================================================
+
  services:
+   # ===========================================================================
+   # OLLAMA - Local LLM Inference Server
+   # ===========================================================================
+   # Ollama provides the AI backend that Claude Code connects to.
+   # It runs large language models locally on your hardware.
+   #
+   # API Documentation: https://github.com/ollama/ollama/blob/main/docs/api.md
+   # Model Library: https://ollama.com/library
+   # ===========================================================================
    ollama:
+     # Official Ollama image - 'latest' ensures newest features and model support
      image: ollama/ollama:latest
+
+     # Fixed container name for easy CLI access:
+     # docker exec ollama ollama list
+     # docker logs ollama
      container_name: ollama
-     # Use nvidia runtime for GPU acceleration
-     # This enables access to Nvidia GPUs from within the container
+
+     # NVIDIA Container Runtime - Required for GPU access
+     # This makes CUDA libraries available inside the container
      runtime: nvidia
+
      environment:
-       - NVIDIA_VISIBLE_DEVICES=all # Make all GPUs visible to the container
-       - NVIDIA_DRIVER_CAPABILITIES=compute,utility # Grant compute and utility capabilities (needed for GPU inference)
-       # OPTIONAL: Set memory limits for Ollama process (in bytes)
-       # Uncomment if you want to prevent Ollama from consuming unlimited RAM
+       # ---------------------------------------------------------------------------
+       # GPU Configuration
+       # ---------------------------------------------------------------------------
+       # NVIDIA_VISIBLE_DEVICES: Which GPUs to expose to the container
+       # - 'all': Use all available GPUs (recommended for most setups)
+       # - '0': Use only GPU 0
+       # - '0,1': Use GPUs 0 and 1
+       - NVIDIA_VISIBLE_DEVICES=all
+
+       # NVIDIA_DRIVER_CAPABILITIES: What GPU features to enable
+       # - 'compute': CUDA compute (required for inference)
+       # - 'utility': nvidia-smi and other tools
+       - NVIDIA_DRIVER_CAPABILITIES=compute,utility
+
+       # ---------------------------------------------------------------------------
+       # Ollama Configuration (Optional)
+       # ---------------------------------------------------------------------------
+       # Uncomment these to customize Ollama behavior:
+
+       # Maximum number of models loaded in memory simultaneously
+       # Lower this if you're running out of VRAM
        # - OLLAMA_MAX_LOADED_MODELS=1
+
+       # Maximum parallel inference requests per model
+       # Higher values use more VRAM but handle more concurrent requests
        # - OLLAMA_NUM_PARALLEL=1
-
-       # OPTIONAL: Set log level for debugging
+
+       # Enable debug logging for troubleshooting
        # - OLLAMA_DEBUG=1
-
-     # Volume mounts: maps host directories/files into the container
+
      volumes:
-       # Map the models directory so they persist on your host
-       # Models downloaded in container go to /root/.ollama, we mount it to ./models on host
+       # ---------------------------------------------------------------------------
+       # Model Storage
+       # ---------------------------------------------------------------------------
+       # Maps ./models on your host to /root/.ollama in the container
+       # This persists downloaded models across container restarts
+       #
+       # Disk space requirements (approximate):
+       # - 7B model: ~4GB
+       # - 13B model: ~8GB
+       # - 30B model: ~16GB
+       # - 70B model: ~40GB
        - ./models:/root/.ollama
-
-       # Keep container time in sync with host (good practice)
-       # - /etc/localtime:/etc/localtime:ro
-
-       # OPTIONAL: Mount a custom config directory
-       # Uncomment if you want to customize Ollama settings
-       # - ./config:/root/.ollama/config

      ports:
+       # Ollama API port - access at http://localhost:11434
+       # Used by Claude Code and other Ollama clients
        - "11434:11434"
+
+     # Restart policy - keeps Ollama running unless manually stopped
      restart: unless-stopped
+
      healthcheck:
+       # Verify Ollama is responsive by listing models
        test: ["CMD", "ollama", "list"]
-       interval: 300s
-       timeout: 2s
-       retries: 3
-       start_period: 40s
-
-     # OPTIONAL: Resource limits and reservations
-     # Uncomment to constrain CPU and memory usage
+       interval: 300s # Check every 5 minutes
+       timeout: 2s # Fail if no response in 2 seconds
+       retries: 3 # Mark unhealthy after 3 consecutive failures
+       start_period: 40s # Grace period for initial model loading
+
      deploy:
        resources:
-         # limits:
-         # cpus: '4' # Limit to 4 CPU cores
-         # memory: 32G # Limit to 32GB RAM
          reservations:
-           # cpus: '2' # Reserve at least 2 CPU cores
-           # memory: 16G # Reserve at least 16GB RAM
            devices:
+             # Request GPU access from Docker
              - driver: nvidia
-               count: all # Use all available GPUs
-               capabilities: [gpu]
+               count: all # Use all available GPUs
+               capabilities: [gpu] # Request GPU compute capability
+
+   # ===========================================================================
+   # OPEN WEBUI - Chat Interface (Optional)
+   # ===========================================================================
+   # Open WebUI provides a ChatGPT-like interface for your local models.
+   # Access at http://localhost:3000 after starting containers.
+   #
+   # Features:
+   # - Multi-model chat interface
+   # - Conversation history
+   # - Model management UI
+   # - RAG/document upload support
+   #
+   # Documentation: https://docs.openwebui.com/
+   # ===========================================================================
    open-webui:
-     image: ghcr.io/open-webui/open-webui:cuda # For Nvidia GPU support, you change the image from ghcr.io/open-webui/open-webui:main to ghcr.io/open-webui/open-webui:cuda
+     # CUDA-enabled image for GPU-accelerated features (embeddings, etc.)
+     # Change to :main if you don't need GPU features in the UI
+     image: ghcr.io/open-webui/open-webui:cuda
+
      container_name: open-webui
+
      ports:
+       # Web UI port - access at http://localhost:3000
        - "3000:8080"
+
      environment:
-       # Point Open WebUI to the Ollama service
-       # Use the service name (ollama) as the hostname since they're on the same Docker network
+       # Tell Open WebUI where to find Ollama
+       # Uses Docker internal networking (service name as hostname)
        - OLLAMA_BASE_URL=http://ollama:11434
+
+     # Wait for Ollama to be ready before starting
      depends_on:
-       - ollama # Ensure Ollama starts before Open WebUI
+       - ollama
+
      restart: unless-stopped
+
      healthcheck:
        test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
        interval: 30s
        timeout: 10s
        retries: 3
        start_period: 60s
+
      volumes:
+       # Persistent storage for conversations, settings, and user data
        - open-webui:/app/backend/data
+
      deploy:
        resources:
          reservations:
@@ -81,5 +162,11 @@ services:
                count: all
                capabilities: [gpu]

+ # =============================================================================
+ # VOLUMES
+ # =============================================================================
+ # Named volumes for persistent data that survives container recreation
  volumes:
    open-webui:
+     # Open WebUI data: conversations, user settings, uploads
+     # Located at /var/lib/docker/volumes/open-webui/_data on host
@@ -0,0 +1,59 @@
+ # Changelog
+
+ All notable changes to this project will be documented in this file.
+
+ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
+ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+ ## [Unreleased]
+
+ ## [0.0.1-alpha.2] - 2025-01-20
+
+ ### Added
+
+ - Adds support for CPU Only Ollama Hosts
+
+ ### Changed
+
+ - Modifies documentation on output files from `init` command
+
+ ## [0.0.1-alpha.1] - 2025-01-19
+
+ ### Added
+
+ - **CLI Commands**
+   - `loclaude run` - Run Claude Code with local Ollama (interactive model selection)
+   - `loclaude init` - Scaffold docker-compose.yml, config, and mise.toml
+   - `loclaude doctor` - Check system prerequisites (Docker, GPU, Claude CLI)
+   - `loclaude config` / `loclaude config-paths` - View configuration
+   - `loclaude docker-up/down/status/logs/restart` - Docker container management
+   - `loclaude models` - List installed Ollama models
+   - `loclaude models-pull/rm/show/run` - Model management commands
+
+ - **Configuration System**
+   - Project-local config: `./.loclaude/config.json`
+   - User global config: `~/.config/loclaude/config.json`
+   - Environment variable support (`OLLAMA_URL`, `OLLAMA_MODEL`, etc.)
+   - Layered config merging with clear priority
+
+ - **Cross-Runtime Support**
+   - Works with both Bun and Node.js runtimes
+   - Dual entry points: `bin/index.ts` (Bun) and `bin/index.mjs` (Node)
+
+ - **Docker Integration**
+   - Bundled docker-compose.yml template with Ollama + Open WebUI
+   - NVIDIA GPU support out of the box
+   - Health checks for both services
+
+ - **Project Scaffolding**
+   - `loclaude init` creates complete project structure
+   - Generates mise.toml with task aliases
+   - Creates .claude/CLAUDE.md for Claude Code instructions
+   - Sets up .gitignore for model directory
+
+ ### Notes
+
+ This is an alpha release. The API and command structure may change before 1.0.
+
+ [Unreleased]: https://github.com/nicholasgalante1997/loclaude/compare/v0.0.1-rc.1...HEAD
+ [0.0.1-alpha.1]: https://github.com/nicholasgalante1997/loclaude/releases/tag/v0.0.1-alpha.1
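The "Layered config merging with clear priority" entry above describes a common pattern rather than spelling out the order. A rough sketch of such a merge, assuming environment variables override the project-local file, which overrides the user-global file; the config shape and the actual precedence are defined by the package, not by this example.

```ts
import { existsSync, readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Hypothetical config shape; loclaude's real type may differ.
interface LoclaudeConfig {
  ollamaUrl?: string;
  ollamaModel?: string;
}

function readJsonIfPresent(path: string): LoclaudeConfig {
  return existsSync(path)
    ? (JSON.parse(readFileSync(path, "utf8")) as LoclaudeConfig)
    : {};
}

function loadConfig(): LoclaudeConfig {
  const globalCfg = readJsonIfPresent(
    join(homedir(), ".config", "loclaude", "config.json"),
  );
  const projectCfg = readJsonIfPresent(
    join(process.cwd(), ".loclaude", "config.json"),
  );

  // Environment variables only participate when they are actually set,
  // so they never clobber file values with `undefined`.
  const envCfg: LoclaudeConfig = {};
  if (process.env.OLLAMA_URL) envCfg.ollamaUrl = process.env.OLLAMA_URL;
  if (process.env.OLLAMA_MODEL) envCfg.ollamaModel = process.env.OLLAMA_MODEL;

  // Later spreads win: env > project-local > user-global (assumed ordering)
  return { ...globalCfg, ...projectCfg, ...envCfg };
}
```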
@@ -1 +1 @@
- {"version":3,"file":"cac.d.ts","sourceRoot":"","sources":["../lib/cac.ts"],"names":[],"mappings":"AAqBA,QAAA,MAAM,GAAG,mBAAkB,CAAC;AAoI5B,eAAO,MAAM,IAAI,YAAyB,CAAC;AAC3C,eAAO,MAAM,OAAO,YAA4B,CAAC;AAEjD,eAAO,MAAM,OAAO,QAAO,IAE1B,CAAC;AAEF,OAAO,EAAE,GAAG,EAAE,CAAC"}
+ {"version":3,"file":"cac.d.ts","sourceRoot":"","sources":["../lib/cac.ts"],"names":[],"mappings":"AAqBA,QAAA,MAAM,GAAG,mBAAkB,CAAC;AAsI5B,eAAO,MAAM,IAAI,YAAyB,CAAC;AAC3C,eAAO,MAAM,OAAO,YAA4B,CAAC;AAEjD,eAAO,MAAM,OAAO,QAAO,IAE1B,CAAC;AAEF,OAAO,EAAE,GAAG,EAAE,CAAC"}
@@ -1 +1 @@
- {"version":3,"file":"config.d.ts","sourceRoot":"","sources":["../../lib/commands/config.ts"],"names":[],"mappings":"AAAA;;GAEG;AAKH,wBAAsB,UAAU,IAAI,OAAO,CAAC,IAAI,CAAC,CAahD;AAED,wBAAsB,WAAW,IAAI,OAAO,CAAC,IAAI,CAAC,CAgBjD"}
+ {"version":3,"file":"config.d.ts","sourceRoot":"","sources":["../../lib/commands/config.ts"],"names":[],"mappings":"AAAA;;GAEG;AAKH,wBAAsB,UAAU,IAAI,OAAO,CAAC,IAAI,CAAC,CAmChD;AAED,wBAAsB,WAAW,IAAI,OAAO,CAAC,IAAI,CAAC,CA6BjD"}
@@ -1 +1 @@
- {"version":3,"file":"docker.d.ts","sourceRoot":"","sources":["../../lib/commands/docker.ts"],"names":[],"mappings":"AAAA;;GAEG;AAgEH,MAAM,WAAW,aAAa;IAC5B,IAAI,CAAC,EAAE,MAAM,CAAC;IACd,MAAM,CAAC,EAAE,OAAO,CAAC;CAClB;AAeD,wBAAsB,QAAQ,CAAC,OAAO,GAAE,aAAkB,GAAG,OAAO,CAAC,IAAI,CAAC,CAiBzE;AAED,wBAAsB,UAAU,CAAC,OAAO,GAAE,aAAkB,GAAG,OAAO,CAAC,IAAI,CAAC,CAS3E;AAED,wBAAsB,YAAY,CAAC,OAAO,GAAE,aAAkB,GAAG,OAAO,CAAC,IAAI,CAAC,CAG7E;AAED,wBAAsB,UAAU,CAC9B,OAAO,GAAE,aAAa,GAAG;IAAE,MAAM,CAAC,EAAE,OAAO,CAAC;IAAC,OAAO,CAAC,EAAE,MAAM,CAAA;CAAO,GACnE,OAAO,CAAC,IAAI,CAAC,CAaf;AAED,wBAAsB,aAAa,CAAC,OAAO,GAAE,aAAkB,GAAG,OAAO,CAAC,IAAI,CAAC,CAS9E;AAED,wBAAsB,UAAU,CAC9B,OAAO,EAAE,MAAM,EACf,OAAO,EAAE,MAAM,EAAE,EACjB,OAAO,GAAE,aAAkB,GAC1B,OAAO,CAAC,MAAM,CAAC,CAWjB"}
+ {"version":3,"file":"docker.d.ts","sourceRoot":"","sources":["../../lib/commands/docker.ts"],"names":[],"mappings":"AAAA;;GAEG;AAiEH,MAAM,WAAW,aAAa;IAC5B,IAAI,CAAC,EAAE,MAAM,CAAC;IACd,MAAM,CAAC,EAAE,OAAO,CAAC;CAClB;AAeD,wBAAsB,QAAQ,CAAC,OAAO,GAAE,aAAkB,GAAG,OAAO,CAAC,IAAI,CAAC,CAoBzE;AAED,wBAAsB,UAAU,CAAC,OAAO,GAAE,aAAkB,GAAG,OAAO,CAAC,IAAI,CAAC,CAW3E;AAED,wBAAsB,YAAY,CAAC,OAAO,GAAE,aAAkB,GAAG,OAAO,CAAC,IAAI,CAAC,CAK7E;AAED,wBAAsB,UAAU,CAC9B,OAAO,GAAE,aAAa,GAAG;IAAE,MAAM,CAAC,EAAE,OAAO,CAAC;IAAC,OAAO,CAAC,EAAE,MAAM,CAAA;CAAO,GACnE,OAAO,CAAC,IAAI,CAAC,CAiBf;AAED,wBAAsB,aAAa,CAAC,OAAO,GAAE,aAAkB,GAAG,OAAO,CAAC,IAAI,CAAC,CAW9E;AAED,wBAAsB,UAAU,CAC9B,OAAO,EAAE,MAAM,EACf,OAAO,EAAE,MAAM,EAAE,EACjB,OAAO,GAAE,aAAkB,GAC1B,OAAO,CAAC,MAAM,CAAC,CAWjB"}
@@ -2,4 +2,8 @@
   * doctor command - Check prerequisites and system health
   */
  export declare function doctor(): Promise<void>;
+ /**
+  * Check if NVIDIA GPU is available (exported for use by init command)
+  */
+ export declare function hasNvidiaGpu(): Promise<boolean>;
  //# sourceMappingURL=doctor.d.ts.map
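The diff above only adds the `hasNvidiaGpu()` type declaration; the implementation is not part of this diff. One plausible approach, stated purely as an assumption, is to probe `nvidia-smi` and treat any failure (missing binary, non-zero exit) as "no GPU".

```ts
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

// Hedged sketch: report whether an NVIDIA GPU is usable on the host by
// asking nvidia-smi for the GPU names. Any error means "no GPU".
export async function hasNvidiaGpu(): Promise<boolean> {
  try {
    await execFileAsync("nvidia-smi", ["--query-gpu=name", "--format=csv,noheader"]);
    return true;
  } catch {
    return false;
  }
}
```

A check along these lines would let `init` pick between the GPU and CPU compose setups and let `doctor` report the GPU rows as optional, which matches the README changes earlier in this diff.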
@@ -1 +1 @@
- {"version":3,"file":"doctor.d.ts","sourceRoot":"","sources":["../../lib/commands/doctor.ts"],"names":[],"mappings":"AAAA;;GAEG;AAuMH,wBAAsB,MAAM,IAAI,OAAO,CAAC,IAAI,CAAC,CA+B5C"}
+ {"version":3,"file":"doctor.d.ts","sourceRoot":"","sources":["../../lib/commands/doctor.ts"],"names":[],"mappings":"AAAA;;GAEG;AAqLH,wBAAsB,MAAM,IAAI,OAAO,CAAC,IAAI,CAAC,CA8B5C;AAED;;GAEG;AACH,wBAAsB,YAAY,IAAI,OAAO,CAAC,OAAO,CAAC,CAMrD"}
@@ -4,6 +4,8 @@
  export interface InitOptions {
      force?: boolean;
      noWebui?: boolean;
+     gpu?: boolean;
+     noGpu?: boolean;
  }
  export declare function init(options?: InitOptions): Promise<void>;
  //# sourceMappingURL=init.d.ts.map
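The widened `InitOptions` interface shows that `--gpu` and `--no-gpu` flow into `init` as booleans. A hypothetical sketch of how those flags could resolve against auto-detection, assuming explicit flags take precedence over the detected hardware; only `hasNvidiaGpu` comes from this diff, the rest is illustrative.

```ts
// Hypothetical glue between the CLI flags and GPU auto-detection.
interface InitOptions {
  force?: boolean;
  noWebui?: boolean;
  gpu?: boolean;
  noGpu?: boolean;
}

async function resolveGpuMode(
  options: InitOptions,
  hasNvidiaGpu: () => Promise<boolean>,
): Promise<"gpu" | "cpu"> {
  if (options.gpu) return "gpu"; // --gpu forces GPU mode
  if (options.noGpu) return "cpu"; // --no-gpu forces CPU-only mode
  return (await hasNvidiaGpu()) ? "gpu" : "cpu"; // otherwise auto-detect
}
```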
@@ -1 +1 @@
- {"version":3,"file":"init.d.ts","sourceRoot":"","sources":["../../lib/commands/init.ts"],"names":[],"mappings":"AAAA;;GAEG;AA0TH,MAAM,WAAW,WAAW;IAC1B,KAAK,CAAC,EAAE,OAAO,CAAC;IAChB,OAAO,CAAC,EAAE,OAAO,CAAC;CACnB;AAED,wBAAsB,IAAI,CAAC,OAAO,GAAE,WAAgB,GAAG,OAAO,CAAC,IAAI,CAAC,CAqGnE"}
+ {"version":3,"file":"init.d.ts","sourceRoot":"","sources":["../../lib/commands/init.ts"],"names":[],"mappings":"AAAA;;GAEG;AA4nBH,MAAM,WAAW,WAAW;IAC1B,KAAK,CAAC,EAAE,OAAO,CAAC;IAChB,OAAO,CAAC,EAAE,OAAO,CAAC;IAClB,GAAG,CAAC,EAAE,OAAO,CAAC;IACd,KAAK,CAAC,EAAE,OAAO,CAAC;CACjB;AAED,wBAAsB,IAAI,CAAC,OAAO,GAAE,WAAgB,GAAG,OAAO,CAAC,IAAI,CAAC,CAsInE"}
@@ -1 +1 @@
- {"version":3,"file":"models.d.ts","sourceRoot":"","sources":["../../lib/commands/models.ts"],"names":[],"mappings":"AAAA;;GAEG;AAoDH,wBAAsB,UAAU,IAAI,OAAO,CAAC,IAAI,CAAC,CAoChD;AAED,wBAAsB,UAAU,CAAC,SAAS,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC,CAgBjE;AAED,wBAAsB,QAAQ,CAAC,SAAS,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC,CAe/D;AAED,wBAAsB,UAAU,CAAC,SAAS,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC,CASjE;AAED,wBAAsB,SAAS,CAAC,SAAS,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC,CAShE"}
+ {"version":3,"file":"models.d.ts","sourceRoot":"","sources":["../../lib/commands/models.ts"],"names":[],"mappings":"AAAA;;GAEG;AAiFH,wBAAsB,UAAU,IAAI,OAAO,CAAC,IAAI,CAAC,CAyChD;AAED,wBAAsB,UAAU,CAAC,SAAS,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC,CAkBjE;AAED,wBAAsB,QAAQ,CAAC,SAAS,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC,CAiB/D;AAED,wBAAsB,UAAU,CAAC,SAAS,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC,CAWjE;AAED,wBAAsB,SAAS,CAAC,SAAS,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC,CAWhE"}