clonebox 0.1.25__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,1382 @@
1
+ Metadata-Version: 2.4
2
+ Name: clonebox
3
+ Version: 0.1.25
4
+ Summary: Clone your workstation environment to an isolated VM with selective apps, paths and services
5
+ Author: CloneBox Team
6
+ License: Apache-2.0
7
+ Project-URL: Homepage, https://github.com/wronai/clonebox
8
+ Project-URL: Repository, https://github.com/wronai/clonebox
9
+ Project-URL: Issues, https://github.com/wronai/clonebox/issues
10
+ Keywords: vm,virtualization,libvirt,clone,workstation,qemu,kvm
11
+ Classifier: Development Status :: 4 - Beta
12
+ Classifier: Environment :: Console
13
+ Classifier: Intended Audience :: Developers
14
+ Classifier: Intended Audience :: System Administrators
15
+ Classifier: License :: OSI Approved :: Apache Software License
16
+ Classifier: Operating System :: POSIX :: Linux
17
+ Classifier: Programming Language :: Python :: 3
18
+ Classifier: Programming Language :: Python :: 3.8
19
+ Classifier: Programming Language :: Python :: 3.9
20
+ Classifier: Programming Language :: Python :: 3.10
21
+ Classifier: Programming Language :: Python :: 3.11
22
+ Classifier: Programming Language :: Python :: 3.12
23
+ Classifier: Topic :: System :: Systems Administration
24
+ Classifier: Topic :: Utilities
25
+ Requires-Python: >=3.8
26
+ Description-Content-Type: text/markdown
27
+ License-File: LICENSE
28
+ Requires-Dist: libvirt-python>=9.0.0
29
+ Requires-Dist: rich>=13.0.0
30
+ Requires-Dist: questionary>=2.0.0
31
+ Requires-Dist: psutil>=5.9.0
32
+ Requires-Dist: pyyaml>=6.0
33
+ Requires-Dist: pydantic>=2.0.0
34
+ Requires-Dist: python-dotenv>=1.0.0
35
+ Provides-Extra: dev
36
+ Requires-Dist: pytest>=7.0.0; extra == "dev"
37
+ Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
38
+ Requires-Dist: black>=23.0.0; extra == "dev"
39
+ Requires-Dist: ruff>=0.1.0; extra == "dev"
40
+ Requires-Dist: mypy>=1.0.0; extra == "dev"
41
+ Provides-Extra: test
42
+ Requires-Dist: pytest>=7.0.0; extra == "test"
43
+ Requires-Dist: pytest-cov>=4.0.0; extra == "test"
44
+ Requires-Dist: pytest-timeout>=2.0.0; extra == "test"
45
+ Provides-Extra: dashboard
46
+ Requires-Dist: fastapi>=0.100.0; extra == "dashboard"
47
+ Requires-Dist: uvicorn>=0.22.0; extra == "dashboard"
48
+ Dynamic: license-file
49
+
50
+ # CloneBox 📦
51
+
52
+ [![CI](https://github.com/wronai/clonebox/workflows/CI/badge.svg)](https://github.com/wronai/clonebox/actions)
53
+ [![PyPI version](https://badge.fury.io/py/clonebox.svg)](https://pypi.org/project/clonebox/)
54
+ [![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
55
+ [![License](https://img.shields.io/badge/license-Apache%202.0-green.svg)](LICENSE)
56
+
57
+ ![img.png](img.png)
58
+
59
+ ```commandline
60
+ ╔═════════════════════════════════════════════════╗
+ ║    ____ _                  ____                 ║
+ ║   / ___|| | ___  _ __  ___|  _ \ ___ __  __     ║
+ ║  | |    | |/ _ \| '_ \/ _ \ |_) / _ \\ \/ /     ║
+ ║  | |___ | | (_) | | | | __/|  _ < (_) |>  <     ║
+ ║   \____||_|\___/|_| |_|\___|_| \_\___//_/\_\    ║
+ ║                                                 ║
+ ║     Clone your workstation to an isolated VM    ║
+ ╚═════════════════════════════════════════════════╝
69
+ ```
70
+
71
+ > **Clone your workstation environment to an isolated VM in 60 seconds using bind mounts instead of disk cloning.**
72
+
73
+ CloneBox lets you create isolated virtual machines with only the applications, directories and services you need - using bind mounts instead of full disk cloning. Perfect for development, testing, or creating reproducible environments.
74
+
75
+ ## Features
76
+
77
+ - 🎯 **Selective cloning** - Choose exactly which paths, services and apps to include
+ - 🔍 **Auto-detection** - Automatically detects running services, applications, and project directories
+ - 🔗 **Bind mounts** - Share directories with the VM without copying data
+ - ☁️ **Cloud-init** - Automatic package installation and service setup
+ - 🖥️ **GUI support** - SPICE graphics with virt-viewer integration
+ - ⚡ **Fast creation** - No full disk cloning, VMs are ready in seconds
+ - 📥 **Auto-download** - Automatically downloads and caches Ubuntu cloud images (stored in ~/Downloads)
+ - 📊 **Health monitoring** - Built-in health checks for packages, services, and mounts
+ - 🔄 **Self-healing** - Automatic monitoring and repair of apps and services
+ - 📈 **Live monitoring** - Real-time dashboard for running applications and services
+ - 🔧 **Repair tools** - One-click fix for common VM issues (audio, permissions, mounts)
+ - 🔄 **VM migration** - Export/import VMs with data between workstations
+ - 🧪 **Configuration testing** - Validate VM settings and functionality
+ - 📁 **App data sync** - Include browser profiles, IDE settings, and app configs
91
+
92
+ ## Use Cases
93
+
94
+ CloneBox excels in scenarios where developers need:
95
+ - Isolated sandbox environments for testing AI agents, edge computing simulations, or integration workflows without risking host system stability
96
+ - Reproducible development setups that can be quickly spun up with identical configurations across different machines
97
+ - Safe experimentation with system-level changes that can be discarded by simply deleting the VM
98
+ - Quick onboarding for new team members who need a fully configured development environment
99
+
100
+ ## What's Next
101
+
102
+ Project roadmap includes:
103
+ - **v0.2.0**: `clonebox exec` command, VM snapshots, web dashboard MVP
104
+ - **v0.3.0**: Container runtime integration (Podman/Docker), multi-VM orchestration
105
+ - **v0.4.0**: Cloud provider support (AWS, GCP, Azure), Windows WSL2 support
106
+ - **v1.0.0**: Production-ready with full monitoring, backup/restore, enterprise features
107
+
108
+ See [TODO.md](TODO.md) for detailed roadmap and [CONTRIBUTING.md](CONTRIBUTING.md) for contribution guidelines.
109
+
113
+ CloneBox is a CLI tool for **quickly cloning your current workstation environment into an isolated virtual machine (VM)**.
+ Instead of copying the whole disk, it uses **bind mounts** (live directory sharing) and **cloud-init** to selectively carry over only what is needed: running services (Docker, PostgreSQL, nginx), applications, project paths, and configuration. It automatically downloads Ubuntu images, installs packages, and boots the VM with a SPICE GUI. Ideal for developers on Linux: the VM is up in minutes, without duplicating data.
+
+ Key commands:
+ - `clonebox` – interactive wizard (detect + create + start)
+ - `clonebox detect` – scan services/apps/paths
+ - `clonebox clone . --user --run` – quick clone of the current directory with a user session and autostart
+ - `clonebox watch . --user` – live monitoring of apps and services in the VM
+ - `clonebox repair . --user` – fix permission, audio, and service problems
+ - `clonebox container up|ps|stop|rm` – lightweight container runtime (podman/docker)
+ - `clonebox dashboard` – local dashboard (VMs + containers)
+
+ ### Why do virtual workstation clones make sense?
+
+ **Problem**: Developers rarely isolate their dev/test environments (e.g. for AI agents) because recreating a setup by hand is painful: hours spent installing apps, services, configs, and dotfiles. Moving from a physical PC to a VM would require a full rebuild, which blocks the workflow.
+
+ **The CloneBox answer**: It automatically **scans and clones the state "here and now"** (services from `ps`, containers from `docker ps`, projects from git/.env). The VM inherits the environment without copying all the clutter: only the selected bind mounts.
+
+ **Benefits in context (embedded/distributed systems, AI automation)**:
+ - **Sandbox for experiments**: Test AI agents, edge computing (RPi/ESP32 simulations), or Camel/ERP integrations in isolation, without breaking the host.
+ - **Workstation reproduction**: Get your home setup (Python/Rust/Go environments, Docker Compose, a Postgres dev DB) onto a work PC: clone it and work identically.
+ - **Faster than dotfiles**: Dotfiles restore configs but miss runtime state (running servers, open projects). CloneBox is a "snapshot on steroids".
+ - **Safety and cost**: Isolation from host files (only the selected mounts), zero downtime, cheap in resources (libvirt/QEMU). For SMEs: fast dev-environment onboarding without a physical migration.
+ - **AI-friendly**: LLM agents can work inside the VM with full context, without the risk of cluttering the main PC.
+
+ Example: You are running Kubernetes on Podman with your home lab plus an active project. `clonebox clone ~/projects --run` → the VM is ready in about 30 s, with the same services, but isolated. Better than Docker alone (no GUI/full OS) and far cheaper than a full migration.
+
+ **Why don't people already do this?** Lack of automation: nobody wants to rebuild by hand.
+ - CloneBox solves that with a single command. A good match for distributed infrastructure, AI tooling, and business automation.
142
+
143
+
144
+
145
+ ## Installation
146
+
147
+ ### Quick Setup (Recommended)
148
+
149
+ Run the setup script to automatically install dependencies and configure the environment:
150
+
151
+ ```bash
152
+ # Clone the repository
153
+ git clone https://github.com/wronai/clonebox.git
154
+ cd clonebox
155
+
156
+ # Run the setup script
157
+ ./setup.sh
158
+ ```
159
+
160
+ The setup script will:
161
+ - Install all required packages (QEMU, libvirt, Python, etc.)
162
+ - Add your user to the necessary groups
163
+ - Configure libvirt networks
164
+ - Install clonebox in development mode
165
+
166
+ ### Manual Installation
167
+
168
+ #### Prerequisites
169
+
170
+ ```bash
171
+ # Install libvirt and QEMU/KVM
172
+ sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager virt-viewer
173
+
174
+ # Enable and start libvirtd
175
+ sudo systemctl enable --now libvirtd
176
+
177
+ # Add user to libvirt group
178
+ sudo usermod -aG libvirt $USER
179
+ newgrp libvirt
180
+
181
+ # Install genisoimage for cloud-init
182
+ sudo apt install genisoimage
183
+ ```
184
+
185
+ #### Install CloneBox
186
+
187
+ ```bash
188
+ # From source
189
+ git clone https://github.com/wronai/clonebox.git
190
+ cd clonebox
191
+ pip install -e .
192
+
193
+ # Or directly
194
+ pip install clonebox
195
+ ```
196
+
197
+ The dashboard has optional dependencies:
198
+ ```bash
199
+ pip install "clonebox[dashboard]"
200
+ ```
201
+ Or, with a source install in a virtualenv:
202
+ ```bash
203
+ # Activate the venv
+ source .venv/bin/activate
+
+ # Interactive mode (wizard)
+ clonebox
+
+ # Or individual commands
+ clonebox detect               # Show detected services/apps/paths
+ clonebox list                 # List VMs
+ clonebox create --config ...  # Create a VM from a JSON config
+ clonebox start <name>         # Start a VM
+ clonebox stop <name>          # Stop a VM
+ clonebox delete <name>        # Delete a VM
216
+ ```
217
+
218
+ ## Development and Testing
219
+
220
+ ### Running Tests
221
+
222
+ CloneBox has comprehensive test coverage with unit tests and end-to-end tests:
223
+
224
+ ```bash
225
+ # Run unit tests only (fast, no libvirt required)
226
+ make test
227
+
228
+ # Run fast unit tests (excludes slow tests)
229
+ make test-unit
230
+
231
+ # Run end-to-end tests (requires libvirt/KVM)
232
+ make test-e2e
233
+
234
+ # Run all tests including e2e
235
+ make test-all
236
+
237
+ # Run tests with coverage
238
+ make test-cov
239
+
240
+ # Run tests with verbose output
241
+ make test-verbose
242
+ ```
243
+
244
+ ### Test Categories
245
+
246
+ Tests are organized with pytest markers:
247
+
248
+ - **Unit tests**: Fast tests that mock libvirt/system calls (default)
249
+ - **E2E tests**: End-to-end tests requiring actual VM creation (marked with `@pytest.mark.e2e`)
250
+ - **Slow tests**: Tests that take longer to run (marked with `@pytest.mark.slow`)
251
+
252
+ E2E tests are automatically skipped when:
253
+ - libvirt is not installed
254
+ - `/dev/kvm` is not available
255
+ - Running in CI environment (`CI=true` or `GITHUB_ACTIONS=true`)
256
+
257
+ ### Manual Test Execution
258
+
259
+ ```bash
260
+ # Run only unit tests (exclude e2e)
261
+ pytest tests/ -m "not e2e"
262
+
263
+ # Run only e2e tests
264
+ pytest tests/e2e/ -m "e2e" -v
265
+
266
+ # Run specific test file
267
+ pytest tests/test_cloner.py -v
268
+
269
+ # Run with coverage
270
+ pytest tests/ -m "not e2e" --cov=clonebox --cov-report=html
271
+ ```
272
+
273
+ ## Quick Start
274
+
275
+ ### Interactive Mode (Recommended)
276
+
277
+ Simply run `clonebox` to start the interactive wizard:
278
+
279
+ ```bash
280
+ clonebox
+
+ # Or skip the wizard with explicit options:
+ clonebox clone . --user --run --replace --base-image ~/ubuntu-22.04-cloud.qcow2 --disk-size-gb 30
+
+ # Validate the resulting VM:
+ clonebox test . --user --validate
+ clonebox test . --user --validate --require-running-apps
286
+ ```
287
+
288
+ ### Profiles (Reusable presets)
289
+
290
+ Profiles let you keep ready-made presets for VMs/containers (e.g. `ml-dev`, `web-dev`) and layer them on top of the base configuration.
291
+
292
+ ```bash
293
+ # Example: start a container with a profile
294
+ clonebox container up . --profile ml-dev --engine podman
295
+
296
+ # Example: generate a VM config with a profile
297
+ clonebox clone . --profile ml-dev --user --run
298
+ ```
299
+
300
+ Default profile locations:
301
+ - `~/.clonebox.d/<name>.yaml`
302
+ - `./.clonebox.d/<name>.yaml`
303
+ - built-in: `src/clonebox/templates/profiles/<name>.yaml`
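
A profile is a small YAML overlay applied on top of the base config. The keys below are illustrative only (the real key set is defined by the bundled templates), sketching what a hypothetical `ml-dev` profile could contain:

```yaml
# ~/.clonebox.d/ml-dev.yaml - hypothetical example profile
packages:
  - python3-pip
  - python3-venv
paths:
  /home/user/datasets: /mnt/datasets
services: []
```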
304
+
305
+ ### Dashboard
306
+
307
+ ```bash
308
+ clonebox dashboard --port 8080
309
+ # http://127.0.0.1:8080
310
+ ```
311
+
312
+ The wizard will:
313
+ 1. Detect running services (Docker, PostgreSQL, nginx, etc.)
314
+ 2. Detect running applications and their working directories
315
+ 3. Detect project directories and config files
316
+ 4. Let you select what to include in the VM
317
+ 5. Create and optionally start the VM
318
+
319
+ ### Command Line
320
+
321
+ ```bash
322
+ # Create VM with specific config
323
+ clonebox create --name my-dev-vm --config '{
324
+ "paths": {
325
+ "/home/user/projects": "/mnt/projects",
326
+ "/home/user/.config": "/mnt/config"
327
+ },
328
+ "packages": ["python3", "nodejs", "docker.io"],
329
+ "services": ["docker"]
330
+ }' --ram 4096 --vcpus 4 --disk-size-gb 20 --start
331
+
332
+ # Create VM with larger root disk
333
+ clonebox create --name my-dev-vm --disk-size-gb 30 --config '{"paths": {}, "packages": [], "services": []}'
334
+
335
+ # List VMs
336
+ clonebox list
337
+
338
+ # Start/Stop VM
339
+ clonebox start my-dev-vm
340
+ clonebox stop my-dev-vm
341
+
342
+ # Delete VM
343
+ clonebox delete my-dev-vm
344
+
345
+ # Detect system state (useful for scripting)
346
+ clonebox detect --json
347
+ ```
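
For scripting, the `clonebox detect --json` output can be filtered before being passed back to `create`. The detect schema is not documented in this README, so the keys used below (`paths`, `services`, `packages`) are assumptions mirroring the `--config` JSON above; a minimal filtering sketch:

```python
import json
import subprocess

def detect_state() -> dict:
    """Run `clonebox detect --json` and parse the result."""
    out = subprocess.run(
        ["clonebox", "detect", "--json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

def select_docker_only(state: dict) -> dict:
    """Reduce a detect-style payload to Docker bits plus project mounts.

    Keys mirror the --config JSON shown above; the detect output
    schema itself is an assumption, not documented behavior.
    """
    return {
        "paths": {host: guest for host, guest in state.get("paths", {}).items()
                  if "projects" in host},
        "packages": ["docker.io"],
        "services": [s for s in state.get("services", []) if "docker" in s],
    }

# Demo with a canned payload so it runs without clonebox installed:
sample = {
    "paths": {"/home/u/projects": "/mnt/projects", "/etc": "/mnt/etc"},
    "services": ["docker", "nginx"],
}
print(json.dumps(select_docker_only(sample)))
```

The resulting JSON could then be handed to `clonebox create --config "$(python filter.py)"` (script name hypothetical).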
348
+
349
+ ## Usage Examples
350
+
351
+ ### Basic Workflow
352
+
353
+ ```bash
354
+ # 1. Clone current directory with auto-detection
355
+ clonebox clone . --user
356
+
357
+ # 2. Review generated config
358
+ cat .clonebox.yaml
359
+
360
+ # 3. Create and start VM
361
+ clonebox start . --user --viewer
362
+
363
+ # 4. Check VM status
364
+ clonebox status . --user
365
+
366
+ # 5. Open VM window later
367
+ clonebox open . --user
368
+
369
+ # 6. Stop VM when done
370
+ clonebox stop . --user
371
+
372
+ # 7. Delete VM if needed
373
+ clonebox delete . --user --yes
374
+ ```
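
The generated `.clonebox.yaml` drives the later `start`/`status`/`delete` calls. Its authoritative schema is whatever `clonebox clone` writes; the sketch below is illustrative only, with field names inferred from options and terms used elsewhere in this README (`paths`, `packages`, `services`, `app_data_paths`):

```yaml
# Illustrative .clonebox.yaml - not the authoritative schema
name: my-dev-vm
ram_mb: 4096
vcpus: 4
disk_size_gb: 30
paths:                          # host path -> guest mount point
  /home/user/projects: /mnt/projects
app_data_paths:                 # app profiles shared into the guest
  /home/user/.config/JetBrains: /home/ubuntu/.config/JetBrains
packages: [python3, docker.io]
services: [docker]
```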
375
+
376
+ ### Development Environment with Browser Profiles
377
+
378
+ ```bash
379
+ # Clone with app data (browser profiles, IDE settings)
380
+ clonebox clone . --user --run
381
+
382
+ # VM will have:
383
+ # - All your project directories
384
+ # - Browser profiles (Chrome, Firefox) with bookmarks and passwords
385
+ # - IDE settings (PyCharm, VSCode)
386
+ # - Docker containers and services
387
+
388
+ # Access in VM:
389
+ ls ~/.config/google-chrome # Chrome profile
390
+
391
+ # Firefox profile (Ubuntu often ships Firefox as a snap):
392
+ ls ~/snap/firefox/common/.mozilla/firefox
393
+ ls ~/.mozilla/firefox
394
+
395
+ # PyCharm profile (snap):
396
+ ls ~/snap/pycharm-community/common/.config/JetBrains
397
+ ls ~/.config/JetBrains
398
+ ```
399
+
400
+ ### Container workflow (podman/docker)
401
+
402
+ ```bash
403
+ # Start a dev container (auto-detect engine if not specified)
404
+ clonebox container up . --engine podman --detach
405
+
406
+ # List running containers
407
+ clonebox container ps
408
+
409
+ # Stop/remove
410
+ clonebox container stop <name>
411
+ clonebox container rm <name>
412
+ ```
413
+
414
+ ### Full validation (VM)
415
+
416
+ `clonebox test` verifies that the VM actually has its paths mounted and meets the requirements from `.clonebox.yaml`.
417
+
418
+ ```bash
419
+ clonebox test . --user --validate
420
+ ```
421
+
422
+ Validated categories:
+ - **Mounts** (9p)
+ - **Packages** (apt)
+ - **Snap packages**
+ - **Services** (enabled + running)
+ - **Apps** (installation + profile availability: Firefox/PyCharm/Chrome)
428
+
429
+ ### Testing and Validating VM Configuration
430
+
431
+ ```bash
432
+ # Quick test - basic checks
433
+ clonebox test . --user --quick
434
+
435
+ # Full validation - checks EVERYTHING against YAML config
436
+ clonebox test . --user --validate
437
+
438
+ # Validation checks:
439
+ # ✅ All mount points (paths + app_data_paths) are mounted and accessible
+ # ✅ All APT packages are installed
+ # ✅ All snap packages are installed
+ # ✅ All services are enabled and running
+ # ✅ Reports file counts for each mount
+ # ✅ Shows package versions
+ # ✅ Comprehensive summary table
446
+
447
+ # Example output:
448
+ # 💾 Validating Mount Points...
+ # ┌─────────────────────────┬─────────┬────────────┬────────┐
+ # │ Guest Path              │ Mounted │ Accessible │ Files  │
+ # ├─────────────────────────┼─────────┼────────────┼────────┤
+ # │ /home/ubuntu/Downloads  │ ✅      │ ✅         │ 199    │
+ # │ ~/.config/JetBrains     │ ✅      │ ✅         │ 45     │
+ # └─────────────────────────┴─────────┴────────────┴────────┘
+ # 12/14 mounts working
+ #
+ # 📦 Validating APT Packages...
+ # ┌───────────────────┬──────────────┬────────────┐
+ # │ Package           │ Status       │ Version    │
+ # ├───────────────────┼──────────────┼────────────┤
+ # │ firefox           │ ✅ Installed │ 122.0+b... │
+ # │ docker.io         │ ✅ Installed │ 24.0.7-... │
+ # └───────────────────┴──────────────┴────────────┘
+ # 8/8 packages installed
+ #
+ # 📊 Validation Summary
+ # ┌────────────────┬────────┬────────┬───────┐
+ # │ Category       │ Passed │ Failed │ Total │
+ # ├────────────────┼────────┼────────┼───────┤
+ # │ Mounts         │ 12     │ 2      │ 14    │
+ # │ APT Packages   │ 8      │ 0      │ 8     │
+ # │ Snap Packages  │ 2      │ 0      │ 2     │
+ # │ Services       │ 5      │ 1      │ 6     │
+ # │ TOTAL          │ 27     │ 3      │ 30    │
+ # └────────────────┴────────┴────────┴───────┘
476
+ ```
477
+
478
+ ### VM Health Monitoring and Mount Validation
479
+
480
+ ```bash
481
+ # Check overall status including mount validation
482
+ clonebox status . --user
483
+
484
+ # Output shows:
485
+ # 📊 VM State: running
+ # 🔍 Network and IP address
+ # ☁️ Cloud-init: Complete
+ # 💾 Mount Points status table:
+ # ┌─────────────────────────┬────────────────┬────────┐
+ # │ Guest Path              │ Status         │ Files  │
+ # ├─────────────────────────┼────────────────┼────────┤
+ # │ /home/ubuntu/Downloads  │ ✅ Mounted     │ 199    │
+ # │ /home/ubuntu/Documents  │ ❌ Not mounted │ ?      │
+ # │ ~/.config/JetBrains     │ ✅ Mounted     │ 45     │
+ # └─────────────────────────┴────────────────┴────────┘
+ # 12/14 mounts active
+ # 🏥 Health Check Status: OK
498
+
499
+ # Trigger full health check
500
+ clonebox status . --user --health
501
+
502
+ # If mounts are missing, remount or rebuild:
503
+ # In VM: sudo mount -a
504
+ # Or rebuild: clonebox clone . --user --run --replace
505
+ ```
506
+
507
+ ## 📊 Monitoring and Self-Healing
508
+
509
+ CloneBox includes continuous monitoring and automatic self-healing capabilities for both GUI applications and system services.
510
+
511
+ ### Monitor Running Applications and Services
512
+
513
+ ```bash
514
+ # Watch real-time status of apps and services
515
+ clonebox watch . --user
516
+
517
+ # Output shows live dashboard:
518
+ # ╔════════════════════════════════════════════════════════╗
+ # ║                 CloneBox Live Monitor                  ║
+ # ╠════════════════════════════════════════════════════════╣
+ # ║ 🖥️  GUI Apps:                                          ║
+ # ║ ✅ pycharm-community   PID: 1234   Memory: 512MB       ║
+ # ║ ✅ firefox             PID: 5678   Memory: 256MB       ║
+ # ║ ❌ chromium            Not running                     ║
+ # ║                                                        ║
+ # ║ 🔧 System Services:                                    ║
+ # ║ ✅ docker              Active: 2h 15m                  ║
+ # ║ ✅ nginx               Active: 1h 30m                  ║
+ # ║ ✅ uvicorn             Active: 45m (port 8000)         ║
+ # ║                                                        ║
+ # ║ 📊 Last check: 2024-01-31 13:25:30                     ║
+ # ║ 🔄 Next check in: 25 seconds                           ║
+ # ╚════════════════════════════════════════════════════════╝
534
+
535
+ # Check detailed status with logs
536
+ clonebox status . --user --verbose
537
+
538
+ # View monitor logs from host
539
+ ./scripts/clonebox-logs.sh # Interactive log viewer
540
+ # Or via SSH:
541
+ ssh ubuntu@<IP_VM> "tail -f /var/log/clonebox-monitor.log"
542
+ ```
543
+
544
+ ### Repair and Troubleshooting
545
+
546
+ ```bash
547
+ # Run automatic repair from host
548
+ clonebox repair . --user
549
+
550
+ # This triggers the repair script inside VM which:
551
+ # - Fixes directory permissions (pulse, ibus, dconf)
552
+ # - Restarts audio services (PulseAudio/PipeWire)
553
+ # - Reconnects snap interfaces
554
+ # - Remounts missing filesystems
555
+ # - Resets GNOME keyring if needed
556
+
557
+ # Interactive repair menu (via SSH)
558
+ ssh ubuntu@<IP_VM> "clonebox-repair"
559
+
560
+ # Manual repair options from host:
561
+ clonebox repair . --user --auto # Full automatic repair
562
+ clonebox repair . --user --perms # Fix permissions only
563
+ clonebox repair . --user --audio # Fix audio only
564
+ clonebox repair . --user --snaps # Reconnect snaps only
565
+ clonebox repair . --user --mounts # Remount filesystems only
566
+
567
+ # Check repair status (via SSH)
568
+ ssh ubuntu@<IP_VM> "cat /var/run/clonebox-status"
569
+
570
+ # View repair logs
571
+ ./scripts/clonebox-logs.sh # Interactive viewer
572
+ # Or via SSH:
573
+ ssh ubuntu@<IP_VM> "tail -n 50 /var/log/clonebox-boot.log"
574
+ ```
575
+
576
+ ### Monitor Configuration
577
+
578
+ The monitoring system is configured through environment variables in `.env`:
579
+
580
+ ```bash
581
+ # Enable/disable monitoring
582
+ CLONEBOX_ENABLE_MONITORING=true
583
+ CLONEBOX_MONITOR_INTERVAL=30 # Check every 30 seconds
584
+ CLONEBOX_AUTO_REPAIR=true # Auto-restart failed services
585
+ CLONEBOX_WATCH_APPS=true # Monitor GUI apps
586
+ CLONEBOX_WATCH_SERVICES=true # Monitor system services
587
+ ```
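
CloneBox depends on `python-dotenv`, so these variables may come from `.env` or the process environment. How the daemon parses them is internal; the sketch below only illustrates one plausible way to read the documented names:

```python
import os

def env_flag(name: str, default: bool) -> bool:
    """Interpret common truthy spellings ("1", "true", "yes", "on")."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")

def monitor_settings() -> dict:
    # Variable names are the ones documented in the .env example above.
    return {
        "enabled": env_flag("CLONEBOX_ENABLE_MONITORING", True),
        "interval_s": int(os.environ.get("CLONEBOX_MONITOR_INTERVAL", "30")),
        "auto_repair": env_flag("CLONEBOX_AUTO_REPAIR", True),
        "watch_apps": env_flag("CLONEBOX_WATCH_APPS", True),
        "watch_services": env_flag("CLONEBOX_WATCH_SERVICES", True),
    }

os.environ["CLONEBOX_MONITOR_INTERVAL"] = "10"  # demo override
print(monitor_settings()["interval_s"])
```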
588
+
589
+ ### Inside the VM - Manual Controls
590
+
591
+ ```bash
592
+ # Check monitor service status
593
+ systemctl --user status clonebox-monitor
594
+
595
+ # View monitor logs
596
+ journalctl --user -u clonebox-monitor -f
597
+ tail -f /var/log/clonebox-monitor.log
598
+
599
+ # Stop/start monitoring
600
+ systemctl --user stop clonebox-monitor
601
+ systemctl --user start clonebox-monitor
602
+
603
+ # Check last status
604
+ cat /var/run/clonebox-monitor-status
605
+
606
+ # Run repair manually
607
+ clonebox-repair --all # Run all fixes
608
+ clonebox-repair --status # Show current status
609
+ clonebox-repair --logs # Show recent logs
610
+ ```
611
+
612
+ ### Export/Import Workflow
613
+
614
+ ```bash
615
+ # On workstation A - Export VM with all data
616
+ clonebox export . --user --include-data -o my-dev-env.tar.gz
617
+
618
+ # Transfer file to workstation B, then import
619
+ clonebox import my-dev-env.tar.gz --user
620
+
621
+ # Start VM on new workstation
622
+ clonebox start . --user
623
+ clonebox open . --user
624
+
625
+ # VM includes:
626
+ # - Complete disk image
627
+ # - All browser profiles and settings
628
+ # - Project files
629
+ # - Docker images and containers
630
+ ```
631
+
632
+ ### Troubleshooting Common Issues
633
+
634
+ ```bash
635
+ # If mounts are empty after reboot:
636
+ clonebox status . --user # Check VM status
637
+ # Then in VM:
638
+ sudo mount -a # Remount all fstab entries
639
+
640
+ # If browser profiles don't sync:
641
+ rm .clonebox.yaml
642
+ clonebox clone . --user --run --replace
643
+
644
+ # If GUI doesn't open:
645
+ clonebox open . --user # Easiest way
646
+ # or:
647
+ virt-viewer --connect qemu:///session clone-clonebox
648
+
649
+ # Check VM details:
650
+ clonebox list # List all VMs
651
+ virsh --connect qemu:///session dominfo clone-clonebox
652
+
653
+ # Restart VM if needed:
654
+ clonebox restart . --user # Easiest - stop and start
655
+ clonebox stop . --user && clonebox start . --user # Manual restart
656
+ clonebox restart . --user --open # Restart and open GUI
657
+ virsh --connect qemu:///session reboot clone-clonebox # Direct reboot
658
+ virsh --connect qemu:///session reset clone-clonebox # Hard reset if frozen
659
+ ```
660
+
661
+ ## Legacy Examples (Manual Config)
662
+
663
+ These examples use the older `create` command with manual JSON config. For most users, the `clone` command with auto-detection is easier.
664
+
665
+ ### Python Development Environment
666
+
667
+ ```bash
668
+ clonebox create --name python-dev --config '{
669
+ "paths": {
670
+ "/home/user/my-python-project": "/workspace",
671
+ "/home/user/.pyenv": "/root/.pyenv"
672
+ },
673
+ "packages": ["python3", "python3-pip", "python3-venv", "build-essential"],
674
+ "services": []
675
+ }' --ram 2048 --start
676
+ ```
677
+
678
+ ### Docker Development
679
+
680
+ ```bash
681
+ clonebox create --name docker-dev --config '{
682
+ "paths": {
683
+ "/home/user/docker-projects": "/projects",
684
+ "/var/run/docker.sock": "/var/run/docker.sock"
685
+ },
686
+ "packages": ["docker.io", "docker-compose"],
687
+ "services": ["docker"]
688
+ }' --ram 4096 --start
689
+ ```
690
+
691
+ ### Full Stack (Node.js + PostgreSQL)
692
+
693
+ ```bash
694
+ clonebox create --name fullstack --config '{
695
+ "paths": {
696
+ "/home/user/my-app": "/app",
697
+ "/home/user/pgdata": "/var/lib/postgresql/data"
698
+ },
699
+ "packages": ["nodejs", "npm", "postgresql"],
700
+ "services": ["postgresql"]
701
+ }' --ram 4096 --vcpus 4 --start
702
+ ```
703
+
704
+ ## Inside the VM
705
+
706
+ After the VM boots, shared directories are automatically mounted via fstab entries. You can check their status:
707
+
708
+ ```bash
709
+ # Check mount status
710
+ mount | grep 9p
711
+
712
+ # View health check report
713
+ cat /var/log/clonebox-health.log
714
+
715
+ # Re-run health check manually
716
+ clonebox-health
717
+
718
+ # Check cloud-init status
719
+ sudo cloud-init status
720
+
721
+ # Manual mount (if needed)
722
+ sudo mkdir -p /mnt/projects
723
+ sudo mount -t 9p -o trans=virtio,version=9p2000.L,nofail mount0 /mnt/projects
724
+ ```
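
The generated fstab entries follow the manual mount command above; an illustrative line (the `mount0` tag and target path vary per share):

```
# /etc/fstab - one line per shared directory (illustrative)
mount0  /mnt/projects  9p  trans=virtio,version=9p2000.L,nofail  0  0
```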
725
+
726
+ ### Health Check System
727
+
728
+ CloneBox includes automated health checks that verify:
729
+ - Package installation (apt/snap)
730
+ - Service status
731
+ - Mount points accessibility
732
+ - GUI readiness
733
+
734
+ Health check logs are saved to `/var/log/clonebox-health.log` with a summary in `/var/log/clonebox-health-status`.
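
The mount part of such a health check can be approximated by scanning the kernel mount table. A minimal sketch, fed with canned `/proc/mounts`-style text here so it runs anywhere (a real check would read `/proc/mounts` itself):

```python
def mounted_9p_paths(mounts_text: str) -> set:
    """Collect mount points whose filesystem type is 9p.

    `mounts_text` uses /proc/mounts format:
    <source> <mountpoint> <fstype> <options> <dump> <pass>
    """
    found = set()
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[2] == "9p":
            found.add(fields[1])
    return found

def missing_mounts(expected, mounts_text):
    """Return expected guest paths that are not currently 9p-mounted."""
    active = mounted_9p_paths(mounts_text)
    return [p for p in expected if p not in active]

# Sample data; on a real VM use open("/proc/mounts").read() instead.
sample = (
    "mount0 /mnt/projects 9p trans=virtio,version=9p2000.L 0 0\n"
    "/dev/vda1 / ext4 rw,relatime 0 0\n"
)
print(missing_mounts(["/mnt/projects", "/mnt/www"], sample))  # ['/mnt/www']
```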
735
+
736
+ ## Architecture
737
+
738
+ ```
739
+ ┌──────────────────────────────────────────────────────────────┐
+ │                         HOST SYSTEM                          │
+ │  ┌──────────────┐   ┌──────────────┐   ┌──────────────┐      │
+ │  │ /home/user/  │   │ /var/www/    │   │ Docker       │      │
+ │  │ projects/    │   │ html/        │   │ Socket       │      │
+ │  └──────┬───────┘   └──────┬───────┘   └──────┬───────┘      │
+ │         │                  │                  │              │
+ │         │ 9p/virtio        │                  │              │
+ │         │ bind mounts      │                  │              │
+ │  ┌──────▼──────────────────▼──────────────────▼───────┐      │
+ │  │                    CloneBox VM                     │      │
+ │  │ ┌────────────┐  ┌────────────┐  ┌────────────┐     │      │
+ │  │ │ /mnt/proj  │  │ /mnt/www   │  │ /var/run/  │     │      │
+ │  │ │            │  │            │  │ docker.sock│     │      │
+ │  │ └────────────┘  └────────────┘  └────────────┘     │      │
+ │  │                                                    │      │
+ │  │      cloud-init installed packages & services      │      │
+ │  └────────────────────────────────────────────────────┘      │
+ └──────────────────────────────────────────────────────────────┘
758
+ ```
759

## Quick Clone (Recommended)

The fastest way to clone your current working directory:

```bash
# Clone current directory - generates .clonebox.yaml and asks to create VM
# Base OS image is automatically downloaded to ~/Downloads on first run
clonebox clone .

# Increase VM disk size (recommended for GUI + large tooling)
clonebox clone . --user --disk-size-gb 30

# Clone specific path
clonebox clone ~/projects/my-app

# Clone with custom name and auto-start
clonebox clone ~/projects/my-app --name my-dev-vm --run

# Clone and edit config before creating
clonebox clone . --edit

# Replace existing VM (stops, deletes, and recreates)
clonebox clone . --replace

# Use custom base image instead of auto-download
clonebox clone . --base-image ~/ubuntu-22.04-cloud.qcow2

# User session mode (no root required)
clonebox clone . --user
```

Later, start the VM from any directory that contains a `.clonebox.yaml`:

```bash
# Start VM from config in current directory
clonebox start .

# Start VM from specific path
clonebox start ~/projects/my-app
```

### Export YAML Config

```bash
# Export detected state as YAML (with deduplication)
clonebox detect --yaml --dedupe

# Save to file
clonebox detect --yaml --dedupe -o my-config.yaml
```

### Base Images

CloneBox automatically downloads a bootable Ubuntu cloud image on first run:

```bash
# Auto-download (default) - downloads Ubuntu 22.04 to ~/Downloads on first run
clonebox clone .

# Use custom base image
clonebox clone . --base-image ~/my-custom-image.qcow2

# Manual download (optional - clonebox does this automatically)
wget -O ~/Downloads/clonebox-ubuntu-jammy-amd64.qcow2 \
  https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
```

**Base image behavior:**
- If no `--base-image` is specified, the Ubuntu 22.04 cloud image is auto-downloaded
- Downloaded images are cached at `~/Downloads/clonebox-ubuntu-jammy-amd64.qcow2`
- Subsequent VMs reuse the cached image (no re-download)
- Each VM gets its own disk using the base image as a backing file (copy-on-write)

### VM Login Credentials

VM credentials are managed through a `.env` file for security:

**Setup:**
1. Copy `.env.example` to `.env`:

   ```bash
   cp .env.example .env
   ```

2. Edit `.env` and set your password:

   ```bash
   # .env file
   VM_PASSWORD=your_secure_password
   VM_USERNAME=ubuntu
   ```

3. The `.clonebox.yaml` file references the password from `.env`:

   ```yaml
   vm:
     username: ubuntu
     password: ${VM_PASSWORD}  # Loaded from .env
   ```

**Default credentials (if `.env` is not configured):**
- **Username:** `ubuntu`
- **Password:** `ubuntu`

**Security notes:**
- `.env` is automatically gitignored (never committed)
- Username is stored in YAML (not sensitive)
- Password is stored in `.env` (sensitive, not committed)
- Change password after first login: `passwd`
- User has passwordless sudo access
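To avoid keeping the weak default, the `.env` file can be seeded with a random password (a sketch using only standard tools; adjust the username to taste):

```shell
# Restrict .env to the current user before writing secrets
umask 177
# Draw 24 alphanumeric characters from the kernel's entropy pool
pw=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24)
printf 'VM_USERNAME=ubuntu\nVM_PASSWORD=%s\n' "$pw" > .env
```

The `umask 177` ensures the file is created with `0600` permissions, so other local users cannot read the password.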

### User Session & Networking

CloneBox supports creating VMs in a user session (no root required) with automatic network fallback:

```bash
# Create VM in user session (uses ~/.local/share/libvirt/images)
clonebox clone . --user

# Explicitly use user-mode networking (slirp) - works without libvirt network
clonebox clone . --user --network user

# Force libvirt default network (may fail in user session)
clonebox clone . --network default

# Auto mode (default): tries libvirt network, falls back to user-mode if unavailable
clonebox clone . --network auto
```

**Network modes:**
- `auto` (default): Uses libvirt default network if available, otherwise falls back to user-mode (slirp)
- `default`: Forces use of libvirt default network
- `user`: Uses user-mode networking (slirp) - no bridge setup required
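The `auto` decision can be approximated from the shell (a sketch of the documented fallback behavior, not CloneBox's actual code; works whether or not `virsh` is installed):

```shell
# Mirror the documented auto-fallback: prefer the libvirt default
# network when it is active, otherwise use user-mode (slirp)
if virsh --connect qemu:///session net-list --all 2>/dev/null \
    | grep -Eq '^\s*default\s+active'; then
  echo "network mode: default (libvirt)"
else
  echo "network mode: user (slirp fallback)"
fi
```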

## Commands Reference

| Command | Description |
|---------|-------------|
| `clonebox` | Interactive VM creation wizard |
| `clonebox clone <path>` | Generate `.clonebox.yaml` from path + running processes |
| `clonebox clone . --run` | Clone and immediately start VM |
| `clonebox clone . --edit` | Clone, edit config, then create |
| `clonebox clone . --replace` | Replace existing VM (stop, delete, recreate) |
| `clonebox clone . --user` | Clone in user session (no root) |
| `clonebox clone . --base-image <path>` | Use custom base image |
| `clonebox clone . --disk-size-gb <gb>` | Set root disk size in GB (generated configs default to 20GB) |
| `clonebox clone . --network user` | Use user-mode networking (slirp) |
| `clonebox clone . --network auto` | Auto-detect network mode (default) |
| `clonebox create --config <json> --disk-size-gb <gb>` | Create VM from JSON config with specified disk size |
| `clonebox start .` | Start VM from `.clonebox.yaml` in current dir |
| `clonebox start . --viewer` | Start VM and open GUI window |
| `clonebox start <name>` | Start existing VM by name |
| `clonebox stop .` | Stop VM from `.clonebox.yaml` in current dir |
| `clonebox stop . -f` | Force stop VM |
| `clonebox delete .` | Delete VM from `.clonebox.yaml` in current dir |
| `clonebox delete . --yes` | Delete VM without confirmation |
| `clonebox list` | List all VMs |
| `clonebox detect` | Show detected services/apps/paths |
| `clonebox detect --yaml` | Output as YAML config |
| `clonebox detect --yaml --dedupe` | YAML with duplicates removed |
| `clonebox detect --json` | Output as JSON |
| `clonebox container up .` | Start a dev container for given path |
| `clonebox container ps` | List containers |
| `clonebox container stop <name>` | Stop a container |
| `clonebox container rm <name>` | Remove a container |
| `clonebox dashboard` | Run local dashboard (VM + containers) |
| `clonebox status . --user` | Check VM health, cloud-init, IP, and mount status |
| `clonebox status . --user --health` | Check VM status and run full health check |
| `clonebox test . --user` | Test VM configuration (basic checks) |
| `clonebox test . --user --validate` | Full validation: mounts, packages, services vs YAML |
| `clonebox export . --user` | Export VM for migration to another workstation |
| `clonebox export . --user --include-data` | Export VM with browser profiles and configs |
| `clonebox import archive.tar.gz --user` | Import VM from export archive |
| `clonebox open . --user` | Open GUI viewer for VM (same as virt-viewer) |
| `virt-viewer --connect qemu:///session <vm>` | Open GUI for running VM |
| `virsh --connect qemu:///session console <vm>` | Open text console (Ctrl+] to exit) |

## Requirements

- Linux with KVM support (`/dev/kvm`)
- libvirt daemon running
- Python 3.8+
- User in `libvirt` group
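A quick way to verify these prerequisites on the host (a sketch; note that group membership only takes effect after logging out and back in):

```shell
# Check KVM device, group membership, and the libvirt daemon
[ -e /dev/kvm ] && echo "KVM: ok" || echo "KVM: missing (enable VT-x/AMD-V in BIOS)"
id -nG | grep -qw libvirt && echo "libvirt group: ok" || echo "libvirt group: missing"
systemctl is-active --quiet libvirtd 2>/dev/null && echo "libvirtd: running" || echo "libvirtd: not running"
```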

## Troubleshooting

### Critical: Insufficient Disk Space

If you install a full desktop environment and large development tools (e.g. `ubuntu-desktop-minimal`, `docker.io`, large snaps like `pycharm-community`/`chromium`), you may hit low disk space warnings inside the VM.

Recommended fix:
- Set a larger root disk in `.clonebox.yaml`:

```yaml
vm:
  disk_size_gb: 30
```

You can also set it during config generation:
```bash
clonebox clone . --user --disk-size-gb 30
```

Notes:
- New configs generated by `clonebox clone` default to `disk_size_gb: 20`.
- You can override this by setting `vm.disk_size_gb` in `.clonebox.yaml`.

Workaround for an existing VM (host-side resize + guest filesystem grow):
```bash
clonebox stop . --user
qemu-img resize ~/.local/share/libvirt/images/<vm-name>/root.qcow2 +10G
clonebox start . --user
```

Inside the VM:
```bash
sudo growpart /dev/vda 1
sudo resize2fs /dev/vda1
df -h /
```

### Known Issue: IBus Preferences crash

During validation you may occasionally see a crash dialog from **IBus Preferences** in the Ubuntu desktop environment.
This is an upstream issue related to the input method daemon (`ibus-daemon`) and outdated system packages (e.g. `libglib2.0`, `libssl3`, `libxml2`, `openssl`).
It does **not** affect CloneBox functionality and the VM operates normally.

Workaround:
- Dismiss the crash dialog
- Or run `sudo apt upgrade` inside the VM to update system packages

### Snap Apps Not Launching (PyCharm, Chromium, Firefox)

If snap-installed applications (e.g., PyCharm, Chromium) are installed but don't launch when clicked, the issue is usually **disconnected snap interfaces**. This happens because snap interfaces are not auto-connected when installing via cloud-init.

**New VMs created with updated CloneBox automatically connect snap interfaces**, but for older VMs or manual installs:

```bash
# Check snap interface connections
snap connections pycharm-community

# If you see "-" instead of ":desktop", interfaces are NOT connected

# Connect required interfaces
sudo snap connect pycharm-community:desktop :desktop
sudo snap connect pycharm-community:desktop-legacy :desktop-legacy
sudo snap connect pycharm-community:x11 :x11
sudo snap connect pycharm-community:wayland :wayland
sudo snap connect pycharm-community:home :home
sudo snap connect pycharm-community:network :network

# Restart snap daemon and try again
sudo systemctl restart snapd
snap run pycharm-community
```

**For Chromium/Firefox:**
```bash
sudo snap connect chromium:desktop :desktop
sudo snap connect chromium:x11 :x11
sudo snap connect firefox:desktop :desktop
sudo snap connect firefox:x11 :x11
```

**Debug launch:**
```bash
PYCHARM_DEBUG=true snap run pycharm-community 2>&1 | tee /tmp/pycharm-debug.log
```

**Nuclear option (reinstall):**
```bash
snap remove pycharm-community
rm -rf ~/snap/pycharm-community
sudo snap install pycharm-community --classic
sudo snap connect pycharm-community:desktop :desktop
```

### Network Issues

If you encounter "Network not found" or "network 'default' is not active" errors:

```bash
# Option 1: Use user-mode networking (no setup required)
clonebox clone . --user --network user

# Option 2: Run the network fix script
./fix-network.sh

# Or manually fix:
virsh --connect qemu:///session net-destroy default 2>/dev/null
virsh --connect qemu:///session net-undefine default 2>/dev/null
virsh --connect qemu:///session net-define /tmp/default-network.xml
virsh --connect qemu:///session net-start default
```
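The manual fix above assumes `/tmp/default-network.xml` exists; if it does not, libvirt's stock NAT definition can be written first (standard values shown; adjust the bridge name or subnet if they clash with an existing network):

```shell
# Write libvirt's standard "default" NAT network definition
cat > /tmp/default-network.xml <<'EOF'
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
EOF
```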

### Permission Issues

If you get permission errors:

```bash
# Ensure user is in libvirt and kvm groups
sudo usermod -aG libvirt $USER
sudo usermod -aG kvm $USER

# Log out and log back in for groups to take effect
```

### VM Already Exists

If you get a "VM already exists" error:

```bash
# Option 1: Use --replace flag to automatically replace it
clonebox clone . --replace

# Option 2: Delete manually first
clonebox delete <vm-name>

# Option 3: Use virsh directly
virsh --connect qemu:///session destroy <vm-name>
virsh --connect qemu:///session undefine <vm-name>

# Option 4: Start the existing VM instead
clonebox start <vm-name>
```

### virt-viewer not found

If the GUI doesn't open:

```bash
# Install virt-viewer
sudo apt install virt-viewer

# Then connect manually
virt-viewer --connect qemu:///session <vm-name>
```

### Browser Profiles and PyCharm Not Working

If browser profiles or PyCharm configs aren't available, or you get permission errors:

**Root cause:** The VM was created with an older CloneBox version without proper mount permissions.

**Solution - Rebuild VM with latest fixes:**

```bash
# Stop and delete old VM
clonebox stop . --user
clonebox delete . --user --yes

# Recreate VM with fixed permissions and app data mounts
clonebox clone . --user --run --replace
```

**After rebuild, verify mounts in VM:**
```bash
# Check all mounts are accessible
ls ~/.config/google-chrome   # Chrome profile
ls ~/.mozilla/firefox        # Firefox profile
ls ~/.config/JetBrains       # PyCharm settings
ls ~/Downloads               # Downloads folder
ls ~/Documents               # Documents folder
```

**What changed in v0.1.12:**
- All mounts use `uid=1000,gid=1000` for ubuntu user access
- Both `paths` and `app_data_paths` are properly mounted
- No sudo needed to access any shared directories

### Mount Points Empty or Permission Denied

If you get a "must be superuser to use mount" error when accessing Downloads/Documents:

**Solution:** The VM was created with an old mount configuration. Recreate the VM:

```bash
# Stop and delete old VM
clonebox stop . --user
clonebox delete . --user --yes

# Recreate with fixed permissions
clonebox clone . --user --run --replace
```

**What was fixed:**
- Mounts now use `uid=1000,gid=1000` so the ubuntu user has access
- No need for sudo to access shared directories
- Applies to new VMs created after v0.1.12

### Mount Points Empty After Reboot

If shared directories appear empty after VM restart:

1. **Check fstab entries:**

   ```bash
   grep 9p /etc/fstab
   ```

2. **Mount manually:**

   ```bash
   sudo mount -a
   ```

3. **Verify access mode:**
   - VMs created with `accessmode="mapped"` allow any user to access mounts
   - Mount options include `uid=1000,gid=1000` for user access
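For reference, a 9p entry in `/etc/fstab` looks like the following (the `projects` tag and mount point are illustrative; the tags CloneBox generates may differ, and the option names follow the doc's `uid=1000,gid=1000` description):

```
# <9p mount tag>  <mount point>   <fs> <options>                                                   <dump> <pass>
projects          /mnt/projects   9p   trans=virtio,version=9p2000.L,rw,_netdev,uid=1000,gid=1000  0      0
```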

## Advanced Usage

### VM Migration Between Workstations

Export your complete VM environment:

```bash
# Export VM with all data
clonebox export . --user --include-data -o my-dev-env.tar.gz

# Transfer to new workstation, then import
clonebox import my-dev-env.tar.gz --user
clonebox start . --user
```

### Testing VM Configuration

Validate your VM setup:

```bash
# Quick test (basic checks)
clonebox test . --user --quick

# Full test (includes health checks)
clonebox test . --user --verbose
```

### Monitoring VM Health

Check VM status from the workstation:

```bash
# Check VM state, IP, cloud-init, and health
clonebox status . --user

# Trigger health check in VM
clonebox status . --user --health
```

### Reopening VM Window

If you close the VM window, you can reopen it:

```bash
# Open GUI viewer (easiest)
clonebox open . --user

# Start VM and open GUI (if VM is stopped)
clonebox start . --user --viewer

# Open GUI for running VM
virt-viewer --connect qemu:///session clone-clonebox

# List VMs to get the correct name
clonebox list

# Text console (no GUI)
virsh --connect qemu:///session console clone-clonebox
# Press Ctrl + ] to exit console
```

## Exporting to Proxmox

To use CloneBox VMs in Proxmox, convert the qcow2 disk image to a format Proxmox can import.

### Step 1: Locate VM Disk Image

```bash
# Find VM disk location
clonebox list

# Check VM details for disk path
virsh --connect qemu:///session dominfo clone-clonebox

# Typical locations:
# User session: ~/.local/share/libvirt/images/<vm-name>/<vm-name>.qcow2
# System session: /var/lib/libvirt/images/<vm-name>/<vm-name>.qcow2
```

### Step 2: Export VM with CloneBox

```bash
# Export VM with all data (from current directory with .clonebox.yaml)
clonebox export . --user --include-data -o clonebox-vm.tar.gz

# Or export specific VM by name
clonebox export safetytwin-vm --include-data -o safetytwin.tar.gz

# Extract to get the disk image
tar -xzf clonebox-vm.tar.gz
cd clonebox-clonebox
ls -la  # Should show disk.qcow2, vm.xml, etc.
```

### Step 3: Convert to Proxmox Format

```bash
# Install qemu-utils if not installed
sudo apt install qemu-utils

# Convert qcow2 to raw format (Proxmox preferred)
qemu-img convert -f qcow2 -O raw disk.qcow2 vm-disk.raw

# Or convert to qcow2 with compression for smaller size
qemu-img convert -f qcow2 -O qcow2 -c disk.qcow2 vm-disk-compressed.qcow2
```

### Step 4: Transfer to Proxmox Host

```bash
# Using scp (replace with your Proxmox host IP)
scp vm-disk.raw root@proxmox:/var/lib/vz/template/iso/

# Or using rsync for large files
rsync -avh --progress vm-disk.raw root@proxmox:/var/lib/vz/template/iso/
```

### Step 5: Create VM in Proxmox

1. **Log into Proxmox Web UI**

2. **Create new VM:**
   - Click "Create VM"
   - Enter VM ID and Name
   - Set OS: "Do not use any media"

3. **Configure Hardware:**
   - **Hard Disk:**
     - Delete default disk
     - Click "Add" → "Hard Disk"
     - Select your uploaded image file
     - Set Disk size (can be larger than image)
     - Set Bus: "VirtIO SCSI"
     - Set Cache: "Write back" for better performance

4. **CPU & Memory:**
   - Set CPU cores (match original VM config)
   - Set Memory (match original VM config)

5. **Network:**
   - Set Model: "VirtIO (paravirtualized)"

6. **Confirm:** Click "Finish" to create VM

### Step 6: Post-Import Configuration

1. **Start the VM in Proxmox**

2. **Update network configuration:**

   ```bash
   # In VM console, update network interfaces
   sudo nano /etc/netplan/01-netcfg.yaml

   # Example for Proxmox bridge:
   network:
     version: 2
     renderer: networkd
     ethernets:
       ens18:  # Proxmox typically uses ens18
         dhcp4: true
   ```

3. **Apply network changes:**

   ```bash
   sudo netplan apply
   ```

4. **Update mount points (if needed):**

   ```bash
   # Mount points will fail in Proxmox, remove them
   sudo nano /etc/fstab
   # Comment out or remove 9p mount entries

   # Reboot to apply changes
   sudo reboot
   ```

### Alternative: Direct Import to Proxmox Storage

If you have Proxmox with shared storage:

```bash
# On the Proxmox host: create a temporary directory
mkdir /tmp/import

# From the workstation: copy the disk to the Proxmox host (example for local-lvm)
scp vm-disk.raw root@proxmox:/tmp/import/

# On the Proxmox host: create the VM using the CLI
qm create 9000 --name clonebox-vm --memory 4096 --cores 4 --net0 virtio,bridge=vmbr0

# Import disk to VM
qm importdisk 9000 /tmp/import/vm-disk.raw local-lvm

# Attach disk to VM
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0

# Set boot disk
qm set 9000 --boot c --bootdisk scsi0
```

### Troubleshooting

- **VM won't boot:** Check if disk format is compatible (raw is safest)
- **Network not working:** Update network configuration for Proxmox's NIC naming
- **Performance issues:** Use VirtIO drivers and set cache to "Write back"
- **Mount errors:** Remove 9p mount entries from /etc/fstab as they won't work in Proxmox

### Notes

- CloneBox's bind mounts (9p filesystem) are specific to libvirt/QEMU and won't work in Proxmox
- Browser profiles and app data exported with `--include-data` will be available in the VM disk
- For shared folders in Proxmox, use Proxmox's shared folders or network shares instead

## License

Apache License 2.0 - see [LICENSE](LICENSE) file.