clonebox 0.1.12.tar.gz → 0.1.14.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
PKG-INFO
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: clonebox
- Version: 0.1.12
+ Version: 0.1.14
  Summary: Clone your workstation environment to an isolated VM with selective apps, paths and services
  Author: CloneBox Team
  License: Apache-2.0
@@ -235,6 +235,9 @@ clonebox open . --user

  # 6. Stop VM when done
  clonebox stop . --user
+
+ # 7. Delete VM if needed
+ clonebox delete . --user --yes
  ```

  ### Development Environment with Browser Profiles
@@ -273,24 +276,33 @@ clonebox test . --user --verbose
  # ✅ Health check triggered
  ```

- ### VM Health Monitoring
+ ### VM Health Monitoring and Mount Validation

  ```bash
- # Check overall status
+ # Check overall status including mount validation
  clonebox status . --user

- # Output:
- # 📊 Checking VM status: clone-clonebox
- # VM State: running
- # VM has network access
- # ☁️ Cloud-init: Still running (packages installing)
- # 🏥 Health Check Status... ⏳ Health check not yet run
-
- # Trigger health check
+ # Output shows:
+ # 📊 VM State: running
+ # 🔍 Network and IP address
+ # ☁️ Cloud-init: Complete
+ # 💾 Mount Points status table:
+ # ┌─────────────────────────┬──────────────┬────────┐
+ # │ Guest Path              │ Status       │ Files  │
+ # ├─────────────────────────┼──────────────┼────────┤
+ # │ /home/ubuntu/Downloads  │ ✅ Mounted   │ 199    │
+ # │ /home/ubuntu/Documents  │ ❌ Not mounted│ ?     │
+ # │ ~/.config/JetBrains     │ ✅ Mounted   │ 45     │
+ # └─────────────────────────┴──────────────┴────────┘
+ # 12/14 mounts active
+ # 🏥 Health Check Status: OK
+
+ # Trigger full health check
  clonebox status . --user --health

- # View detailed health report in VM:
- # cat /var/log/clonebox-health.log
+ # If mounts are missing, remount or rebuild:
+ # In VM: sudo mount -a
+ # Or rebuild: clonebox clone . --user --run --replace
  ```

  ### Export/Import Workflow
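The mount table that the updated `clonebox status` prints can be cross-checked from inside the guest. A minimal sketch using standard Linux tools rather than the clonebox CLI, assuming the shares are 9p mounts as the Proxmox notes later in this diff describe:

```bash
# List the 9p shares the guest actually mounted, with their options
findmnt -t 9p -o TARGET,SOURCE,OPTIONS

# Compare one share's file count against the "Files" column in the status table
ls ~/Downloads | wc -l
```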
@@ -578,15 +590,17 @@ clonebox clone . --network auto
  | `clonebox start .` | Start VM from `.clonebox.yaml` in current dir |
  | `clonebox start . --viewer` | Start VM and open GUI window |
  | `clonebox start <name>` | Start existing VM by name |
- | `clonebox stop <name>` | Stop a VM (graceful shutdown) |
- | `clonebox stop -f <name>` | Force stop a VM |
- | `clonebox delete <name>` | Delete VM and storage |
+ | `clonebox stop .` | Stop VM from `.clonebox.yaml` in current dir |
+ | `clonebox stop . -f` | Force stop VM |
+ | `clonebox delete .` | Delete VM from `.clonebox.yaml` in current dir |
+ | `clonebox delete . --yes` | Delete VM without confirmation |
  | `clonebox list` | List all VMs |
  | `clonebox detect` | Show detected services/apps/paths |
  | `clonebox detect --yaml` | Output as YAML config |
  | `clonebox detect --yaml --dedupe` | YAML with duplicates removed |
  | `clonebox detect --json` | Output as JSON |
- | `clonebox status . --user` | Check VM health, cloud-init status, and IP address |
+ | `clonebox status . --user` | Check VM health, cloud-init, IP, and mount status |
+ | `clonebox status . --user --health` | Check VM status and run full health check |
  | `clonebox test . --user` | Test VM configuration and validate all settings |
  | `clonebox export . --user` | Export VM for migration to another workstation |
  | `clonebox export . --user --include-data` | Export VM with browser profiles and configs |
@@ -665,22 +679,57 @@ sudo apt install virt-viewer
  virt-viewer --connect qemu:///session <vm-name>
  ```

- ### Browser Profiles Not Syncing
+ ### Browser Profiles and PyCharm Not Working

- If browser profiles or app data aren't available:
+ If browser profiles or PyCharm configs aren't available, or you get permission errors:

- 1. **Regenerate config with app data:**
-    ```bash
-    rm .clonebox.yaml
-    clonebox clone . --user --run --replace
-    ```
+ **Root cause:** VM was created with old version without proper mount permissions.

- 2. **Check mount permissions in VM:**
-    ```bash
-    # Verify mounts are accessible
-    ls -la ~/.config/google-chrome
-    ls -la ~/.mozilla/firefox
-    ```
+ **Solution - Rebuild VM with latest fixes:**
+
+ ```bash
+ # Stop and delete old VM
+ clonebox stop . --user
+ clonebox delete . --user --yes
+
+ # Recreate VM with fixed permissions and app data mounts
+ clonebox clone . --user --run --replace
+ ```
+
+ **After rebuild, verify mounts in VM:**
+ ```bash
+ # Check all mounts are accessible
+ ls ~/.config/google-chrome   # Chrome profile
+ ls ~/.mozilla/firefox        # Firefox profile
+ ls ~/.config/JetBrains       # PyCharm settings
+ ls ~/Downloads               # Downloads folder
+ ls ~/Documents               # Documents folder
+ ```
+
+ **What changed in v0.1.12:**
+ - All mounts use `uid=1000,gid=1000` for ubuntu user access
+ - Both `paths` and `app_data_paths` are properly mounted
+ - No sudo needed to access any shared directories
+
+ ### Mount Points Empty or Permission Denied
+
+ If you get "must be superuser to use mount" error when accessing Downloads/Documents:
+
+ **Solution:** VM was created with old mount configuration. Recreate VM:
+
+ ```bash
+ # Stop and delete old VM
+ clonebox stop . --user
+ clonebox delete . --user --yes
+
+ # Recreate with fixed permissions
+ clonebox clone . --user --run --replace
+ ```
+
+ **What was fixed:**
+ - Mounts now use `uid=1000,gid=1000` so ubuntu user has access
+ - No need for sudo to access shared directories
+ - Applies to new VMs created after v0.1.12

  ### Mount Points Empty After Reboot

@@ -698,7 +747,7 @@ If shared directories appear empty after VM restart:

  3. **Verify access mode:**
     - VMs created with `accessmode="mapped"` allow any user to access mounts
-    - Older VMs used `accessmode="passthrough"` which preserves host UIDs
+    - Mount options include `uid=1000,gid=1000` for user access

  ## Advanced Usage
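To confirm that a rebuilt VM really carries the `uid=1000,gid=1000` options referenced above, the mounts can be inspected from inside the guest; a short sketch with standard tools (the test filename is arbitrary):

```bash
# Show the mount backing a shared directory; OPTIONS should list uid=1000,gid=1000
findmnt -T ~/Downloads -o TARGET,FSTYPE,OPTIONS

# Verify the default user can write without sudo
touch ~/Downloads/.clonebox-write-test && rm ~/Downloads/.clonebox-write-test
```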
 
@@ -761,6 +810,160 @@ virsh --connect qemu:///session console clone-clonebox
  # Press Ctrl + ] to exit console
  ```

+ ## Exporting to Proxmox
+
+ To use CloneBox VMs in Proxmox, you need to convert the qcow2 disk image to Proxmox format.
+
+ ### Step 1: Locate VM Disk Image
+
+ ```bash
+ # Find VM disk location
+ clonebox list
+
+ # Check VM details for disk path
+ virsh --connect qemu:///session dominfo clone-clonebox
+
+ # Typical locations:
+ # User session: ~/.local/share/libvirt/images/<vm-name>/<vm-name>.qcow2
+ # System session: /var/lib/libvirt/images/<vm-name>/<vm-name>.qcow2
+ ```
+
+ ### Step 2: Export VM with CloneBox
+
+ ```bash
+ # Export VM with all data (from current directory with .clonebox.yaml)
+ clonebox export . --user --include-data -o clonebox-vm.tar.gz
+
+ # Or export specific VM by name
+ clonebox export safetytwin-vm --include-data -o safetytwin.tar.gz
+
+ # Extract to get the disk image
+ tar -xzf clonebox-vm.tar.gz
+ cd clonebox-clonebox
+ ls -la   # Should show disk.qcow2, vm.xml, etc.
+ ```
+
+ ### Step 3: Convert to Proxmox Format
+
+ ```bash
+ # Install qemu-utils if not installed
+ sudo apt install qemu-utils
+
+ # Convert qcow2 to raw format (Proxmox preferred)
+ qemu-img convert -f qcow2 -O raw disk.qcow2 vm-disk.raw
+
+ # Or convert to qcow2 with compression for smaller size
+ qemu-img convert -f qcow2 -O qcow2 -c disk.qcow2 vm-disk-compressed.qcow2
+ ```
+
+ ### Step 4: Transfer to Proxmox Host
+
+ ```bash
+ # Using scp (replace with your Proxmox host IP)
+ scp vm-disk.raw root@proxmox:/var/lib/vz/template/iso/
+
+ # Or using rsync for large files
+ rsync -avh --progress vm-disk.raw root@proxmox:/var/lib/vz/template/iso/
+ ```
+
+ ### Step 5: Create VM in Proxmox
+
+ 1. **Log into Proxmox Web UI**
+
+ 2. **Create new VM:**
+    - Click "Create VM"
+    - Enter VM ID and Name
+    - Set OS: "Do not use any media"
+
+ 3. **Configure Hardware:**
+    - **Hard Disk:**
+      - Delete default disk
+      - Click "Add" → "Hard Disk"
+      - Select your uploaded image file
+      - Set Disk size (can be larger than image)
+      - Set Bus: "VirtIO SCSI"
+      - Set Cache: "Write back" for better performance
+
+ 4. **CPU & Memory:**
+    - Set CPU cores (match original VM config)
+    - Set Memory (match original VM config)
+
+ 5. **Network:**
+    - Set Model: "VirtIO (paravirtualized)"
+
+ 6. **Confirm:** Click "Finish" to create VM
+
+ ### Step 6: Post-Import Configuration
+
+ 1. **Start the VM in Proxmox**
+
+ 2. **Update network configuration:**
+    ```bash
+    # In VM console, update network interfaces
+    sudo nano /etc/netplan/01-netcfg.yaml
+
+    # Example for Proxmox bridge:
+    network:
+      version: 2
+      renderer: networkd
+      ethernets:
+        ens18:   # Proxmox typically uses ens18
+          dhcp4: true
+    ```
+
+ 3. **Apply network changes:**
+    ```bash
+    sudo netplan apply
+    ```
+
+ 4. **Update mount points (if needed):**
+    ```bash
+    # Mount points will fail in Proxmox, remove them
+    sudo nano /etc/fstab
+    # Comment out or remove 9p mount entries
+
+    # Reboot to apply changes
+    sudo reboot
+    ```
+
+ ### Alternative: Direct Import to Proxmox Storage
+
+ If you have Proxmox with shared storage:
+
+ ```bash
+ # On Proxmox host
+ # Create a temporary directory
+ mkdir /tmp/import
+
+ # Copy disk directly to Proxmox storage (example for local-lvm)
+ scp vm-disk.raw root@proxmox:/tmp/import/
+
+ # On Proxmox host, create VM using CLI
+ qm create 9000 --name clonebox-vm --memory 4096 --cores 4 --net0 virtio,bridge=vmbr0
+
+ # Import disk to VM
+ qm importdisk 9000 /tmp/import/vm-disk.raw local-lvm
+
+ # Attach disk to VM
+ qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
+
+ # Set boot disk
+ qm set 9000 --boot c --bootdisk scsi0
+ ```
+
+ ### Troubleshooting
+
+ - **VM won't boot:** Check if disk format is compatible (raw is safest)
+ - **Network not working:** Update network configuration for Proxmox's NIC naming
+ - **Performance issues:** Use VirtIO drivers and set cache to "Write back"
+ - **Mount errors:** Remove 9p mount entries from /etc/fstab as they won't work in Proxmox
+
+ ### Notes
+
+ - CloneBox's bind mounts (9p filesystem) are specific to libvirt/QEMU and won't work in Proxmox
+ - Browser profiles and app data exported with `--include-data` will be available in the VM disk
+ - For shared folders in Proxmox, use Proxmox's shared folders or network shares instead
+
  ## License

  MIT License - see [LICENSE](LICENSE) file.
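The conversion and import steps added in the Proxmox section can be sanity-checked along the way; a brief sketch using standard QEMU and Proxmox tooling, reusing the file names and the example VM ID 9000 from the section above:

```bash
# Confirm the converted image's format and virtual size before transferring it
qemu-img info vm-disk.raw

# Integrity-check the compressed qcow2 variant, if one was produced
qemu-img check vm-disk-compressed.qcow2

# On the Proxmox host, after qm importdisk / qm set, review the resulting VM definition
qm config 9000
```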
README.md
(The README.md hunks are identical to the README content shown in the PKG-INFO diff above; PKG-INFO embeds the README, offset by its metadata header.)
pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

  [project]
  name = "clonebox"
- version = "0.1.12"
+ version = "0.1.14"
  description = "Clone your workstation environment to an isolated VM with selective apps, paths and services"
  readme = "README.md"
  license = {text = "Apache-2.0"}
clonebox/__init__.py
@@ -5,7 +5,7 @@ Selectively clone applications, paths and services to a new virtual machine
  with bind mounts instead of full disk cloning.
  """

- __version__ = "0.1.12"
+ __version__ = "0.1.13"
  __author__ = "CloneBox Team"

  from clonebox.cloner import SelectiveVMCloner